OpenAI is forming a new team to bring ‘superintelligent’ AI under control
2023-07-05 21:36:53
Source: https://techcrunch.com/2023/07/05/openai-is-forming-a-new-team-to-bring-superintelligent-ai-under-control/

OpenAI is forming a new team led by Ilya Sutskever, its chief scientist and one of the company’s co-founders, to develop ways to steer and control “superintelligent” AI systems.

In a blog post published today, Sutskever and Jan Leike, a lead on the alignment team at OpenAI, predict that AI with intelligence exceeding that of humans could arrive within the decade. This AI — assuming it does, indeed, arrive eventually — won’t necessarily be benevolent, necessitating research into ways to control and restrict it, Sutskever and Leike say.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” they write. “Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us.”

To move the needle forward in the area of “superintelligence alignment,” OpenAI is creating a new Superalignment team, led by both Sutskever and Leike, which will have access to 20% of the compute the company has secured to date. Joined by scientists and engineers from OpenAI’s previous alignment division as well as researchers from other orgs across the company, the team will aim to solve the core technical challenges of controlling superintelligent AI over the next four years.

How? By building what Sutskever and Leike describe as a “human-level automated alignment researcher.” The high-level goal is to train AI systems using human feedback, train AI to assist in evaluating other AI systems and ultimately to build AI that can do alignment research. (Here, “alignment research” refers to ensuring AI systems achieve desired outcomes or don’t go off the rails.)

It’s OpenAI’s hypothesis that AI can make faster and better alignment research progress than humans can.

“As we make progress on this, our AI systems can take over more and more of our alignment work and ultimately conceive, implement, study and develop better alignment techniques than we have now,” Leike and colleagues John Schulman and Jeffrey Wu postulated in a previous blog post. “They will work together with humans to ensure that their own successors are more aligned with humans. . . . Human researchers will focus more and more of their effort on reviewing alignment research done by AI systems instead of generating this research by themselves.”

Of course, no method is foolproof — and Leike, Schulman and Wu acknowledge the many limitations of OpenAI’s approach in their post. Using AI for evaluation has the potential to scale up inconsistencies, biases or vulnerabilities in that AI, they say. And it might turn out that the hardest parts of the alignment problem might not be related to engineering at all.

But Sutskever and Leike think it’s worth a go.

“Superintelligence alignment is fundamentally a machine learning problem, and we think great machine learning experts — even if they’re not already working on alignment — will be critical to solving it,” they write. “We plan to share the fruits of this effort broadly and view contributing to alignment and safety of non-OpenAI models as an important part of our work.”
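To make the quoted point about human supervision concrete, here is a minimal, illustrative Python sketch of the preference-modeling step at the heart of reinforcement learning from human feedback. This is not OpenAI’s code; the class, tensor shapes and names are hypothetical stand-ins. What it shows is that the training signal comes entirely from human “chosen versus rejected” labels, which is precisely the supervision Sutskever and Leike argue won’t scale to systems smarter than their supervisors.

# Illustrative sketch only: a toy reward model trained on human
# preference pairs, as in RLHF. All names and shapes are hypothetical.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a response embedding to a single scalar reward."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(model: RewardModel,
                    chosen: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry-style loss: push the reward of the response a
    human preferred above the reward of the one they rejected.
    The human ranking is the only source of supervision here."""
    return -torch.nn.functional.logsigmoid(
        model(chosen) - model(rejected)
    ).mean()

# Toy usage: random vectors stand in for embeddings of two responses,
# where a human annotator labeled the first of each pair as better.
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
chosen, rejected = torch.randn(8, 128), torch.randn(8, 128)
loss = preference_loss(model, chosen, rejected)
loss.backward()
opt.step()

A scheme like the “automated alignment researcher” described above would, in effect, augment or replace those human labels with AI-generated evaluations, which is why the authors flag scaled-up bias and inconsistency as a risk.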