Source: https://techcrunch.com/2023/05/22/openai-leaders-propose-international-regulatory-body-for-ai/

AI is developing rapidly enough, and the dangers it may pose are clear enough, that OpenAI's leadership believes the world needs an international regulatory body akin to the one governing nuclear power, and fast. But not too fast.

In a post to the company's blog, OpenAI founder Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever explain that the pace of innovation in artificial intelligence is so fast that we can't expect existing authorities to adequately rein in the technology.

While there's a certain quality of patting themselves on the back here, it's clear to any impartial observer that the tech, most visibly in OpenAI's explosively popular ChatGPT conversational agent, represents a unique threat as well as an invaluable asset.

The post, typically rather light on details and commitments, nevertheless admits that AI isn't going to manage itself:

"We need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society.

"We are likely to eventually need something like an [International Atomic Energy Agency] for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc."

The IAEA is the UN's official body for international collaboration on nuclear power issues, though of course, like other such organizations, it can want for punch. An AI-governing body built on this model may not be able to come in and flip the switch on a bad actor, but it can establish and track international standards and agreements, which is at least a starting point.

OpenAI's post notes that tracking compute power and energy usage dedicated to AI research is one of relatively few objective measures that can and probably ought to be reported and tracked. While it may be difficult to say whether AI should or shouldn't be used for this or that purpose, it may be useful to say that resources dedicated to it should, like those in other industries, be monitored and audited. (Smaller companies could be exempt so as not to strangle the green shoots of innovation, the company suggested.)
OpenAI leaders propose international regulatory body for AI
2023-05-22 21:40:07