Source: https://techcrunch.com/2023/03/28/1100-notable-signatories-just-signed-an-open-letter-asking-all-ai-labs-to-immediately-pause-for-at-least-6-months/

More than 1,100 signatories, including Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have signed an open letter, posted online Tuesday evening, that calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

The letter reads:

"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

The letter argues that there is a "level of planning and management" that is "not happening," and that instead, in recent months, unnamed "AI labs" have been "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."

The letter's signatories, some of whom are AI experts, say the pause they are asking for should be "public and verifiable, and include all key actors." If said pause "cannot be enacted quickly, governments should step in and institute a moratorium," the letter says.

Certainly, the letter is interesting both because of the people who have signed it, who include some engineers from Meta and Google, Stability AI founder and CEO Emad Mostaque, and people not in tech (a self-described electrician and an esthetician among them), and those who have not. No one from OpenAI, the outfit behind the large language model GPT-4, has signed the letter, for example. Nor has anyone from Anthropic, whose team spun out of OpenAI to build a "safer" AI chatbot.

On Wednesday, OpenAI CEO Sam Altman spoke with the WSJ, saying OpenAI has not started training GPT-5. Altman also noted that the company has long given priority to safety in development and spent more than six months doing safety tests on GPT-4 before its launch. "In some sense, this is preaching to the choir," he told the Journal.
"We have, I think, been talking about these issues the loudest, with the most intensity, for the longest."

Indeed, Altman sat down with this editor in January, where he argued that "starting these [product releases] now [makes sense], where the stakes are still relatively low, rather than just put out what the whole industry will have in a few years with no time for society to update."

Altman more recently sat down with computer scientist and popular podcaster Lex Fridman and spoke about his relationship with Musk, who was a cofounder of OpenAI but stepped away from the organization in 2018, citing conflicts of interest. (A newer report from the outlet Semafor says Musk left after his offer to run OpenAI was rebuffed by its other cofounders, including Altman, who assumed the role of CEO in early 2019.)

Musk is perhaps the least surprising signatory of this open letter, given that he has been talking about AI safety for many years and has more lately taken aim at OpenAI specifically, suggesting the company is all talk and no action. Fridman asked Altman about Musk's recent and routine tweets bashing the organization. Said Altman: "Elon is obviously attacking us some on Twitter right now on a few different vectors, and I have empathy because I believe he is — understandably so — really stressed about AGI safety. I'm sure there are some other motivations going on too, but that's definitely one of them."

That said, suggested Altman, he finds some of Musk's behavior hurtful. "I definitely grew up with Elon as a hero of mine. You know, despite him being a jerk on Twitter or whatever, I'm happy he exists in the world.
But I wish he would do more to look at the hard work we're doing to get this stuff right."

We're still digesting this letter (others are already tearing it to shreds).

In the meantime, you can read it in full here.
1,100+ notable signatories just signed an open letter asking 'all AI labs to immediately pause for at least 6 months'
2023-03-29 22:27:46