Source: https://techcrunch.com/2023/05/18/psst-gary-marcus-is-happy-to-help-regulate-ai-on-behalf-of-the-u-s-government/

On Tuesday of this week, neuroscientist, founder and author Gary Marcus sat between OpenAI CEO Sam Altman and Christina Montgomery, IBM's chief privacy and trust officer, as all three testified before the Senate Judiciary Committee for over three hours. The senators were largely focused on Altman because he runs one of the most powerful companies on the planet at the moment, and because Altman has repeatedly asked them to help regulate his work. (Most CEOs beg Congress to leave their industry alone.)

Though Marcus has been known in academic circles for some time, his star has been on the rise lately thanks to his newsletter ("The Road to AI We Can Trust"), a podcast ("Humans vs. Machines") and his relatable unease around the unchecked rise of AI. In addition to this week's hearing, for example, he has this month appeared on Bloomberg television and been featured in The New York Times Sunday Magazine and Wired, among other places.

Because this week's hearing seemed truly historic in ways — Senator Josh Hawley characterized AI as "one of the most significant technological innovations in human history," while Senator John Kennedy was so charmed by Altman that he asked Altman to pick his own regulators — we wanted to talk with Marcus, too, to discuss the experience and see what he knows about what happens next. Our chat below has been edited for length.

**Are you still in Washington?**

I am still in Washington.
I'm meeting with lawmakers and their staff and various other interesting people and trying to see if we can turn the kinds of things that I talked about into reality.

**You've taught at NYU. You've co-founded a couple of AI companies, including one with famed roboticist Rodney Brooks. I interviewed Brooks on stage back in 2017 and he said then he didn't think Elon Musk really understood AI and that he thought Musk was wrong that AI was an existential threat.**

I think Rod and I share skepticism about whether current AI is anything like artificial general intelligence. There are several issues you have to take apart. One is: are we close to AGI? The other is: how dangerous is the current AI we have? I don't think the current AI we have is an existential threat, but it is dangerous. In many ways, I think it's a threat to democracy. That's not a threat to humanity. It's not going to annihilate all humans. But it's a pretty serious risk.

**Not so long ago, you were debating Yann LeCun, Meta's chief AI scientist. I'm not sure what that flap was about — the true significance of deep learning neural networks?**

So LeCun and I have actually debated many things for many years. We had a public debate that David Chalmers, the philosopher, moderated in 2017. I've been trying to get [LeCun] to have another real debate ever since and he won't do it. He prefers to subtweet me on Twitter and stuff like that, which I don't think is the most adult way of having conversations, but because he is an important figure, I do respond.

One thing that I think we disagree about [currently] is that LeCun thinks it's fine to use these [large language models] and that there's no possible harm here. I think he's extremely wrong about that.
There are potential threats to democracy, ranging from misinformation that is deliberately produced by bad actors, to accidental misinformation — like the law professor who was accused of sexual harassment even though he didn't commit it — [to the ability to] subtly shape people's political beliefs based on training data that the public doesn't even know anything about. It's like social media, but even more insidious. You can also use these tools to manipulate other people and probably trick them into anything you want. You can scale them massively. There are definitely risks here.

**You said something interesting about Sam Altman on Tuesday, telling the senators that he didn't tell them what his worst fear is, which you called "germane," and redirecting them to him. What he still didn't say is anything having to do with autonomous weapons, which I talked with him about a few years ago as a top concern. I thought it was interesting that weapons didn't come up.**

We covered a bunch of ground, but there are lots of things we didn't get to, including enforcement, which is really important, and national security and autonomous weapons and things like that. There will be several more of [these].

**Was there any talk of open source versus closed systems?**

It hardly came up. It's obviously a really complicated and interesting question. It's really not clear what the right answer is. You want people to do independent science. Maybe you want to have some kind of licensing around things that are going to be deployed at very large scale, but they carry particular risks, including security risks. It's not clear that we want every bad actor to get access to arbitrarily powerful tools.
So there are arguments for and there are arguments against, and probably the right answer is going to include allowing a fair degree of open source but also having some limitations on what can be done and how it can be deployed.
Gary Marcus is happy to help regulate AI for US government: 'I'm interested'
2023-05-19 21:57:00