Source: https://techcrunch.com/2023/06/22/get-a-clue-says-panel-about-generative-ai-its-being-deployed-as-surveillance-devices/

Earlier today at a Bloomberg conference in San Francisco, some of the biggest names in AI turned up, including, briefly, Sam Altman of OpenAI, who just ended his two-month world tour, and Stability AI founder Emad Mostaque. Still, one of the most compelling conversations happened later in the afternoon, in a panel discussion about AI ethics.

Featuring Meredith Whittaker, the president of the secure messaging app Signal; Credo AI co-founder and CEO Navrina Singh; and Alex Hanna, the director of research at the Distributed AI Research Institute, the panel had a unified message for the audience: don't get so distracted by the promise and threats associated with the future of AI. It is not magic, it is not fully automated and, per Whittaker, it is already intrusive beyond anything that most Americans seemingly comprehend.

Hanna, for example, pointed to the many people around the world who are helping to train today's large language models, suggesting that these individuals get short shrift in some of the breathless coverage of generative AI, in part because the work is unglamorous and in part because it doesn't fit the current narrative about AI.

Said Hanna: "We know from reporting . . . that there is an army of workers who are doing annotation behind the scenes to even make this stuff work to any degree — workers who work with Amazon Mechanical Turk, people who work with [the training data company] Sama — in Venezuela, Kenya, the U.S., actually all over the world . .
. They are actually doing the labeling, whereas Sam [Altman] and Emad [Mostaque] and all these other people who are going to say these things are magic — no. There's humans. . . . These things need to appear as autonomous, and it has this veneer, but there's so much human labor underneath it."

The comments made separately by Whittaker, who previously worked at Google, co-founded NYU's AI Now Institute and served as an adviser to the Federal Trade Commission, were even more pointed (and, judging by the audience's enthusiastic reaction, more impactful). Her message was that, enchanted as the world may be right now by chatbots like ChatGPT and Bard, the technology underpinning them is dangerous, especially as power grows more concentrated in the hands of those at the top of the advanced AI pyramid.

Said Whittaker: "I would say maybe some of the people in this audience are the users of AI, but the majority of the population is the subject of AI . . . This is not a matter of individual choice. Most of the ways that AI interpolates our life and makes determinations that shape our access to resources and opportunity are made behind the scenes in ways we probably don't even know."

Whittaker gave the example of someone who walks into a bank and asks for a loan. That person can be denied and have "no idea that there's a system in [the] back probably powered by some Microsoft API that determined, based on scraped social media, that I wasn't creditworthy. I'm never going to know [because] there's no mechanism for me to know this." There are ways to change this, she continued, but overcoming the current power hierarchy in order to do so is next to impossible. "I've been at the table for like, 15 years, 20 years. I've been at the table.
Being at the table with no power is nothing."

Certainly, a lot of powerless people might agree with Whittaker, including current and former OpenAI and Google employees who have reportedly been leery at times of their companies' approach to launching AI products.

Indeed, Bloomberg moderator Sarah Frier asked the panel how concerned employees can speak up without fear of losing their jobs, to which Singh, whose startup helps companies with AI governance, answered: "I think a lot of that depends upon the leadership and the company values, to be honest. . . . We've seen instance after instance in the past year of responsible AI teams being let go."

In the meantime, there is much more that everyday people don't understand about what's happening, Whittaker suggested, calling AI "a surveillance technology." Facing the crowd, she elaborated, noting that AI "requires surveillance in the form of these massive datasets that entrench and expand the need for more and more data, and more and more intimate collection. The solution to everything is more data, more knowledge pooled in the hands of these companies. But these systems are also deployed as surveillance devices. And I think it's really important to recognize that it doesn't matter whether an output from an AI system is produced through some probabilistic statistical guesstimate, or whether it's data from a cell tower that's triangulating my location. That data becomes data about me. It doesn't need to be correct. It doesn't need to be reflective of who I am or where I am. But it has power over my life that is significant, and that power is being put in the hands of these companies."

Added Whittaker: the "Venn diagram of AI concerns and privacy concerns is a circle."

Whittaker obviously has her own agenda up to a point.
As she said herself at the event, "there is a world where Signal and other legitimate privacy-preserving technologies persevere" because people grow less and less comfortable with this concentration of power.

But if there isn't enough pushback, and soon (as progress in AI accelerates, its societal impacts accelerate, too), we'll continue heading down a "hype-filled road toward AI," she said, "where that power is entrenched and naturalized under the guise of intelligence and we are surveilled to the point [of having] very, very little agency over our individual and collective lives."

This "concern is existential, and it's much bigger than the AI framing that is often given."

We found the discussion captivating; if you'd like to see the whole thing, Bloomberg has since posted it here.
Get a clue, says panel about buzzy AI tech: It's being "deployed as surveillance"

2023-06-23 22:03:56