Source: https://techcrunch.com/2023/04/03/the-week-in-ai-the-pause-request-heard-round-the-world/

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of the last week’s stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

In one of the more surprising stories of the past week, Italy’s data protection authority (DPA) blocked OpenAI’s viral AI-powered chatbot, ChatGPT, citing concerns that the tool breaches the European Union’s General Data Protection Regulation. The DPA is reportedly opening an investigation into whether OpenAI unlawfully processed people’s data, as well as into the lack of any system to prevent minors from accessing the tech.

It’s unclear what the outcome might be; OpenAI has 20 days to respond to the order. But the DPA’s move could have significant implications for companies deploying machine learning models not just in Italy but anywhere in the European Union.

As Natasha notes in her piece about the news, many of OpenAI’s models were trained on data scraped from the internet, including social networks like Twitter and Reddit. Assuming the same is true of ChatGPT, and given that the company doesn’t appear to have informed the people whose data it repurposed to train the AI, it may well be running afoul of GDPR across the bloc.

GDPR is but one of many potential legal hurdles that AI, particularly generative AI (e.g., text- and art-generating AI like ChatGPT), faces. It’s becoming clearer with each mounting challenge that it will take time for the dust to settle.
But that’s not scaring away VCs, who continue to pour capital into the tech like there’s no tomorrow.

Will those prove to be wise investments or liabilities? It’s tough to say at present. Rest assured, though, that we’ll report on whatever happens.

Here are the other AI headlines of note from the past few days:

More machine learnings

At AI enabler Nvidia, BioNeMo is an example of its new strategy, where the advance is not so much that it’s new but that it’s increasingly easy for companies to access. The new version of this biotech platform adds a polished web UI and improved fine-tuning for a range of models.

“A growing portion of pipelines are dealing with heaps of data, amounts we’ve never seen before, hundreds of millions of sequences we have to feed into these models,” said Amgen’s Peter Grandsard, who leads a research division using AI tech. “We are trying to obtain operational efficiency in research as much as we are in manufacturing. With the acceleration that tech like Nvidia’s provides, what you could have done last year for one project, now you can do five or 10 using the same investment in tech.”

This book excerpt by Meredith Broussard over at Wired is worth reading. She was curious about an AI model that had been used in her cancer diagnosis (she’s OK) and found it incredibly fiddly and frustrating to try to take ownership of, and understand, that data and process. Medical AI clearly needs to consider the patient more.

Actually nefarious AI applications create new risks, for instance attempts to influence discourse. We’ve seen what GPT-4 is capable of, but it was an open question whether such a model could produce effective persuasive text in a political context.
This Stanford study suggests so: when people were exposed to essays arguing a case on issues like gun control and carbon taxes, “AI-generated messages were at least as persuasive as human-generated messages across all topics.” The messages were also perceived as more logical and factual. Will AI-generated text change anyone’s mind? Hard to say, but it seems very likely that people will increasingly put it to use for this kind of agenda.
The week in AI: The pause request heard ’round the world
2023-04-03 22:01:59