Source: https://techcrunch.com/2023/03/31/chatgpt-blocked-italy/

Two days after an open letter called for a moratorium on the development of more powerful generative AI models so regulators can catch up with the likes of ChatGPT, Italy’s data protection authority has just put out a timely reminder that some countries do have laws that already apply to cutting-edge AI: it has ordered OpenAI to stop processing people’s data locally with immediate effect.

The Italian DPA said it’s concerned that the ChatGPT maker is breaching the European Union’s General Data Protection Regulation (GDPR), and is opening an investigation.

Specifically, the Garante said it has issued the order to block ChatGPT over concerns OpenAI has unlawfully processed people’s data, as well as over the lack of any system to prevent minors from accessing the tech.

The San Francisco-based company has 20 days to respond to the order, backed up by the threat of some meaty penalties if it fails to comply. (Reminder: Fines for breaches of the EU’s data protection regime can scale up to 4% of annual turnover or €20 million, whichever is greater.)

It’s worth noting that since OpenAI does not have a legal entity established in the EU, any data protection authority is empowered to intervene under the GDPR if it sees risks to local users. (So where Italy steps in, others may follow.)

The GDPR applies whenever EU users’ personal data is processed. And it’s clear OpenAI’s large language model has been crunching this kind of information, since it can, for example, produce biographies of named individuals in the region on demand (we know; we’ve tried it).
Although OpenAI declined to provide details of the training data used for the latest iteration of the technology, GPT-4, it has disclosed that earlier models were trained on data scraped from the Internet, including forums such as Reddit. So if you’ve been reasonably online, chances are the bot knows your name.

Moreover, ChatGPT has been shown producing completely false information about named individuals, apparently making up details its training data lacks. That potentially raises further GDPR concerns, since the regulation provides Europeans with a suite of rights over their data, including the right to rectification of errors. It’s not clear how, or whether, people can ask OpenAI to correct erroneous pronouncements about them generated by the bot, for example.

The Garante’s statement also highlights a data breach the service suffered earlier this month, when OpenAI admitted a conversation history feature had been leaking users’ chats, and said it may have exposed some users’ payment information.

Data breaches are another area the GDPR regulates, with a focus on ensuring that entities processing personal data adequately protect the information. The pan-EU law also requires companies to notify relevant supervisory authorities of significant breaches within tight time periods.

Overarching all this is the big(ger) question of what legal basis OpenAI has relied upon for processing Europeans’ data in the first place. In other words, the lawfulness of this processing.

The GDPR allows for a number of possibilities, from consent to public interest, but the scale of processing involved in training these large language models complicates the question of legality, as the Garante notes (pointing to the “mass collection and storage of personal data”). Data minimization is another big focus of the regulation, which also contains principles requiring transparency and fairness.
Yet, at the least, the (now) for-profit company behind ChatGPT does not appear to have informed people whose data it has repurposed to train its commercial AIs. That could be a pretty sticky problem for it.

If OpenAI has processed Europeans’ data unlawfully, DPAs across the bloc could order the data to be deleted, although whether that would force the company to retrain models trained on unlawfully obtained data is one open question, as an existing law grapples with cutting-edge tech.

On the flip side, Italy may have just banned all machine learning by, er, accident… 😬

“[T]he Privacy Guarantor notes the lack of information to users and all interested parties whose data is collected by OpenAI, but above all the absence of a legal basis that justifies the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform,” the DPA wrote in its statement today [which we’ve translated from Italian using AI].

“As evidenced by the checks carried out, the information provided by ChatGPT does not always correspond to the real data, thus determining an inaccurate processing of personal data,” it added.

The authority added that it is concerned about the risk of minors’ data being processed by OpenAI, since the company is not actively preventing people under the age of 13 from signing up to use the chatbot, such as by applying age verification technology.

Risks to children’s data is an area where the regulator has been very active, recently ordering a similar ban on the virtual friendship AI chatbot Replika over child safety concerns.
In recent years, it has also pursued TikTok over underage usage, forcing the company to purge over half a million accounts it could not confirm did not belong to kids.

So if OpenAI can’t definitively confirm the age of any users it’s signed up in Italy, it could, at the very least, be forced to delete their accounts and start again with a more robust sign-up process.

OpenAI was contacted for a response to the Garante’s order.

Lilian Edwards, an expert in data protection and Internet law at Newcastle University who has been ahead of the curve in conducting research on the implications of “algorithms that remember,” told TechCrunch: “What’s fascinating is that it more or less copy-pasted Replika in the emphasis on access by children to inappropriate content. But the real time-bomb is denial of lawful basis, which should apply to ALL or at least many machine learning systems, not just generative AI.”

She pointed to the pivotal “right to be forgotten” case involving Google search, in which an individual in Spain challenged the company’s consentless processing of personal data. But while European courts established a right for individuals to ask search engines to remove inaccurate or outdated information about them (balanced against a public interest test), Google’s processing of personal data in that context (internet search) was not struck down by EU regulators on the lawfulness-of-processing point, seemingly on the grounds that it was providing a public utility.
But also, ultimately, because Google ended up providing rights of erasure and rectification to EU data subjects.

“Large language models don’t offer those remedies and it’s not entirely clear they would, could or what the consequences would be,” Edwards added, suggesting that enforced retraining of models may be one potential fix.

Or, well, that technologies like ChatGPT may simply have broken data protection law…

This report was updated with additional comment. We also fixed a misspelling of the regulator’s name.
Italy orders ChatGPT blocked citing data protection concerns
2023-03-31 22:00:28