France’s privacy watchdog eyes protection against data scraping in AI action plan
2023-05-17 22:19:43
Source: https://techcrunch.com/2023/05/17/cnil-ai-action-plan/

France’s privacy watchdog, the CNIL, has published an action plan for artificial intelligence which gives a snapshot of where it will be focusing its attention, including on generative AI technologies like OpenAI’s ChatGPT, in the coming months and beyond.

A dedicated Artificial Intelligence Service has been set up within the CNIL to work on scoping the tech and producing recommendations for “privacy-friendly AI systems”.

A key stated goal for the regulator is to steer the development of AI “that respects personal data”, such as by developing the means to audit and control AI systems to “protect people”.

Understanding how AI systems impact people is another main focus, along with support for innovative players in the local AI ecosystem which apply the CNIL’s best practice.

“The CNIL wants to establish clear rules protecting the personal data of European citizens in order to contribute to the development of privacy-friendly AI systems,” it writes.

Barely a week goes by without another bunch of high-profile calls from technologists asking regulators to get to grips with AI. And just yesterday, during testimony in the US Senate, OpenAI’s CEO Sam Altman called for lawmakers to regulate the technology, suggesting a licensing and testing regime.

However, data protection regulators in Europe are far down the road already: the likes of Clearview AI have been widely sanctioned across the bloc for misuse of people’s data, for example, while the AI chatbot Replika has faced recent enforcement in Italy.

OpenAI’s ChatGPT also attracted a very public intervention by the Italian DPA at the end of March, which led to the company rushing out new disclosures and controls for users, letting them apply some limits on how it can use their information.

At the same time, EU lawmakers are in the process of hammering out agreement on a risk-based framework for regulating applications of AI, which the bloc proposed back in April 2021.

This framework, the EU AI Act, could be adopted by the end of the year, and the planned regulation is another reason the CNIL highlights for preparing its AI action plan, saying the work will “also make it possible to prepare for the entry into application of the draft European AI Regulation, which is currently under discussion”.

Existing data protection authorities (DPAs) are likely to play a role in enforcement of the AI Act, so regulators building up AI understanding and expertise will be crucial for the regime to function effectively. And the topics and details EU DPAs choose to focus their attention on are set to shape the operational parameters of AI in future: certainly in Europe and, potentially, further afield, given how far ahead the bloc is when it comes to digital rule-making.

Data scraping in the frame

On generative AI, the French privacy regulator is paying special attention to the practice by certain AI model makers of scraping data off the Internet to build data-sets for training AI systems like large language models (LLMs), which can, for example, parse natural language and respond in a human-like way to communications.

It says a priority area for its AI service will be “the protection of publicly available data on the web against the use of scraping of data for the design of tools”.
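The action plan does not prescribe technical measures, but one machine-readable signal already exists on the publisher side: robots.txt. Here is a minimal sketch, using only Python’s standard library, of a scraper that consults a site’s robots.txt before fetching a page; the user-agent string is hypothetical, and honouring robots.txt is a voluntary convention rather than a GDPR compliance mechanism.

```python
# Minimal sketch: consult a site's robots.txt before fetching a page.
# Honouring robots.txt is a voluntary convention, not a GDPR compliance
# mechanism; a permitted page may still contain personal data that needs
# a legal basis to process. The user-agent string below is hypothetical.
from urllib import robotparser
from urllib.parse import urlsplit
from urllib.request import Request, urlopen

USER_AGENT = "example-research-crawler/0.1"

def fetch_if_allowed(url: str) -> bytes | None:
    """Fetch url only if the site's robots.txt permits our user-agent."""
    parts = urlsplit(url)
    robots = robotparser.RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()  # download and parse the site's robots.txt
    if not robots.can_fetch(USER_AGENT, url):
        return None  # crawling disallowed for us: skip this page
    request = Request(url, headers={"User-Agent": USER_AGENT})
    with urlopen(request) as response:
        return response.read()
```

The caveat matters: robots.txt only expresses a publisher’s crawling preferences. Whether a permissibly crawled page can lawfully be turned into training data when it contains personal data is the separate legal question the CNIL is circling.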
This is an uncomfortable area for makers of LLMs like ChatGPT that have relied upon quietly scraping vast amounts of web data to repurpose as training fodder. Those that have hoovered up web information which contains personal data face a specific legal challenge in Europe, where the General Data Protection Regulation (GDPR), in application since May 2018, requires them to have a legal basis for such processing.
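How routinely scraped pages embed personal data is easy to demonstrate. Below is a toy sketch of the kind of pattern-based screen a training-data pipeline might run before ingesting a page. The regex patterns and the exclude-on-match rule are illustrative assumptions; pattern matching alone catches only a sliver of what the GDPR counts as personal data (names, faces and addresses all qualify too).

```python
# Toy sketch: flag scraped text that appears to contain personal data
# before it enters a training corpus. Patterns and the exclude-on-match
# rule are illustrative; regexes alone miss most categories of personal
# data under the GDPR and also produce false positives.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"(?<!\w)\+?\d[\d\s().-]{7,}\d"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def find_pii(text: str) -> dict[str, list[str]]:
    """Return all matches per category, dropping empty categories."""
    hits = {name: pattern.findall(text) for name, pattern in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

page = "Contact Jane at jane.doe@example.com or +33 1 23 45 67 89."
if matches := find_pii(page):
    # A real pipeline might drop, redact, or route the page for review.
    print(f"Personal data detected, excluding page: {matches}")
```

And even a perfect filter applied after collection would not answer the prior question regulators are asking: whether the collection itself had a valid legal basis.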
There are a number of legal bases set out in the GDPR; however, the possible options for a technology like ChatGPT are limited.

In the Italian DPA’s view, there are just two possibilities: consent or legitimate interests. And since OpenAI did not ask individual web users for their permission before ingesting their data, the company is now relying on a claim of legitimate interests in Italy for the processing; a claim that remains under investigation by the local regulator, the Garante. (Reminder: GDPR penalties can scale up to 4% of global annual turnover, in addition to any corrective orders.)

The pan-EU regulation contains further requirements for entities processing personal data, such as that the processing must be fair and transparent, so there are additional legal challenges for tools like ChatGPT to avoid falling foul of the law.

And, notably, in its action plan France’s CNIL highlights the “fairness and transparency of the data processing underlying the operation of [AI tools]” as a particular question of interest that it says its Artificial Intelligence Service and another internal unit, the CNIL Digital Innovation Laboratory, will prioritize for scrutiny in the coming months.

The CNIL also flags a number of other priority areas for its AI scoping work.

Giving testimony to a US Senate committee yesterday, Altman was questioned by US lawmakers about the company’s approach to protecting privacy, and the OpenAI CEO sought to narrowly frame the topic as referring only to information actively provided by users of the AI chatbot, noting, for example, that ChatGPT lets users specify they don’t want their conversational history used as training data. (A feature it did not offer initially, however.)

Asked what specific steps it’s taken to protect privacy, Altman told the Senate committee: “We don’t train on any data submitted to our API. So if you’re a business customer of ours and submit data, we don’t train on it at all… If you use ChatGPT you can opt out of us training on your data. You can also delete your conversation history or your whole account.”

But he had nothing to say about the data used to train the model in the first place.

Altman’s narrow framing of what privacy means sidestepped the foundational question of the legality of training data. Call it the ‘original privacy sin’ of generative AI, if you will. But it’s clear that eliding this topic is going to get increasingly difficult for OpenAI and its data-scraping ilk as regulators in Europe get on with enforcing the region’s existing privacy laws on powerful AI systems.

In OpenAI’s case, it will continue to be subject to a patchwork of enforcement approaches across Europe, as it does not have an established base in the region, meaning the GDPR’s one-stop-shop mechanism does not apply (as it typically does for Big Tech) and any DPA is competent to regulate if it believes local users’ data is being processed and their rights are at risk. So while Italy went in hard earlier this year with an intervention on ChatGPT that imposed a stop-processing order in parallel to opening an investigation of the tool, France’s watchdog only announced an investigation back in April, in response to complaints. (Spain has also said it’s probing the tech, again without any additional actions as yet.)

In another difference between EU DPAs, the CNIL appears to be concerned about interrogating a wider array of issues than Italy’s preliminary list, including considering how the GDPR’s purpose limitation principle should apply to large language models like ChatGPT. Which suggests it could end up ordering a more expansive array of operational changes if it concludes the GDPR is being breached.

“The CNIL will soon submit to a consultation a guide on the rules applicable to the sharing and re-use of data,” it writes. “This work will include the issue of re-use of freely accessible data on the internet and now used for learning many AI models. This guide will therefore be relevant for some of the data processing necessary for the design of AI systems, including generative AIs.

“It will also continue its work on designing AI systems and building databases for machine learning. These will give rise to several publications starting in the summer of 2023, following the consultation which has already been organised with several actors, in order to provide concrete recommendations, in particular as regards the design of AI systems such as ChatGPT.”

Further topics, the CNIL says, will be “gradually” addressed via future publications and AI guidance it produces.

On audit and control of AI systems, the French regulator stipulates that its actions this year will focus on three areas: compliance with an existing position on the use of ‘enhanced’ video surveillance, which it published in 2022; the use of AI to fight fraud (such as social insurance fraud); and investigating complaints.

It also confirms it has already received complaints about the legal framework for the training and use of generative AIs, and says it’s working on clarifications there.

“The CNIL has, in particular, received several complaints against the company OpenAI which manages the ChatGPT service, and has opened a control procedure,” it adds, noting the existence of a dedicated working group that was recently set up within the European Data Protection Board to try to coordinate how different European authorities approach regulating the AI chatbot (and produce what it bills as a “harmonised analysis of the data processing implemented by the OpenAI tool”).

In further words of warning for AI system makers who never asked people’s permission to use their data, and may be hoping for future forgiveness, the CNIL notes that it will be paying particular attention to whether entities processing personal data to develop, train or use AI systems have complied with their existing legal obligations.

So, er, don’t say you weren’t warned!

As for support for innovative AI players that want to be compliant with European rules (and values), the CNIL has had a regulatory sandbox up and running for a couple of years, and it’s encouraging AI companies and researchers working on developing AI systems that play nice with personal data protection rules to get in touch (via ia@cnil.fr).