Source: https://techcrunch.com/2023/05/11/eu-ai-act-mep-committee-votes/

In a series of votes in the European Parliament this morning, MEPs backed a raft of amendments to the bloc's draft AI legislation, including a set of requirements for so-called foundational models, which underpin generative AI technologies like OpenAI's ChatGPT.

The text of the amendment agreed by MEPs in two committees puts obligations on providers of foundational models to apply safety checks, data governance measures and risk mitigations prior to putting their models on the market, including obligating them to consider "foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law".

The amendment also commits foundational model makers to reduce the energy consumption and resource use of their systems, and to register their systems in an EU database set to be established by the AI Act. Providers of generative AI technologies (such as ChatGPT), meanwhile, are obliged to comply with transparency obligations in the regulation (ensuring users are informed the content was machine generated); apply "adequate safeguards" in relation to content their systems generate; and provide a summary of any copyrighted materials used to train their AIs.

In recent weeks MEPs have been focused on ensuring general purpose AI will not escape regulatory requirements, as we reported earlier.

Other key areas of debate for parliamentarians included biometric surveillance, where MEPs also agreed to changes aimed at beefing up protections for fundamental rights.

The lawmakers are working towards agreeing the parliament's negotiating mandate for the AI Act to unlock the next stage of the EU's co-legislative process.

MEPs in two committees, the Internal Market Committee and the Civil Liberties Committee, voted on some 3,000 amendments today, adopting a draft mandate on the planned artificial intelligence rulebook with 84 votes in favour, 7 against and 12 abstentions.

"In their amendments to the Commission's proposal, MEPs aim to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly. They also want to have a uniform definition for AI designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow," the parliament said in a press release.

Among the key amendments agreed by the committees today is an expansion of the list of prohibited practices, adding bans on "intrusive" and "discriminatory" uses of AI systems such as remote biometric identification in publicly accessible spaces, predictive policing, emotion recognition, and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

The latter, which would outright ban the business model of the controversial US AI company Clearview AI, comes a day after France's data protection watchdog hit the startup with another fine for failing to comply with existing EU laws. So there's no doubt that enforcing such prohibitions against foreign entities that opt to flout the bloc's rules will remain a challenge.
But the first step is to have hard law.

Commenting after the vote in a statement, co-rapporteur and MEP Dragos Tudorache added:

"Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It's the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy and safe. We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement."

A plenary vote in parliament to seal the mandate is expected next month (during the 12-15 June session), after which trilogue talks will kick off with the Council toward agreeing a final compromise on the file.

Back in 2021, when the Commission presented its draft proposal for the AI Act, it suggested the risk-based framework would create a blueprint for "human" and "trustworthy" AI. However, concerns were quickly raised that the plan fell far short of the mark, including in areas related to biometric surveillance, with the Commission only proposing a limited ban on the use of highly intrusive technology like facial recognition in public.

Civil society groups and EU bodies pressed for amendments to bolster protections for fundamental rights, with the European Data Protection Supervisor and European Data Protection Board among those calling for the legislation to go further and urging EU lawmakers to put a total ban on biometric surveillance in public.

MEPs appear to have largely heeded civil society's call, although concerns do remain.
(And of course it remains to be seen how the proposal MEPs have strengthened could get watered back down as Member State governments enter the negotiations in the coming months.)

Other changes parliamentarians agreed in today's committee votes include expansions to the regulation's (fixed) classification of "high-risk" areas, to include harm to people's health, safety, fundamental rights and the environment.

AI systems used to influence voters in political campaigns, and those used in recommender systems by larger social media platforms (with more than 45 million users, aligning with the VLOPs classification in the Digital Services Act), were also put on the high-risk list.

At the same time, though, MEPs backed changes to what counts as high risk, proposing to leave it up to AI developers to decide whether their system is significant enough to meet the bar at which obligations apply, something digital rights groups warn (see below) is "a major red flag" for enforcing the rules.

Elsewhere, MEPs backed amendments aimed at boosting citizens' right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that "significantly" impact their rights.

The lack of meaningful redress for individuals affected by harmful AIs was a major loophole raised by civil society groups in a major call for revisions in fall 2021, which pointed out the glaring difference between the Commission's AI Act proposal and the bloc's General Data Protection Regulation, under which individuals can complain to regulators and pursue other forms of redress.

Another change MEPs agreed on today is a reformed role for a body called the EU AI Office, which they want to monitor how the rulebook is implemented, supplementing decentralized oversight of the regulation at the Member State level.

In a nod to the perennial industry cry that too much regulation is harmful for "innovation", they also added exemptions to the rules for research activities and AI components provided under open-source licenses, while noting that the law promotes regulatory sandboxes, or controlled environments established by public authorities, to test AI before its deployment.

Digital rights group EDRi, which has been urging major revisions to the Commission draft, said everything it had been pushing for was passed by MEPs "in some form or another", flagging in particular the (now) full ban on facial recognition in public, along with (new) bans on predictive policing, emotion recognition and other harmful uses of AI.

Another key win it points to is the inclusion of accountability and transparency obligations on deployers of high-risk AI, imposing on them a duty to carry out a fundamental rights impact assessment, as well as mechanisms by which people affected can challenge AI systems.

"The Parliament is sending a clear message to governments and AI developers with its list of bans, ceding to civil society's demands that some uses of AI are just too harmful to be allowed," Sarah Chander, EDRi senior policy advisor, told TechCrunch.

"This new text is a vast improvement from the Commission's original proposal when it comes to reining in the abuse of sensitive data about our faces, bodies, and identities," added Ella Jakubowska, an EDRi senior policy advisor who has focused on
biometrics.

However, EDRi said there are still areas of concern, pointing to the use of AI for migration control as a big one.

On this, Chander noted that MEPs failed to include in the list of prohibited practices cases where AI is used to facilitate "illegal pushbacks", or to profile people in a discriminatory manner, which is something EDRi had called for. "Unfortunately, the [European Parliament's] support for peoples' rights stops short of protecting migrants from AI harms, including where AI is used to facilitate pushbacks," she said, suggesting: "Without these prohibitions the European Parliament is opening the door for a panopticon at the EU border."

The group said it would also like to see improvements to the proposed ban on predictive policing, to cover location-based predictive policing, which Chander described as "essentially a form of automated racial profiling". It is also worried that the proposed remote biometric identification ban won't cover the full extent of the mass surveillance practices it has seen being used across Europe.

"Whilst the Parliament's approach is very comprehensive [on biometrics], there are a few practices that we would like to see even further restricted. Whilst there is a ban on retrospective public facial recognition, it contains an exception for law enforcement use which we still consider to be too risky. In particular, it could incentivise mass retention of CCTV footage and biometric data, which we would clearly oppose," added Jakubowska, saying the group would also want to see the EU outlaw emotion recognition no matter the context, "as this 'technology' is fundamentally flawed, unscientific, and discriminatory by design".

Another concern EDRi flags is MEPs' proposal to let AI developers judge whether their systems are high risk or not, as this risks undermining enforceability.

"Unfortunately, the Parliament is proposing some very worrying changes relating to what counts as 'high-risk' AI. With the changes in the text, developers will be able to decide if their system is 'significant' enough to be considered high risk, a major red flag for the enforcement of this legislation," Chander suggested.

While today's committee vote is a big step towards setting the parliament's mandate, and setting the tone for the upcoming trilogue talks with the Council, much could still change and there is likely to be some pushback from Member State governments, which tend to be more focused on national security considerations than on fundamental rights.

Asked whether EDRi expects the Council to try to unpick some of the expanded protections against biometric surveillance, Jakubowska said: "We can see from the Council's general approach last year that they want to water down the already insufficient protections in the Commission's original text.
Despite having no credible evidence of effectiveness, and lots of evidence of the harms, we see that many member state governments are keen to retain the ability to conduct biometric mass surveillance.

"They often do this under the pretence of 'national security', such as in the case of the French Olympics and Paralympics, and/or as part of broader trends criminalising migration and other minoritised communities. That being said, we saw what could be considered 'dissenting opinions' from both Austria and Germany, who both favour stronger protections of biometric data in the AI Act. And we've heard rumours that several other countries are willing to make compromises in the direction of the biometrics provisions. This gives us hope that there will be a positive outcome from the trilogues, even though we of course expect a strong push back from several Member States."

Giving another early assessment from civil society, Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties (ICCL), which also joined the 2021 call for major revisions to the AI Act, cautioned over enforcement challenges: while the parliament has strengthened enforceability with amendments that explicitly allow regulators to perform remote inspections, he suggested MEPs are simultaneously tying regulators' hands by denying them access to the source code of AI systems for investigations.

"We are also concerned that we will see a repeat of GDPR-like enforcement problems," he told TechCrunch.

On the plus side, he said MEPs have taken a step towards addressing "the shortcomings" of the Commission's definition of AI systems, notably by bringing generative AI systems in scope and applying transparency obligations to them, which he dubbed "a key step towards addressing their harms".

But, on the issue of copyright and AI training data, Shrishak was critical of the lack of a "firm stand" by MEPs to stop data mining giants from ingesting information for free, including copyright-protected data.

The copyright amendment only requires companies to provide a summary of copyright-protected data used for training, suggesting it will be left up to rights holders to sue.

Asked about possible concerns that exemptions for research activities and AI components provided under open-source licenses might create fresh loopholes for AI giants to escape the rules, he agreed that's a worry.

"Research is a loophole that is carried over from the scope of the regulation. This is likely to be exploited by companies," he suggested. "In the context of AI it is a big loophole considering large parts of the research is taking place in companies. We already see Google saying they are 'experimenting' with Bard. Further to this, I expect some companies to claim that they develop AI components and not AI systems (I already heard this from one large corporation during discussions on general purpose AI. This was one of their arguments for why GPAI [general purpose AI] should not be regulated)."
EU lawmakers back transparency and safety rules for generative AI
2023-05-11 22:19:25