Source: https://techcrunch.com/2023/05/31/ai-code-of-conduct-us-eu-ttc/

The European Union has used a transatlantic trade and technology talking shop to commit to moving fast and producing a draft Code of Conduct for artificial intelligence, working with US counterparts and in the hope that governments in other regions — including Indonesia and India — will want to get involved.

What's planned is a set of standards for applying AI to bridge the gap, ahead of legislation being passed to regulate uses of the tech in respective countries and regions around the world.

Whether AI giants will agree to abide by what will be voluntary (non-legally binding) standards remains to be seen. But the movers and shakers in this space can expect to be encouraged to do so by lawmakers on both sides of the Atlantic — and soon, with the EU calling for the Code to be drafted within weeks. (And, well, given the rising clamour from tech industry CEOs screaming for AI regulation, it would be pretty hypocritical for leaders in the field to turn their noses up at a voluntary Code.)

Speaking at the close of a panel session on generative AI held at the fourth meeting of the US-EU Trade & Tech Council (TTC), which took place in Sweden this week — with the panel hearing from stakeholders including Anthropic CEO Dario Amodei and Microsoft president Brad Smith — the European Union's EVP Margrethe Vestager, who heads up the bloc's competition and digital strategy, signalled it intends to get to work stat.

"We will be very encouraged to take it from here. To produce a draft. To invite global partners to come on board. To cover as many as possible," she said.
"And we will make this a question of absolute urgency to have such an AI Code of Conduct for a voluntary signup."

The TTC was established back in 2021, in the wake of the Trump presidency, as EU and US lawmakers sought to repair trust and find ways to cooperate on tech governance and trade issues.

Vestager described generative AI as a "seismic change" and "categorical shift" that she said demands a regulatory response in real time.

"Now technology is accelerating to a completely different degree than what we've seen before," she said. "So obviously, something needs to be done to get the most of this new technology… We're talking about technology that develops by the month so what we have concluded here at this TTC is that we should take an initiative to get as many other countries on board on an AI Code of Conduct for businesses voluntarily to sign up."

While Vestager couched industry input as "very welcome", she indicated that — from the EU side at least — the intent is for lawmakers to draw up safety provisions and companies to agree to get on board and apply the standards, rather than having companies drive things by suggesting a bare minimum on standards (and/or seeking to reframe AI safety to focus on existential future threats rather than extant harms) and lawmakers swallowing the bait.

"We will not forget that there are other sorts of artificial intelligence, obviously," she went on. "There are a number of things that need to be done. But the thing is that we need to show that democracy is up to speed, because legislative procedures must take their time; that is the nature of legislation. But this is a way for democracies to answer in real time a question that is really, really in our face right now.
And I find this very encouraging to do it and I'm looking very much forward to working with as many as possible, in depth and very fast."

The EU is ahead of the regulatory curve on AI since it already has draft legislation on the table. But the risk-based framework the Commission presented back in April 2021 is still winding its way through the bloc's co-legislative process — looping in lawmakers in the European Council and Parliament (and, for a sense of how live that process is, parliamentarians recently proposed amendments targeted at generative AI) — so there's no immediate prospect of those hard rules applying to generative or any other type of AI.

Even with the most optimistic outlook for the EU adopting the AI Act, Vestager suggested today that it would be two or three years before those hard rules bite. Hence the bloc's urgency for stop-gap measures.

"… a pace like no other"

Also present at the TTC meeting was Gina Raimondo, the US commerce secretary, who indicated the Biden administration's willingness to engage with discussion toward shaping a voluntary AI Code of Conduct — although she kept her cards close to her chest on what kind of standards the US might be comfortable pushing onto what are predominantly US AI giants.

"[AI] is coming at a pace like no other technology," observed Raimondo. "Like other technologies, we are already seeing issues with data privacy, misuse, what happens when the models get into the hands of malign actors, misinformation. Unlike other technology, the rate of the pace of innovation is at a breakneck pace, which is different and a hockey stick that doesn't exist in other technologies.

"In that respect, I think the TTC could play an incredibly relevant role, because it will take a little bit of time for the US Congress or the parliament or other regulatory agencies to catch up. Whereas the risk for some of AI is today.
And so we are committed to making sure that the TTC provides a forum for stakeholder engagement, engagement of the private sector, engagement of our companies, to figure out what can we do in the here and now to mitigate the risks of AI but also not to stifle the innovation. And that is a real challenge."

"As we figure out the benefits of AI, I hope we're all really eyes wide open about the costs and do the analysis of whether we should do it," she also warned. "I think if we all are honest with ourselves about other technologies, including social media, we probably wish we had not done things even though we could've. You know, we could have, but should we have? And so let's work together to get this right, because the stakes are a whole lot higher."

As well as high-level lawmakers, the panel discussion heard from a handful of industry and civil society groups, chipping in with perspectives on the imperative for and/or challenge of regulating such a fast-moving field of technology.

Anthropic's Amodei heaped praise on the transatlantic conversation taking place around AI rule-making — which likely signals relief that the US is actively involving itself in standards-making that might otherwise be exclusively driven by Brussels.

The bulk of his remarks sounded a sceptical note over how to ensure AI systems are truly safe prior to release — implying we don't yet have techniques for achieving reliable guardrails around such shape-shifting tools.
He also suggested there should be a joint commitment from the US and EU to fund the development of "standards and evaluation" for AI — rather than calling for any algorithmic auditing in the here and now.

"When I think about the rate at which this technology is bringing new sources of power into the world, combined with the resurgent threat from autocracies that we're seeing over the last year, it seems to me that it's all the more important that we work together to prevent [AI] harms and defend our shared democratic values. And the TTC seems like a critical forum for doing that," he said early in his allotted time, before going on to predict that developments in AI would continue to come at a steady clip and setting out some of his major concerns — including highlighting "measurement" for AI safety as a challenge.

"What we're going to be able to do in one to four years are things that seem impossible now. This is, I would say, if there's a central fact about the field of AI to know, this is the central fact to know. And though there will be many positive opportunities to come from this, I worry greatly about risks — particularly in the domain of cybersecurity, biology, things like disinformation, where I think there's the potential for great destruction," he said. "In the longer term, I worry even about the risks of truly autonomous systems. That's a little further out.

"On measurement, I think we're very used to — when we think about regulating technologies like automobiles or aeroplanes — measuring safety as a secure field; you have a given set of tests you can run to tell if the system is safe. AI is much more of a wild west than that.
You can ask an AI system to do anything at all in natural language and it can answer in any way it chooses to answer.

"You might try to ask a system 10 different ways whether it can conduct, say, a dangerous cyber attack and find that it won't. But you forgot to ask it an 11th way that would have shown this dangerous behaviour. A phrase I like to use is 'no one knows what an AI system is capable of until it's deployed to a million people'. And of course, this is a bad thing, right? We don't want to deploy these things in this cowboy-ish way. And so this difficulty of detecting dangerous capabilities is a huge impediment to mitigating them."

The contribution looked intended to lobby against any hard testing of AI capabilities being included in the forthcoming Code of Conduct — by seeking to kick the can down the road.

"This difficulty of detecting dangerous capabilities is a huge impediment to mitigating them," he suggested, while conceding that "some kind of standards or evaluation are a crucial prerequisite for effective AI regulation" but also further muddying the water by saying "both sides of the Atlantic have an interest in developing this science".

"The US and EU have a long tradition of collaborating on [standards and evaluation], which we could extend — and then, maybe more radically, a commitment to adopt an eventual set of common standards and evaluations as a sort of raw material for the rules of the road in AI," he added, gazing into the eventual distance.

Microsoft's Smith used his four minutes' speaking time to urge regulators to "move forward the innovation and safety standards together" — also amping up the AI hype by lauding the potential for AI to "do good for the world" and "save people's lives", such as by detecting or curing cancer or enhancing disaster response capabilities, while
conceding that safety needs focus, affirming that "we do need to be clear-eyed about the risks".

He also welcomed the prospect of transatlantic cooperation on AI standards, but pressed for lawmakers to shoot for broader international coordination on things like product development processes — which he suggested would help drive forward both AI safety standards and innovation.

"Certain things benefit enormously from international coordination, especially when it comes to product development processes. We're not going to advance safety or innovation if there are different approaches to, say, how a red team should work in the safety product process for developing a new AI model," he said.

"Other things, there's more room for divergence — and there will be some, because the world, even the countries that share common values, will have some differences. And there's areas around licensing or usage where one can manage with that divergence.
But in short, there's a lot that we will benefit from learning now and then putting into practice."

No one from OpenAI spoke during the TTC panel, but Vestager had a videoconference meeting with CEO Sam Altman in the afternoon.

In a read-out of the meeting, the Commission said the pair shared ideas for the voluntary AI code of conduct that was launched at the TTC — with discussion touching on how to tackle misinformation; transparency issues, including ensuring users are made aware when they communicate with AI; how to ensure verification (red teaming) and external audits; how to ensure monitoring and feedback loops; and the issue of ensuring compliance while avoiding barriers for startups and SMEs.

The Commission added that there was "a strong overall agreement to advance on the voluntary code of conduct as fast as possible and with G7 and other key partners, as a stopgap measure until regulation is in place", adding there would be "a continued engagement on the AI Act as the legislative process progresses".

In a subsequent tweet, Vestager said discussions with OpenAI's Altman and Anthropic's Amodei had featured talk of external audits, watermarking and "feedback loops".

In recent days Altman has ruffled feathers in Brussels with some flat-footed lobbying in which he seemingly threatened to pull his tool out of the region if provisions in the EU's AI Act targeted at generative AI aren't watered down.

He then quickly withdrew the threat after the bloc's internal market commissioner tweeted a public dressing-down at OpenAI, accusing the company of attempting to blackmail lawmakers.
So it will be interesting to see how enthusiastically (or otherwise) Altman engages with the substance of the Code of Conduct for AI.

(For its part, Google has previously indicated it wants to work with the EU on stop-gap AI standards — as part of a so-called "AI Pact", which appears to be a separate EU initiative from the Code of Conduct; per a Commission spokesperson, the AI Pact is focused on getting companies to agree to front-load the implementation of key AI Act provisions on a voluntary basis, whereas the Code aims to promote guardrails for the use of generative AI or "advanced GPAI" (general purpose AI) models on a global level.)

Towards external audits?

While AI giants have been relatively reluctant to focus on current AI risks and how they might be reined in, preferring talk of far-flung fears of non-existent "superintelligent" AIs, the TTC meeting also heard from Dr. Gemma Galdon-Clavell, founder and CEO of Eticas Consulting — a business that runs algorithmic audits for customers to encourage accountability around uses of AI and algorithmic technology — who was eager to school the panel in current-gen accountability techniques.

"I am convinced [algorithmic auditing] is going to be the main tool to understand, quantify and mitigate harms in AI," she said. "We ourselves are hoping to be the first auditing unicorn that puts the tools [on the table] that maximise engineering possibilities while taking into account fundamental rights and societal values."

She described the EU's recently adopted overhaul of ecommerce and marketplace rules, aka the Digital Services Act (DSA), as a pioneering piece of legislation in this regard — on account of the law's push to require transparency from very large online platforms on how their algorithms work — predicting algorithmic audits will become the go-to AI safety tool in the coming years.
EU and US lawmakers move to draft AI Code of Conduct fast
2023-05-31 21:54:01