Source: https://techcrunch.com/2023/06/08/uks-ai-safety-summit-gets-thumbs-up-from-tech-giants/

Make way for another forum on AI safety. The U.K. government has announced it will convene a "global" AI summit this fall with the aim of agreeing "safety measures to evaluate and monitor the most significant risks from AI", as its PR puts it.

There's no word on who will attend as yet, but the government says it wants the discussion to feature "key countries, leading tech companies and researchers".

"The summit, which will be hosted in the U.K. this autumn, will consider the risks of AI, including frontier systems, and discuss how they can be mitigated through internationally coordinated action. It will also provide a platform for countries to work together on further developing a shared approach to mitigate these risks," it adds.

Prime minister Rishi Sunak is in the US today where, per the government line, he will meet with president Biden and press for "joint leadership" of technologies such as AI, among chat on other economically significant issues.

Notably, the press release announcing the U.K.'s ambition to host a global AI summit simultaneously bundles a separate claim, vis-à-vis "global companies expanding their AI work in the U.K.", with the government spotlighting developments such as OpenAI opening a London office last week.

The PR is also dominated by canned quotes from tech giants and AI firms, with the likes of Google DeepMind, Anthropic, Palantir and Faculty lavishing praise on the summit plan via supporting statements from senior execs.

(For a flavor of the industry flattery embedded in the government's PR: DeepMind's Demis Hassabis proclaims Sunak's "Global Summit on AI Safety will play a critical role in bringing together government, industry, academia and civil society"; Anthropic's Dario Amodei commends the PM for "bringing the world together to find answers and have smart conversations"; and Faculty's Marc Warner suggests the U.K. is "perfectly placed" to provide "technological leadership" and "foster international collaboration". So, er, pass the sick bucket…)

The strategy the U.K. appears to be plumping for here is to position itself as the AI industry's BFF (or, well, stooge), in a way that could work against existing international efforts to agree meaningful guardrails for AI if it ends up driving a wedge between the US side and other international players.

The summit announcement comes about two weeks after Sunak met with a number of tech execs helming AI giants, including Anthropic's Amodei, DeepMind's Hassabis and OpenAI's Sam Altman. After that meeting, the government suddenly started squawking about existential AI risk, in a clear parroting of the sci-fi concerns AI giants have been promulgating vis-à-vis non-existent "superintelligent" AI systems, in a bid to frame the debate about AI safety by zeroing in on theoretical future risks, while downplaying discussion of actual harms being caused by AI in the here and now (such as privacy abuse, bias, discrimination and disinformation, copyright infringement and environmental damage, to name a few).

In another sign of the lavish AI industry love-in for U.K. Plc right now, Palantir's CEO Alex Karp was interviewed on AI by BBC Radio 4's Today program this morning, where he made a point of heaping praise on the U.K.'s "pragmatic approach to data protection", as he put it, going on to compare the U.K.'s under-enforcement of privacy rules favorably to the EU's more robust enforcement of the General Data Protection Regulation (which, by contrast, quickly forced ChatGPT to provide users with more information and controls), as well as claiming it will "be much much harder for the continent to come to terms with large language models [than the UK]".

It remains to be seen what the Biden administration will make of Sunak's AI safety summit, or, indeed, whether anyone of significance from the US government will attend. But the fact that AI giants are mostly US-based certainly muddies the AI regulation conversation over the pond.

US lawmakers remain concerned about the burden of AI regulation on industry, and are demonstrably more reluctant to rush in with guardrails than, for example, their counterparts in the European Union.

As a third country to both those sides, the U.K. has a choice to make over where to throw its hat on international AI rules. All the signs are it's aiming to use this topic, and US AI giants, as a strategic lever to ratchet itself into a closer relationship with the US, based on aligning over more dilute AI rules (assuming the US agrees to play this game).

The U.K. is actually a late convert to the discussion on how to regulate AI. Only a few months ago it put out an AI white paper saying it didn't see the need for any new bespoke rules or oversight bodies for AI, preferring to load the responsibility onto existing overworked regulators (without expanding their budgets) by asking them to devise and issue context-specific guidance. The name of that white paper? "A pro-innovation approach to AI regulation."

It's also making this AI summit move at a time when governments, regulators and lawmakers around the world are already responding to rising alarm about the safety risks flowing from fast-developing machine learning technologies by mobilizing a variety of discussion tracks and initiatives with the goal of clinching international agreement on safeguards and safety standards.

The OECD adopted AI principles all the way back in May 2019. The FTC put out AI guidance in April 2021. And the US Department of Commerce's National Telecommunications and Information Administration (NTIA) started consulting on how to boost AI accountability this April. The UN is also looking at AI.

Then there's the G7 leaders' "Hiroshima process", a recent track comprising cabinet-level discussions between G7 countries on AI governance which is due to report by the end of the year. Before that, G7 countries and others launched the Global Partnership on AI, which aims to promote responsible, human-centric development and use of AI technologies by sharing research and fostering international collaboration toward trustworthy AI.

The European Union, meanwhile, presented its own draft legislation for regulating AI over two years ago. The bloc's lawmakers are now busy hammering out agreement on a final text of that framework, including considering how it should tackle generative AI, with political agreement on the EU AI Act sought by the end of this year. (Although the pan-EU law won't be in force for several years after that.)

The EU and US are also working (or at least talking) together on an AI Code of Conduct, which is being conceived as a stop-gap set of voluntary standards until legislation comes in, doing so via a transatlantic talking shop called the US-EU Trade and Technology Council (TTC), a forum the U.K. is not party to, having left the EU following the Brexit referendum.

Last week the EU said it would begin drafting this AI Code of Conduct, saying it hoped to have something on paper within a matter of weeks. It was less clear after the TTC meeting how much buy-in the US side was committed to, but US lawmakers were in the room talking.

Discussing the AI Code in a briefing with journalists last week, EU EVP Margrethe Vestager, who heads up the bloc's digital strategy, underscored how this EU-led initiative could, very quickly, be moulding global AI guardrails, telling journalists: "If we can start drafting with the Americans, the rest of G7, invited guests and have industry sign up for it — of course also for us with some third party validation — then we could cover one-third of global population within a very, very short timespan. And that may be a good thing."

So the bloc is clearly working at pace to seize the opportunity to apply the "Brussels effect" to first-order global AI rules.
UK's AI safety summit gets thumbs up from tech giants
2023-06-08 22:08:53