UK to avoid fixed rules for AI – in favor of ‘context-specific guidance’

2023-03-29 22:08:00

Source: https://techcrunch.com/2023/03/29/uk-ai-white-paper/

The U.K. isn’t going to be setting hard rules for AI any time soon.

Today, the Department for Science, Innovation and Technology (DSIT) published a white paper setting out the government’s preference for a light-touch approach to regulating artificial intelligence. It’s kicking off a public consultation process – seeking feedback on its plans up to June 21 – but appears set on paving a smooth road of ‘flexible principles’ that AI can speed through.

Worries about the risks of increasingly powerful AI technologies are very much treated as a secondary consideration, relegated far behind a political agenda to talk up the vast potential of high tech growth – and thus, if problems arise, the government is suggesting the U.K.’s existing (overstretched) regulators will have to deal with them, on a case-by-case basis, armed only with existing powers (and resources). So, er, lol!

The 91-page white paper, which is entitled “A pro-innovation approach to AI regulation”, talks about taking “a common-sense, outcomes-oriented approach” to regulating automation – by applying what the government frames as a “proportionate and pro-innovation regulatory framework”.

In a press release accompanying the white paper’s publication – with a clear eye on generating newspaper headlines that frame a narrative of ministers seeking to “turbocharge growth” – the government confirms there will be no dedicated watchdog for artificial intelligence, merely a set of “principles” for existing regulators to work with; so no new legislation, rather a claim of “adaptable” (but not legally binding) regulation.

DSIT says legislation “could” be introduced – at some unspecified future period, and when parliamentary time allows – “to ensure regulators consider the principles consistently”. So, yep, that’s the sound of a can being kicked down the road. But expect to see guidance emerging from a number of existing U.K. regulators over the next 12 months – along with some tools and “risk assessment templates” which AI makers may be encouraged to play around with (if they like).

There will also be the inexorable sandbox (funded with £2M from the public purse) – or at least a “sandbox trial to help businesses test AI rules before getting to market”, per DSIT. But evidently there won’t be a hard legal requirement to actually use it.

The government says its approach to AI will focus on “regulating the use, not the technology” – ergo, there won’t be any rules or risk levels assigned to entire sectors or technologies.
Which is quite the contrast with the European Union’s direction of travel: a risk-based framework that includes some up-front prohibitions on certain uses of AI, with defined regimes for use cases specified as high risk and self-regulation for lower-risk uses.

“Instead, we will regulate based on the outcomes AI is likely to generate in particular applications,” the government stipulates, arguing – for example, and somewhat boldly in its choice of example here – that classifying all applications of AI in critical infrastructure as high risk “would not be proportionate or effective” because there might be some uses of AI in critical infrastructure that can be “relatively low risk”.

Because ministers have opted for what the white paper calls “context-specificity”, they decided against setting up a dedicated regulator for AI – hence the responsibility falls on existing bodies with expertise across various sectors.

“To best achieve this context-specificity we will empower existing UK regulators to apply the cross-cutting principles,” it writes on this. “Regulators are best placed to conduct detailed risk analysis and enforcement activities within their areas of expertise. Creating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators.”

Under the plan, existing regulators will be expected to apply a set of five principles – setting out “key elements of responsible AI design, development and use” – that the government wants/hopes will guide businesses as they develop artificial intelligence.

“Regulators will lead the implementation of the framework, for example by issuing guidance on best practice for adherence to these principles,” it suggests, adding that they will be expected to apply the principles “proportionately” to address the risks posed by AI “within their remits, in accordance with existing laws and regulations” – arguing this will enable the principles to “complement existing regulation, increase clarity, and reduce friction for businesses operating across regulatory remits”.

It says it expects relevant regulators to need to issue “practical guidance” on the principles, or to update existing guidance – in order to “provide clarity to business” in what may otherwise be a vacuum of ongoing legal uncertainty. It also suggests regulators may need to publish joint guidance focused on AI use cases that cross multiple regulatory remits.
So more work, and more joint working, is coming down the pipe for U.K. oversight bodies.

“Regulators may also use alternative measures and introduce other tools or resources, in addition to issuing guidance, within their existing remits and powers to implement the principles,” it goes on, adding that it will “monitor the overall effectiveness of the principles and the wider impact of the framework” – stipulating that: “This will include working with regulators to understand how the principles are being applied and whether the framework is adequately supporting innovation.”

So it’s seemingly leaving the door open to rowing back on certain principles if they’re considered too arduous by business.

“We recognise that particular AI technologies, foundation models for example, can be applied in many different ways and this means the risks can vary hugely. For example, using a chatbot to produce a summary of a long article presents very different risks to using the same technology to provide medical advice. We understand the need to monitor these developments in partnership with innovators while also avoiding placing unnecessary regulatory burdens on those deploying AI,” writes Michelle Donelan, the secretary of state for science, innovation and technology, in the white paper’s executive summary, where the government sets out its “pro-innovation” stall.

“To ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI. This will mean supporting innovation and working closely with business, but also stepping in to address risks when necessary. By underpinning the framework with a set of principles, we will drive consistency across regulators while also providing them with the flexibility needed.”

The existing regulatory bodies the government is intending to saddle with more tasks – drafting “tailored, context-specific approaches” which AI model makers can also only take on advisement (i.e. ignore) – include the Health and Safety Executive; the Equality and Human Rights Commission; and the Competition and Markets Authority (CMA), per DSIT.

The PR doesn’t mention the Information Commissioner’s Office (ICO), aka the data protection regulator, but it gets several references in the white paper and looks set to be another body pressganged into producing AI guidance (usefully enough, the ICO has already offered some thoughts on AI snake oil).

One quick aside here: The CMA is still waiting for the government to empower a dedicated Digital Markets Unit (DMU) that was supposed to be reining in the market power of Big Tech, i.e. by passing the necessary legislation. But, last year, ministers opted to kick that can into the long grass – so the DMU has still not been put on a statutory footing almost two years after it soft launched in expectation of parliamentary time being found to empower it… So it’s becoming abundantly clear this government is a lot more fond of drafting press releases than smart digital regulation.
The upshot is the U.K. has been left trailing the whole of the EU on the salient area of digital competition (the bloc has the Digital Markets Act coming into application in a few months) – while Germany updated its national competition regime with an ex ante digital regime at the start of 2021 and has a bunch of pro-competition enforcements under its belt already.

Now – by design – U.K. ministers intend the country to trail peers on AI regulation, too; framing this as a choice to “avoid heavy-handed legislation which could stifle innovation”, as DSIT puts it, in favor of a mass of sectoral regulatory guidance that businesses can choose whether to follow – literally in the same breath as penning the line that: “Currently, organisations can be held back from using AI to its full potential because a patchwork of legal regimes causes confusion and financial and administrative burdens for businesses trying to comply with rules.” So, um… legal certainty good or bad – which is it?!

In short, this looks like a very British (post-Brexit) mess.

Across the English Channel, meanwhile, EU lawmakers are in the latter stages of negotiations over setting a risk-based framework for regulating AI – a draft law the European Commission presented way back in 2021; now with MEPs pushing for amendments to ensure the final text covers general purpose AIs like OpenAI’s ChatGPT. The EU also has a proposal for updating the bloc’s liability rules for software and AI on the table too.

In the face of the EU’s carefully structured risk-based framework, U.K. lawmakers are left trumpeting voluntary risk assessment templates and a toy sandbox – and calling this ‘DIY’ approach to generating trustworthy AI a ‘Brexit bonus’. Ouch.

‘Flexible principles’

The five principles the government wants to guide the use of AI – or, specifically, that existing regulators “should consider to best facilitate the safe and innovative use of AI in the industries they monitor” – are:

- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress

All of which sound like fine words indeed. But without a legal framework to turn “principles” into hard rules – and ensure consistent application and enforcement atop entities that choose not to bother with any of that expensive safety stuff – it looks about as useful as whistling the Lord’s Prayer and hoping for the best if it’s trustworthy AI you’re looking for…