Source: https://techcrunch.com/2023/05/04/cma-generative-ai-review/

Well that was fast. The U.K.'s competition watchdog has announced an initial review of "AI foundational models", such as the large language models (LLMs) which underpin OpenAI's ChatGPT and Microsoft's New Bing. Generative AI models which power AI art platforms such as OpenAI's DALL-E or Midjourney will also likely fall in scope.

The Competition and Markets Authority (CMA) said its review will look at competition and consumer protection considerations in the development and use of AI foundational models, with the aim of understanding "how foundation models are developing and producing an assessment of the conditions and principles that will best guide the development of foundation models and their use in the future".

It's proposing to publish the review in "early September", with a deadline of June 2 for interested stakeholders to submit responses to inform its work.

"Foundation models, which include large language models and generative artificial intelligence (AI), that have emerged over the past five years, have the potential to transform much of what people and businesses do.
To ensure that innovation in AI continues in a way that benefits consumers, businesses and the UK economy, the government has asked regulators, including the [CMA], to think about how the innovative development and deployment of AI can be supported against five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress," the CMA wrote in a press release.

The Center for Research on Foundation Models, part of Stanford University's Human-Centered Artificial Intelligence institute, is credited with coining the term "foundation models" back in 2021, referring to AI systems built by training one model on a huge amount of data and adapting it to many applications.

"The development of AI touches upon a number of important issues, including safety, security, copyright, privacy, and human rights, as well as the ways markets work. Many of these issues are being considered by government or other regulators, so this initial review will focus on the questions the CMA is best placed to address: what are the likely implications of the development of AI foundation models for competition and consumer protection?" the CMA added.

In a statement, its CEO, Sarah Cardell, also said:

AI has burst into the public consciousness over the past few months but has been on our radar for some time. It's a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth.

It's crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information.
Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection.

The U.K. competition regulator also set out specific aims for its initial review of AI foundational models.

While it may seem early for the antitrust regulator to conduct a review of such a fast-moving, emerging technology, the CMA is acting on government instruction.

An AI white paper published in March signalled ministers' preference to avoid setting any bespoke rules (or oversight bodies) to govern uses of artificial intelligence at this stage. However, ministers said existing U.K. regulators, including the CMA, which was directly name-checked, would be expected to issue guidance to encourage safe, fair and accountable uses of AI.

The CMA says its initial review of foundational AI models is in line with instructions in the white paper, where the government talked about existing regulators conducting "detailed risk analysis" in order to be in a position to carry out potential enforcements, i.e.
on dangerous, unfair and unaccountable applications of AI, using their existing powers.

The regulator also points to its core mission, supporting open, competitive markets, as another reason for taking a look at generative AI now.

Notably, the competition watchdog is set to get additional powers to regulate Big Tech in the coming years, under plans taken off the back-burner by prime minister Rishi Sunak's government last month, when ministers said they would move forward with a long-trailed (but much delayed) ex ante reform aimed at digital giants' market power.

The expectation is that the CMA's Digital Markets Unit, up and running since 2021 in shadow form, will (finally) gain legislative powers in the coming years to apply proactive "pro-competition" rules tailored to platforms deemed to have "strategic market status" (SMS). So we can speculate that providers of powerful foundational AI models may, down the line, be judged to have SMS, meaning they could expect to face bespoke rules on how they must operate vis-à-vis rivals and consumers in the U.K. market.

The U.K.'s data protection watchdog, the ICO, also has its eye on generative AI. It's another existing oversight body which the government has tasked with paying special mind to AI, under its plan for context-specific guidance to steer development of the tech through the application of existing laws.

In a blog post last month, Stephen Almond, the ICO's executive director of regulatory risk, offered some tips and a little warning for developers of generative AI when it comes to compliance with U.K. data protection rules. "Organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach," he suggested.
"This isn't optional: if you're processing personal data, it's the law."

Over the English Channel in the European Union, meanwhile, lawmakers are in the process of deciding a fixed set of rules that are likely to apply to generative AI.

Negotiations toward a final text for the EU's incoming AI rulebook are ongoing, but currently there's a focus on how to regulate foundational models via amendments to the risk-based framework for regulating uses of AI which the bloc published in draft over two years ago.

It remains to be seen where the EU's co-legislators will end up on what's sometimes also referred to as general purpose AI. But, as we reported recently, parliamentarians are pushing for a layered approach that tackles safety issues with foundational models, addresses the complexity of responsibilities across AI supply chains, and deals with specific content concerns (like copyright) associated with generative AI.

Add to that, EU data protection law already applies to AI, of course. And privacy-focused investigations of models like ChatGPT are underway in the bloc, including in Italy, where an intervention by the local watchdog led to OpenAI rushing out a series of privacy disclosures and controls last month.

The European Data Protection Board also recently set up a task force to support coordination between different data protection authorities on investigations of the AI chatbot. Others investigating ChatGPT include Spain's privacy watchdog.
UK's antitrust watchdog announces initial review of generative AI
2023-05-04 22:21:47