UK Government publishes White Paper on AI regulation


The UK government has published a white paper detailing how it plans to regulate artificial intelligence.

The paper rules out creating a dedicated AI regulator in favour of a “pro-innovation” approach that aims to build public trust while making it easier for businesses to innovate around the technology.

The government notes that AI, which it describes as a “technology of tomorrow”, contributed £3.7bn ($5.6bn) to the UK economy last year, and AI advocates note that it is already delivering many commercial, economic and social benefits.

Critics, however, fear that the rapid growth of AI could threaten jobs or be used for malicious purposes. Some 1,100 signatories, including Twitter and Tesla’s Elon Musk, Apple co-founder Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have put their names to an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter, published today by the nonprofit Future of Life Institute, adds that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”

The government’s new paper certainly doesn’t answer all of the many questions surrounding the development and regulation of AI, nor does it create a dedicated regulator to monitor and police the fast-moving technology, as some in the industry had hoped.

Instead of creating a new regulator, it says existing regulators, including the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority, should come up with their own approaches that suit the way AI is actually being used in their sectors.

The white paper outlines five principles that the regulators should consider to enable the safe and innovative use of AI in the industries they monitor:

  • Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
  • Transparency and “explainability”: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI
  • Fairness: AI should be used in a way which complies with the UK’s existing laws, for example on equalities or data protection, and must not discriminate against individuals or create unfair commercial outcomes
  • Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
  • Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

Michelle Donelan, Science, Innovation and Technology Secretary, said: “Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”

It’s an approach that has already won some favour in the tech sector. Newcastle-based tech giant Sage said in a statement following the white paper’s release this morning: “Accessibility and affordability remain key concerns for many SMBs, and complex or unclear regulatory standards could hinder adoption rates.”

That support was tempered, however, by an acceptance of the need for clear guidance. The statement went on: “Sage emphasises the importance of a consistent and accessible regulatory framework to ensure that SMBs are not left behind in the AI race. Clear and accessible regulation is crucial for ensuring that progress and potential economic growth are not hindered.”

Michael Birtwistle, associate director at the Ada Lovelace Institute, which carries out independent data and AI research, added that the regulations, in their current form, could lack weight. He told the BBC: “Initially, the proposals in the white paper will lack any statutory footing. This means no new legal obligations on regulators, developers or users of AI systems, with the prospect of only a minimal duty on regulators in future. The UK will also struggle to effectively regulate different uses of AI across sectors without substantial investment in its existing regulators.”

Microsoft, Google, Salesforce and others have all recently announced plans to fully integrate large language model-based AI tools into high-profile software such as Microsoft 365 and browsers.
