The Science, Innovation and Technology Committee has listed 12 challenges for the government “if public safety and confidence in AI are to be secured.”
The 12 are:
- The Bias Challenge: AI can introduce or perpetuate biases that society finds unacceptable.
- The Privacy Challenge: AI can allow individuals to be identified and personal information about them to be used in ways beyond what the public wants.
- The Misrepresentation Challenge: AI can allow the generation of material that deliberately misrepresents someone’s behaviour, opinions or character.
- The Access to Data Challenge: The most powerful AI needs very large datasets, which are held by few organisations.
- The Access to Compute Challenge: The development of powerful AI requires significant compute power, access to which is limited to a few organisations.
- The Black Box Challenge: Some AI models and tools cannot explain why they produce a particular result, which is a challenge to transparency requirements.
- The Open-Source Challenge: Requiring code to be openly available may promote transparency and innovation; allowing it to be proprietary may concentrate market power but allow more dependable regulation of harms.
- The Intellectual Property and Copyright Challenge: Some AI models and tools make use of other people’s content: policy must establish the rights of the originators of this content, and these rights must be enforced.
- The Liability Challenge: If AI models and tools are used by third parties to do harm, policy must establish whether developers or providers of the technology bear any liability for harms done.
- The Employment Challenge: AI will disrupt the jobs that people do and that are available to be done. Policy makers must anticipate and manage the disruption.
- The International Coordination Challenge: AI is a global technology, and the development of governance frameworks to regulate its uses must be an international undertaking.
- The Existential Challenge: Some people think that AI is a major threat to human life. If that is a possibility, governance needs to provide protections for national security.
The committee, which is chaired by Greg Clark MP, exists to ensure that policies and decision making across government departments are based on solid scientific evidence and advice.
Today’s 46-page interim report follows the government’s proposed “pro-innovation approach to AI regulation” white paper.
“The AI white paper should be welcomed as an initial effort to engage with this complex task, but its proposed approach is already risking falling behind the pace of development of AI. This threat is made more acute by the efforts of other jurisdictions, principally the European Union and United States, to set international standards,” reads the report.
“Our view is that a tightly-focussed AI Bill in the next King’s Speech would help, not hinder, the Prime Minister’s ambition to position the UK as an AI governance leader. Without a serious, rapid and effective effort to establish the right governance frameworks—and to ensure a leading role in international initiatives—other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer.”
It added that the 12 challenges it highlights should “form the basis for discussion, with a view to advancing a shared international understanding of the challenges of AI—as well as its opportunities.”
“[…] A forum should also be established for like-minded countries who share liberal, democratic values, to ensure mutual protection against those actors—state and otherwise—who are enemies of these values and would use AI to achieve their ends.”
Since the inquiry began in October last year, the committee has received more than 100 written submissions and heard from 24 experts across numerous sectors including health, tech and media.
Among its findings was that, because datasets are compiled by humans, they contain “inherent bias”, which risks encoding bias into AI models.
On the Misrepresentation Challenge, the report focused on “fake news.”
“The use of image and voice recordings of individuals can lead to highly plausible material being generated which can purport to show an individual saying things that have no basis in fact.
“This material can be used to damage people’s reputations, and—in election campaigns—poses a significant threat to the conduct of democratic contests. Dr Steve Rolf, a researcher at the University of Sussex Business School, highlighted the potential for such material to impact “… democratic processes—for example, algorithmic recommendations on social media platforms that discourage wavering voters from turning out, thus tipping the balance in an election.”
It also looked at how faked content could lead to fraud.
Another key challenge for the media sector is Intellectual Property and Copyright, with Jamie Njoku-Goodwin, CEO of UK Music, explaining that the industry operated on the “basic principle that, if you are using someone else’s work, you need permission for that and must observe and respect the copyright”, but that new tools allowed this to be circumvented.
Ongoing legal cases are likely to set precedents in this area.
The Intellectual Property Office has already begun to develop a voluntary code of practice on copyright and AI, in consultation with the technology, creative and research sectors.
The Government has also said that if agreement is not reached or the code not adopted, it may introduce legislation.
The interim conclusion points out that “AI cannot be un-invented. It has and will continue to change the way we live our lives. Humans must take measures to safely harness the benefits of the technology and encourage future innovations, whilst providing credible protection against harm.”
It added that while some experts had called for the development of certain types of AI to be paused, the committee was “unconvinced that such a pause is deliverable.”
“When AI leaders say that new regulation is essential, their calls cannot responsibly be ignored—although it should also be remembered that it is not unknown for those who have secured an advantageous position to seek to defend it against market insurgents through regulation.”
The report ends:
“We urge the Government to accelerate, not to pause, the establishment of a governance regime for AI, including whatever statutory measures as may be needed.”