AI Governance: Science, Innovation and Technology Committee publishes final report


The Science, Innovation and Technology Committee published its final report into the governance of Artificial Intelligence, concluding that “we must fundamentally change the way we think about artificial intelligence”. The report – the last of the 2019-24 Parliament – offers advice to the next government on how to shape its response to AI, drawing upon the ‘Twelve Challenges of AI Governance’ set out in the Committee’s interim report last year.

The Report notes the UK’s current approach to the regulation of AI as set out in the AI White Paper (on which we commented here). Part of that approach is the development of a framework that avoids “unnecessary blanket rules that apply to all AI technologies, regardless of how they are used”. However, the Committee stresses that the next government should not rule out introducing new AI-specific legislation should it be necessary, and that it should keep its approach to AI regulation under constant review, drawing up criteria which would trigger a decision to legislate.

In the absence of an extensive legislative regime, much of the current work of regulating AI is left to existing regulators, which are expected to implement the five high-level principles identified in the White Paper in their respective sectors. According to the Committee’s Report, three obstacles stand in the way of this approach being effective:

  1. First, the next government needs to analyse the extent to which there is a ‘regulatory gap’, such that existing regulators do not have the powers or remit to respond to the challenges of AI.

  2. Second, the Committee warns of a risk of regulatory overlap and the blurring of responsibilities. In addition to conducting a regulatory gap analysis, it recommends that the next government identifies where new AI models will “necessitate closer regulatory co-operation” and that it puts forward suggestions for delivering this co-ordination.

  3. Third, and perhaps most importantly, the Committee points to the challenge of capacity: it simply does not think that the regulators have the resources to do their respective jobs effectively. It points out, for example, that the £10 million in funding recently announced to “jumpstart regulators’ AI capabilities” would mean that, if split evenly between the 14 relevant regulators, each would receive an amount “equivalent to approximately 0.0085% of the reported annual UK turnover of Microsoft in the year to June 2023”. In the Committee’s view, this is “clearly insufficient to meet the challenge” of AI, and it calls on the next government to announce further financial support, even suggesting that it should consider “the benefits of a one-off or recurring industry levy that would allow regulators to supplement or replace support from the Exchequer for their AI-related activities”.

The Report moves on to revisit the ‘Twelve Challenges of AI Governance’ previously identified by the Committee, and urges the next government to adopt an approach to AI that is consistent with them. Each is set out below, along with the Committee’s corresponding recommendations:

  1. The Bias Challenge. The Report states that “developers and deployers of AI models must not merely acknowledge the presence of inherent bias in data sets, they must take steps to mitigate them”. It recommends that the next government requires deployers of AI models both to submit them to “robust, independent testing and performance analysis prior to deployment” and to summarise what steps they have taken to account for bias in datasets.


  2. The Privacy Challenge. According to the Committee, “privacy and data protection frameworks must account for the increasing capability and prevalence of AI models and tools, and ensure the right balance is struck”. To achieve this, it recommends that sectoral regulators publish detailed guidance to help deployers of AI strike the right balance, and that regulators impose appropriate sanctions where relevant laws or regulatory requirements are not followed.


  3. The Misrepresentation Challenge. The Committee is clear that “those who use AI to misrepresent others, or allow such misrepresentation to take place unchallenged, must be held accountable”. It states that the new government should bring forward similar provisions to those set out in the Criminal Justice Bill, which outlawed sexually explicit deepfakes but did not pass into law before the dissolution of Parliament. It also recommends the launching of a cross-government public awareness campaign “to inform the public about the growing prevalence of AI-assisted misrepresentation, the potential consequences, what the Government is doing to address the Challenge, and what steps individuals can take to protect themselves online”.


  4. The Access to Data Challenge. The Committee warns that AI development is concentrated among a small group of developers, which could lead to dominance of the market. As a result, it recommends that the Competition and Markets Authority take appropriate steps to combat abuse, including imposing fines or requiring the restructuring of proposed mergers. It also proposes that the next government “support[s] the emergence of more AI startups in the UK by ensuring they can access high-quality datasets they need to innovate”.


  5. The Access to Compute Challenge. According to the Committee, “democratising and widening access to compute is a prerequisite for a healthy, competitive and innovative AI industry and research ecosystem”. It welcomes plans to establish a dedicated AI Research Resource and a new cluster of supercomputers, but calls for a detailed plan to be published as to how researchers and startups will be able to use them to access necessary compute.


  6. The Black Box Challenge. The Committee states that “we should accept that the workings of some AI models are and will remain unexplainable and focus instead on interrogating and verifying their outputs”.


  7. The Open-Source Challenge. The Report recognises that “the open-source approach has underpinned many technological breakthroughs” but that a healthy AI marketplace should be sufficiently diverse to support both ‘open’ and ‘closed’ AI systems. It also points to the risks associated with open-source AI tools, particularly in the creation and dissemination of harmful and illegal content, and recommends that the next government sets out “how it will ensure law enforcement and regulators are adequately resourced to respond to such matters”.


  8. The Intellectual Property and Copyright Challenge. The Committee previously expressed concern about the scraping of copyrighted content without permission to train AI models. In this report, it reiterates that “the status quo allows developers to potentially benefit from the unlimited, free use of copyrighted material, whilst negotiations [between AI developers and the creative industries] are stalled”. In the absence of a voluntary approach, the Committee calls on the next government to enforce an agreement which includes “a financial settlement for past infringements by AI developers, the negotiation of a licensing framework to govern future uses, and in all likelihood the establishment of a new authority to operationalise the agreement”.


  9. The Liability Challenge. The Committee states that “[the] next government together with sectoral regulators should publish guidance on where liability for harmful uses of AI falls under existing law. This should be a cross-Government undertaking. Sectoral regulators should ensure that guidance on liability for AI-related harms is made available to developers and deployers as and when it is required. Future administrations and regulators should also, where appropriate, establish liability via statute rather than simply relying on jurisprudence”.


  10. The Employment Challenge. Disruption to the labour market caused by AI is a frequently raised source of concern. The Committee recommends that the next government commissions a review into the consequences of AI on future skills and employment and sets out “how it will ensure workers whose jobs are at risk of automation will be able to retrain and acquire the skills necessary to change careers”.


  11. The International Coordination Challenge. The Committee comments upon the divergence in approaches from various countries to regulating AI, but notes that “we do not believe that harmonisation for harmonisation’s sake should be the end goal of international AI governance discussions”. Instead, it recommends that future AI Safety Summits focus on mechanisms to allow the sharing of best practice between different countries.


  12. The Existential Challenge. Finally, the Committee concludes that previous assessments that “existential risks are high impact but low probability” appear to be accurate. However, such risks should continue to be monitored by the UK’s national security apparatus, supported by the AI Safety Institute.

To read the report in full, click here.