Governance of AI: Government responds to Science, Innovation and Technology Committee’s Report

The Government has published its response to the Science, Innovation and Technology Committee’s Report on the Governance of AI.

A theme that ran throughout the Committee’s report (which we previously discussed here) was the need for any potential regulatory framework to recognise both the benefits and challenges posed by AI, and for the Government to “stand ready to introduce new AI-specific legislation”. It also drew attention to what it had previously identified as the ‘Twelve Challenges of AI Governance’.

In its response to the report, the Government states that it welcomes the Committee’s findings on the need to legislate to ensure the safety of AI, and commits to consulting shortly on “highly targeted” legislation that would “establish binding requirements on the handful of companies developing the most powerful AI systems”.

The current regulatory landscape was a subject of particular interest for the Committee, which expressed concern about the capacity of existing regulators to adequately oversee the development and deployment of AI. Not only did it point out the possibility of overlaps that would require coordination between regulators, but it also identified a ‘regulatory gap’ in which certain matters risked falling between the cracks of the current regulatory environment. In response, the Government reiterated its commitment to a ‘pro-innovation’ approach to the regulation of AI, and expressed confidence that “our existing expert regulators are best placed to apply rules to the use of AI in the contexts they know better than anyone else”. The Government also pointed to the new Regulatory Innovation Office, which is intended, among other things, to support regulators in updating regulation, to speed up approvals, and to encourage different regulatory bodies to work together.

On the question of safety, the Response confirms that legislation will be introduced to put the AI Safety Institute (“AISI”) on a statutory footing, thereby “strengthen[ing] its role leading voluntary collaboration with AI developers and leading international coordination of AI safety”. Work also continues on securing access agreements with frontier AI developers for both pre- and post-deployment testing, and on developing the AISI’s partnership with its US counterpart.

The Response also touches briefly on a number of subjects discussed by the Committee, including Ofcom’s work to address AI-generated mis- and disinformation, the CMA’s role in ensuring that the AI ecosystem remains competitive through its digital markets regime, and the action taken by various bodies to consider the impact that AI could have on the labour market.

To read the Response in full, click here.