Advanced AI Assistants: Ada Lovelace Institute warns existing laws fail to protect against risks

The Ada Lovelace Institute has published a report examining the extent to which existing laws and regulation are equipped to tackle the challenges posed by advanced AI assistants.

The report makes for sobering reading, concluding that “the Law of England and Wales provides insufficient protection against harms that are likely to result from the use of Advanced AI Assistants. Without urgent policymaker and regulator action, the UK public will remain exposed to these harms, which will become progressively more impactful and harder to manage as Assistants become heavily integrated into our lives, digital infrastructure and society”.

If there is any silver lining, it is that the technology under examination – defined as “AI apps or integrations, powered by foundation models, that are able to engage in fluid, natural-language conversation; can show high degrees of user personalisation; and are designed to adopt particular human-like roles in relation to their users” – remains relatively nascent, meaning there is still time to act.

The consequences of failing to do so are stark. The report warns that if adoption of AI assistants is not carefully managed in the public interest, the technology could, among other things, distort markets, threaten privacy and security, and call into question the standards of quality, protection and liability governing professionals.

To illustrate the risks of this technology and the shortcomings of existing legal frameworks, the report considers a series of scenarios involving AI assistants, exploring how – if at all – everything from the GDPR and consumer protection regulation to contract and human rights law would apply. Ultimately, it concludes that relying on a patchwork of laws that were not designed to address this technology will likely lead to matters falling between the cracks and, in turn, inadequate protection for individuals. As the report states, “the scenarios explored in this legal analysis present a sombre picture…where some legal coverage does exist, it is an awkward fit with real-world protection, clouded by uncertainty”.

Given the dire analysis, the obvious question is what can be done. That is next on the agenda for the Ada Lovelace Institute: it plans to work with others to develop a set of concrete recommendations over the next few months, and has invited those working in this area to engage with the process.
