Insights Generative AI and chatbots: Ofcom confirms position under the Online Safety Act

Ofcom has published an open letter to online service providers operating in the UK about how the Online Safety Act will apply to Generative AI and chatbots.

According to Ofcom, the letter was prompted by recent incidents that demonstrated the potential dangers associated with the misuse of Generative AI systems, including a platform creating ‘virtual clones’ of dead children (including Molly Russell and Brianna Ghey), and a 14-year-old boy in the United States taking his own life after developing a relationship with a chatbot based on a character in Game of Thrones.

The letter leaves no room for doubt that Generative AI systems and chatbot tools are within the scope of the Online Safety Act, and sets out how the Act might apply in various circumstances:

  • Sites or apps that allow their users to interact with each other by sharing images, videos, messages, comments or data are ‘user-to-user services’ for the purposes of the Act. This includes sites or apps that feature a Generative AI chatbot;
  • If a site or app allows users to upload or create their own Generative AI chatbot which is made available to other users, that too is a user-to-user service, and any text, images or videos created by such user chatbots are ‘user-generated content’ under the terms of the Act;
  • More generally, any AI-generated text, audio, images or videos that are shared by users on a user-to-user service are user-generated and regulated in the same way as human-generated content;
  • Generative AI tools that enable users to search more than one website and/or database (for example Generative AI tools that “modify, augment or facilitate the delivery of search results on an existing search engine”) are ‘search services’ for the purposes of the Act;
  • Sites and apps that include Generative AI tools to generate pornographic material are regulated under the Act and are required to use “highly effective age assurance to ensure children cannot normally access pornographic material”.

For companies employing Generative AI tools and chatbots, the letter points to a series of measures in Ofcom’s draft Codes of Practice which will help them comply with their obligations under the Act, including: (a) having a content moderation function that allows for the swift takedown of illegal posts where identified, and for children to be protected from material that is harmful to them; (b) using highly effective age assurance to prevent children from encountering the most harmful types of content where this is allowed on the platform; and (c) having easy-to-access and easy-to-use reporting and complaints processes.

Ofcom also makes clear that companies that fall under the Act need to take appropriate action to comply with their regulatory duties sooner rather than later, and that it will take enforcement action if necessary should companies breach their duties.

The letter concludes by drawing attention to a number of milestones on the horizon for companies that are regulated under the Act, including the publication of Ofcom’s final Illegal Harms Risk Assessment Guidance and Code of Practice next month, and the need for companies to complete Illegal Harms Risk Assessments by mid-March 2025.

To read the letter in full, click here.