Age Assurance: Ofcom publishes guidance


Ofcom has issued guidance (“the Guidance”) on age assurance mechanisms for user-to-user and search services that fall under the Online Safety Act 2023 (“OSA”), as those services prepare to carry out their Children’s Access Assessments (on which separate guidance has been published and which we discuss here).

Under the OSA, regulated services must carry out a Children’s Access Assessment to determine (a) whether it is possible for children to access the service or a part of it, and (b) if so, whether a significant number of children are users of the service, or whether the service is likely to attract a significant number of users who are children.

In order for a regulated provider to conclude that it is not possible for children to access its service, it must have so-called “highly effective age assurance” methods and processes in place. The Guidance sets out Ofcom’s position as to what it considers are methods and processes that are, in theory, capable of meeting this test, in addition to the four criteria that it applies in determining whether a particular age assurance process or method is, in practice, ‘highly effective’.

Examples of methods and processes that Ofcom endorses as being capable of being highly effective include:

  • Accessing information that a bank has on record regarding a user’s age (with the user’s consent);
  • Matching the photograph on an uploaded photo-ID document against an image of the user;
  • Using ‘facial age estimation’ software;
  • Checking whether the user’s mobile-network operator has removed content restriction filters (which can only occur if the user proves that they are over 18);
  • Requiring users to input credit card details and receiving subsequent confirmation from the issuing bank that their card is valid (and therefore that the user is over 18);
  • Using solutions that estimate the age of a user by analysing the other online services where the email address that the user has provided has been used (such as with mortgage lenders, for example); and
  • Digital identity services.

Examples of methods or processes of age assurance that are not deemed to be capable of being highly effective include:

  • Self-declaration of age;
  • Age verification through online payment methods which, unlike credit cards, do not require a user to be over the age of 18; and
  • General contractual restrictions within a provider’s terms of service prohibiting users who are under 18.

Those age assurance methods that Ofcom deems – in theory – capable of being highly effective must also be so in practice. That means complying with the four criteria that Ofcom sets out, each of which must be fulfilled by any method.

First, the method must be technically accurate, meaning that providers should have evaluated it against “appropriate metrics and the results indicate that the method is able to correctly establish whether or not a particular user is a child”. In the case of age estimation methods, Ofcom states that providers should use a “challenge age approach” much like the ‘Challenge 25’ campaign that is employed in retail settings. As Ofcom explains, “where a user is estimated to be under a given challenge age [set at an appropriate level], they must then undergo a second age assurance step (for example, a different age assurance method) to confirm that they are over the required age”.
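The “challenge age” logic described above can be illustrated with a minimal sketch. Note that the function names, the challenge age of 25, and the fallback mechanism are all illustrative assumptions for this example, not details prescribed by the Guidance:

```python
# Illustrative sketch of a "challenge age" check (names and thresholds are
# assumptions, not taken from the Guidance). If the estimated age falls below
# the challenge age, the user is routed to a second age assurance step.

REQUIRED_AGE = 18    # minimum age the service must confirm
CHALLENGE_AGE = 25   # buffer set above the required age, as in 'Challenge 25'

def needs_second_check(estimated_age: float) -> bool:
    """Return True when an age-estimation result alone is not sufficient."""
    return estimated_age < CHALLENGE_AGE

def assess(estimated_age: float, second_step_confirms_over_18) -> bool:
    """Combine age estimation with a fallback verification step."""
    if not needs_second_check(estimated_age):
        return True  # estimate is comfortably above the challenge age
    # Below the challenge age: defer to a second method (e.g. photo-ID match).
    return second_step_confirms_over_18()

# A 22-year-old estimate triggers the second step; a 30-year-old does not.
print(assess(22, lambda: True))
print(assess(30, lambda: False))
```

The point of the buffer is that estimation error near the threshold is absorbed by the second, more definitive check rather than by the estimate alone.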

Second, any method must be robust. This means, for example, ensuring that the technology has been tested in a range of conditions and that the provider has taken steps to mitigate circumvention by child users. Examples provided include ensuring that a photo-ID method is able to detect falsified or manipulated documentation, or determining whether repeated age checks are necessary.

Third, an age assurance method must be reliable, in that the output must be “reproducible and derived from trustworthy evidence”. There is a particular focus on methods that employ machine learning, and Ofcom expects that organisations using such technology “take steps to ensure that it has been suitably tested during the development process to ensure it produces reproducible results, i.e. to ensure that outputs are consistently produced when the method is presented with the same inputs”. Ofcom also expects service providers to ensure that they have confidence in the evidence that the age assurance method relies on, identifying, for example, the features that they would expect to see in a trustworthy source.

Fourth, an age assurance method must be fair. This means ensuring that it “avoids or minimises bias and discriminatory outcomes” such as, for example, producing outputs with a lower degree of technical accuracy for users of certain ethnicities when relying on facial age estimation. To avoid this, Ofcom recommends that technology is tested on diverse datasets and that providers monitor the accuracy of the method across different characteristics.

Finally, Ofcom reminds service providers that the age assurance process should be easy to use, work for all users, be interoperable, and follow a ‘data protection by design’ approach.

To read the guidance in full, click here.