AI Foundation Models: competition authorities release joint statement


The Competition and Markets Authority (“CMA”), the European Commission, the US Department of Justice, and the US Federal Trade Commission have published a joint statement on competition in generative AI foundation models and AI products (the “Joint Statement”). The Joint Statement declares that we are at a ‘technological inflection point’ and that the four competition authorities share an interest in ensuring that the benefits of AI can be maximised, whilst using their powers to address the various risks posed by the technology “before they become entrenched or irreversible harms”.

We have previously commented on the CMA’s work on foundation models here. Many of the themes of that work are repeated in the Joint Statement, including concerns that the current leading companies could develop unassailable market power in digital markets, with consequent effects on innovation and on the choice available to consumers.

The Joint Statement highlights three particular risks to competition that the authors state will require vigilance:

  1. Concentrated control of key inputs. Developing AI foundation models requires, among other things, specialised chips, substantial compute, data at scale, and specialist technical knowledge. The authors express concern that this could mean that a small number of companies are able to “exploit existing or emerging bottlenecks across the AI stack and have outsized influence over the future development of these tools”, including by limiting the scope or direction of innovation at the expense of fair competition.
  2. Entrenching or extending market power in AI-related markets. The Joint Statement notes that the ‘technological inflection point’ to which the authorities refer is coming at a time when large technology companies already enjoy significant market power. The role that they play in the development of AI could lead to greater entrenchment of that power, for example by taking steps to protect against AI-driven disruption or by controlling the channels of distribution of AI services.
  3. Arrangements involving key players could amplify risks. This is a subject that the CMA has touched on separately as part of its investigation into AI foundation models. The Joint Statement draws attention to the partnerships and financial investments between firms developing AI models and warns that in some cases “these partnerships and investments could be used by major firms to undermine or coopt competitive threats and steer market outcomes in their favour at the expense of the public”.

Further competition and consumer risks associated with AI are also set out, which the authors commit to monitoring and addressing as necessary. These include the risk that algorithms might allow competitors to share competitively sensitive information, fix prices, or collude on other terms or business strategies in violation of competition laws. Equally, there is a risk that algorithms may enable firms to undermine competition through unfair price discrimination or exclusion.

As for consumer risks, attention is drawn to the possibility that AI might “turbocharge deceptive and unfair practices that harm consumers”, and the Joint Statement warns that firms that “deceptively or unfairly use consumer data to train their models can undermine people’s privacy, security, and autonomy”. What is more, if business customers’ data is used to train models, there is a risk that the models could expose competitively sensitive information. As a general point, the authors stress the importance of ensuring that “customers are informed, where relevant, about when and how an AI application is employed in the products and services they purchase or use”.

The Joint Statement also outlines a series of common principles that the competition authorities will apply and which will “generally serve to enable competition and foster innovation”:

  1. Fair Dealing. According to the Joint Statement, the AI ecosystem will be better off the more that firms engage in fair dealing: when firms with market power engage in exclusionary tactics, they undermine competition and discourage innovation and investment.
  2. Interoperability. The Joint Statement argues that “competition and innovation around AI will likely be greater the more that AI products and services and their inputs are able to interoperate with each other. Any claims that interoperability requires sacrifices to privacy and security will be closely scrutinized”.
  3. Choice. Finally, the Joint Statement argues that businesses and consumers will benefit from being able to choose among a variety of products and business models. To ensure that this remains possible, the respective authorities will not only scrutinise ways in which companies might seek to prevent users from making meaningful choices (such as ‘mechanisms of lock-in’), but will also consider investments and partnerships between incumbents “to ensure that these agreements are not sidestepping merger enforcement or handing incumbents undue influence or control in ways that undermine competition”.

To read the Joint Statement in full, click here.