By way of an interim measure adopted on 30 March 2023, the Italian Data Protection Authority (Garante per la protezione dei dati personali) (the Garante) ordered the US company OpenAI LLC to temporarily stop ChatGPT's processing of personal data relating to individuals located in Italy, pending the outcome of the Garante's investigation into ChatGPT's privacy practices.

The alleged GDPR infringements:
The Garante apparently took its action after becoming aware of a recent data breach affecting ChatGPT, in which users' chat titles and payment information were exposed. This led the Garante to seek further information from OpenAI and to note the following alleged GDPR violations:

  • a failure to provide users and other data subjects whose data is collected by ChatGPT with the required transparency information about ChatGPT's processing of their personal data;
  • the absence of a legal basis for processing personal data for the purposes of “training” the algorithms underlying the platform’s operations;
  • inaccuracy in ChatGPT’s processing of personal data, because the information provided by ChatGPT does not always match the real data; and
  • a failure to verify users’ age, meaning that users under 13 years of age may allegedly obtain answers from ChatGPT not appropriate to their degree of development and self-awareness and in contravention of the ChatGPT terms.

OpenAI now has 20 days from receipt of the measure to respond to the alleged breaches and provide details of any corrective measures. A failure to do so could result in OpenAI being issued with a fine.

The Garante’s AI action and remaining open questions

This is the first action of its kind taken by a data protection authority in the EU in relation to data processing by the popular generative AI tool, and it deals in particular with the hot topic of data processing in the context of "training" machine learning software.

The order by the Garante does not elaborate specifically on organisations' ability to rely on legitimate interests for the collection and use of personal data by the AI engine for training purposes. The interim measure also does not draw a distinction between using personal data to build or train an algorithmic model and actually inputting personal data into an already-developed model, nor does it address the impact of this distinction on the "balancing test" that must be undertaken when relying on legitimate interests. The Garante's view on these matters will hopefully become clear following its investigation.

This is, however, not the Italian DPA's first order in relation to AI more generally: in February 2023 the regulator issued a measure preventing the app Replika, a chatbot acting as a "virtual friend" to users, from processing the personal data of individuals resident in Italy.

Our take

The Garante's action is the most headline-grabbing action by a data protection authority in the AI space to date because of its impact on ChatGPT, which is reportedly the fastest-growing consumer application in history.

However, it is reflective of the recent increased focus by European data protection authorities on AI, with the Dutch, French, Spanish and UK regulators all increasing their oversight of AI and publishing materials in this area.

Therefore, whilst the Garante’s action may be the first action of its kind, it seems unlikely to be the last.

Relevant materials:

  • press release by the Italian DPA
  • press release about the conference call scheduled for 4 April
  • measure by the Italian DPA about ChatGPT (in Italian)
  • measure by the Italian DPA about Replika (in English translation)