
Italy’s Garante orders temporary ChatGPT suspension

Italy's Garante suspended ChatGPT in March 2023 over GDPR concerns, the first major regulatory action against generative AI in Europe. OpenAI eventually resolved the issues, but the order signaled the regulatory scrutiny to come.



On 31 March 2023 Italy's Garante per la protezione dei dati personali issued an urgent order requiring OpenAI to stop processing Italian users' data for ChatGPT pending remedial measures. The authority cited insufficient transparency, lack of a lawful basis for training data, inaccurate personal data handling, and absence of age verification to block minors.

The Garante identified four primary GDPR violations. Transparency failures included inadequate privacy notices that did not clearly explain how user conversations and prompts were collected, processed, and used for model training. Users were not informed about the extent of data retention or the specific purposes for processing their inputs.

Lawful basis deficiencies centered on OpenAI's failure to establish a valid legal basis for processing personal data contained in training datasets. The authority questioned whether consent, legitimate interest, or contractual necessity could justify the large-scale collection of personal information scraped from the internet without individual notice or opt-out mechanisms.

Data accuracy concerns addressed ChatGPT's potential to generate false or misleading information about real individuals. Under GDPR Article 5(1)(d), data must be accurate and kept up to date, but generative AI systems may produce fabricated content that attributes false statements, credentials, or actions to identifiable persons.

Age verification gaps highlighted the absence of mechanisms to prevent minors under 13 from accessing the service. Italian data protection law requires parental consent for processing children's personal data, and the lack of age-gating controls exposed OpenAI to liability under both GDPR Article 8 and national child protection provisions.

Remedial Requirements

The order specified several corrective measures before service restoration. OpenAI was required to update privacy disclosures with clear, accessible language explaining data collection practices, retention periods, training data usage, and user rights. Notices had to be provided in Italian and presented before account creation or service use.

Legal basis documentation mandated that OpenAI identify and justify the lawful basis for processing user inputs and training data under GDPR Article 6. If relying on legitimate interest, the company had to conduct and document a balancing test demonstrating that its processing interests do not override data subject rights.

User rights mechanisms required OpenAI to establish accessible processes for data access requests, rectification of inaccurate outputs, and objection to processing for training purposes. Users had to be able to exercise these rights without undue burden, with responses provided within GDPR-mandated timeframes.
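The GDPR-mandated timeframes mentioned above are set by Article 12(3): one month from receipt of a request, extendable by two further months for complex or numerous requests. As a minimal sketch of how a compliance team might track those deadlines (the function names are illustrative, not part of the Garante order):

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping to the last day of the target month."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def dsar_deadline(received: date, extended: bool = False) -> date:
    """GDPR Article 12(3): respond within one month of receipt,
    extendable by two further months for complex requests."""
    return add_months(received, 3 if extended else 1)

# A request received 31 March 2023 is due by 30 April 2023 (month clamped).
print(dsar_deadline(date(2023, 3, 31)))
```

The clamping matters in practice: a request received on the 31st of a month must not silently roll the deadline into the following month.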

Age-gating requirements demanded deployment of age verification controls to block access by users under 13 and to obtain verifiable parental consent for users aged 13-17 where required. The Garante did not specify technical methods but indicated that self-declaration alone would be insufficient.
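As an illustrative sketch of the control described above, the gate below blocks under-13s outright and requires a verified parental-consent signal for 13-17-year-olds. The names and structure are assumptions for illustration; how the age and consent signals are actually verified (the part the Garante deemed self-declaration insufficient for) is deliberately out of scope here.

```python
from dataclasses import dataclass

MINIMUM_AGE = 13   # floor cited in the Garante's concerns
ADULT_AGE = 18     # parental consent no longer required

@dataclass
class AccessDecision:
    allowed: bool
    reason: str

def gate_user(verified_age: int, parental_consent_verified: bool = False) -> AccessDecision:
    """Illustrative age gate: assumes `verified_age` comes from a real
    verification step, not self-declaration."""
    if verified_age < MINIMUM_AGE:
        return AccessDecision(False, "under minimum age")
    if verified_age < ADULT_AGE and not parental_consent_verified:
        return AccessDecision(False, "parental consent required")
    return AccessDecision(True, "access permitted")
```

A real deployment would also log the decision and the evidence relied on, since regulators ask for proof that the control operated, not just that it existed.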

Broader Regulatory Implications

The Italian suspension triggered coordination among European data protection authorities through the European Data Protection Board. The EDPB established a ChatGPT task force to harmonize supervisory approaches and prevent fragmented enforcement across member states. French, German, and Spanish authorities then opened their own investigations, examining similar transparency and lawful basis concerns.

The enforcement action established precedent for regulatory scrutiny of generative AI services under existing data protection frameworks. Key principles emerging from the case include the need for clear legal bases for training data, improved transparency about AI processing, strong accuracy safeguards for AI outputs, and proportionate age verification measures.

Enterprise Considerations

Organizations deploying generative AI services should document legal bases for processing, implement user consent and opt-out mechanisms for training data use, and establish content moderation policies addressing accuracy risks. Enterprise agreements with AI providers should include indemnification provisions for regulatory enforcement and data protection commitments addressing the concerns raised by European authorities.
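One lightweight way to document legal bases, as recommended above, is a structured record per processing activity that can be validated automatically. The sketch below is a hypothetical internal data structure, not a mandated format; all field names are assumptions:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProcessingRecord:
    """One entry in an internal record of processing activities (illustrative)."""
    activity: str
    lawful_basis: str                         # e.g. "consent", "legitimate_interest", "contract"
    purposes: List[str]
    retention_period: str
    opt_out_available: bool
    balancing_test_ref: Optional[str] = None  # document reference; needed for legitimate interest

    def validate(self) -> List[str]:
        """Flag the gaps a supervisory authority would probe first."""
        issues = []
        if self.lawful_basis == "legitimate_interest" and not self.balancing_test_ref:
            issues.append("legitimate interest claimed without a documented balancing test")
        if self.lawful_basis == "consent" and not self.opt_out_available:
            issues.append("consent-based processing must allow withdrawal/opt-out")
        return issues
```

The two checks mirror the Garante's core findings: a legitimate-interest claim needs a documented balancing test, and consent is only valid if it can be withdrawn.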


Documentation

  1. Garante press release on ChatGPT suspension — Garante per la protezione dei dati personali
  2. Formal suspension order (Italian) — Garante per la protezione dei dati personali
  3. ISO 37301:2021 — Compliance Management Systems — International Organization for Standardization
