The ChatGPT block stems from concerns that OpenAI is breaching the European Union’s General Data Protection Regulation (GDPR). The lack of any system to prevent minors from accessing the technology was also cited as a concern, and as a result, the Italian data protection authority has opened an investigation.
OpenAI has 20 days to reply to the order and could face penalties of up to €20 million, or 4% of its global annual turnover, if it doesn’t fall in line with European regulations. Because OpenAI has no legal presence in the EU, the GDPR allows any member state’s data protection authority to take measures if it identifies risks to local users.
Whenever the personal data of an EU user is processed, the GDPR applies, and according to TechCrunch, ChatGPT has been processing this kind of information: it can even produce biographies of named individuals in the region if asked. While the training data sources for GPT-4 had not been disclosed at the time of writing, OpenAI did reveal that earlier iterations of the bot were trained on data scraped from the internet, including Reddit.
Then there’s the issue of false information, which ChatGPT can produce and which could raise further GDPR concerns. Europeans have several rights over their data, including the right to rectification of inaccurate information, and exercising that right could prove tricky, since it’s not yet clear how people could ask OpenAI to correct false statements ChatGPT generates about them.
GDPR also covers data breaches, requiring entities that process personal data to implement protection measures and to notify the relevant supervisory authorities of significant breaches within specific timeframes. In March 2023, OpenAI revealed that a conversation history feature had been leaking users’ chats and may have exposed some users’ payment information.
If OpenAI has processed Europeans’ data unlawfully, DPAs across the bloc could order that data to be deleted. The full consequences for the company are unclear, but such measures could force it to retrain models that were built on that personal data.
As for use by minors, OpenAI has not implemented any age verification technology, which means it isn’t actively preventing people under the age of 13 from using ChatGPT. The Italian regulator has been particularly active in protecting children’s data: it previously forced TikTok to remove more than 500,000 accounts in Italy. If OpenAI has no way of verifying the age of the users it has signed up in Italy, it could be forced to remove their accounts and implement a more rigorous sign-up process.