The European Commission has opened a new formal investigation into X under the DSA, focusing on the deployment of its AI tool Grok within the EU.
The move expands regulatory scrutiny of the platform’s compliance obligations as a designated very large online platform. The latest proceedings examine whether X adequately assessed and addressed risks linked to the introduction of Grok’s functions on its service. According to the Commission, these risks include the potential spread of illegal material, such as manipulated sexually explicit images, including content that could constitute child sexual abuse material. Regulators indicated that some of these harms may already have materialised, raising questions about user safety and harm prevention.
Expanded review of recommender systems and AI deployment
Alongside the new investigation, the Commission has expanded an earlier inquiry, launched in December 2023, into X’s recommender systems. The expanded scope will assess whether X has met its obligations to identify and mitigate systemic risks associated with how content is prioritised and distributed, including in light of the platform’s transition to a Grok-based recommender system.
The Commission will assess whether X properly conducted and submitted a targeted risk assessment for Grok prior to its rollout, particularly where the AI tool altered the platform’s overall risk profile. It will also evaluate compliance with obligations covering the prevention of illegal content dissemination, risks related to gender-based violence, and potential harm to users’ physical and mental well-being.
If confirmed, the alleged shortcomings could represent breaches of several DSA provisions governing systemic risk management and transparency. The Commission emphasised that the launch of formal proceedings does not predetermine the outcome.
Irish media regulator Coimisiún na Meán, acting as the national Digital Services Coordinator for X’s EU establishment, is supporting the investigation. During the inquiry, the Commission may request further information, carry out inspections, or impose interim measures if remedial action is deemed insufficient.
The case builds on earlier enforcement actions against X, including a non-compliance decision issued in December 2025 that resulted in a EUR 120 million fine related to advertising transparency, data access for researchers, and interface design practices.
Representatives from the European Commission stated that the investigation aims to determine whether European users, particularly women and children, were adequately protected during the deployment of AI-driven features.