Lessons from Grok: Unsupervised platforms go wrong ‘every damn time’
Author: Vincent Stuer

Even in a world where standards of decency are violated and democratic boundaries broken every day, this week’s incidents with Grok are shameful and demand an immediate EU response.
Elon Musk’s chatbot, integrated into X, showed its ugly face with a series of antisemitic comments. Posts included claims that people with Jewish names are ‘radical-left activists every damn time’, and praise for Adolf Hitler as someone who would act ‘decisively, every damn time’.
The case raises serious concerns about xAI’s compliance with the Digital Services Act, which sets standards against hate speech and disinformation, and prompts questions about the governance of generative AI in the European digital space.
These incidents are only the latest examples of Grok violating both standards. They will not be the last unless we make it so, says MEP Sandro Gozi (Mouvement Démocrate, France):
‘The Commission has to act decisively on this case and prevent it from happening again. Either it is a question of proper implementation by the Commission, or there remains a regulatory vacuum concerning AI-generated content, in which case we need to address it immediately.
Whichever it is, we need the Commission to step up. It has so far been slow and soft in applying the DSA. The Grok case shows that this approach doesn’t work.
If Commission President von der Leyen is looking for ways to restore trust among the democratic centre of EU politics, acting firmly against Grok would be exactly the right signal to send.’
