Paula Luckhoff | 10 July 2025 | 15:05

Grok gone rogue: Elon Musk’s AI firm forced to delete Hitler praise posts

Musk now says the chatbot was 'manipulated' into praising Adolf Hitler.


Elon Musk. Wikimedia Commons/JD Lasica

CapeTalk's John Maytham is joined by Steven Boykey Sidley, Professor of Practice at the University of Johannesburg.

Elon Musk’s artificial intelligence firm xAI has had to remove antisemitic posts from its Grok chatbot, following complaints.

The chatbot began praising Adolf Hitler, referring to itself as 'MechaHitler' and making antisemitic comments in response to user queries.

'We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts', Grok posted on X on Wednesday.

Musk followed up with a post claiming that the chatbot was 'manipulated' into praising Hitler.

AI expert Professor Steven Boykey Sidley says this can all be traced back to the Communications Decency Act of 1996, which established that platforms are not legally responsible for what users post on them.

He contrasts the regulations governing media practitioners with this lack of regulation online.

Prof. Sidley also highlights that social media platforms had fact-checking departments and content moderation partners right up until Musk bought Twitter, now X, and changed this protocol.

"Musk fired them all and replaced them with what is called community notes - people write in and complain, and they'll take a look at it."
Steven Boykey Sidley, Professor of Practice - University of Johannesburg

Other platforms, like Mark Zuckerberg's Meta, followed suit, with the result that people now say whatever they like, however awful that may be.

Allied to that, Sidley goes on, is the fact that an AI chatbot is only as good as the data it is trained on.

"Not only do we not know how Grok was trained because that is private information to xAI, but even the people who've built the system don't necessarily know how or why it comes up with its answer."
Steven Boykey Sidley, Professor of Practice - University of Johannesburg
"That is the dirty little secret of these chatbots - it's impossible to go back and ask 'why did you say what you said?' because it is too complicated."
Steven Boykey Sidley, Professor of Practice - University of Johannesburg

He also points out that while other platforms, like the widely used ChatGPT, added safeguards to block hate speech as far as possible, Grok did not.

Scroll up to the audio player to listen to the full conversation