On May 14, 2025, something unusual happened on X, the social media platform formerly known as Twitter.
Grok, the platform’s artificial intelligence assistant, began repeating a false theory about a so-called “white genocide” in South Africa.
Even when users asked about unrelated topics like sports or entertainment, Grok redirected the conversation back to this conspiracy.
The claim that white farmers in South Africa are being targeted for their race has circulated for years among white supremacist groups.
It holds that these farmers are being killed in large numbers purely because of their skin color. However, investigations by international news organizations, including the Associated Press and The Guardian, have found no evidence of a racial motive or of any systematic violence targeting white communities in the country.
South African authorities confirm that violent crime affects people of all races.
Grok’s behavior quickly drew attention online. Several media outlets reported the issue, and by the following day the AI assistant had stopped referencing the conspiracy theory.
The company behind Grok, xAI, later explained that the incident was caused by an “unauthorized modification” and that an internal investigation was underway.
However, Grok itself claimed it had been “instructed” to share this information, raising suspicions that its creator, Elon Musk, had influenced its output.
Although it remains unclear whether Musk personally ordered the change, he has previously shared posts promoting the same false theory.
This incident adds to broader concerns about misinformation and the use of artificial intelligence to shape public opinion.
