Elon Musk's AI chatbot Grok appeared to experience a bug on Wednesday that caused it to reply to dozens of posts on X with information about "white genocide" in South Africa, even when users asked nothing about the subject.
The strange replies stem from Grok's X account, which responds to users with AI-generated posts whenever someone tags @grok. When asked about unrelated topics, Grok repeatedly told users about "white genocide" as well as the anti-apartheid chant "Kill the Boer."
Grok's strange, unrelated replies are a reminder that AI chatbots are still a nascent technology and may not always be a reliable source of information. In recent months, model providers have struggled to moderate their chatbots' responses, which has led to odd behavior.
OpenAI was recently forced to roll back an update to ChatGPT that made the chatbot overly sycophantic. Meanwhile, Google has faced problems with its Gemini chatbot refusing to answer, or giving misinformation about, political topics.
In one example of Grok's misbehavior, a user asked Grok about a professional baseball player's salary, and Grok responded that the claim of "white genocide" in South Africa is debated.
Several users posted on X about their confusing, strange interactions with the Grok AI chatbot on Wednesday.
It's unclear at this time what caused Grok's strange answers, but xAI's chatbots have been manipulated in the past.
In February, Grok 3 appeared to briefly censor unflattering mentions of Elon Musk and Donald Trump. At the time, xAI engineering lead Igor Babuschkin seemed to confirm that Grok had been briefly instructed to do so, though the company quickly reversed the instruction after the backlash drew attention to it.
Whatever the cause of the bug may have been, Grok now seems to be responding to users more normally. A spokesperson for xAI did not immediately respond to TechCrunch's request for comment.