Some users on Elon Musk’s X are turning to Musk’s AI chatbot Grok for fact-checking, raising concerns among human fact-checkers that this could fuel misinformation.
Earlier this month, X enabled users to call on xAI’s Grok and ask it questions about different things. The move was similar to Perplexity, which has been running an automated account on X to offer a similar experience.
Soon after xAI created Grok’s automated account on X, users began experimenting with asking it questions. Some people in markets including India began asking Grok to fact-check comments and questions that target specific political beliefs.
Fact-checkers are concerned about the use of Grok, or any other AI assistant of this sort, in this way because such bots can frame their answers to sound convincing even when they are not factually accurate. Instances of Grok spreading fake news and misinformation have been seen in the past.
In August last year, five secretaries of state urged Musk to implement critical changes to Grok after misleading information generated by the assistant surfaced on social networks ahead of the US elections.
Other chatbots, including OpenAI’s ChatGPT and Google’s Gemini, were also seen generating inaccurate information about last year’s elections. Separately, misinformation researchers found in 2023 that AI chatbots, including ChatGPT, could easily be used to produce convincing text built around misleading narratives.
“AI assistants, like Grok, they’re really good at using natural language and give an answer that sounds like a human being said it. And in that way, the AI products have this claim on naturalness and authentic-sounding responses, even when they’re potentially very wrong. That would be the danger here,” Angie Holan, director of the International Fact-Checking Network (IFCN), told TechCrunch.
Unlike AI assistants, human fact-checkers use multiple, credible sources to verify information. They also take full accountability for their findings, with their names and organizations attached to ensure credibility.
Pratik Sinha, co-founder of India’s nonprofit fact-checking website Alt News, said that although Grok currently appears to give convincing answers, it is only as good as the data it is supplied with.
“Who is going to decide what data it gets supplied with, and that is where government interference, etc., will come into the picture,” he noted.
“There is no transparency. Anything that lacks transparency will cause harm, because anything that lacks transparency can be molded in any which way.”
“Could be misused – to spread misinformation”
In one of its responses posted earlier this week, Grok’s account on X acknowledged that it “could be misused – to spread misinformation and violate privacy.”
However, the automated account does not show users any disclaimer when they receive its answers, leaving them misinformed if, for example, it has hallucinated the answer, a potential downside of AI.

“It may make up information to provide a response,” Anushka Jain, a research associate at the Goa-based multidisciplinary research collective Digital Futures Lab, told TechCrunch.
There are also questions about how much Grok uses posts on X as training data, and what quality control measures it applies to fact-check such posts. Last summer, X pushed out a change that appeared to allow Grok to consume X user data by default.
Another concerning aspect of AI assistants like Grok being accessible through social media platforms is that they deliver information in public, unlike ChatGPT and other chatbots that are used privately.
Even if a user is well aware that the information they receive from the assistant could be misleading or not completely accurate, others on the platform might still believe it.
This could cause serious social harm. Instances of that were seen earlier in India, when misinformation circulated over WhatsApp led to mob lynchings. However, those severe incidents occurred before the arrival of GenAI, which has made synthetic content even easier to generate and more realistic-looking.
“If you see a lot of these Grok answers, you’re going to say, hey, well, most of them were right, and that may be so, but there are going to be some that are wrong. And how many? It’s not a small fraction. Some of the research studies have shown that AI models are subject to 20% error rates … and when it goes wrong, it can go really wrong, with real-world consequences,” IFCN’s Holan told TechCrunch.
AI vs. real fact-checkers
While AI companies, including xAI, are refining their models to make them communicate more like humans, they still are not, and cannot, replace humans.
For the past few months, tech companies have been exploring ways to reduce reliance on human fact-checkers. Platforms including X and Meta have started embracing the new concept of crowdsourced fact-checking through so-called Community Notes.
Naturally, such changes also cause concern among fact-checkers.
Alt News’ Sinha optimistically believes that people will learn to differentiate between machines and human fact-checkers and will come to value human accuracy more.
“We’re going to see the pendulum swing back eventually toward more fact-checking,” IFCN’s Holan said.
However, she noted that in the meantime, fact-checkers will likely have more work to do as AI-generated information spreads rapidly.
“A lot of this issue depends on: do you really care about what is actually true or not? Are you just looking for the veneer of something that sounds and feels true without actually being true? Because that’s what AI assistance will get you,” she said.
X and xAI did not respond to our request for comment.