OpenAI is facing another privacy complaint in Europe over its viral AI chatbot's tendency to hallucinate false information – and this one may prove difficult for regulators to ignore.
Privacy rights advocacy group noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information claiming he had been convicted of killing two of his children and attempting to kill the third.
Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as a wrong date of birth or inaccurate biographical details. One concern is that OpenAI does not offer a way for individuals to correct false information the AI generates about them. Typically, OpenAI has offered to block responses to such prompts. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data.
Another component of this data protection law requires data controllers to make sure the personal data they produce about individuals is accurate – and that's a concern noyb is flagging with its latest ChatGPT complaint.
“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, a data protection lawyer at noyb, in a statement. “If it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and in the end add a small disclaimer saying that it may not be true.”
Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover.
Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy's data protection watchdog, which saw ChatGPT access temporarily blocked in the country in spring 2023, led OpenAI to make changes to the information it discloses to users, for example. The watchdog subsequently went on to fine OpenAI €15 million for processing people's data without a proper legal basis.
Since then, though, it's fair to say that privacy watchdogs around Europe have adopted a more cautious approach to GenAI as they try to figure out how best to apply the GDPR to these buzzy tools.
Two years ago, Ireland's Data Protection Commission (DPC) – which has a lead GDPR enforcement role on a previous noyb ChatGPT complaint – urged against rushing to ban GenAI tools, for example, suggesting that regulators should instead take time to work out how the law applies to them.
And it's notable that a privacy complaint against ChatGPT that has been under investigation by Poland's data protection watchdog since September 2023 still hasn't produced a decision.
noyb's new ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the dangers of hallucinating AIs.
The nonprofit shared a screenshot (below) with TechCrunch showing an interaction in which ChatGPT responds to the question “Who is Arve Hjalmar Holmen?” – the name of the individual bringing the complaint – by producing a tragic fabrication that falsely states he was convicted of child murder and sentenced to 21 years in prison for killing two of his sons.
While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, noyb notes that ChatGPT's response does include some truths: the individual in question really does have three children. The chatbot also got the gender of his children right. And his home town is correctly named. But that just makes it all the more bizarre and unsettling that the AI hallucinated such gruesome falsehoods on top.
A noyb spokesperson said they were unable to determine why the chatbot produced such a specific yet false history for this individual. “We did research to make sure that this wasn't just a mix-up with another person,” the spokesperson said, noting they had looked through newspaper archives but had not been able to find an explanation for why the AI fabricated the child killings.
Large language models, such as the one underlying ChatGPT, essentially do next-word prediction on a vast scale, so one could speculate that the datasets used to train the tool contained enough stories of filicide to influence its word choices when responding to a query about a named man.
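To make the next-word-prediction point concrete, here is a deliberately tiny sketch. This is not how ChatGPT actually works – real models are neural networks trained on billions of tokens – and the corpus below is invented for illustration. It simply shows that a model picking words by statistical frequency has no concept of whether a continuation is true for a given person:

```python
import random
from collections import defaultdict

# Invented toy corpus: the model only sees which word follows which.
corpus = (
    "the man was convicted of fraud . "
    "the man was convicted of murder . "
    "the man was acquitted of murder ."
).split()

# Bigram table: for each word, record every word that followed it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def next_word(prev: str) -> str:
    # Sample the next word in proportion to how often it
    # followed `prev` in the training text -- pure statistics,
    # with no notion of which continuation is factually correct.
    return random.choice(follows[prev])

print(next_word("convicted"))  # always "of" in this corpus
print(next_word("of"))         # "fraud" or "murder", by frequency alone
```

Scaled up by many orders of magnitude, the same mechanism means a prompt about an unfamiliar name can be completed with whatever patterns – including crime stories – are statistically common in the training data.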
Whatever the explanation, it's clear that such outputs are entirely unacceptable.
noyb's contention is also that they are unlawful under EU data protection rules. And while OpenAI does display a tiny disclaimer at the bottom of the screen that says “ChatGPT can make mistakes. Check important info,” noyb says this cannot absolve the AI developer of its duty under the GDPR not to produce egregious falsehoods about people in the first place.
OpenAI has been contacted for a response to the complaint.
While this GDPR complaint pertains to one named individual, noyb points to other instances of ChatGPT fabricating legally compromising information – such as an Australian mayor who said he was implicated in a bribery and corruption scandal, or a German journalist who was falsely named as a child abuser.
One important thing to note is that, following an update to the underlying AI model powering ChatGPT, noyb says the chatbot stopped producing the dangerous falsehoods about Hjalmar Holmen – a change it links to the tool now searching the internet for information about people when asked who they are (whereas previously, a blank in its data set could, presumably, have encouraged it to hallucinate such a badly wrong answer).
In our own tests asking ChatGPT “Who is Arve Hjalmar Holmen?”, the chatbot initially responded with a slightly odd combination, displaying some photos of different people apparently sourced from sites including Instagram, SoundCloud, and Discogs, alongside text claiming it “couldn't find any information” on an individual of that name (see our screenshot below). A second attempt turned up a response identifying Arve Hjalmar Holmen as “a Norwegian musician and songwriter” whose albums include “Honky Tonk Inferno.”

While ChatGPT's dangerous falsehoods about Hjalmar Holmen appear to have stopped, both noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could have been retained within the AI model.
“Adding a disclaimer that you do not comply with the law does not make the law go away,” noted Kleanthi Sardeli, another data protection lawyer at noyb, in a statement. “AI companies also cannot just ‘hide’ false information from users while they internally still process false information.”
“AI companies should stop acting as if the GDPR does not apply to them, when it clearly does,” she added. “If hallucinations are not stopped, people can easily suffer reputational damage.”
noyb has filed the complaint against OpenAI with the Norwegian data protection authority – and it's hoping the watchdog will decide it is competent to investigate, since noyb is targeting the complaint at OpenAI's U.S. entity, arguing that the company's Ireland office is not solely responsible for product decisions affecting Europeans.
However, an earlier noyb-backed GDPR complaint against OpenAI, which was filed in Austria in April 2024, was referred by the regulator to Ireland's DPC on account of a change OpenAI made earlier that year to name its Irish division as the provider of the ChatGPT service to regional users.
Where is that complaint now? Still sitting on a desk in Ireland.
“Having received the complaint from the Austrian Supervisory Authority in September 2024, the DPC commenced the formal handling of the complaint and it is still ongoing,” Risteard Byrne, assistant principal officer for communications at the DPC, told TechCrunch when asked for an update.
He did not offer any steer on when the DPC's investigation into ChatGPT's hallucinations is expected to conclude.