OpenAI is changing how it trains its AI models to explicitly embrace “intellectual freedom … no matter how challenging or controversial a topic may be,” the company says in a new policy.
As a result, ChatGPT will eventually be able to answer more questions, offer more perspectives, and reduce the number of topics it won’t talk about.
The changes may be part of OpenAI’s effort to land in the good graces of the new Trump administration, but they also seem to be part of a broader shift in Silicon Valley around what counts as “AI safety.”
On Wednesday, OpenAI announced an update to its Model Spec, a 187-page document that lays out how the company trains its AI models to behave. In it, OpenAI revealed a new guiding principle: do not lie, either by making untrue statements or by omitting important context.
In a new section called “Seek the truth together,” OpenAI says it wants ChatGPT to avoid taking an editorial stance, even if some users find that morally wrong or offensive. This means ChatGPT will offer multiple perspectives on controversial subjects, all in an effort to be neutral.
For example, the company says ChatGPT should assert that “Black lives matter,” but also that “all lives matter.” Instead of refusing to answer or picking a side on political issues, OpenAI says it wants ChatGPT to affirm its “love for humanity” generally, then offer context about each movement.
“This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive,” OpenAI says in the spec. “However, the goal of an AI assistant is to assist humanity, not to shape it.”
The new Model Spec doesn’t mean ChatGPT is a total free-for-all now. The chatbot will still refuse to answer certain objectionable questions or respond in a way that supports blatant falsehoods.
These changes could be seen as a response to conservative criticism of ChatGPT’s safeguards, which have always seemed to skew center-left. However, an OpenAI spokesperson rejects the idea that the company was making changes to appease the Trump administration.
Instead, the company says its embrace of intellectual freedom reflects a “long-held belief in giving users more control.”
But not everyone sees it that way.
Conservatives allege AI censorship
Trump’s closest Silicon Valley confidants, including David Sacks, Marc Andreessen, and Elon Musk, have all accused OpenAI of engaging in deliberate AI censorship over the last several months. We wrote in December that Trump’s crew was setting the stage for AI censorship to become the next culture-war battleground within Silicon Valley.
Of course, OpenAI doesn’t say it engaged in “censorship,” as Trump’s advisers claim. Rather, the company’s CEO, Sam Altman, previously claimed in a post on X that ChatGPT’s bias was an unfortunate “shortcoming” the company was working to fix, though he noted it would take some time.
Altman made that comment shortly after a viral tweet circulated in which ChatGPT refused to write a poem praising Trump, though it would do so for Joe Biden. Many conservatives pointed to this as an example of AI censorship.
While it’s impossible to say whether OpenAI was truly suppressing certain points of view, it is broadly true that AI chatbots lean left across the board.
Even Elon Musk admits that xAI’s chatbot is often more politically correct than he’d like. That’s not because Grok was “programmed to be woke,” but more likely a consequence of training AI on the open internet.
Nevertheless, OpenAI now says it’s doubling down on free speech. This week, the company even removed warnings from ChatGPT that told users when they had violated its policies. OpenAI told TechCrunch it was purely a cosmetic change, with no change to the model’s outputs.
The company seems to want Chatgpt to feel less censored for users.
It wouldn’t be surprising if OpenAI were also trying to impress the new Trump administration with this policy update, notes former OpenAI policy leader Miles Brundage in a post on X.
Trump has previously targeted Silicon Valley companies, such as Twitter and Meta, for having active content moderation teams that tended to shut out conservative voices.
OpenAI may be trying to get out ahead of that. But there’s also a larger shift underway in Silicon Valley and the AI world around the role of content moderation.
Generating answers to please everyone

Newsrooms, social media platforms, and search companies have historically struggled to deliver information to their audiences in a way that feels objective, accurate, and engaging.
Now, AI chatbot providers are in the same information-distribution business, but with arguably the hardest version of this problem yet: How do they automatically generate answers to any question?
Delivering information about controversial, real-time events is a constantly moving target, and it involves taking editorial stances, even if tech companies don’t like to admit it. Those stances are bound to upset someone, miss some group’s perspective, or give too much airtime to some political party.
For example, when OpenAI commits to letting ChatGPT represent all perspectives on controversial subjects (including conspiracy theories, racist or antisemitic movements, or geopolitical conflicts) that is itself an editorial stance.
Some, including OpenAI co-founder John Schulman, argue that it’s the right stance for ChatGPT. The alternative, doing a cost-benefit analysis to determine whether a chatbot should answer a user’s question, could “give the platform too much moral authority,” Schulman notes in a post on X.
Schulman isn’t alone. “I think OpenAI is right to push in the direction of more speech,” said Dean Ball, a research fellow at George Mason University’s Mercatus Center, in an interview with TechCrunch. “As AI models become smarter and more vital to the way people learn about the world, these decisions just become more important.”
In previous years, AI model providers have tried to stop their chatbots from answering questions that might lead to “unsafe” answers. Almost every AI company stopped its chatbot from answering questions about the 2024 U.S. presidential election. That was widely considered a safe and responsible decision at the time.
But OpenAI’s changes to its Model Spec suggest we may be entering a new era for what “AI safety” really means, one in which allowing an AI model to answer anything and everything is considered more responsible than making decisions for users.
Ball says this is partly because AI models are just better now. OpenAI has made significant progress on model alignment; its latest reasoning models think about the company’s safety policy before answering. This allows AI models to give better answers to delicate questions.
Of course, Elon Musk was the first to bring a “free speech” approach to xAI’s Grok chatbot, perhaps before the company was really ready to handle sensitive questions. It may still be too soon for leading AI models, but now, others are embracing the same idea.
Shifting values for Silicon Valley
Mark Zuckerberg made waves last month by reorienting Meta’s businesses around First Amendment principles. He praised Elon Musk in the process, saying the X owner took the right approach by using Community Notes, a community-driven content moderation program, to safeguard free speech.
In practice, both X and Meta ended up dismantling their longstanding trust and safety teams, allowing more controversial posts on their platforms and amplifying conservative voices.
The changes at X may have hurt its relationships with advertisers, but that may have more to do with Musk, who took the unusual step of suing some of them for boycotting the platform. Early signs indicate that Meta’s advertisers were unfazed by Zuckerberg’s free speech pivot.
Meanwhile, many tech companies beyond X and Meta have walked back the left-leaning policies that dominated Silicon Valley for the last several decades. Google, Amazon, and Intel have eliminated or scaled back diversity initiatives in the last year.
OpenAI may be reversing course, too. The ChatGPT maker seems to have recently scrubbed a commitment to diversity, equity, and inclusion from its website.
As OpenAI embarks on one of the largest American infrastructure projects ever with Stargate, a $500 billion datacenter project, its relationship with the Trump administration is increasingly important. At the same time, the ChatGPT maker is vying to unseat Google Search as the dominant source of information on the internet.
Coming up with the right answers may prove key to both.