OpenAI says it will make changes to the way it updates the AI models that power ChatGPT, following an incident that caused the platform to become overly sycophantic for many users.
Last weekend, after OpenAI rolled out an update to GPT-4o, the default model powering ChatGPT, users on social media noticed that ChatGPT had begun responding in an overly validating and agreeable way. It quickly became a meme. Users posted screenshots of ChatGPT applauding all sorts of problematic, dangerous decisions and ideas.
In a post on X last Sunday, CEO Sam Altman acknowledged the problem and said OpenAI would work on fixes "ASAP." On Tuesday, Altman announced that the GPT-4o update was being rolled back and that OpenAI was working on "additional fixes" to the model's personality.
The company published a postmortem on Tuesday, and in a blog post on Friday, OpenAI expanded on the specific adjustments it plans to make to its model deployment process.
OpenAI says it plans to introduce an opt-in "alpha phase" for some models that would let certain ChatGPT users test the models and give feedback prior to launch. The company also says it will include explanations of "known limitations" for future incremental updates to models in ChatGPT, and adjust its safety review process to formally consider "model behavior issues" such as personality, deception, reliability, and hallucination.
"Going forward, we'll proactively communicate about the updates we're making to the models in ChatGPT, whether 'subtle' or not," OpenAI wrote in the blog post. "Even if these issues aren't fully measurable today, we commit to blocking launches based on proxy measurements or qualitative signals, even when metrics like A/B testing look good."
The promised fixes come as more people turn to ChatGPT for advice. According to a recent survey by lawsuit funder Express Legal Funding, 60% of U.S. adults have used ChatGPT to seek counsel or information. That growing reliance on ChatGPT, combined with the platform's enormous user base, raises the stakes when issues like extreme sycophancy emerge, not to mention hallucinations and other technical shortcomings.
As a mitigating step, earlier this week, OpenAI said it would experiment with ways to let users give "real-time feedback" to "directly influence their interactions" with ChatGPT. The company also said it would refine techniques to steer models away from sycophancy, potentially allow people to choose from multiple ChatGPT personalities, build additional safety guardrails, and expand evaluations to help identify issues beyond sycophancy.
"One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice, something we didn't see as much even a year ago," OpenAI continued in its blog post. "At the time, this wasn't a primary focus, but as AI and society have co-evolved, it's become clear that we need to treat this use case with great care. It's now going to be a more meaningful part of our safety work."