On Monday, OpenAI launched a new family of AI models, GPT-4.1, which the company said outperformed some of its existing models on certain tests, particularly programming benchmarks. However, GPT-4.1 did not ship with the safety report that typically accompanies OpenAI's model releases, known as a model or system card.
As of Tuesday morning, OpenAI had yet to publish a safety report for GPT-4.1, and it appears it doesn't plan to. In a statement to TechCrunch, OpenAI spokesperson Shakyi Amdo said that "GPT-4.1 is not a frontier model, so there won't be a separate system card released for it."
It's fairly standard for AI labs to release safety reports showing the types of tests they conducted internally and with third-party partners to evaluate the safety of particular models. These reports occasionally reveal unflattering information, such as that a model tends to deceive people or is dangerously persuasive. By and large, the AI community sees these reports as good-faith efforts by labs to support independent research and red teaming.
But over the past several months, leading AI labs appear to have lowered their reporting standards, prompting backlash from safety researchers. Some, like Google, have dragged their feet on safety reports, while others have published reports lacking the usual detail.
OpenAI's recent track record isn't exactly exceptional, either. In December, the company drew criticism for releasing a safety report containing benchmark results for a different model than the version it deployed in production. Last month, OpenAI launched a model, deep research, weeks before publishing the system card for that model.
Steven Adler, a former OpenAI safety researcher, noted to TechCrunch that safety reports aren't mandated by any law or regulation; they're voluntary. Still, OpenAI has made several commitments to governments to increase transparency around its models. Ahead of the UK AI Safety Summit in 2023, OpenAI in a blog post called system cards a "key part" of its approach to accountability. And in the lead-up to the Paris AI Action Summit in 2025, OpenAI said system cards offer valuable insights into a model's risks.
"System cards are the AI industry's main tool for transparency and for describing what safety testing was done," Adler told TechCrunch in an email. "Today's transparency norms and commitments are ultimately voluntary, so it's up to each AI company to decide whether or when to release a system card for a given model."
GPT-4.1 is shipping without a system card at a time when current and former employees are raising concerns over OpenAI's safety practices. Last week, Adler and 11 other ex-OpenAI employees filed a proposed amicus brief in Elon Musk's case against OpenAI, arguing that a for-profit OpenAI might cut corners on safety work. The Financial Times recently reported that the ChatGPT maker, spurred by competitive pressures, has cut back the time and resources it allocates to safety testers.
While the most capable model in the GPT-4.1 family, GPT-4.1, isn't the highest-performing on OpenAI's roster, it does make considerable gains in the efficiency and latency departments. Thomas Woodside, co-founder and policy analyst at Secure AI Project, told TechCrunch that the performance improvements make a safety report all the more critical. The more sophisticated the model, the higher the risk it could pose, he said.
Many AI labs have pushed back against efforts to codify safety reporting requirements into law. For example, OpenAI opposed California's SB 1047, which would have required many AI developers to audit and publish safety evaluations of the models they make public.