Meta excluded some of its top advertisers from the usual content moderation process, protecting its multibillion-dollar business amid internal concerns that the company’s systems wrongly penalized major brands.
According to internal documents from 2023 seen by the Financial Times, the owner of Facebook and Instagram introduced a series of “safeguards” that “protected high spenders”.
The previously unreported memos said Meta would "suppress detections" based on how much an advertiser spent on the platform, and that some top advertisers would instead be reviewed by humans.
One document suggested that a group called “P95 spenders” — those who spent more than $1,500 a day — were “exempt from ad restrictions” but would still “eventually be sent to manual human review.”
The memos predate CEO Mark Zuckerberg's announcement this week that Meta was ending its third-party fact-checking program and scaling back automated content moderation as it prepares for Donald Trump's return as president.
The 2023 documents show that Meta had discovered its automated systems were wrongly flagging some higher-spending accounts for violating company rules.
The company told the FT that higher-spending accounts were disproportionately subject to false notifications of potential breaches. It did not respond to questions about whether any of the measures in the documents were temporary or permanent.
Ryan Daniels, a spokesman for Meta, said the FT's reporting was "simply incorrect" and "based on a selective reading of documents that clearly state this effort was intended to address something we've been very public about: preventing mistakes in enforcement".
Advertising accounts for the majority of Meta’s annual revenue, which was nearly $135 billion in 2023.
The tech giant typically reviews ads using a combination of artificial intelligence and human moderators to enforce its standards, in an effort to remove material such as scams or harmful content.
In a document titled "preventing high spender mistakes", Meta said it had seven guardrails protecting business accounts that generate more than $1,200 in revenue over a 56-day period, as well as individual users who spend more than $960 on ads over the same period.
It wrote that the guardrails help the company "decide whether a detection should proceed to enforcement" and are designed to "suppress detections . . . based on characteristics, such as the level of ad spend."
It gave as an example a business that "is in the top 5 percent of revenue".
Meta told the FT that it uses higher spend as a guardrail because higher spend often means a company's ads will have greater reach, and so the consequences can be more severe if a company or its ads are mistakenly removed.
The company also acknowledged it had prevented some high-spending accounts from being disabled by its automated systems, instead sending them for human review, when it was concerned about the accuracy of those systems.
However, it said all businesses were still subject to the same advertising standards, and that no advertiser was exempt from its rules.
In the "preventing high spender mistakes" memo, the company rated different categories of guardrails as "low," "medium," or "high" in terms of their "defensibility."
Meta staff classed the spend-related guardrails as having "low" defensibility.
Other guardrails, such as using knowledge of a business's trustworthiness to help decide whether a detection of a policy breach should be automatically enforced, were labeled "high" defensibility.
Meta said the term "defensibility" referred to how difficult the guardrails would be to explain to stakeholders if they were misinterpreted.
The 2023 documents do not say how many high spenders fell within the company's guardrails, but the spending thresholds suggest that thousands of advertisers may have been exempted from the typical moderation process.
Estimates from market intelligence firm Sensor Tower suggest that the top 10 US spenders on Facebook and Instagram include Amazon, Procter & Gamble, Temu, Shein, Walmart, NBCUniversal and Google.
Meta has posted record revenues in recent quarters and its shares are trading at an all-time high as the company recovers from a post-pandemic downturn in the global ad market.
But Zuckerberg has warned of threats to Meta's business, from the rise of AI to ByteDance-owned rival TikTok, which has grown in popularity among young users.
A person familiar with the documents argued that the company was “prioritizing revenue and profits over user integrity and health,” adding that concerns had been raised internally about bypassing the standard moderation process.
Zuckerberg said Tuesday that the complexity of Meta’s content moderation system had led to “a lot of mistakes and a lot of censorship.”
His comments came after Trump accused Meta last year of censoring conservative speech and suggested that if the company meddled in the 2024 election, Zuckerberg would “spend the rest of his life in prison.”
Internal documents also show that Meta considered pursuing other exemptions for some higher-spending advertisers.
In one memo, Meta staff proposed "offering more aggressive protection" from over-moderation to what it calls "platinum and gold spenders," which together generate more than half of its ad revenue.
"False-positive integrity enforcement against high-value advertisers costs Meta revenue [and] erodes our credibility," the memo read.
It suggested one option of granting these advertisers a blanket exemption from some enforcement, except in "very rare cases."
The memo indicates that staff concluded platinum and gold advertisers were not "an appropriate segment" for a broad exemption, because about 73 percent of the enforcements against them were justified, according to the company's tests.
Internal documents also show that Meta had discovered multiple AI-generated accounts within the big spender categories.
Meta has previously come under scrutiny for granting exemptions to prominent users. In 2021, Facebook whistleblower Frances Haugen leaked documents showing the company had an internal system known as "cross-check," designed to review content from politicians, celebrities and journalists to ensure posts were not removed by mistake.
According to Haugen's documents, the system was sometimes used to protect some users from enforcement even when they broke Facebook's rules, a practice known as "whitelisting."
Meta's oversight board — an independent "Supreme Court"-style body funded by the company to rule on its toughest moderation decisions — found that the cross-check system had allowed harmful content to remain online. It called for an overhaul of the system, which Meta has since undertaken.