Microsoft has taken legal action against a group the company claims intentionally developed and used tools to bypass the safety guardrails of its cloud artificial intelligence products.
According to a complaint filed by the company in December in the US District Court for the Eastern District of Virginia, a group of 10 unnamed defendants allegedly used stolen customer credentials and custom-designed software to break into Azure OpenAI Service, Microsoft’s fully managed service powered by technologies from ChatGPT maker OpenAI.
In the complaint, Microsoft accuses the defendants — referred to only as “Does,” a legal pseudonym — of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act and a federal racketeering statute by illegally accessing and using Microsoft software and servers for the purpose of “creating offensive” and “harmful and illegal content.” Microsoft did not provide specific details about the abusive content that was created.
The company is seeking injunctive and “other equitable” relief, as well as damages.
In the complaint, Microsoft says it discovered in July 2024 that Azure OpenAI Service credentials — specifically API keys, the unique strings of characters used to authenticate an application or user — were being used to generate content that violated the service’s acceptable use policy. A subsequent investigation revealed that the API keys had been stolen from paying customers, according to the complaint.
“The exact manner in which the defendants obtained all of the API keys used to commit the misconduct described in this complaint is unknown,” Microsoft’s complaint reads, “but it appears that the defendants engaged in a pattern of systematic API key theft that enabled them to steal Microsoft API keys from many Microsoft customers.”
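To illustrate why stolen keys are valuable: Azure OpenAI Service authenticates requests with an `api-key` header, so whoever holds the key is trusted, and billed, as the customer. Below is a minimal, hypothetical sketch of such a request; the resource name, deployment name, API version and prompt are placeholder values for illustration, not details from the complaint.

```python
import requests

# Placeholder values for illustration only; not details from the complaint.
ENDPOINT = "https://example-resource.openai.azure.com"  # Azure OpenAI resource URL
DEPLOYMENT = "dall-e-3"                                 # name of a model deployment
API_KEY = "<api-key>"                                   # the credential at issue

# The api-key header is the entire identity check: anyone holding a valid key
# can generate content that is billed to, and attributed to, the key's owner.
resp = requests.post(
    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/images/generations",
    params={"api-version": "2024-02-01"},
    headers={"api-key": API_KEY},
    json={"prompt": "a watercolor lighthouse", "n": 1, "size": "1024x1024"},
)
print(resp.status_code)
```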
Microsoft alleges that the defendants used stolen Azure OpenAI Service API keys belonging to US-based customers to create a “hacking-as-a-service” scheme. According to the complaint, to pull off this scheme, the defendants created a client-side tool called de3u, as well as software to process and route communications from de3u to Microsoft systems.
De3u let users leverage stolen API keys to generate images using DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without having to write their own code, Microsoft claims. De3u also attempted to prevent Azure OpenAI Service from revising the prompts used to generate images, according to the complaint, which can happen, for instance, when a text prompt contains words that trigger Microsoft’s content filtering.
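The routing software described in the complaint matches the general shape of a reverse proxy: a server that sits between a client and an API, attaching credentials and forwarding traffic. The hypothetical sketch below shows only that generic pattern, the same one legitimate API gateways use for centralized key management, not the defendants’ actual code, which is not public.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import requests

# Placeholder values for illustration only.
UPSTREAM = "https://example-resource.openai.azure.com"  # upstream API endpoint
API_KEY = "<server-held-key>"                           # credential the proxy attaches

class ProxyHandler(BaseHTTPRequestHandler):
    """A bare-bones reverse proxy: receive a request, forward it upstream
    with a server-held credential, and relay the response back."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Forward to the upstream API; the end user never handles the key.
        upstream = requests.post(
            UPSTREAM + self.path,
            data=body,
            headers={"api-key": API_KEY, "Content-Type": "application/json"},
            timeout=60,
        )
        self.send_response(upstream.status_code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(upstream.content)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```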
A repository containing the de3u project’s code, hosted on GitHub — a Microsoft-owned company — is no longer accessible at press time.
“These features, combined with the defendants’ illegal programmatic API access to the Azure OpenAI service, enabled the defendants to reverse engineer means of circumventing Microsoft’s content and abuse measures,” the complaint states. “Defendants knowingly and intentionally accessed the Azure OpenAI Service protected computers without authorization, and as a result of such conduct caused damages and losses.”
In a blog post published Friday, Microsoft says the court has authorized it to seize a website “instrumental” to the defendants’ operation, which will allow the company to gather evidence, decipher how the defendants’ alleged services are monetized, and disrupt any additional technical infrastructure it finds.
Microsoft also says it has “implemented countermeasures,” which the company did not specify, and “added additional security mitigations” to Azure OpenAI Service targeting the activity it observed.