Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.
OpenAI is winning at the expense of its main rivals.
On Tuesday, the company announced the Stargate Project, a new joint venture with Japanese conglomerate SoftBank, Oracle, and others to build AI infrastructure for OpenAI in the US. Stargate could attract up to $500 billion in funding for AI data centers over the next four years, assuming all goes according to plan.
The news no doubt stung OpenAI competitors like Anthropic and Elon Musk’s xAI, which won’t see comparably large infrastructure investments.
xAI aims to expand its Memphis data center to 1 million GPUs, while Anthropic recently signed an agreement with Amazon Web Services (AWS), Amazon’s cloud computing division, to use and help refine Amazon’s custom AI chips. But it’s hard to imagine any AI company surpassing Stargate, even with Amazon’s vast resources behind it, as in Anthropic’s case.
Granted, Stargate may not live up to its promises. Other technology infrastructure projects in the US haven’t. Recall that in 2017, Taiwanese manufacturer Foxconn pledged to spend $10 billion on a factory near Milwaukee, then failed to follow through.
But Stargate has more backers, and more momentum, behind it than it might seem at this point. The first data center to be funded by the effort has already broken ground in Abilene, Texas. And the companies participating in Stargate have pledged to invest $100 billion from the start.
Indeed, Stargate seems poised to cement OpenAI’s lead in the exploding AI sector. OpenAI has more active users (300 million per week) than any other AI venture, and more customers, too: over 1 million businesses pay for OpenAI’s services.
OpenAI had the first-mover advantage. Now it may have an infrastructure lead as well. Rivals will have to be clever if they hope to compete; brute force won’t be a viable option.
News
Microsoft’s exclusivity is no more: Microsoft used to be the exclusive provider of the data center infrastructure OpenAI uses to train and run its AI models. Not anymore. Now the company merely has a “right of first refusal.”
Perplexity releases an API: AI search engine Perplexity has launched an API service called Sonar, allowing enterprises and developers to build the startup’s generative AI search tools into their apps (a rough sketch of what that might look like follows this roundup).
Artificial intelligence accelerates the “kill chain”: My colleague Max interviewed the Pentagon’s chief digital and AI officer, Radha Plumb. Plumb said the Defense Department is using AI to gain a “significant advantage” in identifying, tracking, and assessing threats.
Benchmarks in question: An organization that develops math benchmarks for AI didn’t disclose that it had received funding from OpenAI until relatively recently, drawing allegations of impropriety from some in the AI community.
DeepSeek’s new model: Chinese artificial intelligence lab DeepSeek has released an open source version of DeepSeek-R1, its so-called reasoning model, which it claims performs as well as OpenAI’s o1 on several AI benchmarks.
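On that Perplexity item: the company’s API has generally followed the OpenAI chat-completions format, so a request to Sonar can be sketched roughly as below. Treat the base URL, model name, and environment variable here as illustrative assumptions rather than confirmed details; check Perplexity’s own documentation for the real values.

    # Hypothetical sketch of calling Perplexity's Sonar API through an
    # OpenAI-compatible client; the endpoint, model name, and env var
    # are assumptions for illustration only.
    import os
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["PERPLEXITY_API_KEY"],  # assumed variable name
        base_url="https://api.perplexity.ai",      # assumed Sonar endpoint
    )

    response = client.chat.completions.create(
        model="sonar",  # assumed model identifier
        messages=[
            {"role": "system", "content": "Answer concisely and cite your sources."},
            {"role": "user", "content": "Summarize this week's biggest AI infrastructure news."},
        ],
    )

    # Print the generated, search-grounded answer.
    print(response.choices[0].message.content)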
Research paper of the week

Last week, Microsoft highlighted a pair of AI-powered tools, MatterGen and MatterSim, which it claims can help design advanced materials.
MatterGen predicts possible materials with unique properties, grounded in scientific principles. As described in a paper published in the journal Nature, MatterGen generates thousands of candidate materials that satisfy “user-defined constraints,” proposing new materials that meet very specific needs.
As for MatterSim, it predicts which of MatterGen’s proposed materials are actually viable.
Microsoft says a team at the Shenzhen Institute of Advanced Technology was able to use MatterGen to synthesize a new material. The material was not perfect. But Microsoft has released MatterGen’s source code, and the company says it plans to work with other outside collaborators to further develop the technology.
Model of the week
Google has released a new version of its experimental “reasoning” model, Gemini 2.0 Flash Thinking Experimental. The company claims it outperforms the original on math, science and multimodal reasoning benchmarks.
Reasoning models like Gemini 2.0 Flash Thinking Experimental effectively check their own work, which helps them avoid some of the pitfalls that normally trip up models. The trade-off is that reasoning models take longer, typically seconds to minutes longer, to arrive at a solution than a typical “non-reasoning” model.
The new Gemini 2.0 Flash Thinking also has a 1-million-token context window, meaning it can analyze long documents such as research studies and policy papers. One million tokens is equivalent to about 750,000 words, or 10 average-length books.
Grab bag

An AI project called GameFactory shows that it is possible to “generate” interactive simulations by training a model on Minecraft videos and then extending that model to different domains.
The researchers behind GameFactory, most of whom come from the University of Hong Kong and Kuaishou, a Chinese company that is partly state-owned, published several example simulations on the project’s website. The simulations leave something to be desired, but the concept is still interesting: a model that can generate worlds in an endless variety of styles and themes.