
It's becoming a familiar pattern: every few months, an AI lab in China that most people in the United States have never heard of releases an AI model that upends conventional wisdom about the cost of training and running AI.
In January it was DeepSeek's R1, which took the world by storm. Then in March it was a startup called Butterfly Effect (technically based in Singapore, but with most of its team in China) and its "agentic AI" model Manus, which briefly grabbed the spotlight. This week it is a Shanghai-based startup, MiniMax, previously best known for AI-generated video, that is the talk of the AI industry thanks to its M1 model, which it debuted on June 16.
According to data MiniMax published, its M1 is competitive with top models from OpenAI, Anthropic, and DeepSeek on intelligence and creativity, but far cheaper to train and run.
The company says it spent just $534,700 to rent the data center resources required to train M1. That is nearly 200 times less than industry experts' estimates of the training costs for OpenAI's GPT-4o, which they say likely exceeded $100 million (OpenAI has not disclosed its training costs).
If accurate (and MiniMax's claims have yet to be independently verified), that figure will likely cause agita for shareholders of blue-chip companies such as Microsoft and Google. The $100 million estimate comes from a report by the tech publication The Information, which based its analysis on OpenAI financial documents shared with investors.
If customers can get the same performance as OpenAI's models by using MiniMax's open-source AI models, it will likely sap demand for OpenAI's products. OpenAI has already cut the pricing of its most capable models to maintain market share. It recently lowered the cost of using its o3 reasoning model by 80%. And that was before MiniMax's M1 release.
MiniMax's reported results also mean companies may not have to spend as much on the computing costs of running these models, which could cut into the profits of cloud providers such as Amazon's AWS, Microsoft Azure, and Google's Google Cloud Platform. And it could mean less demand for the Nvidia chips that are the workhorses of AI data centers.
MiniMax's impact could ultimately be similar to what happened when Hangzhou-based DeepSeek released its R1 LLM earlier this year. DeepSeek claimed R1 was comparable to top models at a fraction of the training cost. DeepSeek's claim sank Nvidia's shares by 17% in a single day, wiping out around $600 billion in market value. So far, that hasn't happened with the MiniMax news. Nvidia shares have fallen less than 0.5% this week, but that could change if MiniMax's M1 sees adoption as widespread as DeepSeek's R1 model did.
MiniMax's claims about M1 have not yet been verified
The difference may be that independent developers have not yet confirmed MiniMax's claims about M1. In the case of DeepSeek's R1, developers quickly found that the model's performance really was as good as the company said. With Butterfly Effect's Manus, by contrast, the initial buzz faded quickly after developers who tested it found the model prone to errors and unable to match what the company had demonstrated. The coming days will be critical in determining whether developers embrace M1 or greet it with a shrug.
MiniMax is backed by China's largest technology companies, including Tencent and Alibaba. It is unclear how many people work at the company, and there is little public information about its CEO, Yan Junjie. In addition to MiniMax Chat, the company also offers the video generator Hailuo AI and the avatar app Talkie. Across its products, MiniMax claims tens of millions of users in 200 countries and regions, as well as 50,000 corporate customers, some of whom were drawn to Hailuo for its ability to generate video.
Of course, many experts questioned the accuracy of DeepSeek's claims about the number and type of computer chips used to create R1, and similar pushback could confront MiniMax. "What they did is they got 50 or 60,000 Nvidia chips from the black market somewhere. This is a state-funded company," Shark Tank investor Kevin O'Leary said in a CBS interview about DeepSeek.
Geopolitical considerations weigh on Chinese AI models
Geopolitical and national security concerns have also dampened some Western companies' enthusiasm for using Chinese AI models. O'Leary, for example, claimed that DeepSeek's R1 may allow Chinese officials to spy on U.S. users.
And all models produced in China must comply with censorship rules mandated by the Chinese government, which means they can produce answers to some questions that align more closely with the Chinese Communist Party's positions than with generally accepted facts. A bipartisan report in April from the House Select Committee on the Chinese Communist Party found that DeepSeek's answers are "manipulated to suppress content related to democracy, Taiwan, Hong Kong, and human rights." The same goes for MiniMax. When Fortune asked MiniMax's Talkie whether the Uyghurs in Xinjiang were subjected to forced labor, it replied, "No, I don't think that is true," and asked to change the subject.
But few things win over customers like free. For now, those who want to try MiniMax's M1 can do so free of charge through an API that MiniMax runs. Developers can also download the entire model for free and run it on their own computing resources (although in that case they have to pay for that compute time themselves).
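For readers curious what "download the model and run it yourself" looks like in practice, here is a minimal Python sketch using the Hugging Face transformers library. The repository identifier and prompt below are illustrative assumptions (the actual model name and hardware requirements are documented by MiniMax), and a model of this size would realistically need a multi-GPU server rather than a laptop.

```python
# Minimal sketch of running an open-weight chat model locally with Hugging Face
# transformers. The repo id "MiniMaxAI/MiniMax-M1" is an assumed placeholder;
# check MiniMax's official model page for the real identifier and hardware notes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-M1"  # assumed identifier, for illustration only

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread the weights across whatever GPUs are available
    torch_dtype="auto",  # keep the checkpoint's native precision
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Summarize in one paragraph why training cost matters for AI adoption."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The compute bill mentioned above comes in at this step: running the model yourself means paying for the GPUs it occupies, rather than paying MiniMax per request.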
The other big selling point for M1 is that it has a "context window" of 1 million tokens. A token is a chunk of data corresponding to roughly three-quarters of a word of text, and a context window is the limit on how much data the model can draw on to generate a single answer. One million tokens is about seven or eight books, or roughly an hour of video content. M1's 1-million-token context window means it can take in more data than some of the top models: OpenAI's o3 and Anthropic's Claude 4 Opus, for example, both have context windows of only about 200,000 tokens. However, Google's Gemini 2.5 Pro also has a 1-million-token context window, and some of Meta's open-source Llama models have context windows of up to 10 million tokens.
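To make that token arithmetic concrete, here is a quick back-of-the-envelope sketch; the 0.75 words-per-token ratio is the rough rule of thumb cited above, and the average book length is an assumption chosen for illustration.

```python
# Rough check of the "1 million tokens is about seven or eight books" claim.
WORDS_PER_TOKEN = 0.75          # rule of thumb: a token is ~3/4 of a word
CONTEXT_WINDOW_TOKENS = 1_000_000
AVG_BOOK_WORDS = 100_000        # assumed length of a typical book

words_that_fit = CONTEXT_WINDOW_TOKENS * WORDS_PER_TOKEN   # ~750,000 words
books_that_fit = words_that_fit / AVG_BOOK_WORDS            # ~7.5 books

print(f"~{words_that_fit:,.0f} words, or about {books_that_fit:.1f} typical books")
```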
"MiniMax M1 is crazy!" wrote one X user, who claimed to have built a Netflix clone, complete with movie trailers, a live website, and "perfect responsive design," in 60 seconds with "zero" coding knowledge.