OpenAI launches two ‘open’ AI reasoning models


OpenAI announced on Tuesday the launch of two open-weight AI reasoning models with capabilities similar to its o-series. Both are freely available for download from the online developer platform Hugging Face, the company said, describing the models as “state of the art” when measured across several benchmarks for comparing open models.

The models come in two sizes: a larger and more capable GPT-OSS-120B model, which can run on a single Nvidia GPU, and a lighter-weight GPT-OSS-20B model that can run on a consumer laptop with 16GB of memory.

The launch marks OpenAI’s first “open” language model since GPT-2, which was released more than five years ago.

In a briefing, OpenAI said its open models will be capable of sending complex queries to AI models in the cloud, as TechCrunch previously reported. That means if OpenAI’s open model is not capable of a certain task, such as processing an image, developers can connect the open model to one of the company’s more capable closed models.
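The local-first, cloud-fallback pattern described above can be sketched in a few lines. Everything here is an illustrative assumption: the model names, the `can_handle` heuristic, and the task format are hypothetical stand-ins, not OpenAI's actual API.

```python
# Hypothetical sketch of "answer locally, escalate to the cloud" routing.
# Model names and the can_handle() heuristic are illustrative assumptions.

def can_handle(task: dict) -> bool:
    """Assume a local open-weight model only handles text-only tasks."""
    return task.get("modality", "text") == "text"

def run_local(task: dict) -> str:
    # Stand-in for inference on a locally hosted open-weight model.
    return f"[gpt-oss-20b] answered: {task['prompt']}"

def run_cloud(task: dict) -> str:
    # Stand-in for a call to a more capable closed model in the cloud.
    return f"[closed cloud model] answered: {task['prompt']}"

def route(task: dict) -> str:
    """Send the task to the local open model when it can cope,
    otherwise escalate to the closed model."""
    return run_local(task) if can_handle(task) else run_cloud(task)
```

For example, a plain text prompt would stay on the local model, while a task tagged `"modality": "image"` would be escalated to the cloud.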

While OpenAI open sourced AI models in its early days, the company has generally favored a proprietary, closed-source development approach. That strategy helped OpenAI build a large business selling access to its AI models via an API to enterprises and developers.

However, CEO Sam Altman said in January that he believed OpenAI had been “on the wrong side of history” when it comes to open sourcing its technologies. The company today faces growing pressure from Chinese AI labs, including DeepSeek, Alibaba’s Qwen, and Moonshot AI, which have developed several of the world’s most capable and popular open models. (While Meta previously dominated the open AI space, the company’s Llama models have fallen behind in the last year.)

In July, the Trump administration also urged U.S. AI developers to open source more technology to promote global adoption of AI aligned with American values.


With the release of GPT-OSS, OpenAI hopes to curry favor with developers and the Trump administration alike, both of which have watched Chinese AI labs rise to prominence in the open source space.

“Going back to when we started in 2015, OpenAI’s mission is to ensure AGI that benefits all of humanity,” said OpenAI CEO Sam Altman in a statement shared with TechCrunch. “To that end, we are excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for broad benefit.”

OpenAI CEO Sam Altman. Image Credits: Tomohiro Ohsumi / Getty Images

How the models performed

OpenAI aimed to make its open model a leader among other open-weight AI models, and the company claims to have done exactly that.

On Codeforces (with tools), a competitive coding test, GPT-OSS-120B and GPT-OSS-20B score 2622 and 2516, respectively, exceeding DeepSeek’s R1 while underperforming o3 and o4-mini.

OpenAI’s open model performance on Codeforces (Credit: OpenAI).

On Humanity’s Last Exam (HLE), a challenging test of crowdsourced questions across a variety of subjects (with tools), GPT-OSS-120B and GPT-OSS-20B score 19% and 17.3%, respectively. Again, this underperforms o3 but exceeds the leading open models from DeepSeek and Qwen.

OpenAI’s open model performance on HLE (Credit: OpenAI).

Notably, OpenAI’s open models hallucinate significantly more than its latest AI reasoning models, o3 and o4-mini.

Hallucinations have grown more severe in OpenAI’s latest AI models, and the company has previously said it doesn’t quite understand why. In a white paper, OpenAI says this is “expected, as smaller models have less world knowledge than larger frontier models and tend to hallucinate more.”

OpenAI found that GPT-OSS-120B and GPT-OSS-20B hallucinated in response to 49% and 53% of questions on PersonQA, the company’s in-house benchmark for measuring the accuracy of a model’s knowledge about people. That’s more than triple the hallucination rate of OpenAI’s o1 model, which scored 16%, and higher than its o4-mini model, which scored 36%.

Training the new models

OpenAI says its open models were trained with processes similar to those used for its proprietary models. The company says each open model leverages mixture-of-experts (MoE) to activate fewer parameters for any given query, making it run more efficiently. For GPT-OSS-120B, which has 117 billion total parameters, OpenAI says the model only activates 5.1 billion parameters per token.
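The “fewer parameters per token” idea can be made concrete with a toy routing sketch: a gating function scores a set of experts, only the top-k are activated for each token, so the active parameter count is a small fraction of the total. The expert count, parameter sizes, and scores below are illustrative assumptions, not GPT-OSS’s real configuration.

```python
# Toy mixture-of-experts routing: only the top-k scoring experts run per
# token. All numbers here are made up for illustration.
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_token(gate_scores, k):
    """Return the indices of the top-k experts for one token."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return ranked[:k]

n_experts = 8
params_per_expert = 1_000_000  # pretend each expert MLP has 1M parameters

# Fake gate logits for one token; in a real model these come from a
# learned router network.
gate_logits = [0.1, 2.3, -0.5, 1.7, 0.0, 0.9, -1.2, 0.4]

active = route_token(softmax(gate_logits), k=2)
active_params = len(active) * params_per_expert
total_params = n_experts * params_per_expert
print(f"active experts: {active}")
print(f"active params: {active_params:,} of {total_params:,} total")
```

With k=2 of 8 experts, only a quarter of the expert parameters run per token, which is the same shape of saving OpenAI describes (5.1B active out of 117B total).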

The company also says its open models were trained using high-compute reinforcement learning (RL), a post-training process that teaches AI models right from wrong in simulated environments using large clusters of Nvidia GPUs. This was also used to train OpenAI’s o-series of models, and the open models have a similar chain-of-thought process, in which they take additional time and computational resources to work through their answers.

As a result of the post-training process, OpenAI says its open AI models excel at powering AI agents and can call tools such as web search or Python code execution as part of their chain-of-thought process. However, OpenAI says its open models are text-only, meaning they won’t be able to process or generate images and audio like the company’s other models.
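At its core, the tool calling mentioned above is a dispatch loop: the model emits a structured request naming a tool, the host runs the matching function, and the result is fed back into the model’s reasoning. The registry and call format below are hypothetical stand-ins for illustration; real agent frameworks define their own schemas.

```python
# Minimal sketch of a tool-call dispatcher. The tool names, stub
# implementations, and call format are assumptions for illustration.

def web_search(query: str) -> str:
    # Stand-in for a real search backend.
    return f"(stub) top result for {query!r}"

def run_python(code: str) -> str:
    # Demo only: never eval untrusted, model-generated code in production.
    return str(eval(code))

TOOLS = {"web_search": web_search, "run_python": run_python}

def handle_tool_call(call: dict) -> str:
    """Dispatch a model-emitted tool call to the matching function.

    `call` is assumed to look like {"name": ..., "argument": ...}.
    """
    fn = TOOLS[call["name"]]
    return fn(call["argument"])
```

For example, a model emitting `{"name": "run_python", "argument": "2 + 2"}` would get back the string `"4"` to continue reasoning with.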

OpenAI is releasing GPT-OSS-120B and GPT-OSS-20B under the Apache 2.0 license, which is generally considered one of the most permissive. The license allows enterprises to monetize OpenAI’s open models without having to pay or obtain permission from the company.

However, unlike fully open source offerings from AI labs such as AI2, OpenAI says it will not release the training data used to create its open models. The decision is not surprising given that several active lawsuits against AI model providers, including OpenAI, allege that these companies improperly trained their AI models on copyrighted works.

OpenAI delayed the release of its open models several times in recent months, in part to address safety concerns. Beyond the company’s typical safety policies, OpenAI says in a white paper that it also investigated whether bad actors could fine-tune its GPT-OSS models to be more helpful in cyberattacks or in creating biological or chemical weapons.

After testing by OpenAI and third-party evaluators, the company says GPT-OSS may marginally increase biological capabilities. However, it did not find evidence that the open models could reach its “high capability” danger threshold in these domains, even after fine-tuning.

While OpenAI’s open model appears to be state of the art among open models, developers are eagerly awaiting the release of DeepSeek R2, the lab’s next AI reasoning model, as well as a new open model from Meta’s new superintelligence lab.


