During a Reddit Ask Me Anything session on Friday, OpenAI CEO Sam Altman and key members of the GPT-5 team fielded questions about the new model and requests to bring back the previous model, GPT-4o.
Users also asked Altman about the most embarrassing, and perhaps most entertaining, snafu of the presentation: the charts.
One of the new features GPT-5 rolled out is a real-time router that decides which model to use for a particular prompt, either responding quickly or taking more time to "think" through an answer.
But numerous people in the AMA on the r/ChatGPT subreddit complained that GPT-5 didn't work as well for them as 4o did. Altman said the reason GPT-5 seemed "dumber" was that the router wasn't working properly when it launched on Thursday.
"GPT-5 will seem smarter starting today. Yesterday, we had a sev and the autoswitcher was out of commission for a chunk of the day, and the result was GPT-5 seemed way dumber," he wrote.
However, people in the AMA lobbied so hard to bring back 4o for Plus subscribers that Altman promised to at least look into it. "We are looking into letting users continue to use 4o. We are trying to gather more data on the tradeoffs," he wrote.
Altman also promised, "We are going to double rate limits for Plus users as we finish rollout." That should give people a chance to experiment with and learn the new model, adapting it to their use cases without worrying about exhausting their monthly allotment.
Predictably, he was also asked about the wildly inaccurate chart the team presented during the livestream, which quickly became the butt of many "chart crime" jokes. The chart showed a lower benchmark score with a much taller bar.

Altman did not answer questions about the chart during the AMA, but on Thursday he called it a "mega chart screwup" on X. Others noticed that the chart in the published blog post was correct.
But the damage was done. Jokes abounded about using GPT-5 to make charts for corporate presentations. GPT-5 reviewer Simon Willison, who had early access and generally liked the model's performance, also noted that turning this data into a chart was "a good example of a GPT-5 failure."
Still, Altman promised fixes for the issues that seemed to concern people most. He ended the AMA with a promise that "we will continue to work to get things stable and will keep listening to feedback."