The End of Bullshit AI


In every conversation about AI, you hear the same chorus: “Yes, it’s wonderful,” quickly followed by “but it makes things up” and “you can’t really trust it.” Even among the most dedicated AI enthusiasts, these complaints are legion.

During a recent trip to Greece, a friend who uses ChatGPT to help draft public procurement bids summed up the problem perfectly. “I like it, but it never says ‘I don’t know.’ It just makes things up,” she told me. I asked whether the problem might be her prompts. “No,” she replied firmly. “It doesn’t know how to say ‘I don’t know.’ It just comes up with an answer for you.” She shook her head, frustrated at paying for a subscription that did not deliver on its fundamental promise. For her, a chatbot that could not admit when it was wrong was proof that it could not be trusted.

It seems OpenAI has been listening to my friend and to millions of other users. The company, led by Sam Altman, recently launched its brand-new model, GPT-5, and while it is a major improvement over its predecessor, its most important new feature may be humility.

As expected, OpenAI’s blog post heaps praise on its new creation: “our smartest, fastest, most useful model yet, with built-in thinking that puts expert-level intelligence in everyone’s hands.” And yes, GPT-5 sets new benchmark records in math, coding, writing and health.

But what is really remarkable is that GPT-5 is being presented as humble. This may be the most significant update of all. It has finally learned to say the three words that most AIs, and many people, struggle with: “I don’t know.” For an artificial intelligence often sold on its godlike intellect, acknowledging ignorance is a profound lesson in humility.

GPT-5 “more honestly communicates its actions and capabilities to the user, especially for tasks that are impossible, underspecified, or missing key tools,” OpenAI claims, acknowledging that past versions of ChatGPT “could learn to lie about successfully completing a task or be overly confident about an uncertain answer.”

By making its AI humble, OpenAI has fundamentally changed how we interact with it. The company claims that GPT-5 has been trained to be more honest, less likely to agree with you just to be agreeable, and far more cautious about bluffing its way through a complex problem. That makes it the first consumer AI explicitly designed to cut through bullshit, especially its own.

Less flattery, more friction

Earlier this year, many ChatGPT users noticed that the AI had become strangely sycophantic. No matter what you asked, GPT-4o would shower you with flattery, emojis and enthusiastic approval. It was less a tool and more a life coach, an agreeable lapdog programmed for positivity.

That ends with GPT-5. OpenAI says the model was specifically trained out of this people-pleasing behavior. To do so, engineers trained it on examples of what to avoid, essentially teaching it not to be a sycophant. In the company’s tests, overly flattering answers dropped from 14.5% of responses to less than 6%. The result? GPT-5 is more direct, sometimes even cold. But OpenAI insists that, in exchange, its model is right more often.

“Overall, GPT-5 is less effusive, uses fewer unnecessary emojis, and is more subtle and thoughtful in follow-ups compared to GPT-4o,” OpenAI states. “It should feel less like ‘talking to AI’ and more like chatting with a helpful friend with PhD-level intelligence.”

Hailing what he calls “another milestone in the AI race,” Alon Yamin, co-founder and CEO of the AI content verification company Copyleaks, believes that a more humble GPT-5 is good “for society’s relationship with truth, creativity and trust.”

“We are entering an era where distinguishing fact from fabrication, and automation from authority, will be more difficult and more essential than ever,” Yamin said in a statement. “This moment requires not just technological progress, but the continued development of thoughtful, transparent safeguards around how AI is used.”

OpenAI says GPT-5 is significantly less likely to “hallucinate,” that is, to lie with confidence. With web search enabled, the company says GPT-5’s answers are 45% less likely to contain a factual error than GPT-4o’s. When using its advanced “thinking” mode, that number jumps to an 80% reduction in factual errors.

Crucially, GPT-5 now avoids inventing answers to impossible questions, something previous models did with unshakable confidence. It knows when to stop. It knows its limits.

My Greek friend drafting public procurement bids will certainly be pleased. Others, however, may find themselves frustrated by an AI that no longer tells them what they want to hear. But it is precisely this honesty that could finally make it a tool we can start relying on, especially in sensitive fields such as health, law and science.


