Should You Rely on ChatGPT for Information? OpenAI Executive Shares Insights

Key Takeaways
- GPT-5 is OpenAI's newest and most capable language model.
- OpenAI recommends treating ChatGPT as a second opinion, not a primary source of facts.
- Hallucinations remain a challenge, with roughly a 10 percent error rate.
- OpenAI is working to improve accuracy through search integration.
- The company has ambitious plans, including developing its own browser.
New Delhi, Aug 17 (NationPress) OpenAI's newest language model, GPT-5, boasts enhanced power and accuracy compared to earlier versions. However, the company cautions users against considering ChatGPT as their primary information source.
Nick Turley, Head of ChatGPT, emphasized that the AI chatbot is best utilized as a “second opinion”, as it still has a tendency to make errors, despite significant advancements.
In a conversation with The Verge, Turley acknowledged that GPT-5 continues to grapple with the issue of hallucinations, where the system may generate plausible-sounding but factually inaccurate information.
While OpenAI claims to have substantially reduced these inaccuracies, the model still produces incorrect responses approximately 10 percent of the time.
Turley highlighted the challenge of achieving 100 percent reliability, stating, “Until we are provably more reliable than a human expert across all domains, we’ll continue to advise users to double-check the answers.”
He further noted, “I believe people will continue to use ChatGPT as a second opinion, rather than their primary source of facts.”
Large language models like GPT-5 are designed to predict words based on patterns derived from vast datasets.
While this capability enables them to generate natural-sounding responses, it can also lead them to state false information with confidence, particularly on less familiar subjects.
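For readers curious what "predicting words from patterns" looks like in practice, the minimal sketch below uses a small open model (GPT-2, via the Hugging Face transformers library) as an illustrative stand-in; the model choice and prompt are assumptions for demonstration only and do not reflect how GPT-5 itself is built or served.

```python
# Minimal sketch (illustrative only): how a language model ranks possible next
# tokens purely from statistical patterns in its training data. GPT-2 is used
# here as a small stand-in model; this is not GPT-5 or OpenAI's production stack.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores (logits) for every token in the vocabulary at the next position.
    logits = model(**inputs).logits[0, -1]

# The model has no notion of "true" or "false" -- it only knows which tokens
# tend to follow the prompt, which is why plausible but wrong answers can appear.
top = torch.topk(logits, k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(token_id)), round(float(score), 2))
```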
To mitigate this issue, OpenAI has integrated ChatGPT with search functionalities, allowing users to verify results through external sources.
Turley expressed optimism that the hallucination problem will eventually be resolved but cautioned that a solution is not imminent.
“I’m confident we’ll eventually solve hallucinations, and I’m confident we’re not going to do it in the next quarter,” he remarked.
Meanwhile, OpenAI is broadening its horizons. Reports indicate that the company is working on its own browser, and CEO Sam Altman has even suggested that OpenAI might consider acquiring Google Chrome if it ever goes up for sale.