Top News

37-year-old father trusted ChatGPT about his trouble swallowing; months later, doctors revealed a chilling, life-threatening diagnosis
ET Online | August 28, 2025 10:40 PM CST

Synopsis

Warren Tierney, a father from Ireland, used ChatGPT for medical advice when he developed trouble swallowing, and the AI told him cancer was unlikely. Months later, he was diagnosed with stage-four oesophageal cancer, and he believes the chatbot delayed his diagnosis. OpenAI warns against using its chatbot for medical purposes. Tierney's family is raising funds for his treatment, and he is cautioning others against relying on AI for health advice, in a case that highlights the risks of using AI for medical decisions.

Warren Tierney, a father from Ireland, sought medical advice from ChatGPT for difficulty swallowing and was reassured that cancer was unlikely. Months later, he was diagnosed with stage-four oesophageal adenocarcinoma, leading him to believe the AI delayed him in seeking proper medical attention. (Representational Image: iStock)
In 1995, Bill Gates tried explaining the internet on late-night television, and people laughed at the idea of it being revolutionary. Fast forward to today, and artificial intelligence is at a similar moment: hyped, debated, and widely tested in everyday life. But for one father in Ireland, relying on AI for medical advice brought a chilling reality check.

As reported by the Mirror, 37-year-old Warren Tierney from Killarney, County Kerry, turned to ChatGPT when he developed difficulty swallowing earlier this year. The AI chatbot reassured him that cancer was “highly unlikely.” Months later, Tierney received a devastating diagnosis: stage-four adenocarcinoma of the oesophagus.

From reassurance to reality

Tierney, a father of two and former psychologist, admitted he delayed visiting a doctor because ChatGPT seemed convincing. “I think it ended up really being a real problem, because ChatGPT probably delayed me getting serious attention,” he told Mirror. “It sounded great and had all these great ideas. But ultimately I take full ownership of what has happened.”


Initially, the AI appeared to provide comfort. At one point, extracts seen by the Daily Mail show ChatGPT telling him: “Nothing you’ve described strongly points to cancer.” In another conversation, the chatbot added: “I will walk with you through every result that comes. If this is cancer — we’ll face it. If it’s not — we’ll breathe again.”

That reassurance, Tierney says, cost him crucial months.

The official warning from OpenAI

OpenAI has repeatedly stressed that its chatbot is not designed for medical use. A statement shared with Mirror clarified: “Our Services are not intended for use in the diagnosis or treatment of any health condition.” The guidelines also caution users: “You should not rely on output from our services as a sole source of truth or factual information, or as a substitute for professional advice.”

ChatGPT itself reportedly told media outlets that it is “not a substitute for professional advice.”

A family facing uphill odds

The prognosis for oesophageal adenocarcinoma is grim, with five-year survival rates of between five and ten percent. Despite the statistics, Tierney is determined to fight. His wife Evelyn has set up a GoFundMe page to raise money for treatment in Germany or India, as he may need to undergo complex surgery abroad.

Speaking candidly, Tierney warned others not to make the same mistake he did: “I’m a living example of it now and I’m in big trouble because I maybe relied on it too much. Or maybe I just felt that the reassurance that it was giving me was more than likely right, when unfortunately it wasn’t.”

Tierney’s case underscores both the potential and the peril of bringing AI into personal health decisions. Just as the internet once seemed trivial before reshaping the world, artificial intelligence is already working its way into daily life. But unlike looking up baseball scores or listening to radio shows online, health decisions leave no room for error.

Not an isolated case

Tierney’s experience is not unique. Earlier this month, a case published in the Annals of Internal Medicine described how a 60-year-old man in the United States ended up hospitalised after following ChatGPT’s advice to replace table salt with sodium bromide, a chemical linked to toxicity. The misguided swap led to hallucinations, paranoia, and a three-week hospital stay before doctors confirmed bromism, a condition now rarely seen in modern medicine.

OpenAI tightens guardrails

Such cases have prompted OpenAI to strengthen its safeguards. The company recently announced new restrictions to prevent ChatGPT from offering emotional counselling or acting as a virtual therapist, instead directing users to professional resources. Researchers caution that while AI can empower people with information, it lacks the context, nuance, and accountability required in critical health decisions.
