AI may be better than humans at reading cancer scans. Could bots have better bedside manner too?

While teachers worry about students cheating with ChatGPT, doctors might be playing their own bluff with AI. In the largest study of its kind, my team surveyed more than 1,000 GPs working in Britain, and 20 per cent admitted using ChatGPT and similar tools to “assist with clinical tasks”. Twenty-nine per cent used AI to generate patient documentation after appointments, 28 per cent for diagnosis and 25 per cent to suggest treatment options. Other research I have conducted in the US also suggests doctors may be turning to “generative AI” in their droves.
Exemplified by OpenAI’s GPT-3.5 and its later versions, GPT-4 and GPT-4o, as well as Google’s Bard and Microsoft’s Bing AI, these widely available consumer tools work differently from traditional search engines such as Google. Trained on vast swathes of data that drive their responses, these bots are often dubbed “large language models” (LLMs). Instead of typing requests for information online and receiving a list of internet pages in response, this new generation of computer interface simulates staggeringly fluent “conversations”.
There’s little reason to think UK doctors are unique in this. Nor are doctors and students alone in adopting ChatGPT. In November 2023, a survey by Deloitte found that a third of people in Ireland had used the tool, with 10 per cent incorporating it into their work.
So should the idea of medics using AI for diagnosis or to suggest treatment options be cause for concern? It’s easy to see why it might make us uneasy – but I would argue that embracing this shift in the longer term will be a win for patients.
Already a growing body of research shows that generative AI has considerable potential in gathering patient information and helping doctors with diagnostics. For example, early research shows that GPT-4 – which isn’t even designed for medical use – can produce accurate lists of patient diagnoses, even in complex cases. Medical-grade chatbots can do even better. In the United States, 1 in 6 patients may no longer be turning to Dr Google but to Dr Bot instead. Health chatbots have been shown to offer accurate, accessible information to patients with cancer who are concerned about their health. AI can also read cancer scans quicker, more consistently and more accurately than the average human doctor, with particular promise for patients in underserved areas of the world.
Most people assume that doctors will always be needed for communication. But a clinical conversation system called AMIE can even take patient histories in greater detail and with better bedside manner than human physicians. There’s even some evidence to suggest that cold, hard machines may surpass clinicians in compassion. Maintaining high levels of empathy in care can be tough, and doctors often face burnout and compassion fatigue. AI can assist with that. A study using the TalkLife mental health platform showed that responses co-written with a chatbot named “Hailey” were perceived as more empathetic than human-only responses. And a study comparing ChatGPT’s responses with those of doctors on 195 real-world health questions from Reddit’s AskDocs found that the bot’s replies were rated 10 times more empathetic.
Much is made of the risks of AI embedding “algorithmic biases” in its responses, and rightly so: some patients could be treated unfairly. Less discussed, however, is whether AI is actually any worse than the human doctors we already have. The negative focus on AI bots also misses important new opportunities to identify and overcome discrimination in healthcare. AI can shine a light on health disparities and prejudice too. For example, by crunching through vast electronic health records, computers can reveal the ways doctors tend to stigmatise some patient populations more than others. These subtle, buried or unconscious biases are found in the notes that physicians write after our visits. It’s also likely that de-biasing bots will prove easier than debugging humans of bigotry.
AI can also help doctors with the dreaded burdens of administration – one Swedish study found ChatGPT was 10 times faster than doctors at generating clinical documentation. Medical-grade tools could be more efficient still. Across the United States, health systems are swiftly bringing LLM-powered tools that uphold stringent privacy standards into everyday clinical practice. A prominent example is “ambient listening”, where an AI system listens in on patient-physician conversations and drafts the clinical documentation. This technology is already in use in many clinics, and early results are promising – it boosts satisfaction for both patients and providers, without raising serious safety concerns.
In Ireland, digital healthcare is a key component of the Sláintecare reforms aimed at modernising the country’s health services. As clinical records become increasingly digitised and patients gain access to them, doctors and patients will be better equipped to leverage AI. There is an opportunity for Ireland to become what tech bros dub the “fast second”: following quickly in the footsteps of health systems that are further ahead, learning from them and adopting better approaches.
Integrating AI into patient care offers real benefits. If using chatbots feels like “cheating” on the mission to offer the best possible healthcare to patients, it might be time to rethink what “cheating” means.
Dr Charlotte Blease is associate professor of health informatics at Uppsala University, Sweden, and research affiliate, digital psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School. She is the author of the forthcoming book, Dr Bot: Why Human Doctors are Failing and How AI Can Save Lives (Yale University Press, 2025).
