
[Photo by OpenAI’s DALL·E]
AI in Healthcare: Why We Still Need a Human at the Controls
AI is rolling full steam into healthcare. From chatbots helping patients with depression and anxiety, to algorithms predicting cancer years in advance, to new antibiotics discovered by machine learning—the breakthroughs sound like science fiction.
But here’s the question we should be asking: Is AI really ready to drive?
Where AI Is Already Making an Impact
One of the most striking examples came from Dartmouth’s Geisel School of Medicine, where researchers conducted the first randomized controlled trial of a generative AI therapy chatbot. The chatbot helped reduce depression and anxiety symptoms in ways comparable to human therapists. Participants even reported a “therapeutic alliance” with it—saying it felt supportive and engaging, almost like a real counselor. Read more on Geisel’s AI research.
And Geisel isn’t alone. Apps like Wysa and Youper are already in the hands of millions of people worldwide, offering CBT-based conversations on demand. Wysa has even earned an FDA breakthrough device designation for its use in chronic pain management—an early signal that regulators are beginning to take AI mental health tools seriously. See Prevention’s overview on AI chatbots.
Beyond mental health, AI is pushing boundaries in diagnostics and discovery:
- At MIT’s Jameel Clinic, researchers used AI to discover entirely new antibiotics—something that had stalled for decades in traditional labs.
- They’ve also developed models like Mirai and Sybil, which can help detect breast and lung cancer earlier than conventional screening methods. Explore the Jameel Clinic’s work.
From therapy to oncology to drug discovery, AI is making its presence felt everywhere.
The Catch: AI Isn’t Always Trustworthy
Here’s the part we can’t gloss over: AI isn’t perfect.
- It hallucinates. When it doesn’t know an answer, it can invent one that sounds convincing but isn’t accurate.
- It lacks true empathy. A chatbot might say the right words, but it doesn’t actually “understand” human emotion.
- It’s under-regulated. Most AI therapy tools haven’t gone through FDA approval. In a crisis, there’s no guarantee a chatbot knows what to do. See AP News on the risks.
In fact, some people have already shared stories of turning to AI when they couldn't access human therapy. While some found it helpful, others pointed out the risks, particularly when the chatbot offered generic or unhelpful advice in moments that called for a human's judgment. Read a Reuters feature on this trend.
Why Humans Still Need to Be in the Driver’s Seat
I like to think of AI as the train engine: powerful, fast, and capable of taking us places we couldn't reach otherwise. But a train with no one at the controls? That's a disaster waiting to happen.
That’s why human oversight is imperative. AI should extend access, speed up discovery, and complement human care—but clinicians, researchers, and policymakers still need to be firmly at the controls.
Healthcare isn’t just about data. It’s about trust, empathy, and judgment—qualities that algorithms don’t have. Without people guiding the process, the risks outweigh the benefits.
My Takeaway
AI in healthcare is full of promise. It will change how we approach mental health, diagnosis, and even drug discovery. But it’s not magic—and it’s not a replacement for real people.
The future of healthcare won’t be AI vs. humans. It will be AI with humans. And the humans still need to be in charge.
