In an era of rapid technological advancement, artificial intelligence is transforming many aspects of healthcare. However, its integration into clinical practice raises complex ethical questions about its role and limitations. In a recent essay and interview, Sarah C. Hull, MD, MBE, a cardiologist and clinical ethicist at Yale School of Medicine, underscored the irreplaceable value of human moral agency in medicine. Her reflections, rooted in personal and professional experience, illuminate why she believes AI cannot—and should not—replace clinicians in tasks requiring moral judgment and patient care.
AI as a Tool, Not a Decision-Maker
Dr. Hull’s argument centers on a key distinction: while AI excels at technical tasks, it lacks moral agency, a fundamental requirement for ethical decision-making in medicine. AI can support diagnostic interpretation or aid in decision-making by synthesizing large volumes of data, but Hull cautions against its use in roles requiring empathy or fiduciary responsibility. For example, while a human clinician might spend a sleepless night worrying about a nuanced clinical decision, an AI remains indifferent. Medicine, according to Hull, “is as much a moral endeavor as a technical one.”
Hull also voices concerns about tasks such as delivering bad news to patients, which inherently require human empathy and emotional intelligence. She emphasizes the need for clear boundaries: no task that requires moral agency should be delegated to an AI.
Why It Matters
The stakes for patients and clinicians are high. Over-reliance on AI could undermine trust in the patient-provider relationship, a cornerstone of effective healthcare. Hull points out that while AI tools might streamline administrative burdens or enhance diagnostic accuracy, they should never supplant the relational aspects of care. Even if AI offers efficiency, she argues, speed cannot come at the cost of quality or compassion.
Transparency in AI deployment is another key concern. Patients interacting with AI systems should be informed when they are not communicating with a human, Hull emphasizes. Deception—whether intentional or not—risks eroding trust and ethical integrity.
Toward an Ethical AI Future
While Hull remains cautious, she sees potential for AI to complement human care when used judiciously. For instance, AI might help address social determinants of health by supporting patients in lifestyle interventions or by improving access to care for underserved populations. However, she warns against creating a two-tiered healthcare system in which vulnerable patients are disproportionately managed by AI rather than by human clinicians.
In the next decade, Hull hopes AI will enhance the precision and accessibility of healthcare while respecting the moral complexities of patient care. Achieving this will require engaging diverse stakeholders and prioritizing patient perspectives.