I still remember the day my father told me the story of how, in 1996, he had single-handedly prevented other physicians from performing CPR on a woman whose heart had just stopped. He had actually laid his body on top of hers to ensure they couldn’t try.
I was stunned and, frankly, appalled. As someone who taught medical ethics, I knew that interfering with CPR for a patient who had not given a do-not-resuscitate order wasn’t only illegal but unethical.
Or was it? Several years ago, in preparing to write a biography of my father, an infectious diseases specialist, I began reviewing the personal journals that he had kept for decades. Reading his version of what he did that day, in the context of his larger medical career, led me to rethink some of my own basic assumptions about patient autonomy—a concept at the center of contemporary medical ethics.
I began to wonder if recent legal and ethical reforms in medical practice had perhaps tied the hands of physicians trying to do the right thing for their patients.
My father received his medical education in the 1950s and 1960s—an era when doctors routinely made decisions for their patients. Doing otherwise made no sense. During their training, my dad and his colleagues had practically lived in the hospital, not only becoming masterful clinicians but also devoting themselves entirely to the care of their patients. A quote that once hung at my own medical school exemplified this mind-set. It read: “Who is responsible for this patient and where the hell is he at?”
My dad’s inclination to take charge also reflected a series of recent triumphs in medical care, particularly in his own field of infectious diseases. Armed with new, powerful antibiotics like penicillin, doctors could cure previously fatal diseases like pneumonia, tuberculosis and endocarditis. Polio and other vaccines had dramatically lowered the incidence of other infections.
If consummate patient care meant misleading one’s patients, so be it. This was the era of paternalism. An often-quoted 1961 article reported that 90% of physicians preferred to conceal cancer diagnoses from their patients, in the belief that keeping their hopes up would lead to longer survival.
By the time I entered medical school in 1982, times were changing. A series of scandals involving physicians—most notably the Tuskegee syphilis study but also several others in which physicians had entered subjects into harmful medical experiments without obtaining informed consent—had led the media and public to question the “doctor knows best” ethos.
Women with breast cancer in the 1970s led the charge, rejecting the disfiguring Halsted radical mastectomy in favor of less extensive operations that saved their breasts and worked just as well. By the 1980s, homosexual men with a new, rapidly fatal disease, the Acquired Immune Deficiency Syndrome, mastered the scientific literature and demanded access to new medications. Meanwhile, dying patients and their families took end-of-life decision-making away from doctors, who had formerly decided who would be resuscitated or allowed to die.
Having studied the history of medicine and medical ethics, in addition to being a practicing internist, I applauded these developments.
My father’s episode was exactly the type of case that I used to teach students about death and dying. Doctors, I explained, didn’t do a good enough job of addressing CPR and other aggressive interventions with seriously ill patients. At times, families refused to follow do-not-resuscitate orders in cases where they seemed entirely appropriate. But, I emphasized, if the patient had not left such instructions, doctors were obligated to try to resuscitate.
To my father’s credit, he recorded my response after he told me the story of his patient. I had been “aghast,” he wrote.
Ten years later, I read his six-page account of the incident, which was tucked into one of his journals. The woman, who had severe end-stage vascular disease and arthritis, had been hospitalized for months. Moreover, she had not been out of bed for years. The tissue breakdown of her massively swollen body caused recurrent ulcers and infections, which is why my dad had gotten involved in the case.
But what struck me most was the following entry in his journal: “Every time this woman was moved or even touched, the raw denuded skin would be further abraded, bleed and give her agonizing pain as the sheets or dressings pulled away.” Her life was constant misery.
When the woman’s heart stopped shortly before my father entered her room, he concluded that it was time for her to die. Despite months of interventions, she was getting sicker and more unhappy. And end-stage patients had to die of something. The fact that the woman’s primary doctor had not obtained a do-not-resuscitate order didn’t matter to my dad.
The more I read, the more I realized that my father had been the physician who—congruent with his training as a patient-centered, paternalistic doctor—had known the patient best and assessed her not as a series of medical ailments or abnormal lab results but as a suffering human being.
My father wrote that he couldn’t have done anything different. He had acted “in the name of common, ordinary humanity” and based on his “30+ years as a physician responsible for caring [for] and relieving the pain of my patients who can’t be cured.” It was hard to argue with this logic.
My dad’s journals were filled with similar stories in which he made decisions for patients based both on his superb medical knowledge and what he had learned about their personal lives and their values. He believed it was his duty to do so—the exact opposite of providing a menu of choices without any associated opinions.
Medicine has changed greatly since that day in 1996. “Shared decision making” fosters better communication between doctors and patients. Palliative care teams openly broach issues of prognosis and dying.
But there are still cases in which physicians participate in interventions that needlessly prolong pain and suffering without a chance of meaningful recovery. By placing his body over his patient, my father prevented this from happening. Every health-care provider should continue to explore ways—albeit less dramatic ways—to help very ill patients and their families make appropriate medical decisions.
This was originally published in the Wall Street Journal on July 3, 2014.