Doctors complain about ethics oversight – just like anthropologists! (well, almost)
I have been working on an ethics teaching module and just came across this December 2007 editorial in the NY Times by Atul Gawande. Medical anthropologists might have encountered Gawande through his articles in the New Yorker or through his book of collected essays, Complications: A Surgeon’s Notes on an Imperfect Science — which I think is great material to assign to undergraduates in an introductory medical anthropology class. Gawande has an anthropological appreciation for the technological, social, cultural, political, and organizational forces that shape science and medicine. Plus his writing is punchy, dramatic, and neatly wrapped up with concise morals-to-the-story, which makes it easy to digest for students who are new to anthropology’s way of complicating everything, especially neat morals-to-the-story.
Still, Gawande is a doctor, not an anthropologist, and I thought it was mostly anthropologists (plus our social science relatives who also do ethnographic research) who chafe at the way ethics oversight developed to regulate biomedical research has crept over into the social sciences. We can all agree that ethical research is a good goal in either domain, but anthropologists are acutely aware (in a way that sometimes IRBs / ethics committees aren’t!) that very different research ethics issues are at stake depending on whether you’re testing a new drug or doing ethnographic fieldwork.
But in his NY Times article, Gawande shows us a conflict where medicine chafed at the way ethics regulation originally developed for biomedical research crept into applied research on the social organization of medicine.
Here’s an excerpt:
A year ago, researchers at Johns Hopkins University published the results of a program that instituted in nearly every intensive care unit in Michigan a simple five-step checklist designed to prevent certain hospital infections. It reminds doctors to make sure, for example, that before putting large intravenous lines into patients, they actually wash their hands and don a sterile gown and gloves.
The results were stunning. Within three months, the rate of bloodstream infections from these I.V. lines fell by two-thirds. The average I.C.U. cut its infection rate from 4 percent to zero. Over 18 months, the program saved more than 1,500 lives and nearly $200 million.
Yet this past month, the Office for Human Research Protections shut the program down. The agency issued notice to the researchers and the Michigan Health and Hospital Association that, by introducing a checklist and tracking the results without written, informed consent from each patient and health-care provider, they had violated scientific ethics regulations. Johns Hopkins had to halt not only the program in Michigan but also its plans to extend it to hospitals in New Jersey and Rhode Island.
The government’s decision was bizarre and dangerous. But there was a certain blinkered logic to it, which went like this: A checklist is an alteration in medical care no less than an experimental drug is.
Gawande’s editorial was accompanied by objections from major journals and medical associations, including the New England Journal of Medicine, and led to a letter-writing campaign to Congress.
The case highlights the perversity that we sometimes find in ethics oversight. Research ethics codes, of course, were developed in the wake of the Nuremberg trials that prosecuted the Nazi doctors who performed grotesque experiments on live prisoners. The Nuremberg Code was the first international code of research ethics. It mandated that research involving human beings must follow 10 basic directives, including:
1. voluntary, informed consent from research participants;
2. no coercion to participate in research;
3. only properly trained scientists should carry out research;
4. any risks must be outweighed by the humanitarian benefits of the research;
5. research should be designed to minimize risk and suffering;
6. participants can end the experiment at any time, and researchers must stop the research if it becomes apparent that the outcomes are clearly harmful.
Later elaborations of the code have tweaked those basic directives — for example, the Helsinki Declaration allows for proxy consent — but the basic calculus of benefits outweighing risks has guided all subsequent research ethics codes (and not without considerable debate about what constitutes an appropriate risk-benefit calculation). And yet, as so many have noted, the application of ethics oversight has often focused on the letter, rather than the spirit, of the law, with painful attention to bureaucratic detail. In the Johns Hopkins case, Gawande was essentially pointing out that the humanitarian benefits of this research far outweigh the fact that consent wasn’t obtained from the doctors and patients being ‘studied.’
The OHRP ruling was soon overturned, but not on the grounds of that risk-benefit calculus. Rather, the revised ruling hinged on definitions of whether the Johns Hopkins project counted as research or quality control.
[See the American Nurses Association website for a good (well referenced and linked) summary of how the whole story played out.]