Doctors complain about ethics oversight – just like anthropologists! (well, almost)

22 October, 2008

I have been working on an ethics teaching module and just came across this December 2007 editorial in the NY Times by Atul Gawande. Medical anthropologists might have encountered Gawande through his articles for the New Yorker or through his book of collected essays, Complications: A Surgeon’s Notes on an Imperfect Science — which I think is great material to assign to undergraduates in an introductory medical anthropology class. Gawande has an anthropological appreciation for the technological, social, cultural, political, and organizational forces that shape science and medicine. Plus his writing is punchy, dramatic, and neatly wrapped up with concise morals-to-the-story that make it easy to digest for students who are new to anthropology’s way of complicating everything, especially neat morals-to-the-story.

Still, Gawande is a doctor, not an anthropologist, and I thought it was mostly anthropologists (plus our social science relatives who also do ethnographic research) who chafe at the way ethics oversight developed to regulate biomedical research has crept over to the social sciences. We can all agree that ethical research is a good goal in either domain, but anthropologists are acutely aware (in a way that sometimes IRBs / ethics committees aren’t!) that there are very different research ethics issues at stake depending on whether you’re testing a new drug or doing ethnographic fieldwork.

But in his NY Times article, Gawande shows us a conflict where medicine chafed at the way ethics regulation originally developed for biomedical research crept into applied research on the social organization of medicine.

Here’s an excerpt:

A year ago, researchers at Johns Hopkins University published the results of a program that instituted in nearly every intensive care unit in Michigan a simple five-step checklist designed to prevent certain hospital infections. It reminds doctors to make sure, for example, that before putting large intravenous lines into patients, they actually wash their hands and don a sterile gown and gloves.

The results were stunning. Within three months, the rate of bloodstream infections from these I.V. lines fell by two-thirds. The average I.C.U. cut its infection rate from 4 percent to zero. Over 18 months, the program saved more than 1,500 lives and nearly $200 million.

Yet this past month, the Office for Human Research Protections shut the program down. The agency issued notice to the researchers and the Michigan Health and Hospital Association that, by introducing a checklist and tracking the results without written, informed consent from each patient and health-care provider, they had violated scientific ethics regulations. Johns Hopkins had to halt not only the program in Michigan but also its plans to extend it to hospitals in New Jersey and Rhode Island.

The government’s decision was bizarre and dangerous. But there was a certain blinkered logic to it, which went like this: A checklist is an alteration in medical care no less than an experimental drug is.

Gawande’s editorial was accompanied by objections from major journals and medical associations including the New England Journal of Medicine and led to a letter-writing campaign to Congress.

The case highlights the perversity that we sometimes find in ethics oversight. Research ethics codes, of course, were developed in the wake of the Nuremberg trials that prosecuted the Nazi doctors who performed grotesque experiments on live prisoners. The Nuremberg Code was the first international code of research ethics. It mandated that research involving human beings must follow 10 basic directives, including:

1. voluntary, informed consent from research participants;
2. no coercion to participate in research;
3. only properly trained scientists should carry out research;
4. any risks must be outweighed by the humanitarian benefits of the research;
5. research should be designed to minimize risk and suffering;
6. participants can end the experiment at any time, and researchers must stop the research if it becomes apparent that the outcomes are clearly harmful.

Later elaborations of the code have tweaked those basic directives — for example, the Helsinki Declaration allows for proxy consent — but the basic calculus of benefits outweighing risks has guided all subsequent elaborations of research ethics codes (and not without considerable debate about what constitutes an appropriate risk-benefit calculation). And yet as so many have noted, the application of ethics oversight has often focused on the letter, rather than the spirit, of the law, with painful attention to bureaucratic detail. In the Johns Hopkins case, Gawande was essentially pointing out that the humanitarian benefits of this research far outweigh the fact that consent wasn’t obtained from the doctors and patients being ‘studied.’

The OHRP ruling was soon overturned, but not on the grounds of that risk-benefit calculus. Rather, the revised ruling hinged on definitions of whether the Johns Hopkins project counted as research or quality control.

[See the American Nurses Association website for a good (well referenced and linked) summary of how the whole story played out.]

–L.L. Wynn

4 Comments
  1. David Taylor permalink
    25 October, 2008 11:05 pm

Prof. Wynn’s column title should have read “U.S. Doctors complain…” It is not obvious that physicians in other countries — such as Australia — encounter the same frustrations, though they might; nor is it obvious that anthropologists in other countries encounter them. However, I’m sure that U.S. physicians, and anthropologists, appreciate an Australian blog commenting on their IRB problems.

  2. 26 October, 2008 6:22 am

Ah well, even if we’re based in Australia, actually the majority of the bloggers on Culture Matters aren’t Australian. Greg Downey and I are American, Nursel is Turkish, Pal Nyiri is Hungarian, Joana is German I think. And we have ethics oversight here (not called IRBs, called HRECs — Human Research Ethics Committees) and plenty of Australians complain about them!

  3. Michaela permalink
    28 October, 2008 2:45 pm

Vaguely related: a recent Four Corners (documentary program) on ABC1 about a doctor working at Randwick women’s hospital (in Australia) who, in the 1980s, performed an abortion on a woman with severe depression 24 weeks into her pregnancy.

He and a panel of other doctors discussed the case at length. Although the panel were very reluctant to terminate the pregnancy, they made the decision to do so on the grounds that the woman was suffering such severe depression that she was a danger to herself and her unborn child. Apparently there were no procedures for late abortion, nor were there laws against it. In the end, the doctor who performed the abortion was fired and, after a few years’ worth of court cases, reinstated, because he had not actually broken the law, just stepped into a fuzzy ethical area of medical practise not covered by procedure… so the lack of any ethical guideline in this case left the doctors to do the decision-making, and brought up a whole load of protests based on religious and personal ethics, resulting in a legal case….

As Lisa brought up using Gawande’s example, ethics committees sometimes push the barriers of logic, but it seems ethical procedure at least puts boundaries in place (as absurd as these may be at times) that primarily protect the institution, secondarily protect the researcher/practitioner, and, hopefully but it seems less importantly in some cases, the participants/patients…


  1. Wednesday Round Up #35 « Neuroanthropology
