
Embodied ethics oversight

29 April, 2009

(If you haven’t read Part I of this series of postings on research ethics, it provides important background to this blog posting.)

I have a colleague in the United States who teaches medical anthropology to third-year university students. It’s a popular class that consists mostly of premed students, along with a handful of anthropology majors. The main assignment that my colleague gives students in this class is to craft an illness narrative. He asks them to go out and interview a family member about an illness they experienced or an encounter they had with a biomedical doctor, healthcare system, or alternative medical practitioner. But he places several restrictions on the students: first, they can only interview family members; second, they can’t interview children, the mentally disabled, or pregnant women.

When I found out about this limitation, I was indignant. It was a bit personal: during my two pregnancies, I was expelled from two medical practices for being a “noncompliant” patient, and I am ready to talk anyone’s ear off about the construction of risk and the way noncompliance is punished during biomedically “managed” pregnancies. (Seriously, colleagues. If you don’t want me to talk your ear off, don’t ask me for details.) So even though I wasn’t in line to be interviewed by any of those students, the idea that some other woman couldn’t be interviewed while she was pregnant bugged me.

But personal gripes aside, why was my anthropologist colleague grouping pregnant women in the same category as children and the mentally disabled?

I asked my colleague. He explained that he was trying to make sure that this student research project was low-risk so that it didn’t need to go through institutional ethics review, and pregnant women fall into the category of “vulnerable” populations, which automatically raises the risk flag and necessitates institutional review. (In the U.S. the bodies that review research ethics are called “Institutional Review Boards” (IRBs), the equivalent of Australia’s Human Research Ethics Committees (HRECs); in Canada they are REBs, “Research Ethics Boards.”)

That effectively shut me up, because my own PhD research dates to an era right on the cusp of what Rena Lederman, in the 2006 introduction to a special issue of American Ethnologist on ethics regulation, calls “IRB mission creep”: the expansion of bureaucratic regulations on what is called “human subjects research” to include not just clinical researchers (for whom the research ethics codes were originally developed) but also social science researchers. I went out into the field to do my research without ever having sought ethics clearance from an IRB, but by the time I got back three years later, other graduate students in my department were going through IRB procedures, and so for a long time I felt furtive, like I had somehow gotten away with something that I shouldn’t have and I’d better keep quiet before anyone found out and rejected my research as unethical. I could sympathize with my colleague’s desire to stay under the radar of IRB surveillance.

The absurdity of suggesting that pregnant women who tell anthropology students stories are vulnerable like children and the mentally disabled illustrates something that we probably all already know: the human research ethics regulatory regimes that now govern the social sciences are imported from biomedicine. In the process, their rules are sometimes poorly translated into the logics of different disciplines and methodologies. (Or, as Lederman argues, it is actually anthropologists and ethnographers who end up having to translate their methods into clinical models of research to get ethics approval.) This story also suggests an incommensurability between disciplinary models of research risks and benefits. The biomedical model restricts “experimenting” on a pregnant woman because she is the vessel for a vulnerable fetus, while the principle of the illness narrative method is to understand holistically a person’s experience with both illness and cultural ways of managing that illness; at its base is a logic of empowerment, not risk management.

What this story also illustrates, of course, is how we teach anthropological methods in the penumbra of bureaucratic surveillance.

When I spoke about this recently in a public talk at an anthropology department in Australia, it unleashed a flood of comments about ethics oversight experiences from the audience, and a smaller torrent of e-mail comments that came to me in the next few days, and the anger and the cynicism and frustration were palpable. And I started thinking: isn’t it fascinating that a process that is supposed to ensure ethical research practice makes people so angry? Isn’t it fascinating that so few people believe that the process actually has anything to do with research ethics and that most attribute it to (a) Big Brother bureaucratic control run amok, and/or (b) fear of litigation?

I hear some frustration and complaining at Macquarie too, but anthropologists at Macquarie are not terribly cynical because (a) we’ve managed to get an anthropology representative permanently guaranteed on our ethics committee to make sure that it doesn’t get too biomedical on us, and (b) we’ve found that they’re pretty flexible and responsive when we argue with them or go in person to plead a particular case. Basically we feel like we have some control over the situation.

In contrast, in the multidisciplinary audience at this other university, no one in the room had ever served on an ethics committee, and people didn’t even know who chaired it. So their experience seemed like much more of a black-box encounter with a bureaucratic facade, with a resulting increase in cynicism.

And what that whole experience made me think was that it would be fascinating to study researchers’ experience of ethics oversight, paying attention to the embodied experiences of researchers and how the process of ethics regulation and oversight makes them *feel*. Think back to how I described my own feelings of shame and furtiveness when I came back from the field and found that everyone else was getting ethics clearance and I didn’t know whether I had somehow just screwed up or whether what I did was normal. Shame, fear and anxiety, furtiveness, cynicism, anger, and reluctant submission seem to be the dominant emotions I have been hearing expressed by other people in their encounter with ethics regulation, but obviously a larger study could elucidate a wider range of emotions associated with the process. Do researchers feel like the advent of ethics regulation has personally made them more ethical? Or do they feel like it has made research in general more ethical? Or do they think it’s just a complete waste of time and paper and money?

We’ve got a perfect “natural” control: an older generation of researchers who spent most of their careers not seeking ethics clearance, a younger generation for whom it is standard operating procedure, and a “middle-aged” group of researchers like myself who started their research under one regime and now live under another (I swear, this is the first time I’ve thought of myself as middle-aged). By correlating responses with different regulatory regimes, we can ask questions like: do researchers who never got ethics clearance have different ideas about what is ethical than researchers who go through ethics review? Does one group consider itself more or less ethical than the other? Or do they feel like ethics oversight hasn’t made any difference to their research practice?

The real difficulty here is in finding some comparative, external measure of ethical research. I don’t know if that’s even possible, given the extent to which research ethics are so contested (as I outlined in my talk). But it would still be interesting to examine self-reported assessments of ethics. Maureen Fitzgerald is an anthropologist who is doing some of this work — an ethnographic project examining the processes of research ethics review in 5 countries! Check out her site to read more about this project, and check out her list of links and references to find out about more people who are doing research on ethics review.

–L.L. Wynn

3 Comments
  1. Anonymous permalink
    29 April, 2009 11:13 pm

    This post is very timely for me since my med anth IRB proposal is going through the process THIS WEEK at my university. After going through an incredibly long internet teaching module about ethics (known as CITI training) that directly negated about half of what I was taught during my post-doc, and talking on the phone with the IRB office manager, who sniped that hardly anyone got through on the first round, I tentatively submitted my first IRB proposal last week for full board review. Having breezed through IRB approval as a master’s student and PhD student, I wasn’t too worried until the CITI training and the conversation with this woman. I put in for full board review because I was working with a “vulnerable” population (pregnant women), and I was collecting blood spots from them for lab analysis. My IRB group was much more accommodating than I was led to believe, because they are changing it to expedited review.
    I’m glad that we have ethics review boards but having an understanding of the social sciences should be mandatory for multiple board members.
    Finally, I would LOVE to talk to you about your non-compliance and experiences during pregnancy.

  2. 30 April, 2009 9:57 am

    Hi Anonymous, send me an e-mail and I’ll tell you about my non-compliance during pregnancy. lisa.wynn[at]

    Good luck with your IRB process. The sense I’m getting from everyone is that the experience can be incredibly different between institutions. I just tell all my students to thoughtfully weigh the IRB’s demands/questions/suggestions. If they seem reasonable, agree to them. If they don’t seem reasonable, argue (very politely) with them. You lose ground not only for yourself but for your entire discipline when you give in to biomedically-grounded conceptions of ethics that are inappropriate (i.e. not going to lead to more ethical research) for different disciplines or methodologies. As Rena Lederman argues, we can see these encounters as a chance to educate our IRBs. It takes more time to do the back-and-forth, but it will be much more satisfying when you finally do get ethics approval.

  3. Michaela permalink
    5 May, 2009 12:04 pm

    The questions in the human ethics applications we have in Australia are so broad and non-specific.

    When I was doing a small research project with mothers talking about their experiences of pregnancy, birth and the early days of motherhood, my initial response to the ethics form question about whether or not the research may induce psychological or physical distress in participants was “well, maybe.” You never know; one of the participating mothers may have become emotionally distressed when going over a traumatic birth story. I know I couldn’t talk about giving birth to my daughter for about a year afterwards, and had someone specifically asked me to go into detail I may have become distressed at recalling some parts of the medical-type intervention used. So I initially responded honestly to the question on the ethics form: yes, it is possible some participants may become distressed.

    It was then suggested by the ethics committee that I would have to hand out counselling service contact information to any participants who seemed to become distressed. But how odd that would have been in situ! Imagining talking to these participants, most of whom were friends, and giving out counselling information if they became a little teary during an interview seemed so entirely inappropriate… it would have damaged the rapport I had with the participants at the very least, not to mention the suggestion of emotional instability the gesture would have implied. Anyway, in the end I decided to just tick “no” to the question on the ethics form.

    I guess the question on the ethics form would really require a “yes” only in the case of certain psychological or medical research projects… so we can’t be too broad/honest in our thinking when filling out these forms. I think it would make sense to have separate human ethics forms for different disciplines: what applies to biomedical realms in one way may apply to the human sciences in another. But then I guess even that would become confusing in cases where research has a foot in each area, such as medical anthropology…
