John Burton analyses ERA rankings
John Burton of the ANU has circulated, on the AASNet mailing list, an analysis comparing the ERA rankings with the Journal Impact Factor (JIF) citation scores of the relevant journals. I asked him if I could share some of his most striking results with readers of Culture Matters.
I posted earlier an ‘Anthropology-only’ list of ERA-ranked journals to aid everyone in sorting through the immense list originally issued by the Australian Research Council (see Anthropology Journals ranked by the ARC). That post inspired a few comments, but there’s been more discussion on the mailing list of the Australian Anthropological Society. Some people are glad that Australian anthropology journals were ranked highly by the ARC, even though they might not have the international visibility of some of their ‘peer’ journals in the ranking. Others have pointed out an apparent bias toward Francophone journals (sometimes even at the expense of English-language journals like the JRAI), the opaque process that produced the list, and the treatment of museum studies and archaeology journals.
But Burton has provided the most interesting analysis thus far, in my opinion. He crunched through the relationship between Journal Impact Factors calculated by Thomson ISI (Institute for Scientific Information) and the four-category ranking provided by the ARC. (For an example of a purely citation-based ranking of anthropology journals, check out ScienceWatch’s Anthropology Journals Ranked by Impact page, which provides JIF rankings based on both short-term and long-term impact.)
Burton circulated the following table to show the relationship between the ARC’s ranking (A*, A, B, and C) and the mean JIF score for the journals at each level, as well as the number of journals in each ranked category:

    ARC rank    Journals    Mean JIF
    A*          11          1.48
    A           22           .80
    B           13           .56
    C           13           .76

In all, 59 journals were ranked, and within each rank the JIF scores varied considerably, overlapping substantially across categories.
Although the A* journals tended to have high JIF scores (>1), Burton found no relationship between the A to C rankings and JIF score.
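Burton’s basic comparison is easy to replicate for anyone with the two lists in hand. A minimal sketch in Python, using only the handful of rank/JIF pairs quoted in this post rather than Burton’s full 59-journal dataset:

```python
from collections import defaultdict

# A few (ARC rank, JIF) pairs quoted in this post -- not
# Burton's complete dataset.
journals = {
    "Journal of Human Evolution": ("C", 3.55),
    "Social Networks": ("A", 2.07),
    "American Journal of Human Biology": ("B", 1.98),
    "Cultural Anthropology": ("A", 1.84),
    "American Anthropologist": ("A*", 0.88),
    "Oceania": ("A", 0.43),
    "Ethnology": ("B", 0.14),
}

# Group the JIF scores by ARC rank.
by_rank = defaultdict(list)
for name, (rank, jif) in journals.items():
    by_rank[rank].append(jif)

# Report the count, mean, and spread for each rank.
for rank in ("A*", "A", "B", "C"):
    scores = by_rank[rank]
    mean = sum(scores) / len(scores)
    print(f"{rank}: n={len(scores)}, mean JIF={mean:.2f}, "
          f"range {min(scores):.2f}-{max(scores):.2f}")
```

Run over the full list, this kind of grouping is all that’s needed to surface both the category means and the overlap between them.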
Outliers: discrepancies between ERA and JIF evaluations
In addition, there were the outliers that Burton lists. Four journals in the A* group had JIF scores lower than 1: American Anthropologist (.88), Cultural Studies (.52), Comparative Studies in Society and History (.48), and Bijdragen (.057). (Please note: I’m relying on Burton for the statistics and information rather than regenerating or replicating his analysis. Yes, it’s lazy, but it’s the way it’s going to be.)
In the A group, there were outliers in both directions: two journals had JIF scores high enough that they might be better considered A* if ranked solely on impact (Social Networks, 2.07, and Cultural Anthropology, 1.84). And two A journals had JIF scores lower than the C-group mean: Ethnohistory (.14) and Oceania (.43). Non-Anglophone journals, already the subject of some discussion on AASNet, didn’t fare well by JIF: L’Homme, for example, scored a very low .087.
In the B group, one outlier had an extremely high JIF; the American Journal of Human Biology at 1.98 outperformed most A* journals. In addition, Anthropological Forum (.21), Anthropos (.21) and Ethnology (.14) all scored relatively low.
Perhaps the oddest outlier, though, was in the C group: the Journal of Human Evolution, which had the highest 2007 JIF of any ranked journal (3.55), was placed in the lowest tier of anthropology journals.
The bottom line?
Burton concludes with three statements of the ‘bottom line’:
1. JIF scores cannot have been used for the ERA assessment (but the case for the value of some other metric remains open).
2. Non-Anglophone journals have terrible JIFs.
3. Folk who publish in the Journal of Human Evolution should be pretty cranky (wider implication: an academic who publishes in consistently high-JIF journals, but is denied preferment on ERA grounds, could, hypothetically, argue a case for a faulty performance review).
I think we can also conclude a few other things: the ERA list clearly favours cultural anthropology journals over more widely cited evolutionary and physical anthropology journals. I’m a cultural anthropologist, so this isn’t sour grapes on my part.
But there’s an even bigger question which comes up for me: why doesn’t the ARC just use the JIF or another independent measure of ‘impact’? Is there anything to gain by monkeying around with a relatively straightforward quantitative measure of journal visibility within a discipline, only to replace it with a slightly more arbitrary one? Anthropologists may claim that citation indices don’t adequately capture the exchange that goes on in our field because we publish less frequent, longer articles, and so don’t inflate citation numbers — fair enough. And there may be some variation across such a broad discipline as ours, with journals that back onto demographically more populous fields likely to get a boost in the indices. But within our field, surely the radically different scores mean something. What do we gain by jimmying the numbers?
There’s been some discussion, for example, of the embarrassment caused to the major Australian journals in a number of fields when the ARC initially ranked them quite low; the ARC, it was argued, devalued the work of Australian academics by not taking into account the importance of publishing in our own regional and national journals. If that’s the case, and we want to distort the metric to take account of the importance in our own community of national publications, then why are we ranking L’Homme or Bijdragen so highly? That is, I can understand granting a kind of protectionist advantage to the local journals, but there also seems to be a bit of preference for some kinds of scholarship over others; one participant on AASNet suggested that Orientalism was preferred over environmental science, for example.
The overall low scores of anthropology journals on citation indices, however, are in my opinion also an indictment of the way that anthropologists often do scholarship. Our field’s JIFs, overall, are pathetic. Not a single anthropology journal would register on the citation indices of most other fields. Obviously, this is, in part, a demographic artifact, the effect of working in a smaller field. But the number of anthropologists doesn’t explain the low JIF scores entirely.
In part, the scores may be a sign that we need to think hard about how we write in cultural anthropology, including considering seriously whether more frequent, shorter articles and journals with more pieces might be better for disciplinary communication. We’re not reading each other’s work enough or citing it, not building up communities of conversation like those arising in other fields. If you ask me how I locate my scholarship, I tend to think oppositionally: who I criticize, work against, or seek to make obsolete. We need to think more about constructive engagement within our own discipline; how else can we explain the fact that LEADING journals in our field have average citation rates so low that it almost appears no one is reading them? I mean, is no one else in our field mortified that the Annual Review of Anthropology impact factor suggests that the average article was cited 4.19 times in the FIVE YEARS from 2004 to 2008?!
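For reference, an impact factor is simply citations divided by citable items over a window. A sketch of the arithmetic, with hypothetical counts (the 503 citations and 120 articles below are invented for illustration; only the resulting ratio matches the figure above):

```python
def impact_factor(citations_to_window, items_in_window):
    """Citations received to items published in the window,
    divided by the number of those items."""
    return citations_to_window / items_in_window

# Hypothetical counts: 120 citable articles over the window,
# drawing 503 citations between them.
print(round(impact_factor(503, 120), 2))  # -> 4.19
```

So an impact factor of 4.19 doesn’t mean every article was cited; it means the citations averaged out to that figure, and a handful of heavily cited pieces can mask a long tail of uncited ones.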
Maybe it’s just the degeneration of my own attention span, but the more I read across different fields, the harder I find it to read the mainline anthropology journals, in part because so many pieces seem inflated in length to meet the expectations of an anthropology journal article even when the same thoughts could be communicated in tighter, more efficient prose. Heaven knows, I’m more guilty of over-writing than the next anthropologist, but is it really necessary to write a minimum of 6000 words on everything we want to talk about?
The usual explanations for the low impact scores of anthropology journals tend to blame the metrics, but I suspect that we need to look within our field a bit more before we disparage the messenger. One way to promote our field might just be to read, cite, and work constructively with each other’s work as often as possible, and, in the process, improve these metrics rather than simply trying to get a layer of subjective judgment imposed on top of them.