Last week I wrote about how to improve a conference paper proposal, to make it more likely to impress the peer reviewer assigned to assess it. This week, I’m thinking about how those of us who do peer reviews might do our part of the job better.
I’ve drawn all my inspiration for this post from the program committee of DH2013, the umbrella conference of the Alliance of Digital Humanities Organizations, which comprises the Canadian Society for Digital Humanities, the Association for Computers and the Humanities, the Association for Literary and Linguistic Computing, and others. The conference is global and interdisciplinary. It is also highly selective, and sometimes prone to controversy about who is in and who is out (and what is DH and what is not). For example, and in retrospect hilariously, even though I regularly review proposals for the conference, all three submissions I have made myself have been rejected. The first time, one peer reviewer gave a one-sentence assessment of my work (I paraphrase): “This work is not even interesting and I don’t know why this author would propose to consider it for this group.” That was in 2001 or 2002, and I still remember it as the most dismissive, disrespectful review I have ever received for a conference paper.
So imagine my pleasure this year when I visited the conference website to review the CFP as I prepared to assess my five assigned proposals. This year, the organization has put together not only a guide to writing proposals for its authors, but also, magnificently, a guide to peer reviewing these proposals for its assessors.
Go see it. Then come back.
Aren’t these just the very model of transparency? All the implicit rules by which proposals will be assessed are explicitly outlined. Even better, peer reviewers are reminded that their work is not simply to assess in summative fashion (accept / do not accept) but to mentor in formative fashion (How might this be improved? What are the strengths and weaknesses of this proposal?). Even better, peer reviewers are reminded of … well … the affective dimension of this part of academic work. Nasty peer reviews work to exclude people from the field. Harsh rejections are bad for morale generally. Community is built upon mutual kindness. The “big tent” model is not supported by vicious kicks to the support poles at the edges of the structure. This bit, to me, is the most incredible, and the most awesome: instead of training would-be participants to develop a thick skin and “try harder” when faced with what looks sometimes like gleeful rejection, peer reviewers instead are being asked to consider what might help the not-accepted scholarship fit within the fold next time. It manifests a kind of humility about the field and our expertise as gatekeepers within it, even as we are still very much called upon to review the intellectual merits of each proposal.
I imagine the acceptance rate will still be low. But maybe we all won’t feel so rejected by that.
I loved this so much I sent a mash note to Bethany Nowviskie about it.
The lessons of DH2013 and the wisdom of its program committee extend to all our peer review work. I know it’s given me pause. I get asked to do peer review all the time. And some of the papers I read are, not to mince words, terrible. And sometimes it feels like I’m wasting my time to read all 30 pages. I’m mad at the editors for even forwarding this to two hapless reviewers. I write incredibly sarcastic notes in my printouts, to blow off steam.
But I try, and will try harder, to make my reviews constructive, even the ones about papers that purport to be about user-generated review sites but are actually thinly-veiled screeds about how evolution is just a theory and Richard Dawkins is going to hell (really). To keep them focused on the intellectual. To not descend into recriminations against the author for not knowing what the journal is for, or for not caring to take the time to copy-edit, let alone proofread, or for stuffing an unreconstructed coursework paper into a digital envelope in total cluelessness about how that is not what an article is. That’s all judgy and personal. Try to be constructive.
I’m using the classic “shit sandwich” approach now. You are probably familiar with it.
Here’s how it works. Start by saying something nice (and true): maybe the topic is important. Maybe the approach is worthy. Maybe the primary texts have never really been considered before. Maybe the author has a great knack for fluid and engaging prose. Maybe the reference list is superb. Acknowledge the merits that you find.

Next is the “however”: this varies from paper to paper, but might be courseworkitis (all lit review, no argument), or it might be something more complex and unique about the approach to the topic or the methodology or the sample set or whatever. Be specific, but try to confine the criticism to the words on the page, not the character of the author. This can sometimes be very difficult to do. Do say: “The paper takes what it describes as ‘genre analysis’ as its theme, but does not reference the major works in that field.” Do not say: “The author has no right to perform a so-called ‘genre analysis’ when it is obvious that s/he hasn’t read anything at all in this well-developed field.”

Finally, end with some suggestions for improvement. This is hopeful. Having perhaps said that it’s unworkable to try to prove point A with reference to texts F and L, maybe suggest a set of texts that might be more useful. Or if the author seems unfamiliar with an important subfield he or she generalizes about, suggest one or two texts and one or two major ideas from that subfield the author might consult to better support his or her contentions.
If you feel this all doesn’t sufficiently convey the depth of your rejection, then by all means make use of the field labelled “for the editors only”: you can let rip in that section, if you really need to.
Putting stuff out to review is very hard: this is part of the reason we all hold onto our drafts for too long. We are afraid of what the reviewers will say. And yet, as reviewers, we very often lash out at the poor shmucks who’ve let their precious drafts out into the world. Yeah, maybe they’re totally not ready for the big time, but we don’t have to be mean about it.
Do you have any tips and tricks for doing or receiving peer review? Funny stories? Terrible tales?
2 thoughts on “Doing Peer Review Better”
My pet peeve is the review that doesn't actually challenge your thesis (X), but instead suggests that it's a big problem that you haven't *also* discussed Y.
In principle, oversights of this kind can matter. But in practice, “why aren't you also discussing Y?” is very often an expression of the reviewer's own discomfort with an unfamiliar topic. It sometimes just means “I wish you were talking about Y instead of X.”
I have to say that I've encountered this sort of review much less often in DH than in more traditional scholarship. And I don't mean that reviewers shouldn't suggest a potential lead or avenue of expansion — that can be constructive. But “you really can't write on this topic without mentioning Y” drives me crazy. Sometimes it's an older reviewer who wants you to cover canonical author Y, sometimes it's a younger reviewer who wants to see some obeisance to sociopolitical problem Y. But I think both forms of intervention are, in effect, conservative. “I'd rather hear about something I already know.”
Thanks so much for this great post! As a young researcher just making my first forays into publication, I find your analysis of the problems of the peer review process really refreshing. Of the two negative reviews I've had so far, one was more formative, one more summative. Guess which piece of work I'm more inclined to rework?
Also, thanks for observing that 'Community is built upon mutual kindness' could apply to academia.