O, the teaching evaluation.
Often eviscerating, periodically baffling, and sometimes edifying. (OK, always edifying if one can take a step back and read them using some of Tenured Radical’s tips.) What do teaching evaluations tell us about the institution in which we work? What do they tell us about our students? I daresay the teaching evaluation has something to tell us about the profession of professing that extends beyond a single course (this is something Heather brought up from a different angle some months ago).
Evaluations are on my mind for two pressing reasons: first, it is January, and we’ve just received the evaluations of fall 2010 courses. Second, I am on the job market, and without a dossier service to do it for me, I go through evaluations myself to select examples for my teaching portfolio. I find myself wondering what the evaluations say about the state of the institution as a whole. Of course, as a representative of the institution, I also find myself trying to interpret what the students are evaluating when they evaluate me.
Sure, my teaching evaluations are, on the whole, pretty good. What I mean by ‘good’ is this:
-When asked to comment on the “course content and organization,” students generally speak to the texts (whether they liked or disliked them, and whether they felt the texts were useful for the course itself). Mostly they respond positively, and when there are concerns with the texts, students speak to why they didn’t feel the texts were useful.
-When asked to comment on the instructor’s success in making the course interesting and intellectually stimulating, they generally speak to just that: my carriage in the classroom, or my ability (or inability) to demonstrate, generate, and sustain intellectually rigorous conversation.
Which is to say that, for the most part, students respond to the material as well as to my efficacy as an instructor. And that’s useful for me on a number of levels.
However, things get a little more complicated when students are asked to comment on any of the instructor’s “special qualities” as a teacher (including “specific complaints” and “constructive suggestions”). Here students tend to address my personality and, sometimes, my mode of dress. I suspect that the variety of responses may in part have to do with the way the question is posed (what makes a quality “special,” for example? Is this an earnest or a sarcastic question?). They have commented on my intelligence (or lack thereof, in a few stinging cases), my approachability, my inapproachability, my overuse of theoretical language, my over-simplified language, and my shoes. These aren’t terrible responses; they are, or can be, edifying, and I’m slowly trying to learn to make use of them rather than take them to heart.
But here’s the open secret I find myself returning to each time I think about or read evaluations: some of us statistically rate lower on student evaluations. That would be women and minorities, not to mention women who are minorities of various kinds. Shh! Which is to say that while it is sometimes impossible to prove for certain (and other times devastatingly easy to prove) that a given response is based on one’s gender or skin tone, or accent, or orientation, the fact remains that it *might* be. And that’s a problem, especially given that evaluations are used for job applications, as well as for tenure and promotion. I believe, or want to believe, that members of/in the profession know how to read evaluations. But to what extent does this open secret reflect some deeper issues in the academy?
For me, some of these deeper issues include a systematic failure to address inequity, systematic discrimination, and maybe, just maybe, a general failure to talk about these lived realities on a large scale in a way that doesn’t make them feel like someone else’s problem. Or worse, like “problems” that have been dealt with already.
And further, as Canadian universities continue the move toward making student evaluations public (a move that I’m not certain I am against in principle, mind you), to what extent is the gendering and racing of evaluations made a significant part of the conversation? What would it look like to make the communal aspects of the teaching evaluation public amongst our colleagues: that we who teach all get them, and that we are being evaluated on our own but also as representatives (fairly or un-) of what the university looks like?
So what can I, an instructor who depends on “good” evaluations to get a contract renewal, do to address these fundamentally important issues? I don’t have the power to institute a systematized teaching evaluation, and frankly the notion of such a thing smacks of the over-systemization of public education. Here’s what I’ve been thinking about by way of making some change in the classroom:
-I’ve started introducing courses by having the students spend 10 minutes early in the term writing about their values and concerns (I didn’t know about this study until recently, but it would seem I certainly haven’t invented this). The idea here is that they articulate their values to themselves and to me, and I work to integrate their values, concerns, etc. into lectures about the material.
-I also have the students set three goals for themselves in the course. This seems to get them thinking about what they want to learn, and it also seems to underscore that this is a collaborative process. Sure, I’m the professor, but I use this exercise to stress that their intellectual participation is vital for a rich, engaged classroom.
-I often do mid-term evaluations of the class.
-I’ve started talking about my evaluations with colleagues. On a personal level, and as a relatively new member of the department, it helps to hear about other colleagues’ experiences. On an institutional level, discussing the form, function, and trends of the evaluation process as a department would bring the larger, more trenchant issues into open discourse.
What strategies do you employ when reading your own evaluations, the evaluations of applicants, or those of your colleagues? What are your thoughts on making evaluations public? (Don’t forget that they kind of already are.) And what can we do about interrogating and interpreting the language of the evaluation when it asks, for example, about “special qualities”?
______________________________________
*For a very few of the many examples out there, you can see the CAUT policy on evaluations here (note the caution in part 1), Vanderbilt’s large accumulation of data here, and a related post about gender and service here; for personal perspectives, you can read Alfred Young Man on being a Native teacher in Canada here, and a compelling anonymous guest post at University of Venus here.