From the Other Side of the Desk: student evaluations and annual reviews
April 6, 2011
I have really hesitated to write this post because I fully intend to criticize that most sacred of qualitative measures: the student evaluation.
If you are unfamiliar with student evaluations, allow me to educate you. A student evaluation is a form typically consisting of two parts. The first portion is commonly a Scantron sheet where students will rate elements of the classroom experience: the professor’s knowledge base, the clarity of the professor’s voice, the level of preparation required for this course. The second portion is ofttimes optional and can come in the form of a short-answer questionnaire where the students will “honestly” respond to questions specifically directed to that course. (For instance, there is a questionnaire for the composition classes as well as for the literature classes.) Students complete these forms on the last day of class meeting, and they typically take anywhere from 10 to 20 minutes to complete. While the students evaluate their professor and overall classroom experience, the professor is proctoring someone else’s evaluations–no professor remains in his or her own classroom during this time. It is less intimidating to the students this way and encourages them to be more honest in their responses.
The evaluations are sent off to a school somewhere else in the country (ours are sent somewhere to the West…I think) where the Scantrons are scored and averages are computed on a scale of 0-5. The reports return to the home university and are submitted to the appropriate professors, typically around mid-semester of the following term. Sure, it’s too late by then to really implement any changes or recommendations stated within the evaluations, but at least the students’ responses are kept anonymous, grades for that class have already been reported, and the professor likely won’t remember a specific student’s handwriting anymore.
Although many professors wish this is where the evaluations might end, on their own desks to be used at their own discretion, this is typically not the evaluations’ final resting place. In many instances, particularly when it comes to junior colleagues and graduate students, student evaluations are requested to appear in a teaching portfolio for an annual review. (I believe this is also true for many jobs on the academic market. Potential employers would like to see the evaluations from previous students in order to glean an idea of the caliber of teacher they might hire.) And this, my friends, is where I struggle with the usefulness of student evaluations.
Take calendar year 2010, for instance. I had three back-to-back-to-back tricky semesters. I had students who were highly combative, accusatory, and presumptive. I often felt nervous, panicked, and unconfident. I spent office hours dreading the footsteps echoing down the hallway, silently willing those footsteps not to be for me. This came to a head last semester when my office hours were after dark and a couple of my more combative students had spent the majority of the course shooting daggers at me. What had I done? Well, given them a quiz on a day they hadn’t read, of course. Or returned a paper with a lower grade than the student believed s/he deserved. Certainly worthy of a threatening glare. Because it’s entirely my fault a student did not achieve to his or her ability. Absolutely. Bad Mrs. H.
Because 2010 was so terrible, I refused to read my evaluations. Normally, I wait to read them until the following semester has ended; because we receive our evaluations in the middle of a semester, I never find it appropriate to read horrible comments and destroy my otherwise unsuspecting confidence. Evaluations from Spring, for instance, I read after the Summer semester has ended. This way, I don’t waste the middle of a semester with languishing energy and enthusiasm. 2010 was so truly awful that there has been little reason for me to read the evaluations from that year. And last Monday, during my annual review, my assumptions were confirmed. My students claimed that I was enthusiastic (a comment I always receive on evaluations), but they were unhappy with the blogs and quizzes. They believed the blogs were a waste of time and did not actually help their grades, so they had little incentive to complete them. This was a large portion of my annual review–and I just sat there, frozen into stunned silence, unable or unwilling to defend myself. I realize now what I should have said, but what’s the point?
Student evaluations have been infused with this sort of ethos that implies immediate expertise. Because Student A took Mrs. H’s World Lit. II class, Student A is an expert and is capable of evaluating his teacher.
It seems to me that in other professions where evaluations are considered during annual reviews, those evaluations are conducted by other professional peers/colleagues or (better yet) by administrators. To be evaluated by someone who has absolutely no training in this field and little consideration for the relevance of the course is laughable. Absurd. Of course my students didn’t want to do extra work. They would prefer to do no work. They would prefer to watch movies based on the books we’re reading. They would prefer not to have to read these books. They would prefer not to come to class at all. (I realize I’m generalizing–there are a few literature students out there who see the value in these courses, but rest assured that those students are few and far between. And their voices do not get heard nearly as well as the others’.)

Judging from the recommendations of my annual review (and, mind, I still have not read the evaluations–why would I? my semester is going really well so far), I would guess that my students had absolutely no understanding of teaching and writing pedagogies as they apply to a literature classroom. When I discuss my methods with others, entirely apart from student evaluations, I am met with encouragement and often words of support. When I discussed my methods with my reviewer, I was met with phrases like “I’m not sure this accomplishes your pedagogy as well as you think it does.” Really? Did my students who wrote the evaluations read every single student’s paper like I did? How could they properly assess just how well these methods have worked in my classroom? From my perspective, they were a stroke of genius (one likely never to be repeated–I have a feeling we’re all given one stroke of genius in our lifetimes…well, the normal people…the geniuses of course are granted more). But what do my untrained, 20-year-old students know about my methods? Those who care to ask me know a great deal more than those who do not care.
And, from my perspective at least, the number of students who do not care far outweighs the number who do. Yet both categories are encouraged to evaluate and assess me. I find it stunning that their assessments are taken seriously in the first place.
My conclusion is this: student evaluations should be kept to their most basic function, which is to evaluate the course curriculum. Let teaching professionals evaluate their junior colleagues; leave the real evaluations and assessments to the professionals.