Is student course evaluation actually useful?

Virtually all modern university courses end with a request for feedback. But are students’ reactions even useful for improving future course design, never mind assessing lecturers? Seven academics discuss their experiences

Published on April 16, 2020
Last updated April 29, 2020

POSTSCRIPT:

Print headline: At the sharp end


Reader's comments (12)

Everyone who is really interested in teaching and learning, that is, teaching students on the basis of an understanding of the psychology, sociology and philosophy of education, knows that student feedback is always useful for improving our teaching of specific subjects and, consequently, what students learn. Indeed, this should be the case regardless of what any professor (lecturer) would like to hear. Those who do not like student feedback are simply not teachers, because they have not been trained to teach, i.e. exposed to courses in the psychology and sociology of childhood, adolescence and adulthood, lesson planning, rubric development, teaching practice, clinical supervision, assessment and evaluation, and teaching methodologies. The key issue driving such aversion to student feedback is the fact that most university professors are lecturers. Those who taught at lower levels of the education system, or who underwent some teacher training, would never be dismissive of, or put off by, student feedback. They would use it as an opportunity to improve their lesson objectives, content sequencing, learning styles and teaching methods. They would welcome feedback during the course, knowing full well that end-of-semester examinations are only one form of assessment and feedback.
In an ideal world, you might be right. Unfortunately, most university feedback forms are so badly designed that they are worse than useless. The list of faults with standardised feedback forms is long, but suffice it to say that the forms I am forced to use (across three different universities):

* Have statistically meaningless response rates. The vast majority of students don't fill them in, so how can the results reflect the cohort's experience?
* Rely almost entirely on Likert scales, often with 5 being a high rating on one question and 5 being low on another, with the results then summed to get an overall rating!
* Offer no context or actionable feedback (the open field is usually left blank). What do you do when some on the course rate a question 5 and others rate it 1? What do you change, if anything?
* Are like TripAdvisor, in that they get filled in by people who either loved the course or hated it, usually, I suspect, determined by the mark they got (you can tell from the few comments that are left).
* Are often factually incorrect in their description of the course and its delivery. I'm not talking about opinion or a different point of view, but objective facts, e.g. complaining about in-class tests when we don't do any.

I am afraid I have lost all faith in the validity and usefulness of course feedback. It's a good idea, and I wish I could get useful, actionable feedback, but the way it is most often implemented makes it a waste of time. This is a reflection of the simplistic, metrics-driven environment within universities and the obsession with 'student satisfaction' rather than the quality of learning and education. The feedback forms are really 'Do I like my lecturer?' forms, rather than 'Do I think I learnt something?' forms.
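The polarity problem this commenter describes (a 5 meaning "favourable" on one item and "unfavourable" on another) has a standard survey-design fix: reverse-code the negatively keyed items before aggregating. A minimal sketch in Python; the item texts, scores and 5-point scale are illustrative assumptions, not taken from any real SET form:

```python
# Reverse-code negatively keyed Likert items before averaging, so that
# a higher score consistently means "favourable" across all items.
# Items, scores and the 5-point scale here are purely illustrative.

SCALE_MAX = 5  # 5-point Likert scale


def reverse_code(score: int) -> int:
    """Flip a score on a 1..SCALE_MAX scale (1<->5, 2<->4, 3<->3)."""
    return SCALE_MAX + 1 - score


# One student's responses; items marked reverse=True are negatively keyed.
items = [
    {"text": "The lectures were clear", "reverse": False, "score": 4},
    {"text": "The workload was excessive", "reverse": True, "score": 2},
]

adjusted = [
    reverse_code(i["score"]) if i["reverse"] else i["score"] for i in items
]
overall = sum(adjusted) / len(adjusted)
print(adjusted)  # [4, 4]
print(overall)   # 4.0
```

Without the reverse-coding step, naively summing the raw scores (4 + 2) would misleadingly penalise a student who answered both items favourably.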
Not all kinds of feedback are legitimate and helpful. The categorical claim that 'if lecturers do not like student feedback, then they are not teachers' is simplistic. When looking at student feedback, there are questions about its truthfulness (some of it is simply not honest) and about the qualifications of the person providing it. As this article reports, students can give feedback on the teaching and learning experience, but they are not qualified to say how learning or teaching should be done, or what they should be learning or taught. It is analogous to a patient giving feedback on how they were treated by hospital staff (legitimate) versus how they would like the surgery to be done (illegitimate). There is simply no evidence that student feedback reflects reality. Rather than just making claims, show me the evidence for its veracity.
There is always one: one student who has to write something abusive; one student who gives a feedback grade of 1 out of 5 across all the questions because of a mark they felt was too harsh; one student who has to write that "the lecturer is past his sell-by date"; one student who gets his or her lecturer mixed up with someone else and hurls abuse at the wrong person; one head of department who sacks lecturers because their course evaluations are below the mean for the third year in a row; and one reader of commentaries like these who has no empathy for those lecturers for whom unjustified negative feedback can be both health- and career-destroying. Formal student evaluations are so generic in their design that they are incapable of producing much that positively helps quality improvement. On the other hand, they do give the customers a sense of power and an opportunity for payback for perceived wrongs, real, imagined and misunderstood, but is that why we use them? Informal student evaluations, such as are described in these commentaries, are a positive boon to any teacher. Quite how they morphed from informal to formal is beyond me, though I do remember vividly the introduction of formal student feedback in the early 1990s: being denied permission at that point to continue using my own customised feedback instrument, one that did actually help me greatly during the four years I used it. It is time to abandon this destructive formal feedback mechanism and replace it with an informal one that all staff must use, such as the two-question version presented in these commentaries. Then student feedback will mean something positive for all, which, as all "trained" teachers know, is why it was sought in the first place.
It's a shame that this article did not start with a good review of the research on SETs. Most universities do not use validated instruments, which is far from ideal; most forms are knocked up by a committee, which results in a camel. But validated feedback instruments do exist, and they should be used and researched in greater depth. They are not perfect, but then no measurement instrument is, particularly those that measure social interactions and their impact. I am, however, struck by how even the outcomes of rough-and-ready evaluation questionnaires tend to match what faculty already know about each other.
Can you please post some examples here?
Student evaluations replicate the infantilising scores, or the gold, silver, bronze and tin "stars", so commonplace in academia and beyond. While the comments can be really useful, I don't know anyone who believes that knowing they have scored 3.86 (or 4.24, for that matter) overall is in any way informative. The comments can shine a light on things one should rethink when they are about what worked and what worked less well, but all too often there is some student more interested in being spiteful, or in deploying their most personally offensive put-down as a chance to express their frustration, than in engaging in constructive criticism. If we gave feedback of the sort some of us have received (and yes, I have also received some gratifying comments!), we would probably be had up before some committee or other. The idea that evaluations are anything other than management tools is disproved by the fact that few have ever been called in by their line manager to be commended for their teaching. But if the scores are poor, you'll be the first to know! If these forms were designed to be helpful, scores would be dropped, leaving only comments, with someone having gone through beforehand to remove all offensive or needlessly unpleasant remarks. Innovations in pedagogy jar with ways of learning with which students have become comfortable and, by definition, anything that jars risks producing a sense of disorientation. That can be incredibly transformative for students who throw themselves into things, but the unfamiliar can equally be a source of anxiety. Risk-taking for its own sake is foolish, but if one's approach to teaching is to transform your own and students' relation to the world, then it is all-important.
Law firms do let clients determine who makes partner. Not by reviews, but by number of clients and how much they have been willing to pay. Literally if you don't have clients and large billings you do not make partner. I don't think this would be what most professors would support.
A lot of the flaws in SETs pointed out in the article and the comments can be designed out if there is concerted collaboration between the SET administrators, academic leaders and representative teachers and students across each discipline. Some universities choose to make this investment; some do not. Here is a starting list of what can be 'fixed':

* shifting the emphasis in questions from teacher 'performance' to the student learning experience;
* periodically updating and validating the instruments;
* not running the standard SET when a teacher is trialling a new approach;
* running some form of mid-semester feedback (either formal or informal);
* regularly reminding students of their obligation to provide constructive feedback, and of the potential consequences if they do not;
* providing students with a free-text field for each item, not just the overall satisfaction item;
* dealing with low response rates in the reporting of results (e.g. not reporting quantitative results if a statistically valid response-rate threshold has not been met; using moving averages over multiple semesters);
* analysing student feedback at your own university to determine the actual level of gender or racial bias, to inform localised responses;
* taking the time to tell students what has been changed in response to their feedback and that of previous cohorts;
* inviting students to co-design improvements such as group tasks, assessments, etc.;
* having a clear policy distinguishing how feedback on the course/subject versus feedback on the teacher will and will not be used.

While SETs will always be an imperfect instrument, with a bit of effort they can be turned into something approaching fit for purpose.
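Two of the reporting fixes this commenter proposes (suppressing quantitative results when a response-rate threshold is not met, and smoothing reported scores with moving averages across semesters) are simple to operationalise. A sketch in Python; the 30% threshold, three-semester window and example data are illustrative assumptions, not policy from the article:

```python
# Report a semester's mean SET score only if the response rate clears a
# threshold; otherwise suppress it (None). Then smooth reported scores
# with a trailing moving average over a fixed window of semesters.
# The 0.3 threshold and window of 3 are illustrative assumptions.

MIN_RESPONSE_RATE = 0.3
WINDOW = 3


def reportable_mean(scores, enrolled, min_rate=MIN_RESPONSE_RATE):
    """Mean score, or None if too few of the enrolled students responded."""
    if enrolled == 0 or len(scores) / enrolled < min_rate:
        return None
    return sum(scores) / len(scores)


def moving_average(values, window=WINDOW):
    """Trailing moving average over the reported (non-None) semesters."""
    out = []
    for i in range(len(values)):
        recent = [v for v in values[max(0, i - window + 1): i + 1]
                  if v is not None]
        out.append(sum(recent) / len(recent) if recent else None)
    return out


# Example: four semesters of (scores, enrolment); the second semester
# has a 10% response rate and is therefore suppressed.
semesters = [
    ([4, 5, 3, 4, 4, 5], 12),      # 50% response rate
    ([5, 5], 20),                  # 10% -> suppressed
    ([3, 4, 4, 3, 4, 4, 4], 15),   # ~47%
    ([4, 4, 5, 4, 5, 4], 14),      # ~43%
]
reported = [reportable_mean(s, n) for s, n in semesters]
smoothed = moving_average(reported)
```

Suppressing rather than reporting a two-response "average of 5.0" removes exactly the TripAdvisor effect described in the earlier comment, and the moving average stops a single anomalous cohort from dominating a teacher's record.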
Two stories and one observation. In the days before standardised, university-issued student feedback forms, I issued my own short, free-response-only questionnaire to part-time students on a master's course, who were all employed senior managers. Given the block-release delivery and high price of the course, I used a local hotel. One individual on an early cohort consistently offered the comment that 'the coffee is awful' at the end of each residential workshop. I consistently responded that his subjective opinion, one out of 30-plus, was insufficient for me to take any action. The second story relates to a similar course, this time designed and delivered in-house to a group of middle managers employed by a large corporation. I was asked by my then dean to take over a particular module because my colleague had received negative feedback from the students, passed on to the dean by the corporation's management development manager. I asked my dean: 'Do you want me to get consistently high scores of 5 on the student evaluations, or do you want me to do my job of educating and developing these managers?' He thought for a moment, I think about whether to challenge my cynicism, and then replied: 'Get a positive evaluation; we need the business from this client.' I did just that. My observation is hinted at in some of the above: what grounds are there for believing that changing something to satisfy this group of individuals will help satisfy a new and different group of individuals? As my first story indicates, I believe collecting feedback is useful and should be done, but always tailored to specific courses. And student evaluation itself always has to be evaluated.
I encourage dialogue with students throughout the module, and am always ready to listen to (but not necessarily act upon) their comments... but this is a habit developed through some years teaching in FE before slithering into a university. I do sometimes wonder at the questions asked on the evaluation forms the students are sent, and at the over-reliance on Likert scales rather than getting them to say what they think. OK, it's easier to use metrics as an overview, but they are not very informative or helpful when you are looking to improve.
