Does university assessment still pass muster?

Most universities still rely on exams and assessed essays to grade their students. But as the fourth industrial revolution, employability and student satisfaction all rise up the agenda, many experts are suggesting that assessment needs to resemble real-world tasks much more closely. Anna McKie marks the arguments

Published: May 23, 2019

POSTSCRIPT:

Print headline: Time to get real


Reader's comments (15)

This article seems focused on humanities and other “essay based” areas in which exams do (maybe) focus on “regurgitating facts”. However, it seems to be totally unaware of the nature of assessment in mathematics and the physical sciences, in which examinations mainly test problem-solving and calculation skills. Testing this sort of thing through coursework or other open assessments causes difficulties, as students can easily copy others’ work, or just get someone else to do the assignment. Of course, much assessment is already through means other than examinations where this is appropriate (eg laboratory work), but there seems to be no good argument to change the balance.
A very timely article, particularly when we seem to be suffering from a moral panic over "authenticity". My only concern is that much of this work has been going on for over 20 years and it is only just being noticed by "education advisers" (let alone policy makers). I make no apologies for signposting colleagues to the work of David Boud - Boud, D., 2007. Reframing assessment as if learning were important. In Rethinking assessment in higher education (pp. 24-36). Routledge. Focusing on learning would be a good starting point.
No need for apologies - Boud is great - but just a small point: 'education adviser' is the title of an educational developer: you seem to think it's some sort of management or policy role, but it's an academic role. I notice the work, because I do the work. I'm in the field. It... sounds like you aren't?
Thanks Emma, sorry if this seemed to diminish your role. My point here is that advisers or developers are part of a small community with a specific teaching focus - where is the rest of the academic community?
Meanwhile the secondary education system has moved back to 100% exams because the above methods don't seem to work. Admittedly there's a lot of right-wing ideology that influenced this change, but it's also worth thinking through the benefits of exams before disparaging them. Surely that is critical thinking at its best?
For final degree classifications, a useful tool is a 'capstone project' - like the final-year project common in STEM subjects, where the student spends a large amount of time during their final year working on an independent extended piece of work with supervision from an academic, then presents that work in a 'demo' as well as writing a report/dissertation about it, which is marked. It works well in computer science (my discipline, and I'm the final-year project tutor), but I'm not sure how well it would work in the humanities. However, that doesn't address the issue for the entirety of a student's career in university. As the article states, there are some facts in any discipline that need to be LEARNED, however much you want to concentrate on interpretation of information rather than rote learning of it. We have to remember that those determining such things are those who rose to the top of the educational system as it is, and stepping outside of "it worked for me" can be quite hard.
Universities are really the oddest place for teaching/learning workplace skills. The best place for that is – the workplace. Universities should focus on cultivating scholarship: an array of critical-analytical-creative skills of mind, adaptable to all contexts, but sadly threatened and diminished since the rise of crass utilitarianism in the 1980s.
Agreed! There should be no training in universities, but employers will happily let the taxpayer fund it if we do not keep universities as places of learning. Surely nobody can contend that university is any more than the start of the process of building expertise, even for an academic.
A long comment on a narrow subject. Some of the most valuable bits are in the comments. One size of assessment will not fit all. Some degrees are highly vocational. STEM subjects differ from the humanities. We first need to decide what the "higher purpose" of a university education is, and only then consider the role of an undergraduate degree (which might be undertaken elsewhere than at a university). We need to define what outcomes and outputs we seek from a degree before we can design a better way of achieving those objectives. Preventing cheating should be way down the list of considerations when designing the process of assessment.
I see a lot of things worth aspiring to here. However, there are several factors that act as obstacles, some of which are outlined in the article. The first is legislation and the quality regime. Another comes down to identity politics: the idea of ipsative assessment presupposes that you know the student and have ways of gauging their personal progress (it is also best served by some level of ongoing contact with the same few members of staff). This is anathema to those who support anonymous marking regimes. The idea of programme-driven assessment is very compelling but very hard to bring about when you have high levels of programme flexibility and students from different programmes taking the same classes (unless you are prepared to test them differently according to programme; viz. the first point above). Another point comes down to resources: in a mass education system, we need to work with assessment types that are economical, trustworthy, and have been proven to work. Finally (for now), we assume that teaching staff are equipped with the will and expertise to put into place new forms of assessment. This assumption, like many, is problematic.
Essay assessment is also not without flaws. Too many students get around it by using essay mills. In fact, many wealthy middle-class parents go out of their way to employ dedicated postgraduate students or unemployed postdocs to help their undergraduates write the essays (the father of one of my former students employed an unemployed postdoc on a full-time contract for £30,000, with a performance-related bonus, to ensure the student got top grades). People are finding ways to beat the usual filters for plagiarism and out-of-pattern authorship that would otherwise be picked up by software such as Turnitin; in my experience, those gaming the system always seem to be a step ahead of the sophisticated technological measures against cheating. The old-fashioned way of manually sifting through students' work and calling them to the tutor's room for a quick chat on grounds of suspicion, though more reliable and trusted, is just not practical or workable, given the sheer number of students and the administrative workload that tutors and professors are already burdened with. (We struggle even to meet the marking deadlines for assignments; many just skim the work, with the second marker at times relying on the first marker's observations because he or she is overburdened with other duties, often prioritising the work of graduate students or final-year dissertations/theses.) A better solution would be to introduce seminar participation and active critical engagement in class as part of the overall assessment, graded alongside the usual methods. Such grading of classroom participation and interactive engagement is already used at some American universities/colleges and in most postgraduate professional schools in the US, eg law schools offering the JD programme. Many academics are all but giving up hope in essay-style assessment as a reliable indicator of student ability. As a former lecturer/professor, I don't have much faith in the essay assignment system. An alternative would be essay assessments that are either ungraded (ie pass or fail only, with constructive feedback) or formative, not counting towards the final degree classification, so that students are not pressured to aim for the top grade at any cost, which drives them towards cheating/gaming the system.
Many of us have been listening to views on assessment for a considerable number of years. In my time I've seen views and ideas come and go, as well as their protagonists. New ideas were, however, little more than old repackaged thoughts linked to those other problems coming from the 'new age' which had to be 'managed' by the emerging new kids on the block. One such problem was, and still is, massification. It's simply futile discussing any different assessment pattern without discussing how time-hungry and how expensive it is. Managers will naturally support any innovative approach as long as it requires the same, or fewer, resources. 'Ipsative' assessment, for example, requires the assessor to know the student and to remember the work! The time required for such assessments would devour an academic's time and could lead to him/her skimming the work and inventing the feedback unless extra resources are allocated. Please don't be naive: such things do happen. Additionally, trying to justify linking a student's work to an imaginary 'real world', into which they will eventually emerge, is just silly. We should stop using this phrase. Employers, collectively, can't agree on what a skill is, let alone articulate what they want from a graduate. They may be able to offer definitions for their own purposes, but there is no common ground and no industry voice. Even if there were, why should we 'train' students for industry needs when they should be doing that themselves? Training is not our brief, it's theirs ... and they should fund it. Any assessment pattern which purports to mimic reality and to prepare a graduate for employment is quite simply a fraud. As I've previously written, our brief is to provide employers with the educated raw material, and they should do the rest.
The article rehearses many of the drivers of assessment choice - tradition, funding (almost all alternatives to exams will cost more to set up or perfect), and a focus on the other things that academics do that bring rewards (and it's not assessment). One as-yet-unexplored driver is digital technology. We should not simply think about traditional assessments (exams and essays) being facilitated by IT; we should think about the other key things that technology can assist with and that we expect in our graduates, and then develop ways of valuing them. Just because we have used exams for years does not mean that they are the right tool to use.
The argument I hear time and time again is that "they NEED to know this stuff and prove they know it via an essay-based exam", or "they NEED a chunky piece of writing to really get into the detail". And then most UG students (science-based) come out able to ramble on spectacularly, writing research papers that take 12 pages to get anywhere resembling a point, but still use Google for the actual knowledge and are unable to write concisely to get an important point across. Yet module/unit leaders are all too often unwilling to budge on the importance of "their" assessment, without proper consideration of the bigger picture and the ways in which we access knowledge now compared with even the 90s. That "perceived innovative ideas are rehashed versions of old ones" is a nonsense counterpoint to change - yes, they aren't actually "new", but they may work better now than then, because the sector, the students and the technology are so different that you may as well be talking about not using a mobile because they tried that in the 80s and it just wasn't that good. I now run an online open-book unseen exam in the final year to try to better mimic the kind of expectations, environment and means graduates may experience once they're finished - and it makes marking and feedback SO much easier.
"Moreover, educators must not focus all their attention on designing assessment to stamp out contract cheating because that would be to the detriment of education and those who do not cheat", Ellis adds The biggest detriment to students who do not cheat is the fact that those who do cheat get better grades and outcompete the honest students for jobs.
