What do rankings measure?
Rankings measure what we can count. That seems an obvious point, but the less obvious corollary is that some important things can't easily be quantified. What do you do? Hard-to-gauge areas include how good the research and facilities are, what the atmosphere and sense of community are like, or whether graduates are successful in the labour market over the long term. You can use 'proxies': things you can count that look like they might be connected to the thing you're actually after. One option is to use them and admit you haven't captured the essence of the thing, and that your method tells you something but is flawed. (This is standard practice in research in general.) The other option is to use them and ignore the limitations. Either way, rankings use proxies a lot, as we'll see.
Many rankings use the difficulty of admission as a measure of quality – the hardest universities to get into are the best; one Japanese ranking is based entirely on it. This makes sense if the courses are so difficult that only the brightest can complete them. But that is hard to establish accurately, and entry requirements alone are not enough. I could set up a university tomorrow and only admit people with an IQ of 160 or higher. But if, under the surface, it's a holiday camp staffed by unqualified, uninterested, and unknowledgeable people, then the standards might be lacking. I know that's a flippant way of putting it, but the basic point is that a good façade alone doesn't make a dependable institution. You also have to acknowledge that some people can be very bright but don't do well at school and/or never think about going to university; equally, some might not be the brightest but do well and never consider not studying (read here for more on this). So, needing high grades to get in might be a useful indication, but it isn't a marker of 'goodness' in its own right.
Can we assume that the universities doing their jobs well have the happiest students? Student satisfaction is a bit of a catch-all, in that it is supposed to encompass the quality of the learning experience. It may also try to capture non-educational aspects such as careers support, accommodation, sporting and social facilities, and so on. It is often broken down into separate areas, like feedback, teaching, and student-staff interaction. Of course you'd hope that the teaching is engaging and the feedback useful, but you can't dress every topic up to be as fun as Monty Python, and there is only so much help you can give students. The manner of teaching also differs widely between subjects. Part of going to university is about learning new things, learning for yourself, and changing the way you think. It is supposed to be challenging and difficult, and this isn't always enjoyable. The non-educational side varies between universities and countries. Oxford is seen as a world-leading university and the social side of things is excellent, but the sporting facilities are pretty bad (outside rowing), especially when you compare it with US universities, which have quasi-professional sports teams with enormous budgets. Overall, you also have the issue that people who are disgruntled – failed an exam, lived next door to noisy neighbours, didn't like the atmosphere – aren't satisfied, but this is not necessarily the university's fault. What do you do, give everyone great grades, make the courses easy, and focus on the non-academic side of life? Would students be happy if you did? Student satisfaction is a useful indicator, but it is also a pretty vague proxy.
How do you measure whether a university does good research? Perhaps by whether it's world-changing, or widely read and talked about? Not all academic work has an immediate or obvious effect like the discovery of a new medicine or a radical economic theory. You can count research publications, but this focuses on academic articles, which aren't very accessible to the public, and a lot of research goes into public or official reports that aren't counted. In any case, pumping out lots of articles doesn't necessarily make you good, particularly if you're constantly recycling old material. Citations – how much people refer to your publications – are seen as another measure, but certain kinds of work attract more citations, and it's not just the 'shiny' stuff. Two types of article likely to attract a lot of citations are summaries of a subject area and very weak research. The first are cited a lot because they're a useful overview and shortcut, while the second are used as examples of what not to do, or to disagree with findings that don't add up. How about the money you attract for research? If a university does a lot of work in robotics, engineering, or space exploration, the sums it needs are – excuse the pun – astronomical. If another place focuses on less expensive subjects like maths or the humanities, it might look weaker even if its research is really interesting and/or life-changing. Research reputation is also sometimes used. This is in a way a bit of a guesstimate: even if you run a massive survey, it's still a measure of what people think is going on rather than what is actually happening. It's interesting, and may be somewhat accurate, but that accuracy is certainly not guaranteed.
The universities whose students get jobs quickly after graduating must be producing the best workers. This assumes two things. Firstly, that employers have a genuinely accurate understanding of how good graduates really are and how this varies between and within universities. What if employers just use rankings to make their hiring decisions? They might find that the people they get are fine, but this doesn't mean the people they never considered weren't up to the job. Secondly, someone might interview well and be able to do the job from day one, but day-one gaps can be fixed through short-term training anyway. Quick employment doesn't tell you how good people are over their careers, or how they cope if the nature of the job changes. Sometimes you need time to get going and really understand the field you work in, rather than hitting the ground running.
The International Factor
Some rankings imply that having a lot of international staff and students is a marker of quality. It could be useful if you know that you recruit the best academics and students, skimming off the top of the global and domestic cream. This might not work out, though, as some people want to (or have to) work or study in their home country. We don't all speak the same language, we don't all know where the 'best' places are, and we don't all have the resources to move around. This means that our ability to live wherever we like in the world is limited, and there may also be serious issues around visas for study or work. From a sceptical perspective, you could just as easily recruit people who are unemployable or can't get into university at home in order to make yourself 'international'. It's nice if universities are mixed, as you get a range of experiences and opinions, but it's not a measure of quality in its own right.
So, rankings: dinkum or bunkum?
I've missed out quite a few commonly used measures, like student-staff ratios, spending per student, and average final grades. The general point, though, remains the same: any of those things could be connected with high quality, but they don't automatically create it in themselves. In all of this there are a number of risks. One is that universities intentionally (or accidentally) game the system by focusing on the measures themselves and ignoring the underlying principles. You could end up with great ranking scores while the soul of the thing you're trying to be is lost. Much of the data collected in rankings refers to the here and now, rather than how things pan out over time. Some research takes a long time to have an effect, potentially centuries, and employability is a long-term, variable thing that depends on the economy as much as anything. Perhaps using a lot of criteria cancels out some of the risks, but I can't help feeling you're still missing something. The head of one of the most widely-used international rankings said recently that these things should come with a health warning, in that they're useful but not the authoritative be-all and end-all. Rankings do have a purpose, and they do tell us a lot, but the underlying question is whether their flaws are widely known or acknowledged.
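The 'lots of criteria' approach usually boils down to a weighted sum of proxy scores, and a toy sketch shows how sensitive the resulting order is to the weights the ranking compilers happen to choose. Everything below – the institutions, the indicator values, and the weights – is invented for illustration:

```python
# Illustrative sketch: a composite ranking as a weighted sum of proxies.
# All universities, indicator scores, and weights here are made up.

indicators = {
    "Alpha": {"citations": 92, "satisfaction": 61, "entry_grades": 88},
    "Beta":  {"citations": 70, "satisfaction": 85, "entry_grades": 75},
    "Gamma": {"citations": 55, "satisfaction": 90, "entry_grades": 60},
}

def composite(scores, weights):
    """Weighted sum of indicator scores (higher = 'better')."""
    return sum(scores[name] * w for name, w in weights.items())

def rank(weights):
    """Order universities by their composite score, best first."""
    return sorted(indicators,
                  key=lambda u: composite(indicators[u], weights),
                  reverse=True)

# A research-heavy weighting puts Alpha on top...
print(rank({"citations": 0.6, "satisfaction": 0.2, "entry_grades": 0.2}))
# ...while a satisfaction-heavy weighting reorders the table entirely.
print(rank({"citations": 0.2, "satisfaction": 0.6, "entry_grades": 0.2}))
```

The point is not the arithmetic but the arbitrariness: the same three institutions, with the same underlying data, produce a different league table depending on which proxy the compiler decides to emphasise.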