After having spent the same amount of time (one academic year) at two different higher education institutions, I couldn't help but compare. While one can quite reasonably argue that such a comparison is about as useful as comparing apples and pears, it made me wonder: why are the two institutions – the University of Edinburgh and Masaryk University in Brno – ranked so differently?

Why is the University of Edinburgh ranked so highly in the three most cited world university rankings (#17 by Quacquarelli Symonds – QS, #39 by Times Higher Education – THE, and #51 by the Academic Ranking of World Universities – ARWU), while the other, my current alma mater, made it to a stunning #551–600 out of 700 according to THE, and the other two did not mention it at all?

The two universities could hardly differ more in terms of location, history and reputation. Can these aspects skew the rankings that much, or is Masaryk University simply a very bad institution of higher education? Given my personal experience, I refused to accept this and decided to find out more about the methodology the rankings use.

Postsecondary education has, like more or less everything else, become subject to globalization. As it has internationalized, the rankings have, not surprisingly, become global too. Millions of young people study outside their home countries aiming to receive the best possible education, and universities compete for students and staff as well. For these reasons, the rankings have a great impact on academia worldwide and must be taken into account.

What do the rankings measure then?

The rankings rely on perhaps the only thing that can be plausibly assessed: research productivity measured by the number of publications. In the case of THE, data based on research and academic publishing make up 60 percent of the final score. The implications are considerable. In all three rankings there is no non-English-speaking institution in the top ten. Is it really that English-speaking countries have vastly superior universities, or is there a bias in the criteria?

A report published by the European University Association (EUA) argues that the rankings heavily favour English-speaking institutions. Publications in English are more widely read and cited, and operating in English, the lingua franca of academia, means better access to top journals and informal networks. As a result, non-English publications are omitted altogether.

Also, the focus on scientific research (the most privileged areas being the STEM fields – science, technology, engineering and mathematics) is smashing for research-intensive Anglo-American universities, but not so good for, say, Germany, where much of the best scientific work is done at labs such as the Max Planck Institutes. The EUA report also points out the “relative neglect of arts, humanities and social sciences”.

Further, some of the rankings, especially the QS, emphasize reputational surveys. What do scholars think about a particular university? Which institution is, in their opinion, the most influential in a certain field? The implication of such a methodology is that many universities, mentioned by no one, are not represented at all.

Hold on – and where is the teaching?

Simply put – nowhere. The quality and impact of teaching is almost impossible to quantify. As a result, it has been largely ignored. THE has recently decided to include questions about teaching (such as the teacher-student ratio) in its reputational survey. However, it is doubtful that asking a narrow section of academics about teaching quality will bring meaningful results. It is thus clear that the rankings as they stand tend to overrate research at the expense of performance in teaching and learning.

The 'super brands' and the others

The top six universities in the 2014 ranking – Harvard, MIT, Stanford, Cambridge, Oxford and UC Berkeley – were found to be "head and shoulders above the rest" and were touted as a group of globally recognised super brands. This encourages the view that it is the central or peripheral status of a country or academic culture that greatly influences the placement of universities in the rankings.

Ellen Hazelkorn, the author of UNESCO and OECD reports on university rankings, argues that it is high time to boost the profile of universities outside the traditional elite, especially given the disproportionate under-representation of mainland Europe in current listings. She also expresses concern that the rankings may distort the labour market, as employers focus on graduates from a few expensive universities, and increase societal stratification, since elite status is closely associated with being very well resourced.

The comparison of Edinburgh and Brno relates more or less to all of these issues with the world university rankings. While Masaryk University quite likely lacks the funding to conduct research as extensive as the University of Edinburgh's, that says nothing at all about the quality or impact of teaching at my faculty (of social studies). In fact, if I carried out the thought experiment and tried to compare the incomparable, Brno would beat Scotland on all counts.

To conclude, I do believe there are aspects in which my university (or Charles University in Prague, or a whole range of universities in Germany, France, the Netherlands and so on, for that matter) can keep up with or even surpass the "elite". For that reason, the university rankings that will flood the headlines once again in early September should be treated with caution.

Bottom line: If you were about to decide where to study, would you rely on data provided by the rankings?