The assessment of research is of enormous public importance – its findings determine the evidence base for a vast range of decisions which affect every aspect of our lives. The rise of bibliometrics as the primary means by which we gauge the quality of research is therefore something that the public should know more about, especially because it is deeply flawed.

For obvious reasons, this isn’t something that’s had much attention from the media (although the Guardian published an article relating to the topic last year). It might not be sexy, but it’s an issue which is critically important, with implications that touch more or less every aspect of our lives.

As an early career academic I had the mantra ‘publish or perish’ bashed into my consciousness on a more or less daily basis. To have a successful academic career, researchers need to publish as much of their work as possible, in the highest rated journals. If you publish a research article and it gets referred to a lot by other academics, this is taken as an indication that the research is ‘good’, even if the only reason it is being cited so often is that others are using it to illustrate what is wrong with it. There’s also a strange academic norm whereby researchers and theorists are expected mainly to cite recently published work, which is a bit bonkers, really, since old research can often be the most important or most relevant basis for new studies.
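
To make the idea of a ‘citation index’ concrete, here is a minimal sketch (my illustration, not something from the original post) of one widely used metric, the h-index: a researcher has index h if h of their papers have each been cited at least h times. Notice that the count is indifferent to why a paper is cited; work cited mainly as an example of what not to do still pushes the number up. The citation counts below are invented.

```python
# Minimal sketch of the h-index: the largest h such that h papers
# have at least h citations each. Citation counts are hypothetical.

def h_index(citations):
    """Return the largest h such that h papers have >= h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: six papers with these (made-up) citation counts.
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3: three papers have at least 3 citations each
```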

In order to publish, you need to provide a context for your work, citing key texts from the big players in your field (thereby driving up their citation indices). If you’re being canny about it and you have recent publications yourself, you’ll also make sure you always cite all of them (even if they’re not relevant), in order to drive up your own citation rating.

The whole process is mediated by journal editors, who rely on peer review to ensure that what’s being published is of high quality. And the main measure of quality on which the journals themselves are judged is how often their articles are cited by others. It’s a kind of crazy game, with the rules set, refereed, and maintained by the players; one way of looking at it would be to call it an international popularity contest.
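
The journal-level version of ‘how much they are cited’ is usually the impact factor. As a rough illustration (this is the standard two-year definition, not a formula spelled out in the post), a journal’s impact factor for year Y is the number of citations received in Y to items it published in the previous two years, divided by the number of citable items it published in those two years. The figures below are invented.

```python
# Rough sketch of the standard two-year journal impact factor.
# All figures are hypothetical, purely for illustration.

def two_year_impact_factor(citations_in_year, citable_items_prev_two_years):
    """Citations this year to the previous two years' items, per citable item."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 480 citations in 2014 to its 2012-2013 articles,
# of which there were 150 citable items.
print(two_year_impact_factor(480, 150))  # -> 3.2
```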

The general drive towards transparency and accountability in the academic world is a good thing. But I believe that to do this properly, we need to take a realist stance, one capable of recognising the complexity of the world we are researching. Instead, we’ve created a culture of numbers, in which the whole academic edifice has come to believe that fair judgements of quality can be reached by using algorithms to analyse statistical data, even though those data cannot measure quality in any meaningful sense. The use of citation indices as a proxy for quality is illusory, and yet the system has taken hold amongst an international community which really ought to know better.

But once you’ve invested in this system, and started to do well at it, it would be against your own self-interest to be critical of it. That’s not to say there have not been attempts to get the academic world to examine its own practices. In an excellent report from the Joint Committee on Quantitative Assessment of Research, a group of mathematicians and statisticians explain all of this in much greater detail, and with a much more thorough grasp of the issues, than I can offer.

One of the most crucial points they make is that although citation counts appear to be correlated with quality, the precise interpretation of rankings based on citation statistics is not well understood. And given that citation statistics play such a central role in research assessment, it is clear that authors, editors, publishers and institutions are becoming adept at finding ways to manipulate the system to their advantage.

The report concludes that the long-term implications of this are unclear and unstudied, but it doesn’t take a hugely active imagination to envisage some of the potential consequences. The report was published in 2008, so by some standards it is itself already out of date, and given that practices for assessing quality in academic research show no signs of changing, it seems, so far, to have fallen on deaf ears.
