Metrics: An Addendum on RAE / REF

Not everything that counts can be counted, and not everything that can be counted counts...

We have had overwhelming support from a wide range of academics for our paper on why metrics are inappropriate for assessing research quality (200+ signatories as of 22nd June). However, some have also posed interesting follow-up questions on the blog and by email which are worth addressing in more depth. On the whole these are more REF-specific, and concern the relationship between the flaws in the current system and those in the proposed one. In my view the latter still greatly outweigh the former, but it is useful to reflect on both.

Current REF assessment processes are unaccountable and subjective; aren’t metrics a more transparent, public and objective way of assessing research?

The current REF involves, as the questioner pointed out, small groups of people deliberating behind closed doors and destroying all evidence of their deliberations. The non-transparency and unaccountability of this process is an important point to keep in mind.

The question is then posed: are metrics more transparent, public and objective? On a surface level, metrics are more ‘transparent’ because they are literally visible (public) and expressed as a number, making them easy to rank. But what they represent, as we argued in our paper, is fundamentally non-transparent, given the wide variety of reasons for citing work, including many beyond those we listed. In fact, it is the very simulation of transparency in the use of a numerical marker that threatens the practice of actually reading work for assessment purposes.

Why Metrics Cannot Measure Research Quality: A Response to the HEFCE Consultation


Update, 24th June: 7,500+ views, hundreds of shares, 200+ signatories! And a new post with responses to further issues raised.

The Higher Education Funding Council for England are reviewing the idea of using metrics (such as citation counts) in research assessment. We think using metrics to measure research quality is a terrible idea, and we’ll be sending them the response below explaining why. The deadline for responses is 12pm on Monday 30th June (to metrics@hefce.ac.uk). If you want to add your endorsement to this paper before we send it to HEFCE, please write your name, role and institutional affiliation in the comments below, or email either ms140[at]soas.ac.uk or p.c.kirby[at]sussex.ac.uk before Saturday 28th June. If you want to write your own response, please feel free to borrow as you like from the ideas below, or to append the PDF version of our paper available here.


Response to the Independent Review of the Role of Metrics in Research Assessment
June 2014

Authored by:
Dr Meera Sabaratnam, Lecturer in International Relations, SOAS, University of London
Dr Paul Kirby, Lecturer in International Security, University of Sussex

Summary

Whilst metrics may capture some partial dimensions of research ‘impact’, they cannot be used as any kind of proxy for measuring research ‘quality’. Not only is there no logical connection between citation counts and the quality of academic research, but the adoption of such a system could systematically discriminate against less established scholars and against work by women and ethnic minorities. Moreover, citation counts are highly vulnerable to gaming and manipulation. The overall effects of using citations as a substantive proxy for either ‘impact’ or ‘quality’ could be extremely deleterious to the standing and quality of UK academic research as a whole.

Why metrics? Why now?