On 5th April 2017, the third in the Cardiff Hub’s series of lunchtime debates on science and research cast the spotlight on the evaluation of research quality.
Professor Ole Petersen introduced the debate by underlining the significant investment by governments in research. Such levels of funding made some form of evaluation of research quality inevitable. The question to be considered by the panellists was how assessment could be undertaken effectively, particularly when senior politicians are saying that the current mechanisms are not working as well as they could, for example, in stimulating major breakthroughs in innovation.
Professor Yves Barde has served on three rounds of ERC assessments and spent much of his career in well-resourced research institutions in Switzerland and Germany. He concurred with Professor Petersen's opening remarks that reviewers tend to support 'safe' projects rather than adventurous, potentially high-risk ones.
Jane Boggan project-managed Cardiff's most recent REF submission, which resulted in an impressive 5th place in the UK (by grade point average), around £40 million per year in research income and a 56% share of quality-related research funding in Wales. Impact was assessed for the first time in REF2014, and Jane described the challenge of establishing criteria by which to measure it. The Stern Review made a set of recommendations for REF2021: reducing the administrative burden, less 'playing the system', an increased focus on institutional rather than individual research quality, a longer-term perspective, greater interdisciplinarity and a fresh look at impact. Practical measures to achieve this could include submitting all research-active staff, reducing the number of research outputs submitted, relying more on metrics and less on peer review, and adopting a broader definition of impact.
Professor Meredith Gattis addressed her message to potential research grant candidates, particularly the ERC. The challenge is to persuade reviewers that finding an answer to your research question is crucial and that you are the right person to uncover it. Be convinced of the importance of your idea. Engage with the panellists and listen to what they are asking you. Respond genuinely and consider their perspectives.
Professor Val O’Donnell spoke about the overall impact of research evaluation, particularly within the context of major change in the status of science under President Trump and in the wake of Brexit. She noted the movement towards open science and data and she highlighted a new initiative by the Wellcome Trust. She finished by enquiring whether the REF took into account such new developments in how science is done.
In the audience discussion, Professor Petersen underlined the difficulty of judging whether research evaluation itself is of high quality. The panel reflected on the specific strengths of UK and Cardiff research. Good research relies on modern technologies and facilities, which require investment. The UK has an open system that attracts excellent researchers from outside the country to work here. The challenge is for Cardiff researchers to have the ambition and drive to win funding in a highly competitive environment.
The competing challenges of life under the REF were also considered; in particular, the system needs enough flexibility to allow researchers to engage in a range of activities at different stages of their careers.
The desirability of concentrating a research elite in the Golden Triangle of Oxford, Cambridge and London was debated. It was noted that countries such as the Netherlands distribute research excellence more evenly across their universities yet remain very successful in research. The importance of high-quality review processes was generally agreed. To facilitate innovation, the point was made that a shorter path from research idea to final outcome was important.
Val O’Donnell reported a very good experience of working with the Wellcome Trust’s open peer review journal, the biggest challenge being the collation of primary datasets. Ole Petersen underlined the importance of the long-term reproducibility of research; the REF could help here by examining institutions' long-term results rather than relying only on short-term evaluation. There was concern that funders that also operate as journal publishers might act in their own self-interest.
The debate closed by looking at possible changes in the REF. Jane Boggan spoke about HESA definitions of research-active staff and the possible impact on staff in Russell Group and post-92 universities. The concluding comment was that universities should still be able to shape their REF submission, even though a core set of criteria was likely to be defined.