
Indicators: from excess to toolbox


There is no shortage of indicators when it comes to determining the impact of research. Bornmann (2013), for instance, lists some 60 indicators of social impact, and U-Multirank’s 100+ indicators for comparing universities include dozens of indicators on research. Standard protocols for research evaluation in the Netherlands use a relatively limited set of indicators. The protocol used to evaluate research at research universities is the Standard Evaluation Protocol (SEP); the SEP adopted for the 2015 – 2021 period (KNAW/VSNU/NWO, 2016) contains 30 indicators. For research at universities of applied sciences, this is the Sector Protocol for Quality Assurance in Research (Brancheprotocol Kwaliteitszorg Onderzoek, BKO) of the Netherlands Association of Universities of Applied Sciences (NAUAS); the most recent version of the BKO covers the 2016 – 2022 period and contains 40 indicators. While these are relatively manageable numbers, there have been several additions to both protocols, such as the additions for the humanities to the SEP (QRiH, 2017). Within the universities of applied sciences, two committees have further supplemented the BKO (Pijlman et al., 2017; Franken et al., 2018). In short, there is enough to choose from, but the quantity of indicators is not the only issue.

One of the issues is that lists of indicators are often biased: most indicators come from specific domains and focus exclusively on output. Moreover, in some cases the indicators are theoretical constructs rather than instruments that have actually been tested in real-life situations. We quote here the conclusions on valorisation indicators from the report by STW, the Rathenau Institute and Technopolis (Drooge et al., 2011, p. 17):

“• Most indicators have not been tested and are not known to have been used since the study concerned was published.
• Most indicators concern economic use; few indicators relate to social use.
• Most indicators apply to research in medical, technical or natural sciences; indicators for other disciplines such as humanities or social sciences are scarce.
• Most indicators relate to output; there are few indicators relating to impact, interaction or other stages of the research process.”

While there have certainly been developments in some areas since the report was published in 2011, such as the growing focus on impact, including social impact, it is wise to keep these points of critique in mind when selecting indicators.

One method put forward to solve this problem is to use different sources with different perspectives when choosing indicators, so as not to fall into the trap of using only ‘traditional’ indicators that have emerged from a particular historical context. For research at universities of applied sciences, this means clearly representing recent sources that focus on applied research, e.g. studies by the NAUAS, by Katapult and reports on applied research by third parties. A comprehensive reference database of indicators, including information on their sources, is a precondition for this.

A second concern is whether a shared set of indicators can be provided at all when there is great diversity in the goals, implementation and results of research. A negative answer to this is echoed in the Franken Committee’s report: “Drawing on experiences at home and abroad and the scientific literature, we may conclude that it is almost impossible to find a common set of indicators for all higher education institutions, all the more so because the objectives of research and practice in different research areas are often very different” (Franken et al., 2018, p. 28). Pedersen, Grønvad & Hvidtfeldt (2020) put forward a different perspective: “Drawing on the strong methodological pluralism emerging in the literature, we conclude that there is considerable room for researchers, universities, and funding agencies to establish impact assessment tools directed towards specific missions while avoiding catch-all indicators and universal metrics” (p. 1). A similar view can be found among an EU expert group: “An EU expert group that was to develop indicators for the evaluation of RRI [Responsible Research and Innovation], concluded that RRI, being a dynamic and multifaceted concept, would not benefit from a fixed set of indicators. It was rather in need of a toolbox of quantitative and qualitative indicators. The expert group concluded that the assessment of RRI required both indicators of the process and the outcome and impact of research and innovation. The indicators should support the learning process of the actors and organizations involved (Expert Group on Policy Indicators for RRI 2015)” (in: Drooge & Spaapen, 2017, p. 6).

Between prescribing a set of indicators for everyone on a mandatory basis and leaving the choice entirely open, as the BKO does for ‘use’ (“report something about use”), there is still a middle ground. This middle ground is, in essence, the call for a toolbox. The League of European Research Universities (LERU) articulates it as follows: “LERU is in favour of the creation of a ‘toolbox’ of indicators, some short, some medium and some longer term, from which, per call or programme, those indicators can be chosen that are most relevant” (Keustermans et al., 2018, p. 10). The humanities use the same toolbox concept: “The indicators form, as it were, a toolbox or set of tools from which to choose for self-evaluation reports” (QRiH, 2017, p. 11). A call for such a toolbox has not yet been articulated concretely by the universities of applied sciences, but it can be seen as giving substance to a joint statement by OCW, the Taskforce for Applied Research SIA and NAUAS to “work towards an impact measurement system” (2019, p. 9), with the ultimate aim: “Universities of applied sciences are able to structurally identify the impact of their research, both in qualitative and quantitative terms” (ibid., p. 4).

What can we say about such a toolbox? In any case, the toolbox will have to include an overview of possible indicators from which to choose. In preparation for such an overview, a database of indicators for the evaluation of research at research universities and universities of applied sciences can be created. Since such a database easily runs to hundreds of indicators, keeping a clear overview will be difficult. A certain amount of categorisation and/or a proper search structure will therefore still be needed; a sketch of what that could look like follows below.
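To make the idea of categorisation and a search structure concrete, here is a minimal sketch in Python of what a record in such an indicator database could look like. The category fields loosely follow the dimensions in the Drooge et al. (2011) critique (discipline, type of use, stage of the research process); the field names, example entries and the simple filter function are illustrative assumptions, not part of any existing protocol.

```python
from dataclasses import dataclass

# Illustrative record for an indicator database. The category fields follow
# the dimensions highlighted by Drooge et al. (2011); the field names and
# example values are assumptions made for the sake of this sketch.
@dataclass
class Indicator:
    name: str
    source: str                  # where the indicator was proposed, e.g. "BKO 2016-2022"
    discipline: str              # e.g. "medical", "technical", "humanities"
    use_type: str                # e.g. "economic", "social"
    stage: str                   # e.g. "interaction", "output", "impact"
    tested_in_practice: bool = False

def select(indicators, **criteria):
    """A very simple search structure: keep indicators matching every field=value criterion."""
    return [ind for ind in indicators
            if all(getattr(ind, field) == value for field, value in criteria.items())]

# A mini-database (entries invented for illustration) and a toolbox selection
# for one hypothetical evaluation.
database = [
    Indicator("patents granted", "valorisation literature", "technical", "economic", "output", True),
    Indicator("publications for professionals", "BKO 2016-2022", "humanities", "social", "output", True),
    Indicator("jointly set research agendas", "hypothetical example", "social sciences", "social", "interaction"),
]
toolbox = select(database, use_type="social", stage="output")
```

Whether such a selection is made in code, a spreadsheet or a web interface is secondary; the point is that each indicator carries enough metadata (source, discipline, type of use, stage) to be found and chosen for a specific evaluation.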

The demand for categorisation or a specific ordering of indicators coincides with the observation that, when the concept of impact was introduced to characterise research at universities of applied sciences, little attention was paid to analysing what impact actually is. Such a structural analysis can provide insight into key aspects of continuous effects, for which indicators can then be sought. A first attempt at such an analysis (van Vliet et al., 2020) already yields a tool that can be used to look at reporting on continuous effects in a more structured way: the continuous effects matrix.
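The matrix itself is defined in van Vliet et al. (2020). Purely to illustrate how a matrix structure can make reporting on effects more systematic, the sketch below cross-tabulates reported effects along two hypothetical axes (who is affected and the stage at which the effect occurs); the axis labels and the example entry are assumptions and do not reproduce the actual continuous effects matrix from that report.

```python
import pandas as pd

# Hypothetical axes, for illustration only; the real continuous effects
# matrix is defined in van Vliet et al. (2020).
affected = ["professional practice", "education", "research community"]
stages = ["interaction", "use", "longer-term effect"]

matrix = pd.DataFrame("", index=affected, columns=stages)
matrix.loc["professional practice", "use"] = "protocol adopted by partner organisations"  # invented example
print(matrix)
```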

Harry van Vliet, June 2021

Sources

Bornmann, L. (2013). What is Societal Impact of Research and How Can It Be Assessed? A Literature Survey. Journal of the American Society for Information Science and Technology, 64(2), 217-233.

Drooge, L. van, & Spaapen, J. (2017). Evaluation and monitoring of transdisciplinary collaborations. The Journal of Technology Transfer. https://doi.org/10.1007/s10961-017-9607-7

Drooge, L. van, Vandeberg, R., Zuijdam, F., Mostert, B., Meulen, B. van der, & Bruins, E. (2011). Waardevol. Indicatoren voor Valorisatie. Den Haag.

Franken, A., Andriessen, D., van der Zwan, F., Kloosterman, E. & van Ankeren, M. (2018). Meer waarde met HBO. Doorwerking praktijkgericht onderzoek van het hoger onderwijs. Den Haag: Vereniging Hogescholen.

Keustermans, L., Wells, G., Maes, K., Ruiter, E., Alexander, D., Meads, C., & Noble, A. (2018). Impact and the next Framework Programme for Research and Innovation (FP9). Note from the League of European Research Universities.

KNAW, VSNU & NWO. (2016). Standard Evaluation Protocol 2015 – 2021. Protocol for research assessments in the Netherlands. Amsterdam/Den Haag: KNAW.

Ministerie van OCW, Regieorgaan SIA, & Vereniging Hogescholen. (2019). Verkenning praktijkgericht onderzoek op hogescholen. Den Haag: Ministerie van OCW.

Pedersen, D. B., Grønvad, J. F., & Hvidtfeldt, R. (2020). Methods for mapping the impact of social sciences and humanities – A literature review. Research Evaluation, 0(0), 1-18.

Pijlman, H., Andriessen, D., Goumans, M., Jacobs, G., Majoor, D., Cornelissen, A., & de Jong, H. (2017). Advies werkgroep Kwaliteit van Praktijkgericht Onderzoek en het lectoraat. Den Haag: Vereniging Hogescholen.

QRiH. (2017). Handleiding evaluatie van geesteswetenschappelijk onderzoek volgens het SEP. Retrieved from https://www.qrih.nl/nl/over-qrih/de-handleiding

Van Vliet, H., Wakkee, I., Fukkink, R., Teepe, R., & van Outersterp, D. (2020). Rapporteren over doorwerking van Praktijkgericht Onderzoek. Amsterdam: Hogeschool van Amsterdam.