
Before deploying indicators


Often, indicators are ‘filled in’ quantitatively: the number of publications, the percentage of students participating in research, the size of total second-flow funding, and so on. Figures tend to take on a life of their own when they become detached from the context in which they were created. An ‘impact’ figure, for instance, provides little insight into who that impact was for, what the effect of that impact was, or what contributed to achieving it. Especially in an evaluative context, a figure can take on a (new) meaning of ‘too little’ or ‘below standard’.

Decontextualising information into indicators and then contextualising the indicator again into ‘programmes’ is a real danger (Coombs, 2019). In other words, indicators should always be selected for a context and presented in a context, for “indicators used in impact assessment cannot be universal. Instead, they need to be developed for given contexts” (Pedersen, Grønvad & Hvidtfeldt, 2020, p. 1). As a guiding principle, the following statement can be used: “A reality reduced to indicators is poor. A reality supplemented with indicators is rich and leads to clear communication” (Drooge et al., 2011, p. 7).

Two points can be made with regard to contextualising indicators. First, an indicator is about providing evidence, which can be formulated either qualitatively (e.g. positive feedback from stakeholders in interviews) or quantitatively (e.g. the number of trade publications). To emphasise this point, it makes sense to shift from talking about ‘measuring’ impact or continuous effects to ‘demonstrating’ them.

Second, contextualising means properly describing the context in which the indicators are used. A guideline for what that description should cover, as a minimum, can be drawn from the aspects identified in the idea of valorisation cards (Drooge et al., 2011):

  1. The scope or level of aggregation of the research evaluation. The scope is often set on the basis of an organisational unit (knowledge centre, research group) or a programme-based line (Centre of Expertise, large research programme), but in principle it can also be geographical, e.g. a neighbourhood, city or region. The choice of scope has consequences for which indicators matter and how they are weighted. What also comes into play here is the question of how individual projects ‘add up’ to a larger whole, such as a programme or an institute, and how this is dealt with.
  2. The domain, sector or discipline that is reported on. The creative industry, healthcare, agriculture and transport are essentially different, which means that the activities and the (nature of the) research outcomes will differ. These differences must be honoured and taken into account. Indicators are domain ‘neutral’, which is both a strength and a weakness. The advantage is that indicators can be used across a wide range of disciplines and represent a shared framework, which makes it easier to compare notes on and communicate about research. The drawback is that the uniqueness of a domain is not reflected in reports based on indicators. This will have to be introduced by outlining the domain, including questions such as: what is relevant in the domain? How is knowledge developed, shared and used? Who are the salient stakeholders and why? For each domain, this will lead to a certain choice of indicators, a certain weighting of indicators and the necessary contextualisation of those indicators.
  3. The party or target groups about which the indicator says something, such as the distinction made in the BKO (Brancheprotocol Kwaliteitszorg Onderzoek) between professional practice/society, education (students and lecturers) and the research domain (NAUAS, 2015). An indicator of the contribution to society is very likely to be different from an indicator of the contribution to education.
  4. The research stage. A new research group or an emerging sector/discipline, for example AI, requires a different set of indicators than a research group that is more mature and is already part of an established tradition of research methods, networks and shared research agendas.

One more aspect can be added to these four aspects that describe the context in which an indicator is used: the choice between a formative and a summative perspective, i.e. whether the indicators are used as a means of learning (formative) or of accountability (summative). Current quality cycles, such as assessments, are mainly retrospective: a snapshot of what has been achieved, used to arrive at qualifications of insufficient, good, and so forth. There are also methods that place more emphasis on gathering knowledge about how impact can be achieved and how this can be done better, be it by analysing impact pathways or by mapping contributions from different stakeholders much more meticulously. This involves a greater focus on further optimising the process, on the assumption that this will also increase impact. The question is to what extent the two perspectives are at odds with one another. What is clear is that the formative perspective is often not adequately expressed, or at least not made visible and valued.

Harry van Vliet, August 2021

 

Sources

Coombs, S. (2019). Towards Evaluating the Research Impact of Dutch Universities of Applied Sciences: How do we begin? Enschede: Saxion Hogescholen.

Drooge, L. van, Vandeberg, R., Zuijdam, F., Mostert, B., Meulen, B. van der, & Bruins, E. (2011). Waardevol. Indicatoren voor Valorisatie. Den Haag.

Pedersen, D. B., Grønvad, J. F., & Hvidtfeldt, R. (2020). Methods for mapping the impact of social sciences and humanities – A literature review. Research Evaluation, 0(0), 1-18.

Vereniging Hogescholen (2015). Brancheprotocol Kwaliteitszorg Onderzoek 2016–2022. Kwaliteitszorgstelsel Praktijkgericht Onderzoek Hogescholen. Den Haag: Vereniging Hogescholen.