
A new BKO: opportunities for a rethink

Blog · 18 August 2021 (updated 6 December 2023)

The government’s mission-driven innovation policy places great emphasis on the contribution research is expected to make to social issues such as sustainability, health and safety. The impact of research on social challenges is gaining prominence in applications for, and evaluation of, research. Several players in the knowledge infrastructure, such as NWO and ZonMw, have developed, or at least made more explicit, an impact strategy to this end. Research at universities of applied sciences will also have to keep pace with this development.

Indeed, the starting position for this seems favourable, given that the designation of research at universities of applied sciences as ‘applied’ or ‘practice-based’ (Brouns, 2016) assumes a close link with practice. This link should ensure that relevant issues are addressed jointly and that research findings are directly applicable to that practice. There is no shortage of examples illustrating this (NAUAS, 2020), and they will get a further boost from the National Platform for Applied Research to be launched next year (Woertman & Doove, 2019). In addition, universities of applied sciences continue to expand their range of instruments through, for example, Centres of Expertise, whose task is to establish links between higher education, top sectors and social challenges through public-private partnerships. The elaboration of this task (Reiner et al., 2019) contains suggestions such as ‘work from the outside in’ and conditions such as a ‘quadruple’ representation that also brings citizens to the table in ‘directing’ the research. Also worth mentioning is the development of the Professional Doctorate, whose action plan is entitled ‘Learning to intervene in complex practices’ (Andriessen et al., 2021). All of this is a great springboard for contributing to mission-driven innovation policy at a high level.

A determining factor for ‘success’ here, however, is how universities of applied sciences will report structurally on the impact achieved, beyond anecdotal evidence, however convincing that may sometimes be. The current framework for this is the Dutch Sector Protocol for Quality Assurance in Research (NAUAS, 2015), which also stipulates that universities of applied sciences report on impact. In this article, we explore the questions this framework needs to answer if it is to align with the developments outlined – not only because the moment is opportune, with the current protocol drawing to a close, but also because it is necessary.

Dutch Sector Protocol for Quality Assurance in Research
The Dutch Sector Protocol for Quality Assurance in Research (Brancheprotocol Kwaliteitszorg Onderzoek, BKO) – NAUAS’s general research evaluation framework – is divided into five standards, of which the fourth deals with the results and impact of research. The current BKO (2016-2022) is in need of at least a ‘minor overhaul’, given a number of unfortunate formulations in the protocol. One example is the requirement to report on impact in three areas: professional practice and society, education and professional development, and knowledge development. These are not the same units of measurement: they mix who the impact is aimed at (professional practice, society, education) with a characterisation of the impact itself (professional development, knowledge development). It is better to treat these as two separate dimensions, from which, for example, a matrix can be built with the focus of impact (target groups such as professional practice, education, science) on one axis and the intention of impact (knowledge development, knowledge sharing, knowledge implementation, etc.) on the other.
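To make this two-dimensional reading concrete, here is a minimal sketch of such a matrix as a simple data structure. The axis labels follow the examples in the text; the recorded entry is purely hypothetical, not something the BKO prescribes.

```python
# A minimal sketch of the proposed impact matrix: focus of impact on one axis,
# intention of impact on the other. Labels follow the examples in the text;
# the recorded entry is purely illustrative.
from itertools import product

FOCUS = ["professional practice", "society", "education", "science"]
INTENTION = ["knowledge development", "knowledge sharing", "knowledge implementation"]

# Each cell collects the evidence (indicators, products, interactions) for one combination.
impact_matrix: dict[tuple[str, str], list[str]] = {
    cell: [] for cell in product(FOCUS, INTENTION)
}

# Hypothetical example: a co-creation session with regional SMEs counts as
# knowledge sharing aimed at professional practice.
impact_matrix[("professional practice", "knowledge sharing")].append(
    "co-creation session with regional SMEs"
)
```

The point of the structure is that every piece of evidence answers two questions at once – for whom, and with what intention – instead of collapsing both into a single mixed list of ‘areas’.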

Another example is the presentation of indicators. For the research effort, the protocol is prescriptive: reporting on generated revenue and staff deployment is required, down to precise counting instructions such as the number of FTEs. For the use of research results, it is formulated much more loosely: a hodgepodge of possible indicators is given and it is up to the institution to choose which to use, as long as there are six. As a consequence, no aggregation is possible at national level, and thus no substantiation of the impact of applied research. That is one reason we are left with mainly anecdotal evidence to point to the value of applied research.

A final example is the emphasis on products in the sector protocol. While product types such as artefacts, demonstrations and teaching modules are mentioned in addition to publications, networking, for example, is not. In the current BKO, the word ‘network’ appears only a few times and, in the context of impact, only as an example of an indicator of use (‘participation in networks’). A wide variety of knowledge interactions such as co-creation sessions, training sessions, hackathons and graduate labs are also absent. A more systematic overview of types of research results would be helpful, especially since such overviews are in fact available (Spaapen & van Drooge, 2011; van Vliet et al., 2020).

A number of reports published in recent years also acknowledge that the current BKO needs an overhaul. The Pijlman Committee (Pijlman et al., 2017) issued an opinion on quality criteria for applied research. Following on from the BKO, the committee notes a need to clarify Standard 3 in particular (quality of research). The Franken Committee (Franken et al., 2018) focuses on the social valorisation of applied research, and thus mainly on Standard 4 of the BKO. It makes a strong case for the term ‘continuous effects’, as already advanced by the Pijlman Committee. The concept of continuous effects is not new in the context of applied research: it is already alluded to in the innovation model outlined in the WRR report ‘Towards a learning economy’ (2013), for example, and we find the term even earlier in the assessment framework of Regieorgaan SIA’s RAAK scheme, where ‘sustainable continuous effects’ is named as one of the five pillars (van Vliet & Slotman, 2006). The term mainly emphasises that value is already created in the preparation and implementation of research, driven by collaboration with and in the real world. Framing impact only as an outcome (long) after the end of the research does not capture this. The term ‘continuous effects’ therefore seems to do more justice to the nature of applied research as focused on interventions in practice, with a short-cycle multidisciplinary approach in co-makership with various stakeholders.

The Franken Committee report does not sufficiently elaborate on what introducing this concept would imply for Standard 4. There is no reflection on what it means for evaluating the research, there is no translation into indicators, and the definition of impact given rests on a process characterisation, namely that there is impact not only after the research process has ended but also during it. That is a meagre definition, as it says nothing about the influence itself – about what the nature of continuous effects actually is.

Food for thought
The ‘to do’ list for the new BKO is already starting to crystallise, and important as the points mentioned are, they do not yet address the more profound aspects of the BKO that require a rethink. One such deeper question is which underlying perspective the impact framework applies. The current BKO seems to implicitly follow a so-called logic model. A logic model aims to provide an insightful roadmap for achieving certain results, consisting of a description of a causal chain of activities and expected returns. The most straightforward form is the chain input – activities – output – outcome – impact, which distinguishes between planned efforts (inputs and activities) and expected results (outputs, outcomes and impact). An important contribution of logic models is the focus on outcomes and continuous effects of programmes, rather than evaluating programmes only by inputs (amount of budget) and activities (effort expended). The BKO refers to inputs, products (output/outcome?) and use/recognition (impact?) but is not explicit about whether an underlying logic model informs this. If such a logic model is chosen, the question arises as to how it relates to the concept of continuous effects: a logic model sees impact as the outcome of a production chain (Gibbons’ Mode 1), whereas continuous effects accentuates the process of knowledge development, knowledge exchange and knowledge application that arises in complex interactions before, during and after research (Gibbons’ Mode 2).
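As a minimal illustration of the distinction the chain makes between planned efforts and expected results, here is a sketch of such a logic model as a simple data structure; the example entries are hypothetical and not taken from the BKO.

```python
# A minimal sketch of the linear logic model chain described above:
# input – activities – output – outcome – impact. The first two stages are
# planned efforts, the last three expected results. All entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list[str] = field(default_factory=list)      # planned effort: budget, staff (FTE)
    activities: list[str] = field(default_factory=list)  # planned effort: what the programme does
    outputs: list[str] = field(default_factory=list)     # expected result: direct products
    outcomes: list[str] = field(default_factory=list)    # expected result: changes in practice
    impact: list[str] = field(default_factory=list)      # expected result: lasting societal effects

example = LogicModel(
    inputs=["2.5 FTE researchers", "project grant"],
    activities=["field labs with SMEs", "co-creation sessions"],
    outputs=["prototype", "teaching module"],
    outcomes=["partner firms adopt the prototype"],
    impact=["more sustainable regional logistics"],
)
```

Note how the structure itself pushes impact to the end of the chain – precisely the Mode 1 framing that the concept of continuous effects resists, since in that view value already arises in the interactions along the way.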

The lack of an explicit impact framework in the current BKO also makes the indicators that are presented seem arbitrary. It is not a question of quantity; there is no shortage of indicators when it comes to research impact. Bornmann (2013), for instance, lists some 60 indicators of societal impact, and broader overviews of 100-plus indicators include dozens relating to research. There is ample choice, which makes the question of which indicators to choose all the more pressing. That choice should be guided by a coherent perspective on the subject of evaluation, in this case applied research. Such a perspective also ensures that the values found for the indicators can be interpreted coherently and in context; conversely, the indicators give relevant insight into the state of play of applied research. Moreover, a shared framework facilitates communication on results and allows for side-by-side comparisons. A framework that focuses mainly on quantitative data runs the risk of disassociating figures from the context in which they were created: an ‘impact’ figure provides little insight into who that impact was for, what the effect of that impact was, or what contributed to achieving it. Especially in an evaluative context, a figure can take on a (new) meaning of ‘too little’ or ‘below standard’. Decontextualising information into indicators and then recontextualising those indicators to ‘tick off’ programmes is a real danger. A framework that places indicators in the context of the research can give a much better understanding of what is actually going on. This raises an important issue – that of the research context in which impact is created: if there is great diversity in the objectives, implementation and results of applied research, and there is (NAUAS, 2016), can a shared set of indicators be provided at all?
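To illustrate what ‘placing indicators in context’ could look like, here is a minimal sketch in which the bare figure travels together with the contextual questions raised above (who the impact was for, what its effect was, what contributed to it). All field names and values are hypothetical illustrations, not BKO prescriptions.

```python
# A minimal sketch of a 'contextualised indicator': the bare figure is stored
# together with the context needed to interpret it. Field names and values
# are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextualisedIndicator:
    name: str             # which indicator, e.g. 'participation in networks'
    value: float          # the bare figure
    target_group: str     # who the impact was for
    observed_effect: str  # what the effect of the impact was
    contribution: str     # what contributed to achieving it

example = ContextualisedIndicator(
    name="co-creation sessions held",
    value=12,
    target_group="regional SMEs in logistics",
    observed_effect="two firms adopted the resulting prototype",
    contribution="long-standing Centre of Expertise partnership",
)
```

A record like this still aggregates (the figures can be summed nationally), but the accompanying fields resist the decontextualisation the text warns against.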

The demand for indicators of applied research raises another question: what is the purpose of ‘measuring’ the current state of applied research? The current quality cycles of assessment on the basis of the BKO are mainly retrospective (ex post), a snapshot of what has been achieved in order to arrive at qualifications such as insufficient or good. Although there is room to adjust plans (ex ante) by asking the research group to engage in self-reflection, this remains limited, not least because of the six-year cycle. Opportunities for ‘real-time’ evaluation are not discussed at all. This relates to the distinction between formative and summative evaluation – between emphasising learning and emphasising accountability. There are also methods that place more emphasis on gathering knowledge about how impact is achieved and how it can be achieved better, be it by analysing impact pathways or by mapping the contributions of different stakeholders much more meticulously. This involves a greater focus on optimising the process, on the assumption that this will also enhance continuous effects. The BKO, as a protocol aimed at improving the quality of applied research, could embrace this formative aspect much more. The question, however, is whether the BKO aspires to be that type of evaluation framework or would rather adhere to accountability.

Agenda for a new BKO
The Pijlman Committee and Franken Committee advisory reports both start from the observation that the importance of applied research is increasingly being recognised and articulated, and even that applied research is “on the brink of a new phase” (2017, p. 5). They are not alone in characterising the development of applied research in this way (NAUAS and the Dutch Ministry of Education, Culture and Science (OCW), 2018). A new BKO cannot ignore this. The next version of the BKO cannot make do with only some light (editorial) repairs, however useful those may be. There are a number of deeper issues that need to be discussed. In line with the points raised earlier, we highlight three here:

  1. What impact framework is chosen to look at the impact of applied research? The shortcomings of simple logic models have not gone unnoticed, not least because of the increasing focus of research, including applied research, on relevant complex social issues and the increasing pressure on accountability of publicly funded research. This has led to many detailed elaborations of logic models for different application areas as well as the introduction of many partial solutions and alternative methods for measuring the social impact of research. These will need to be assessed as to how well they align with the nature of applied research at universities of applied sciences (see Coombs & Meijer, 2021).
  2. How do we arrive at an aggregate picture of continuous effects/impact of applied research given that we are dealing with a multitude of different research contexts and professional practices? Opinions differ on whether this question can be answered at all. A negative answer to this is echoed in the Franken Committee’s report: “Drawing on experiences at home and abroad and the scientific literature, we may conclude that it is almost impossible to find a common set of indicators for all higher education institutions, all the more so because the objectives of research and practice in different research areas are often very different” (Franken et al., 2018, p. 28). A more positive view is expressed by an EU expert group on ‘Policy indicators for Responsible Research and Innovation’ that proposes a toolbox of quantitative and qualitative indicators (see van Drooge & Spaapen, 2017). What position will the BKO take on this?
  3. What type of protocol does the BKO aspire to be? Despite how much has been achieved over the past 20 years with applied research, it is clear that much can still be learned. Will the BKO opt for ‘accountability’ with, for example, a precise count of the number of FTEs of researchers or will it opt for a protocol in which the guiding question is ‘how or why does this work, for whom and under what circumstances’? The emphasis on co-production in the concept of continuous effects means that the question is not whether there is an impact on that practice but how that impact came about and how it can be further improved (van Drooge & Spaapen, 2017). Or can these two perspectives be made compatible?

The current BKO has fallen victim to fast-paced developments in applied research and in some cases already seems to have been overtaken by its own association: “This [the continuous effects of research] could be the subject of study and consultation at the level of individual institutions, preferably also at association level, to see how continuous effects of applied research can be monitored at universities of applied sciences” (Franken et al., 2018, p. 28). In any case, a joint statement by OCW, the Taskforce for Applied Research SIA and NAUAS to “work towards an impact measurement system” (2019, p. 9) puts the mission firmly on the horizon again, with the ultimate aim that “Universities of applied sciences are able to structurally identify the impact of their research, both in qualitative and quantitative terms” (ibid., p. 4). That starts with asking the right questions.

Harry van Vliet & Sarah Coombs


Sources

Andriessen, D., van der Zwan, F., Vossensteyn, H., Paans, W., van Vliet, H., & van de Giessen, M. (2021). University of Applied Sciences Professional Doctorate. Leren interveniëren in complexe praktijken. Den Haag: Vereniging Hogescholen.

Bornmann, L. (2013). What is Societal Impact of Research and How Can It Be Assessed? A Literature Survey. Journal of the American Society for Information Science and Technology, 64(2), 217-233.

Brouns, M. (2016). Van Olympus naar agora. Een frisse blik op praktijkgericht onderzoek. Thema (4), 69-74.

Coombs, S. K., & Meijer, I. (2021). Towards evaluating the research impact made by Universities of Applied Sciences. Science and Public Policy, 1-9.

Franken, A., Andriessen, D., van der Zwan, F., Kloosterman, E., & van Ankeren, M. (2018). Meer waarde met HBO. Doorwerking praktijkgericht onderzoek van het hoger beroepsonderwijs. Den Haag: Vereniging Hogescholen.

Ministerie van OCW, Nationaal Regieorgaan Praktijkgericht Onderzoek SIA, & Vereniging Hogescholen. (2019). Verkenning praktijkgericht onderzoek op hogescholen. Den Haag.

Ministerie van OCW, & Vereniging Hogescholen. (2018). Sectorakkoord hoger beroepsonderwijs 2018. Den Haag.

Pijlman, H., Andriessen, D., Goumans, M., Jacobs, G., Majoor, D., Cornelissen, A., de Jong, H. et al. (2017). Advies werkgroep Kwaliteit van Praktijkgericht Onderzoek en het Lectoraat. Den Haag: Vereniging Hogescholen.

Reiner, C., Bekke, H., Hooghiemstra, E., van Mil, T., de Ruiter, H., & Rullens, L. (2019). Centres of Expertise: groeibriljant voor excellente samenwerking in het hbo. In allianties werken aan maatschappelijke impact voor de toekomst. Den Haag: Vereniging Hogescholen.

Spaapen, J., & van Drooge, L. (2011). SIAMPI final report: Social Impact Assessment Methods for research and funding instruments through the study of productive interactions between science and society. Brussels.

van Drooge, L., & Spaapen, J. (2017). Evaluation and monitoring of transdisciplinary collaborations. The Journal of Technology Transfer. https://doi.org/10.1007/s10961-017-9607-7

van Vliet, H., & Slotman, R. (2006). De regio als basis voor innovatie in het MKB. Thema, 6(1), 30-36.

van Vliet, H., Wakkee, I., Fukkink, R., Teepe, R., & van Outersterp, D. (2020). Rapporteren over doorwerking van Praktijkgericht Onderzoek. Amsterdam: Hogeschool van Amsterdam.

Vereniging Hogescholen. (2015). Brancheprotocol Kwaliteitszorg Onderzoek 2016-2022. Kwaliteitszorgstelsel Praktijkgericht Onderzoek Hogescholen. Den Haag: Vereniging Hogescholen.

Vereniging Hogescholen. (2016). Onderzoek met Impact: Strategische Onderzoeksagenda HBO 2016 – 2020. Den Haag: Vereniging Hogescholen.

Vereniging Hogescholen, Nationaal Regieorgaan Praktijkgericht Onderzoek SIA, & Netwerk Onderzoekscommunicatie Hogescholen. (2020). De (verborgen) waarde van praktijkgericht onderzoek bij hogescholen. 17 voorbeelden voor journalisten en programmamakers. Den Haag.

Wetenschappelijke Raad voor het Regeringsbeleid. (2013). Naar een lerende economie. Investeren in het verdienvermogen van Nederland. Den Haag: WRR.

Woertman, E., & Doove, J. (2019). Nationaal Platform Praktijkgericht Onderzoek. Utrecht: SURF.