Working towards an integrated teaching/research environment also confronts us with the question of how to assess whether things are going well, and on what basis we are satisfied or dissatisfied with the outcomes. A good starting point for answering that question is to consider how to assess teaching and research separately, as this allows us to see more clearly whether we need to look for new assessment criteria or can integrate existing ones. Below, I briefly outline how to look at research assessment.
There are several ways to address the demand for research assessment criteria. One is to have a number of researchers compile lists of criteria and reach a consensus on them. While this may well produce meaningful insights, we should not underestimate how differently researchers think about the value of their research, or rather about the value of other people’s research. This holds not only between disciplines but also within the same discipline; especially when criteria have to be ‘weighted’, a ‘battle of methods’ can easily erupt. Another approach is to borrow from what is already there. Research is often carried out in project form, and many research funds are tied to research programmes. In those cases it is tempting to assess research with project criteria, such as process indicators (turnaround time) and output indicators (deliverables). This can lead to situations where, when applying for funding for a multi-year research project, you already have to specify exactly what you will deliver after x years of research.
A third method draws on philosophical enquiry into what research is and how it works. Not only does this allow a systematic approach to assessment criteria (it is more than just a list), it also does justice to the uniqueness of research (research does not coincide with, say, a project). Addressing the demand for research criteria thus requires a framework that systematically identifies the types of criteria that do justice to that uniqueness. A possible starting point is the distinction made in the philosophy of science between the context of discovery and the context of justification: the difference between the questions “How do you arrive at a research question?” and “How do you answer the research question?”.
The context of discovery is about how a question to be researched comes about. The answer ranges from the lone genius with a clever hunch to painstaking observation and puzzling out what ultimately leads to a question; the history of science is steeped in examples of both. The question is whether this context of discovery is subject to a certain logic, whether there are heuristic rules by which you can gauge whether it is ‘going well’ (process) and whether it is relevant (output). The latter is especially salient, because from what do you derive that relevance? In other words, which peers or communities validate that relevance: research colleagues and/or practitioners in a specific domain? Research at universities of applied sciences is characterised by deriving its relevance primarily from professional practitioners – or perhaps attributing it to them. In other words, one might speak not so much of applied research as of practice-based research, established through, to use a fashionable term, demand articulation.
The context of justification is about how a research question is answered and, in particular, about legitimising the correctness, or accuracy, of the answer. We generally synthesise this in the so-called ‘scientific method’, with all the attendant discussion; here, too, many examples can be found in the history of science. There are manuals, codes of conduct and committees to guide and review this. Within the researchers’ guild, we address one another as professionals to make that legitimisation as transparent as possible and to contribute collectively to the ‘advancement’ of scientific knowledge. From that vantage point it is simply research: not applied or academic research, just research. There is nothing unique about the context of justification of research at universities of applied sciences.
One aspect that remains underexposed in these two contexts is the consequence of research findings: what happens to them next? What is actually needed is a context of application. Incidentally, this carefully sidesteps that other fashionable term, valorisation. Valorisation is linked to a view of innovation as a linear and rational process (research – development – production – market), based on exclusion (a strong position for incumbents, regional concentration, fixed outcomes) and an overemphasis on technology and science, i.e. R&D. It is a line of reasoning eagerly adopted by the political world, read: Maxime Verhagen, the former Dutch Minister of Economic Affairs, and compressed into the adage: knowledge – skill – cash till. A different view is to see innovation as a process whose essential characteristic is unpredictability. Unpredictability follows from an evolutionary perspective on innovation, in which things can emerge without goals set beforehand and whatever proves successful is then widely disseminated. Here the focus shifts to the efficient selection of ideas and their elaboration, in other words, to a context of application. This is where the challenge of formulating criteria lies, since this ‘application’ is clearly not fully covered by publication counts in peer-reviewed journals. So what are these other applications, which forms do they take and, perhaps more importantly, what underlying criteria can be formulated? Is it about how ‘open’ the application is, how generic or specific it is, about its reach, about who benefits from it, et cetera?
The discussion on research assessment criteria can be made more precise by adopting a framework that takes into account the uniqueness of research. Distinguishing between the contexts of discovery, justification and application could be a first step towards this.
* This text previously appeared in 2017 on the AMOO (Amsterdam Model Education and Research) website.