Critics see the letters on the skybridge as a way to further entrench the university’s corporate profit mentality, or as a marketing strategy to show the outside world that Erasmus University is doing good work. Notably, the university’s strategy document offers no definition of what impact actually means. That leaves several questions unanswered: What exactly is impact? How new is this aspiration for Erasmus University? And how do you measure whether science has impact?

1. How old or new is impact?

The use of the word ‘impact’ in combination with science is relatively new. The idea behind it has been around for some time, says Jorrit Smit, who earned a PhD in Leiden on how science endeavours to benefit society. He is part of a working group that is looking into how Erasmus University could best evaluate its ‘impact’. At its core, impact is about ‘the usefulness’ or ‘the value’ of science to society, Smit explains. How relevant is your research to social issues outside the university?

That discussion is as old as modern science itself, Smit notes, but has evolved in recent decades. “Universities have always had a relationship with their environment. In the nineteenth century, universities mainly trained ‘good citizens’ to serve society, for example, as civil servants or bureaucrats. Even in the 1970s, scientists were already talking about their social relevance.”

In the 1980s, legislators added a section to the Dutch Higher Education and Scientific Research Act making knowledge transfer to society an explicit task of higher education, alongside education and research. This led to the discussion on valorisation (more on this later). In recent years, outgoing Minister Ingrid van Engelshoven (Science, D66) has emphasised that making an impact should constitute one of the three pillars of science.

2. If the idea is not new, why is it written in huge letters on the skybridge?

Because in recent decades, the focus of science has been more on publishing in high-profile scientific journals than on solving social issues, according to impact researcher Smit. From the 1990s onwards, academics were judged mainly on their figures: the number of publications, citation scores and the ratings of the journals in which they published. “Universities implicitly selected people who played that game very well”, says Smit. “Researchers who forge relationships with the outside world are undervalued in this system. If you have spent thirty years selecting employees who are mainly good at publishing, then the academics with other talents will have left.”

In saying that, Smit also offers an explanation for universities’ recent eagerness to profile themselves on social impact. “Now that social impact is considered important again, we perhaps need to call for more attention to impact in order to win back the people who are good at it.”

The extra attention to social impact also stems from the fact that, over the past 25 years, the government has made universities more accountable for how they spend the public money they receive.

Wilfred Mijnhardt is policy director of the Rotterdam School of Management and is also involved in shaping this strategy. He says: “The university has shifted to a more neoliberal context, where accountability for public investments is the norm, rather than simply receiving the money and practising science.”

3. Where did that other catchword ‘valorisation’ go? And what is the difference with ‘impact’?

The use of the word impact may be relatively new, but it is very reminiscent of the term valorisation, which was introduced to science ten years ago.

This term did not please everyone, though. “Whenever I hear the word valorisation, I grab my Kalashnikov”, said Professor of Theoretical Astronomy Vincent Icke in 2013 on the Dutch TV talk show De Wereld Draait Door, in an item about prickly words. Valorisation is all about value creation stemming from science, but the emphasis soon veered towards economic value in particular. Moreover, Icke considered it self-evident that academics create value. Why was a special word needed for that?

Valorisation came into fashion around the time Mark Rutte (VVD) became prime minister in 2010. Science had to produce demonstrable results, or so the government’s liberal course dictated. And science had to be guided more by questions from society, in particular the business community. “Kennis, kunde, kassa” (knowledge, expertise, cash register) was one of the controversial statements on this theme made by Halbe Zijlstra (VVD), the State Secretary responsible for science policy at the time.

“Valorisation lost popularity due to the focus on economic gains”, says valorisation researcher Linda van de Burgwal of the Vrije Universiteit Amsterdam. “The image emerged that valorisation was all about money.” So it became a dirty word. According to Van de Burgwal, unjustifiably so. “It can also involve value creation for science, or for the general public or professionals in a particular field.” But the image had already stuck. In Smit’s view, this was an important reason why Van Engelshoven replaced the word with ‘impact’.

But experts within science were not themselves in agreement on the meaning of the term valorisation. According to Mijnhardt, valorisation implies a linear model of knowledge production: the scientist invents something and brings it to society, where it yields a return. In practice, however, knowledge production often emerges in interaction with society, i.e. non-linearly. He therefore regards the term valorisation as too limited.

Van de Burgwal, for her part, says that this is (once again) a matter of image formation, and that valorisation is meant to be a process of knowledge production in interaction with society. Smit, meanwhile, thinks that ‘impact’ is precisely the term that still implies the linear model, as if scientists stand outside society and leave an imprint on it. “The impact of an impression, as it were. Valorisation, in any case, at least still indicated a process.”

Finally, the demise of the term valorisation also has to do with the origin and meaning of the word, Van de Burgwal notes. “It comes from Flemish and French science. Translating it into English is impossible. Valorisation is a concept borrowed from Karl Marx and does not cover the full meaning. In Anglo-Saxon countries, scientists used ‘impact’ for more or less the same idea. English is the language of science, so we now use the word impact.”

4. How do you measure impact?

Well, the term itself seems obvious: impact. But how do you measure whether science has impact? Smit: “This is mainly a question from policymakers and administrators. They need figures so they can easily show the outside world the extent of this impact.”

Measuring is difficult if the definition of impact is not clear-cut. For each faculty, programme and department, impact means something different, policy director Mijnhardt believes. Where RSM can help companies and organisations with business models for the energy transition, for example, the Erasmus School of Law can help governments and organisations with the formulation of good regulations and contractual relationships. “We have left the definition of impact open at the institutional level on purpose because it is slightly different for each domain”, Mijnhardt adds.

Smit sees methodological problems looming where measurements are concerned. “It takes a long time before you know what your impact is, perhaps twenty years. Maybe people will forget your research and others will rediscover it after forty years. And if there is social change in the area you are researching, you can rarely ascertain to what extent that is due to your research.”

But there are solutions. The customary way to map out the impact of a research project is to write narratives. In simple terms, a narrative is a written story about what you do as a scientist, how that turned out to be socially relevant and what relationships and interactions you have with civil society. “Since reality is so complex, describing it in a narrative works best”, Smit states. “Then you describe what works in day-to-day practice, but also what does not work and could be improved. Preferably, you discuss this with stakeholders from society to hear their feedback.” Mijnhardt: “You could back that up with figures, but the story has to be the driving force.”

Image credit: Bas van der Schot

Smit: “The Netherlands Organisation for Scientific Research, for example, works with narrative CVs. Applicants describe on one A4 page what they have accomplished, what their background is and how a particular grant would help them further.” The irony, says Smit, is that the urge to evaluate on quantitative indicators remains. “Everyone has got used to that. Partly for that reason, there is still a great deal of discussion and doubt surrounding the selection of these narrative CVs. Diversity departments also point out the dangers of narratives, because implicit biases can play a bigger role in assessments. So we must also teach managers how to deal with other forms of evaluation and assessment.”

5. Scientists are already prone to burnout. Aren't we pushing them over the edge if they also need to make an impact?

Then you end up writing a narrative in which you try to establish your impact, while in your performance appraisal you are still judged on the number of publications and citations you have. Isn’t creating impact just another task in a scientist’s already busy schedule? Shouldn’t there be another way of making assessments?

Smit notes that there is a lot of criticism of the current assessment method because it leads to perverse incentives. “When publishing is the most important thing, academics sometimes come up with questionable strategies to publish as much as possible.” One example is ‘salami-slicing’: instead of presenting the results of your research in one place, you spread them out in slices over several publications, which can come at the expense of content. “So if you are going to focus more on impact, you need to make the number of citations and publications less important.”

Moreover, according to Smit, we should not fall into the trap of quantifiable measurements when assessing impact. “You must avoid building a new system of incentives around impact with the same quantifiable values, such as: ‘How much do you publish in the media; how often are you cited on social media?’ That doesn’t say much about the actual value of research for society.”

Mijnhardt: “Ultimately, we need to move towards a situation where scientists are freer in their careers to specialise. At present, we all too often demand that scientists should be able to do everything. You have to be good at teaching, good at science and be socially relevant. If we accept that this diversity can exist, then we will have made some headway.”
