The current code of scientific integrity is only seven years old. Yet an update is already needed, a KNAW committee concluded last year.

The committee felt that the code should better reflect practice-oriented research at universities of applied sciences. In addition, the use of artificial intelligence has exploded and, with all the international tensions, knowledge security has also risen much higher on the agenda.

Politics

And then there was corona. In the draft code published on Friday, the KNAW states that ‘political involvement’ in scientific research has increased in recent years. Take the lockdowns, which led many to wonder: do scientists or politicians decide whether schools should close?

Scientists must be clear when they are engaging in politics. And politicians should not interfere with those who establish the facts. “For everyone to fulfil their societal role properly, all parties must respect each other’s distinctiveness and autonomy”, the new code advises. Moreover, scientists provide not only facts but also ‘methodical doubt’.

Rogue journals

The huge growth of rogue ‘scientific’ journals has also prompted a new addition: do not collaborate with them. Publish only in journals that apply serious quality controls.

Under the code, anyone who has doubts about the integrity of scientific research can lodge a complaint with the university concerned. Appeals can be taken to the Netherlands Board on Research Integrity (LOWI).

The KNAW is inviting everyone to comment on the draft text. Feedback is welcome until 20 October. Is the language in order? Is anything missing? Have the consequences of a rule been fully considered?

Open science and AI

Some of the new additions are phrased cautiously. For example, the principle of open science has been included in the code. Data and publications must be ‘freely accessible where possible and desirable’.

The code’s new provisions on generative artificial intelligence have also been deliberately drafted in broad terms, the authors explain in an accompanying note. Developments are moving quickly, and the rules must also suit technology that does not yet exist.

One requirement is: “Only use technologies whose functionality is known and scientifically validated.” That will not be easy. AI programmes such as ChatGPT, Perplexity and Mistral generate texts and images based on statistics and data. It is difficult to say what sources have been used or how the data have been processed.

Not easy

Another rule is: “Do not use technologies that obstruct compliance with the principles and standards in this code.” The drafting committee urges employers to help their staff with this and to develop guidelines.

The new code of conduct will apply to both universities and universities of applied sciences. The latter are not mentioned very explicitly, except, for instance, when the funding of chairs and professorships is discussed. That must be transparent, the code states.

The new version of the Netherlands Code of Conduct for Research Integrity is scheduled to come into force early next year.
