Moniek Buijzen is Erasmus Professor of Societal Impact of AI and professor of Communication and Behavioural Change at the ESSB. She is the initiator of AICON, a project that connects art, science and people in society and conducts small-scale experiments to explore how AI could be implemented responsibly.
What would you say is the biggest societal challenge to emerge from the rise of AI?
“Every technological revolution comes with major societal change. The invention of the steam engine and the industrial revolution accelerated the transition from a feudal system to a capitalist system. Roles and power shifted. And the same is happening now.
“The development of AI is inevitable; we can’t stop it. The question is how to ensure it’s implemented in a humane and planet-friendly way. AI presents us with wonderful opportunities, but it also poses risks. For example, AI is already having an impact on the environment because the servers it runs on guzzle water and energy. And the costs and benefits of AI are not distributed evenly across society either. Because the technology is based on existing data, it acts as a huge magnifying glass of what is happening in society.”
You mean that algorithms are biased?
“No, the outcomes can be biased; not the technology itself. It shows what’s skewed in society. A well-known example: ask AI to create an image of a bank manager or a professor, and it generates a picture of an old white man. In the Netherlands, the childcare benefits scandal showed us just how dire the consequences can be. This isn’t because the technology is sexist or racist, but because the outcome is based on the input. This is a systemic problem. But it’s also an opportunity, because it can reveal issues that need to be addressed more quickly and more explicitly.”

What role do you see for Big Tech in the development of AI?
“The huge power that Big Tech has is a major risk. These companies are building AI applications with profit as their only goal. This is dangerous because public, human and democratic values may take a backseat to that goal.
“Let’s take the algorithms on social media as an example. They’re designed to keep you on a platform as long as possible. The company makes money by getting consumers addicted to the product. These companies also don’t shy away from misinformation if it keeps users on the platform longer.
“Privacy and integrity are two public and individual values that are at stake. ChatGPT is based on data from artists, authors and performers. It processes and uses this data without citing the sources, which violates copyright. It also feeds on the data of its users, which can include very intimate, personal data. People are not always aware that they are feeding the system whenever they use it. But even if you are aware of this, there’s no way to use these applications without sharing your data. Is it even possible to avoid sharing your data in any way in today’s society?”
Can science offset this development?
“Technological development is mainly driven by Big Tech. However, there are research institutes where scientists are setting up systems that are more protective and ethical, such as the Erasmian Language Model and the European BigScience initiative’s BLOOM. But Big Tech companies are enticing good scientists to join them with big salaries: offers they can’t refuse. The public sector can’t compete with big business for development financing either.
“The innovations necessary on the societal side are being developed in academic settings. In any event, the need to manage the societal changes that AI brings about has now been acknowledged. This is clear from the projects on which research funding is being spent and the activities that public organisations are choosing to focus on, such as our national project, Public Values in Algorithmic Society. We have had some big wake-up calls, including the role played by automated risk selection systems in the childcare benefits scandal.”
Can governments protect citizens from the power of Big Tech companies?
“The laws and regulations in place in our democracies reflect what we believe to be acceptable and appropriate. But big companies have the means to evade these laws and regulations. Take the age-rating system in the Netherlands (Kijkwijzer) and Europe. Today, this system applies not just to TV but to social media as well. But the big players in Big Tech do not comply with it. They have the power to opt out of the system and to decide for themselves what is and isn’t acceptable or appropriate. This is undemocratic and very dangerous.
“In the EU, companies are regulated by far-reaching laws like the EU Artificial Intelligence Act and the EU Digital Services Act. These laws are now being criticised for focusing more on the regulation of content than on the companies themselves. This is making it easier for companies to evade responsibility; they say that users post the content, not them.
“Regulation is very important. Individuals are no match for big companies and the power they have; governments are. Some people see EU regulation as a curtailment of individual freedom, but it actually aims to restrict the freedom of commercial parties.”
Some people worry that they will lose their jobs to robots. Is this fear well-founded?
“Jobs will indeed disappear and new kinds of jobs will be created. There will be a clear division between people who can manage AI and people who can’t. Many people will find themselves in jobs that cater for the back end of AI. My colleague, Professor Claartje ter Hoeven, is doing research on this. Many people work in sectors where they do the parts of a job that a machine cannot do. This often isn’t the most inspiring work, and it’s poorly paid and unprotected. Examples include people who code images to train a machine or who screen AI-generated texts for objectionable content. Work like this is being outsourced to African countries and is already being compared to sweatshop work.
“The same fear of replacement is also evident in creative sectors like the arts and journalism, but we are already seeing how these sectors are cashing in on opportunities as well. Artist Peim van der Sloot is taking part in our AICON project and has found that a real symbiosis has developed between him and AI. This is clear from his ‘Future Jobs’ project, where he used ChatGPT and Stable Diffusion to create advertising posters for the jobs of the future.
“If you see AI as a partner that can help you enhance your skills, great things can be achieved. This won’t happen if you think that AI will do all your work for you, while you sit back and do nothing.”

You are advocating for properly managing the implementation and development of AI. What would this involve?
“The only way to achieve the acceptable implementation of AI is co-creation. In other words, everyone needs to take part, share their thoughts and ideas and become co-owners, not just the rich, not just Big Tech and not just governments. We need to come together to decide which conditions AI in society needs to meet so that everyone benefits – not just people, but our entire ecosystem too. We will only be able to do this if we change the way we think.”
The development of AI seems to evoke both utopian and dystopian feelings in people. Why is that? And how does it make you feel?
“I recognise the wonder and disgust that AI evokes. On the one hand, I’m impressed by how advanced the technology is. Personally, I’m very happy with ChatGPT right now because it helps me cope with the limitations that long COVID has left me with. I’m really impressed by the texts it generates. And in the medical world, ever more impressive applications are being developed to recognise cancers, brain diseases and more.
“On the other hand, like many other people, I sometimes feel a little uneasy when I’m interacting with a robot. The idea of machines taking over from humanity is a classic fear. I’m a big fan of apocalyptic science fiction – like Foundation and Westworld – that covers this theme. But people with real technological knowledge don’t recognise these doom scenarios. So, I don’t know if we are facing an existential threat in a technological sense.
“But even if there is a future existential risk, we shouldn’t fixate on it. Doing so just distracts from the social risks we are currently facing, which are much bigger, more urgent and more specific. Perhaps I’m naive, but I believe that if we address these risks and all work together to decide on the conditions AI needs to meet to be acceptable in our world, we can make it something really special. Do we have any other choice?”