‘We need to recognise that AI systems are meant to discriminate’
Artificial intelligence comes with many risks, such as discrimination. That is why PhD student Joris Krijger, who researches the ethical aspects of AI, advocates a broader perspective in this EM TV interview. “I think it’s dangerous.”

Joris Krijger in conversation with EM TV presenter Feba Sukmana.
Image by: Lukáš Holub
Krijger, PhD student at the Erasmus School of Philosophy, has seen that discrimination due to the use of AI is already common in practice. “I mean examples such as an AI application used for personnel matters at Amazon. The system, used to screen CVs, had been trained on historical data from the previous ten years. Based on that data, the AI concluded that women were not suited to technical positions and automatically discarded all female candidates.”
Useful tool
According to Krijger, such incidents should not prevent us from using AI at all, but we should be more aware of how AI works. “We need to recognise that these systems are meant to discriminate. We all say that we don’t want algorithms to discriminate, but that is precisely their goal in a statistical sense: to distinguish between data. And they can be a very useful tool, for example to determine who does or doesn’t get a loan and who is or isn’t at risk of committing fraud.”
Undesirable patterns in AI systems often originate in training data, which come straight from society. “And we don’t have a perfectly fair society.” Krijger calls this replication of inequality in systems ‘dangerous’. “You’re automating the status quo, which isn’t beneficial for everyone, and thereby making existing inequalities worse.”
Legitimate characteristics
Society needs to engage in dialogue on how these systems could be made fairer. These are ethical rather than technical considerations, Krijger argues. “We need to think ethically and critically about what are legitimate characteristics to use in distinguishing between data.”
According to Krijger, EUR is already working to improve the way in which AI is used. “The most important step is to acknowledge that it isn’t a strictly technical issue, but that you need structures and processes within an organisation to deal with these ethical considerations.”
Watch the interview with Joris Krijger on EM TV below: