Krijger, a PhD student at the Erasmus School of Philosophy, has seen that discrimination resulting from the use of AI is already common in practice. “Take, for example, an AI application used for personnel matters at Amazon. The system, used to screen CVs, had been trained on historical data from the previous ten years. Based on that data, the AI concluded that women were not suited to technical positions and automatically discarded all female candidates.”

Useful tool

According to Krijger, such incidents should not prevent us from using AI at all, but we should be more aware of how AI works. “We need to recognise that these systems are meant to discriminate. We all say that we don’t want algorithms to discriminate, but that is precisely their goal in a statistical sense: to distinguish between data. And they can be a very useful tool, for example to determine who does or doesn’t get a loan, or who is or isn’t at risk of committing fraud.”
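To illustrate what “discriminating” means in this statistical sense, here is a minimal sketch with entirely made-up numbers (assuming scikit-learn is available): a classifier's whole job is to learn a boundary that distinguishes one group of data points from another, for instance loan applicants who did or did not repay.

```python
# A minimal sketch (hypothetical data) of statistical discrimination:
# a classifier learns a boundary that distinguishes between data points.
from sklearn.linear_model import LogisticRegression

# Each row: [income in thousands of euros, existing debt] -- made-up numbers.
X = [[55, 5], [30, 20], [80, 2], [25, 30], [60, 10], [20, 25]]
y = [1, 0, 1, 0, 1, 0]  # 1 = repaid the loan, 0 = defaulted (hypothetical labels)

model = LogisticRegression().fit(X, y)

# The model's entire purpose is to distinguish: new applicants are
# sorted to one side of the learned boundary or the other.
print(model.predict([[45, 8], [22, 28]]))  # e.g. [1 0]
```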

Undesirable patterns in AI systems often originate in training data, which come straight from society. “And we don’t have a perfectly fair society.” Krijger calls this replication of inequality in systems ‘dangerous’. “You’re automating the status quo, which isn’t beneficial for everyone, and thereby making existing inequalities worse.”
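The mechanism is easy to reproduce in a toy example (again with fabricated data): if historical decisions favoured one group, a model trained on those decisions learns that pattern and applies it to new cases, automating the status quo.

```python
# A minimal sketch (entirely made-up data) of bias replication:
# a model trained on skewed historical outcomes reproduces the skew.
from sklearn.tree import DecisionTreeClassifier

# Features: [years of experience, group (0 or 1)] -- the group column
# stands in for any sensitive attribute, or a proxy for it, in the data.
X = [[5, 0], [6, 0], [4, 0], [5, 1], [6, 1], [4, 1]]
# Historical outcomes: group 0 was hired, group 1 was not,
# despite identical experience -- the "status quo" baked into the data.
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier().fit(X, y)

# Two equally experienced candidates, differing only in group membership:
print(model.predict([[5, 0], [5, 1]]))  # [1 0] -- the inequality is automated
```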

Legitimate characteristics

Society needs to engage in dialogue on how these systems can be made fairer. These are ethical rather than technical considerations, Krijger argues. “We need to think ethically and critically about which characteristics are legitimate to use when distinguishing between data.”

According to Krijger, EUR is already working to improve the way in which AI is used. “The most important step is to acknowledge that it isn’t a strictly technical issue, but that you need structures and processes within an organisation to deal with these ethical considerations.”

Watch the interview with Joris Krijger on EM TV.
