Progress Thoughts: Dr. Anna Christmann MdB
02.10.2022
About the format:
"Fortschrittsgedanken" appeared weekly on the FAIR.nrw blog 2020 and presents the opinions of various experts from research and practice on questions that always remain the same. The primary topic is algorithms in personnel selection.
About the author:
Dr. Anna Christmann has been a member of the German Bundestag since 2017, where she is the spokesperson for innovation and technology policy and for civic engagement for the Bündnis 90/Die Grünen parliamentary group. She received her PhD in political science from the University of Bern in 2011 and then worked at the Center for Democracy at the University of Zurich. From 2013, she worked at the Ministry of Science in Stuttgart, first as head of the minister's office and then as a policy officer in science policy.
What opportunities do algorithms offer in personnel selection and where are the risks?
When there is a large number of applicants, it can make sense to use automated pre-selection. Applications that do not meet the required criteria, such as a certain level of professional experience, can be sorted out. This takes some of the work off the recruiters' hands and allows them to focus on the applications that meet all the requirements. However, the algorithms used in this area must be verifiable, and it must be ensured that they do not systematically discriminate against a certain group of people.
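To make this concrete, a minimal sketch of such a rule-based pre-screening step could look like the following; the criteria, field names, and applicant data are invented for illustration:

```python
# Illustrative sketch of rule-based application pre-screening.
# All criteria, field names, and applicant data here are invented.
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    years_experience: float
    has_required_degree: bool

MIN_EXPERIENCE_YEARS = 3.0  # hypothetical hard requirement

def meets_requirements(app: Application) -> bool:
    """Check only explicit, job-related criteria; nothing about the person."""
    return app.years_experience >= MIN_EXPERIENCE_YEARS and app.has_required_degree

applications = [
    Application("A", 5.0, True),
    Application("B", 1.5, True),
    Application("C", 4.0, False),
]

# Recruiters only review the applications that pass the automated filter.
shortlist = [app for app in applications if meets_requirements(app)]
print([app.name for app in shortlist])  # -> ['A']
```

Because the rules are written out explicitly, they can be inspected line by line, which is exactly the kind of verifiability demanded above.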
So-called people analytics applications can be risky: they use automation and machine learning to evaluate and then categorize applicants' characteristics. For example, answers given in job interviews, but also an applicant's voice pitch, are evaluated and conclusions are drawn about her personality. In such a case, the applicant has no influence on how she is evaluated. Moreover, the algorithms used in personnel selection systems are often protected as trade secrets, which makes it very difficult to examine them for possible discrimination.
What is the impact of discrimination in personnel selection?
Discrimination in personnel selection exists regardless of algorithmic systems: people with foreign-sounding names are not invited to interviews, or women are paid less. But algorithms can reinforce these structures. If a system learns, for example, that women tend to take more parental leave, it may be more likely to recommend hiring a man. A human can hold the same bias, but once it is encoded in an automated system, the bias comes into play in every job application, rather than only in those assessed by a specific person.
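A hypothetical sketch can show how a single learned bias applies uniformly. Suppose a scoring model trained on historical data has learned a negative weight for expected parental leave, a feature correlated with gender; all weights and numbers below are invented:

```python
# Invented weights standing in for what a model might learn from biased
# historical hiring data; not a real system.
def score(years_experience: float, expected_leave_months: float) -> float:
    return 1.0 * years_experience - 0.5 * expected_leave_months

# Two equally qualified applicants; the proxy feature differs only because
# of a statistical correlation with gender, not because of merit.
print(score(years_experience=6, expected_leave_months=0))  # 6.0
print(score(years_experience=6, expected_leave_months=4))  # 4.0

# Unlike a single prejudiced interviewer, this penalty is applied silently
# and uniformly to every application the system scores.
```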
Can algorithms help reduce discrimination in personnel selection?
A conversation between recruiter and applicant can only ever be scrutinized selectively. An algorithm, by contrast, can be systematically examined for structural discrimination and then modified. Data-driven personnel selection also opens up new ways of checking for discrimination in conventional, analog interviews: what many could previously only report as anecdotal experience then shows up clearly in the numbers. This gain in information should be used to minimize discrimination in personnel selection.
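As an illustrative sketch, one such check compares selection rates across groups of applicants; the audit data and the 80% threshold (the US "four-fifths rule") are assumptions for illustration, not a statement about any real system:

```python
# Minimal sketch of a disparate-impact check on hypothetical audit data.
from collections import Counter

# Invented audit log: (group, was_shortlisted) per application.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in decisions)
selected = Counter(group for group, ok in decisions if ok)
rates = {g: selected[g] / applied[g] for g in applied}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Flag any group whose selection rate falls below 80% of the best rate.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Possible disparate impact against {group}: "
              f"{rate:.0%} vs. best rate {best:.0%}")
```

This is the sense in which patterns that were once only anecdotal become visible, and testable, in the numbers.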
In concrete terms, what does the ideal personnel selection process look like?
In an ideal personnel selection process, the workforce has a say in which tools are used to select future employees. HR professionals are supported by a vetted system that uses machine learning, for example, to pre-screen applications and that is neutral with respect to applicants' gender and background. The decisive interviews and the final hiring decisions are still made by humans.