AI in HR: Overcome the fear, embrace the opportunities!
AI is neither all good nor all bad. Used correctly, it can improve the lives of many people in general and working life in particular. New opportunities are opening up, particularly in recruitment and HR development, without people being ‘sorted out’ or replaced by technology. Let’s take a look at what is important for a fearless, constructive and responsible approach to AI in HR.
1. Be aware that AI cannot make decisions.
The question of whether an AI can decide about a person’s professional future becomes obsolete if we realise that the technology cannot make decisions on its own. But it can make us believe that it can. In the end, the AI draws on codified human decisions in order to carry out an action that merely looks like a decision. In other words: what the human doesn’t put in, the machine can’t put out. Or as the authors of ‘Power and Prediction’ put it: ‘Nobody ever lost a job to a robot. They lost a job because of the way someone decided to program a robot.’ If we are aware of this, we can develop a (self-)conscious and responsible approach to AI.
2. Make the ‘why and what for’ the starting point for the use of AI.
Before organisations rush into using new technologies, they should ask themselves what specific problems they want to solve with AI. It makes little sense to collect and analyse huge amounts of data if the objectives and benefits are not clear. These considerations should be based primarily on the needs of the people who are connected to the company in some way, while also taking into account the cost-benefit ratio. With regard to AI-supported personality analysis tools, companies can ask themselves:
- What does a bad hire cost me with all the resulting consequences (morale throughout the team, offboarding, job advertisement, new candidate search, onboarding, training phase…)?
- And what, in comparison, does it cost to invest in technology that makes bad hires unlikely?
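To make this comparison concrete, a back-of-the-envelope calculation can help. The sketch below is purely illustrative: all figures (salary, cost components, bad-hire rates, licence fee) are assumptions for the sake of the example, not Zortify data.

```python
# Illustrative back-of-the-envelope comparison (all figures are assumptions).

def bad_hire_cost(salary, months_lost=6, recruiting_cost=15_000,
                  onboarding_cost=10_000, team_morale_cost=20_000):
    """Rough total cost of one bad hire: salary paid during the
    unproductive months plus recruiting, onboarding and morale costs."""
    return salary / 12 * months_lost + recruiting_cost + onboarding_cost + team_morale_cost

# Hypothetical scenario: 20 hires per year, 15% of them bad hires
# without tooling, 5% with an AI-supported assessment tool.
hires_per_year = 20
salary = 60_000
cost_without_tool = hires_per_year * 0.15 * bad_hire_cost(salary)
cost_with_tool = hires_per_year * 0.05 * bad_hire_cost(salary)
tool_cost = 25_000  # assumed annual licence fee for the tool

savings = cost_without_tool - cost_with_tool - tool_cost
print(f"Expected annual savings: {savings:,.0f} EUR")
```

Plugging in your own numbers for these parameters turns the abstract "cost-benefit ratio" into a figure you can actually discuss with the budget holder.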
3. Keep working on your culture when using AI.
Algorithms are often so complex that even developers cannot always fully understand them. In order to use the technology in a way that benefits both employees and the organisation as a whole, companies need to work more on their culture – more specifically, on a culture that promotes the ethical and responsible use of technology. Guiding questions could be:
- How do we want to work together?
- What values characterise our work and teamwork?
- How do we define success?
- How do we make decisions?
- How do we solve conflicts?
It should be a key part of the corporate culture to continuously reflect on existing thought patterns, behaviours and unconscious biases. Employees need time and safe spaces to be able to ask themselves and others critical questions. Companies should regularly offer open formats on the topic of ‘dealing with AI’ in which all employees can participate. This allows knowledge and experience to be shared and blind spots in working with AI and data to be recognised at an early stage.
4. Learn to distinguish good data from bad data.
The type of data we use to train AI systems is crucial. If we use biased or prejudiced data, the machine will deliver results that further amplify stereotypical attributions and discrimination. We therefore need mandatory quality criteria for training data. Answers to the following questions, among others, provide guidance:
- Was the AI trained with biased data, or with data that represents a cross-section of the population?
- In the case of questionnaire-based data collection: Were there any possible incentives for participants to provide false information when gathering the training data?
- For language models: Does the AI only analyse individual words and check for correct grammar, or does it try to capture the whole context? (Particularly important with regard to the potentially discriminatory feature ‘native speaker’.)
There are many more such criteria.
5. Be diverse.
Diversity is more important than ever in times of AI. A diverse workforce brings different experiences and perspectives to the discussion about the ethical use of AI systems. This not only helps to improve the quality of decision-making, but also to recognise and reduce unconscious bias.
6. Take a realistic look at the role of AI in the decision-making process.
A fearless and constructive approach to AI technology requires that such analysis tools are only one of several factors in decision-making processes. They serve as a source of additional information that makes it easier for recruiters, for example, to make a final decision in favour of or against an applicant. It should be clear to everyone that AI predictions are never perfect. AI-based analyses rest on empirical data and scientific principles, nothing more. In AI-supported personality analyses, as we develop them at Zortify, the error rate is realistically between two and five per cent. If we are aware of this, we can deal with it and develop suitable behaviours for the use of AI in organisations together with the employees who use the technology.
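It is worth translating such an error rate into absolute numbers for your own context. The figures below (pool size of 500 applicants) are assumptions for illustration only:

```python
# What a 2-5% error rate means in absolute terms (figures are assumptions).
applicants = 500  # hypothetical screening pool

for error_rate in (0.02, 0.05):
    expected_errors = applicants * error_rate
    print(f"At {error_rate:.0%} error rate: ~{expected_errors:.0f} of "
          f"{applicants} analyses may be off and warrant human review.")
```

Ten to twenty-five potentially misjudged candidates in a pool of 500 is exactly why the analysis should inform, never replace, the human decision.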
7. Make processes transparent (not data sets).
In personality analyses in particular, it is not only HR managers who need to understand how the AI comes to its results, but also the people affected, such as candidates. The keyword here is ‘Explainable AI’. But how can companies explain something this complex, which may also contain information that is valuable to competitors? It remains uncertain what benefit applicants could derive from access to raw data or complex equations, as these are often difficult to understand and are not sufficient on their own to recognise bias in the right context.
The U.S.-based Association for Computing Machinery has developed a pragmatic approach. It requires that institutions using algorithmic decision-making be able to explain the underlying process of the algorithm and the resulting decisions in non-technical language. The aim is therefore not to disclose technical details in full, but to improve transparency in two areas: the processes and the results. To do this, people need a deep understanding of how AI gets its results (as an example, take a look at our Zortify certification programme).
The ethical design of processes in dealing with AI begins long before the AI is actually used. Think about when, and whom, you need to involve internally in the process – from the data protection officer to the procurement team to the works council. (A corresponding ‘onboarding package’ from Zortify is in the making. If you haven’t subscribed to our newsletter yet, now would be a good time to find out more soon 😉).
8. Create suitable team roles.
AI technology is too important to be left to just a small group of ‘IT nerds’. Instead, an open discussion about the responsible use of algorithms and data should be initiated across the entire workforce. This requires people at the intersection of IT, business departments, HR and corporate culture who actively drive these discussions forward and document progress. Positions such as ‘AI ethicist’ or ‘human-robot relations manager’ are not abstract figures of a distant future, but are already in demand today.
9. Allow yourself to have healthy doubts: about the AI and about yourself.
Just as we shouldn’t blindly trust the machine, we shouldn’t blindly trust ourselves either. Humans make mistakes, carry biases, are sometimes bad-tempered or overconfident and don’t always make wise decisions. Nonetheless, we can allow ourselves to listen to our instincts and intuition.
AI systems can help us not to be blinded by first impressions. They can make established procedures, such as assessment centres, more objective and fair. Above all, they can make them faster and cheaper, thus creating the freedom to constantly reflect on ourselves and engage in deep interaction with others (such as applicants) so that we are ultimately able to make the best decision.
10. Be honest with yourselves: What can AI do better?
In the discussion about Artificial Intelligence, the potential risks are often emphasised. Without ignoring these, companies should consciously shift their focus and ask themselves when they last had an in-depth discussion about human bias and the subjectivity of recruitment decisions.
The fact is: AI systems can perform some tasks better than humans. In the area of recruitment and employee development, technology can analyse decision-relevant information faster than an entire team ever could. It uncovers aspects that escape the human eye even on second glance, thus contributing to better decisions – better for applicants, better for HR professionals, better for the entire organisation. As a result, it can make a valuable contribution to the search for talent and equip companies to meet the complex challenges of our time.