By Irma Doze • 25 Feb, 2020
Do you have the guts to use artificial intelligence to separate the good from the bad in your recruitment process or in talent management? Quite scary, but without that data we will certainly not do any better. The results were clear when the American marketing company Dialog Direct used algorithms to recruit new employees: the organisation cut interview time by 75% and retention rose by 7 percentage points, saving millions of dollars. The French glass producer Saint-Gobain (with 180,000 employees worldwide) also reaped the benefits of algorithms: with their help it spotted various internal talents who would otherwise have stayed out of the picture.

No matter how attractive those examples may sound, algorithms have stirred up a lot of discussion lately. Because how honest and fair are they? What if candidates are wrongly rejected or promoted, based solely on data? We read more and more often that algorithms are supposedly unreliable. Research shows that HR professionals want to use HR analytics, but at the same time have serious reservations. Are algorithms really so much better than we humans are? Or worse?

The wrong decision

I understand if you find it difficult to trust data. And rightly so, because an algorithm is never 100% reliable. But neither are our brains, far from it: ask a group of 25 people how likely it is that two of them have their birthday on the same day. They will estimate that this chance is very small, but in reality it is about 57%: the probability that all 25 birthdays are different is 365/365 × 364/365 × … × 341/365 ≈ 0.43. In the field of statistics, our intuition often fails us, as Nobel Prize winner Daniel Kahneman demonstrates in his book "Thinking, Fast and Slow". When making a decision, we are partly guided by prejudices. Suppose an applicant has a C on his diploma. You know that a list of grades does not say much about someone's talent as an employee, but your brain nevertheless records it as something negative. Whether you like it or not, the C haunts you during the job interview. Your brain automatically searches for confirmation: you see what you expect to see, and you ignore all signals that contradict that feeling.

What do you put in it?

An algorithm would not have that problem. Data does not suffer from overconfidence or emotions. Data is neutral. It is the combination with human action that makes technology good or bad. There are two characteristics you have to take into account with an algorithm:

1. What you don't put in won't come out
2. What you put in will come out

Let's start with what you put in. Suppose you are looking for a new programmer and you have an algorithm search for the right candidate. You do not consider age and gender relevant, so you don't include those variables. What do you put in? You are looking for talent and you want to know how good the candidate is at the work. That is why you have the algorithm analyse pieces of code the candidate has written. After this exercise you know nothing about the gender, age or diplomas of the candidate who rolls out, but you do know one thing for sure: you have found a programming talent. So you hire him. But after a while it turns out that this colleague does not fit into the team at all. The algorithm did not take this into account, because: what you don't put in won't come out. You should therefore have included that variable (match with the team) as well. The analysis process thus starts with an important piece of human work: making sure the system starts with the right variables.
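To make the programmer example concrete, here is a minimal sketch in Python. The feature names, the toy data and the use of scikit-learn's LogisticRegression are purely illustrative assumptions, not a description of any real screening system; the point is only structural: a model trained solely on code-quality variables cannot, by construction, say anything about team fit.

```python
# Illustrative sketch only: toy data and made-up feature names.
from sklearn.linear_model import LogisticRegression

# Hypothetical past candidates, scored on three code-quality variables:
# [code_correctness, code_readability, test_coverage]
X = [
    [0.9, 0.8, 0.7],
    [0.4, 0.5, 0.3],
    [0.8, 0.9, 0.6],
    [0.3, 0.2, 0.4],
]
y = [1, 0, 1, 0]  # 1 = rated a strong programmer in the past, 0 = not

model = LogisticRegression().fit(X, y)

# A new applicant, scored on the same three variables.
candidate = [[0.85, 0.75, 0.80]]
print(model.predict(candidate))  # likely [1]: "hire"

# Note what is absent: there is no variable for team fit (or age, gender,
# diplomas). The model cannot weigh what it was never given, so its
# prediction can never warn you about a poor match with the team.
```

Whatever model you choose, the structure is the same: adding "match with the team" as a variable, and collecting data that measures it, is a human decision that has to be made before any algorithm runs.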
Sit around the table together and brainstorm freely about all the variables that could be important. Think broadly and creatively; there can be hundreds of variables. Then it is the data's turn: based on statistics, it shows which variables have the most impact on what you want the algorithm to predict.

The past predicts the future

Human action also comes into play at the other end of the process. Because even with the data that "comes out", you run into a problem: algorithms always base their predictions on data from the past. That old data was generated by people, and people are prejudiced. Take the programmer in question. Perhaps women have a different programming style than men, and perhaps you employed more men than women in the past. Then it becomes a self-fulfilling prophecy: the programming style the data rates as "good" is based mainly on the style of men. That means the data unknowingly discriminates by gender. The data builds on human choices from the past.

Fine tuning

What should we do with that knowledge? Consider the autopilot of an aircraft: algorithms through and through. In principle the pilot has to trust it blindly, but if his intuition says the instruments are broken, he really has to take the controls himself. We will have to do the same. It is therefore important to keep in mind: do not automate everything at once, but keep checking for yourself. Be critical. Analyse the data and test the algorithms for integrity. Evaluate the results, and also look at the candidates who did not pass the algorithm. Do you find that the data still discriminates without anyone intending it? Then find out why, so you can adjust the algorithm and prevent it in the future.

By using algorithms and analyses frequently, we can fine-tune them further and further. That way they become ever better, fairer and more reliable. Already, algorithms select far more fairly than the human brain, and we are keenly aware of what little discrimination remains in them. We evaluate, analyse, check and test, something that is often not even done for human decisions. We used to be unconsciously incompetent; now we are at least conscious, and most of the time also competent.

Originally published on CHRO.nl (Dutch version)