Are robots and background screenings a perfect marriage? Not without some work… 

The use of Artificial Intelligence is nothing new nowadays, at least not to several industries. The manufacturing industry uses robotics to make processes more efficient and to replace some of the more monotonous tasks. Within the financial field, nobody raises an eyebrow at AI being used to power different kinds of financial services. The idea of robots as something that will save us all, or steal our jobs, feels more and more like the plot of an 80’s sci-fi movie. We have grown used to their existence.

The growing, cross-sectoral use of AI will most likely continue, and the discussion is ongoing within our world of background screening as well. Some predict that AI will automate the entire industry. ToFindOut’s strong view is that it is a mistake to look only at the potential upsides of using robots for background screening. Even if a robot could perform a screening faster, we advise against a process driven entirely by AI. At the end of the day, background screening is a matter of people.

The input determines the outcome

When something comes up in a background screening, it is still common for employers to instinctively want to turn the candidate down. He or she suddenly becomes a risk, and the recruitment process is cancelled, no exceptions. If that is the basis on which screenings are conducted, an AI implementation should work perfectly for you: the robot will sort out candidates before you even have a chance to read their resumes. In our opinion, though, that is the wrong way of going about it.

As an employer, you need to take responsibility for looking at the bigger picture. By keeping an open mind to individuals with different backgrounds and experiences, you create a fairer set of preconditions, both for the candidate and for yourself. Dare to stand up for people who have broken a negative pattern! A candidate with a non-payment on record may have turned his financial behavior around and learned how to budget. Someone who once exceeded the speed limit may have promised herself never to let that happen again. Including them is also a way of working towards diversity. If an AI screened job applicants on its own, you would never even get to meet them, nor hear their story.

Can AI really be objective?

Robots need rules to comply with. If you are using an AI to screen out candidates, you first have to decide what kind of people you don’t want to employ and what you, given the context, are willing to accept. When people say that robots are more objective than humans, they tend to forget that a robot can only be objective within the frame, instructions and data given to it by humans. You may have read about Amazon’s experiment with a recruitment robot, which turned out to screen out female applicants (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G). Amazon’s robot is an example of a robot that was fed inadequate data, leading it to discriminate against women.

AI is not inherently unbiased; it learns from the input and data it processes after receiving your instructions. AI-driven support for background checks can help you make important information visible, but the robot itself can’t make significant decisions for you. There can be no question marks regarding your background check policy, or your view on the subject, before you even begin to think about robots.

Like any digital system, AI exists to assist people, not to replace them. Without your guidance, no AI and no system can decide what is sustainable for you, or which steps you should take to create a safe work environment. That needs to be decided by you.

My belief is that the last word remains to be said when it comes to the use of AI in the background screening process. There is much that can and will change as this technology develops. ToFindOut has always believed that hiring decisions need to be data-driven, methodical and systematic. But we also firmly believe that those same decisions need to be made in a way that is safe for the individual and ethical. With that perspective in mind, we humans have a far bigger role to play than AI, in my assessment. Leadership is something only we humans can provide.

What do you think? Please tell me by emailing me at birgitta.edlund@tofindout.se