Take a quick scan of the recruitment industry’s blogosphere and you might leave convinced we are heading rapidly toward a robot-led future of total automation, a world cut from the same cloth as The Matrix or Blade Runner. Or, should you read articles from the opposing view, you might think that nothing has really changed, and all this talk about AI is little more than hot air and hype.
Sophisticated concepts such as ‘machine learning’ (and its subset, ‘deep learning’) lose their meaning when used casually for marketing purposes. That makes it all the more important that we attempt to define these phrases and, in turn, find a way to characterise and evaluate AI’s impact on recruitment technology (RecTech) that is accurate and true.
The buzz around machine learning
Without a doubt, machine learning – and especially deep learning – is at the centre of AI development today, with Google’s tremendous effort in this area leading the charge. Deep learning algorithms learn layered representations from the data itself, rather than following task-specific instructions. In terms of RecTech, deep learning should ultimately enable parsing solutions to read nuances in CV language – an R&D challenge we’re tackling aggressively here at DaXtra. While our intended destination is clear – intelligent software that improves itself over time and reads nuance and context – the reality is we still have a very long way to go.
A good example of where we stand with the technology is the Google gorilla fiasco. For the uninitiated, here’s a brief (and embarrassing) summary: in 2015 a software engineer pointed out that the Google Photos algorithms had auto-classified his black friends as ‘gorillas’. The algorithm’s mistake was bad enough. But it gets worse: after more than two years, the tech giant’s only solution has been to prevent any images from being labelled ‘gorilla’, ‘chimpanzee’ or ‘monkey’.
This means that not even Google, which has billions to spend on R&D, has solved the problem of coding algorithms to accurately read nuance and context. All humans intuitively know that people are not gorillas. But a computer? Not so much. It draws conclusions based on a pre-determined system of classifications.
Google certainly didn’t cause this problem on purpose. But the bigger point isn’t lost: as of 2018, machine learning is nowhere near where it needs to be. It’s still a great big black box mapping statistical regularities and applying such analyses to new data to draw conclusions and make predictions, often making huge errors in the process.
As for how this applies to recruitment, that’s obvious: the room for CV parsing errors is enormous, given the infinite variations of everyday language. For example, let’s say your last name is ‘Orlando’, or you’re a medical doctor who describes yourself as an MD. In the first instance, the algorithms as they stand today can mistake your name for a place – the city of Orlando, Florida. In the second, the possibilities for error are even more numerous: the computer may think you meant managing director, or perhaps even the US state of Maryland (often abbreviated to MD).
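To make the ambiguity concrete, here is a minimal, hypothetical sketch (not DaXtra’s actual parser) of why a context-free parser is forced to guess. The token table and the naive first-candidate strategy are illustrative assumptions:

```python
# Hypothetical illustration: the same CV token can mean several things,
# and a parser with no context has no principled way to choose.
AMBIGUOUS_TOKENS = {
    "MD": ["Doctor of Medicine", "Managing Director", "Maryland (US state)"],
    "Orlando": ["surname", "Orlando, Florida (city)"],
}

def naive_parse(token):
    """A context-free parser can only guess among the candidates."""
    candidates = AMBIGUOUS_TOKENS.get(token, [token])
    # With no surrounding context, a naive parser just picks the first
    # candidate -- wrong whenever the CV meant one of the others.
    return candidates[0]
```

Whichever candidate the parser defaults to, it will be wrong for some CVs – which is exactly the error class described above.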
Rules-based systems – the way forward
Does all of this give you a headache? If so, that’s understandable: the challenge of programming computers with processing capabilities that resemble human intuition and deductive reasoning is formidable. At DaXtra we are using machine learning technologies to develop rules-based algorithms that can improve themselves over time. These algorithms collect information and make predictions in a relational manner – in short, they eventually learn the nuances of context. For example, if MD is listed right before information about a medical school on a CV, the algorithm knows this MD reference doesn’t stand for Maryland. Rules-based algorithms can still make errors. But the difference is they can be fixed – and indeed, deep-learning technology implies that they should eventually be able to fix themselves.
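The medical-school example above can be sketched as a simple rules-based resolver. This is a toy illustration under my own assumptions – the rule set, window size, and keyword lists are invented for the example and are nothing like a production parser:

```python
def resolve_md(tokens, index):
    """Resolve the token 'MD' by inspecting a small window of nearby words.

    A hand-written rules-based sketch: each rule checks the surrounding
    context, mirroring the medical-school example in the text.
    """
    window = [t.lower() for t in tokens[max(0, index - 5): index + 6]]
    # Medical context wins first: 'MD' next to a medical school is a degree.
    if any(w in window for w in ("medical", "medicine", "physician", "hospital")):
        return "Doctor of Medicine"
    # Address-like context suggests the US state of Maryland.
    if any(w in window for w in ("baltimore", "annapolis")):
        return "Maryland"
    # Corporate context suggests a managing director.
    if any(w in window for w in ("director", "board", "executive")):
        return "Managing Director"
    return "MD (unresolved)"

cv_line = "Jane Doe MD Harvard Medical School".split()
print(resolve_md(cv_line, 2))  # prints "Doctor of Medicine"
```

The point of the sketch is the design, not the rules themselves: because each rule is explicit, a wrong resolution can be traced and corrected – the “fixable” property the paragraph above describes.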
Again, we’ve barely left the starting gate with this new technology. No matter what any marketing department says, the fundamentals of these algorithms haven’t really changed in the past 30 years. Apologies to all you sci-fi enthusiasts, but the robots aren’t taking over any time soon. We’re still in the early stages of teaching them to read nuances in information.
The next time you hear a technology company say they’ve increased their software’s accuracy by 30 per cent due to improvements in deep learning, you should proceed with caution. Even Google hasn’t solved that riddle – yet.