The hits and misses of using artificial intelligence for recruitment

Artificial intelligence is making huge strides and has been occupying some of the best minds of this century, but the hype around it is just as massive. AI is entering everyday lives and products, and many of us find ourselves in positions, both professional and personal, where we need to evaluate the genuineness of claims about using AI. If we can't separate the hype from the truth, we'd end up spending money on fake products and services.

Over the years that I've spent with startups, I've come across both genuine AI products and fakes. I'll start with the ones that truly solved problems using AI.

A few years ago, one of the co-founders of Liv.ai, a Bengaluru-based AI start-up, met me and demonstrated their product, which used natural language processing (NLP) to convert speech to text in multiple Indian languages, a genuinely hard problem to solve. I was a bit skeptical at first, but when I saw the product, I was quite blown away. Flipkart acquired it and built a shopping assistant, Saathi, with a text and voice interface to support shoppers in smaller towns.

Facial recognition is another problem that has been solved and has wide applications that touch everyday lives, including unlocking one's smartphone. Work is in progress on image recognition applications in other fields, including horticulture.

And now, I come to what I call fake products riding the AI wave. A vendor once approached us claiming their product could predict criminal tendency in an individual with an accuracy of 60%, and suggested we use this tool to evaluate our delivery boys. In other words, roughly two out of every five of its verdicts would be wrong, and someone with no criminal tendency could easily be branded as having one. Do you need anything else to decide whether you should pay this vendor and run all your new hires through a test like this?
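The arithmetic gets even worse once you account for base rates. Here is a back-of-the-envelope sketch (the 2% prevalence and the pool of 1,000 candidates are my own illustrative assumptions, not the vendor's figures) showing that at 60% accuracy, the overwhelming majority of candidates the tool flags would be innocent:

```python
# Back-of-the-envelope sketch of why "60% accuracy" is alarming.
# The base rate and pool size below are illustrative assumptions.
candidates = 1000
base_rate = 0.02   # assumed share of candidates with an actual "criminal tendency"
accuracy = 0.60    # the vendor's claimed accuracy, applied to both groups

true_positives = candidates * base_rate * accuracy                # 12 correctly flagged
false_positives = candidates * (1 - base_rate) * (1 - accuracy)   # 392 wrongly flagged

flagged = true_positives + false_positives
print(f"{flagged:.0f} flagged, {false_positives:.0f} wrongly ({false_positives / flagged:.0%})")
# -> 404 flagged, 392 wrongly (97%)
```

This is the classic base-rate problem: when the trait being screened for is rare, even a moderately inaccurate test produces mostly false alarms.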

Another AI vendor once confidently bragged to us that their tool could look at a job description, evaluate 100 CVs and pick the five best suited for the job. When we asked "How?", they retreated into jargon: "We use a deep learning algorithm." When we tested the tool on 100-odd CVs and asked it to shortlist the best five, there was zero overlap with the shortlist drawn up by a good recruiter and a hiring manager with years of experience.

Claims like these give AI a bad name. Arvind Narayanan, a computer science professor at Princeton, puts it succinctly: "Much of what's being sold as AI today is snake oil. It does not and cannot work. Why is this happening? How can we recognize flawed AI claims and push back?"

He has classified AI into three broad buckets:

1. Areas where AI is genuine and making rapid progress, such as facial recognition and medical diagnosis from scans.

2. Areas that are imperfect but improving, such as detection of spam and hate speech.

3. Fundamentally dubious areas, such as predicting job success, recidivism or at-risk kids.

The last category, which is really about predicting social outcomes, is essentially the snake oil being sold to gullible users, and it serves as a pretext for collecting large amounts of data. Users are made to believe that magical insights can somehow be extracted from all that data, and that the more the data, the better the insights.

Professor Narayanan writes that there has been no real improvement in the third category, no matter how much data you throw at it; he goes on to show that, for predicting social outcomes, AI fares no better than manual scoring using just a few features.

In another questionable claim, Ginni Rometty of IBM said last year that IBM's artificial intelligence can predict with 95% accuracy which workers are about to quit their jobs. In my opinion, using AI to predict human and social behaviour will always be flawed because human beings aren't all that predictable. They're individualistic, and their behaviour depends on a number of factors that can't always be reduced to data points.

Proponents of predicting social outcomes will no doubt claim that it is only a matter of time before AI gets better. I believe this is untrue.

Those familiar with chaos theory (popularly associated with the butterfly effect) understand that small differences in initial conditions, such as those due to rounding errors, can yield widely diverging outcomes even in deterministic systems: an approximate present cannot determine an approximate future. One can then imagine how much more indeterminate the predictions would be for inherently non-deterministic systems like social behaviours and outcomes. Just as Heisenberg's uncertainty principle places fundamental limits at the atomic level, chaos theory places a similar limit in areas like social outcomes.
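To see how quickly determinism breaks down in practice, consider the logistic map, a textbook chaotic system. The sketch below (the parameters are my own choices for illustration) starts two trajectories one part in a billion apart, mimicking a rounding error, and watches the gap explode:

```python
# A minimal sketch of sensitive dependence on initial conditions, using the
# logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4, a standard chaotic regime.
r = 4.0
x, y = 0.4, 0.4 + 1e-9   # two starting points differing by one part in a billion

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.6f}")
# The gap roughly doubles every step; within a few dozen steps the two
# trajectories are completely uncorrelated, even though the update rule
# is fully deterministic.
```

If a system this simple defeats prediction from a billionth-of-a-unit error, the prospects for predicting messy human outcomes from noisy data are dimmer still.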

Vested interests will always have a motive for creating the myth of being able to predict social outcomes using vast data. This myth needs to be dispelled.

T.N. Hari is head of human resources at Bigbasket.com and adviser to several venture capital firms and startups. He is the co-author of Saying No To Jugaad: The Making Of BigBasket.
