Beware of any company whose hiring manager is an algorithm. That’s the takeaway from a recent Washington Post story on the use of artificial intelligence in the hiring process and the not-so-successful attempts thus far to scrub away human bias with an algorithm. Highlighting Amazon’s failed experimental project that used an algorithmic tool to evaluate job applications, the story suggests companies would be wise to exercise caution when turning to artificial intelligence to inform human decisions. Why? Because, as Information Science Assistant Professor Solon Barocas points out, often the very data fed into the algorithm is itself biased.
An algorithm trained to match candidates to top performers may be based on performance reviews that are themselves biased, thanks to managers who rate some employees more favorably than others, or to metrics that aren’t gender neutral.
(For instance, female leaders are often penalized when seen as too assertive, yet having an “aggressive drive for sales” may be a “competency” on which employees are graded.) “Even with the annual review score, there’s human bias involved in that assessment,” Barocas said.
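The mechanism Barocas describes can be sketched in a few lines. The toy simulation below (not Amazon’s actual system; all names, thresholds, and numbers are hypothetical) gives two groups identical underlying skill but applies a stricter review bar to one group when assigning “top performer” labels. Any model fit to those labels simply inherits the skew:

```python
# Toy sketch of label bias: equal skill, unequal "top performer" labels.
# All groups, thresholds, and sample sizes here are made up for illustration.
import random

random.seed(0)

# Hypothetical candidates: groups A and B drawn from the SAME skill distribution.
candidates = [{"group": g, "skill": random.gauss(0, 1)}
              for g in "AB" for _ in range(5000)]

# Biased review process: group B must clear a higher bar to be rated "top".
def biased_label(c):
    bar = 0.5 if c["group"] == "A" else 1.0
    return c["skill"] > bar

for c in candidates:
    c["top"] = biased_label(c)

# A model trained to "match candidates to top performers" effectively
# learns these group-specific rates from the labels it is given.
rate = {g: sum(c["top"] for c in candidates if c["group"] == g) /
           sum(1 for c in candidates if c["group"] == g)
        for g in "AB"}

print(rate)
```

Despite identical skill distributions, group A ends up labeled “top” roughly twice as often, so a system optimized to reproduce those labels reproduces the bias rather than removing it.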
Check out the complete WaPo story, “Why robots aren’t likely to make the call on hiring you anytime soon,” and read more about Solon Barocas’s research.