Do people governed by algorithms improve or quit?
Traditional performance reviews between an employee and their supervisor are based on transparent, open communication and honest feedback.
What happens, then, when the entity giving a performance review is an algorithm instead of a human? Organizational research suggests that the answer depends on the type of algorithm.
With transparent algorithms—those whose criteria and inner workings are knowable by the person being evaluated—workers often learn to game the system, focusing their efforts on maximizing their algorithmic score rather than on the underlying work. In response, organizations have increasingly adopted opaque algorithms, which hide their inner workings from workers in an attempt to produce a more accurate performance assessment.
Hatim Rahman (PhD '19), now assistant professor of management and organizations at Northwestern University's Kellogg School of Management, studied how these algorithms with secret criteria affect workers. He found that workers confronted with an opaque performance algorithm did not all respond by trying to adapt. Some high-performing workers simply left the platform, while others reduced their exposure to the algorithm by limiting the number of clients they worked with.
These and other findings are detailed in an article on the Organizational Musings blog.
You can also read the full paper here (subscription required): The Invisible Cage: Workers' Reactivity to Opaque Algorithmic Evaluations