Experts have long tried to predict the future of work in the face of technological advances. David Ricardo, for example, added his famous “Machinery” chapter to the third (1821) edition of his Principles of Political Economy and Taxation in an attempt to work out what would happen to handloom weavers as machines were introduced into production. In the twentieth century, the coming of computers brought another transformation of work.
Some would argue, however, that the next disruption, artificial intelligence (AI), feels qualitatively and quantitatively different. On this view, robots will replace humans so extensively that we will simply need far less human labor.
Putting aside the question of whether it’s a good thing to have robots perform routine work, such as vacuuming a pool, I wonder to what extent robots can lead. In my view, there is something distinctly human about leadership.
While most of the literature on leadership takes humanity for granted, some of the literature on persuasion touches on the question. Persuasion is, after all, something we locate in speech, and speech has historically been an almost uniquely human phenomenon. AI, however, means that robots can now generate speech, and they can certainly construct and convey leadership messages to followers. Again, the question arises: Will robots be as effective as humans at leading groups to accomplish tasks?
Experimental evidence suggests that there is, indeed, something importantly human about leadership. Daniel Houser, David Levy, Kail Padgitt, Erte Xiao, and I designed an experiment to test this question. Followers saw the same message under two sets of known conditions: in one it came from a computer, in the other from a human. (We also varied how the leader was chosen, but that variation will be the subject of a different post.) We then compared how followers reacted to the message in each case, holding everything else constant. The same message was more effective coming from a human leader: computer-generated messages did not yield the same coordination among followers as identical messages from a human.
While this may be unsurprising to those who study leaders, it is, as far as I know, the only empirical evidence that human leaders may still have a significant role to play in a world of AI. Of course, the question immediately arises: What if the robot can make a recommendation that is superior, not merely equivalent, to a human's, so that its message is actually better than the one emanating from a human? To my knowledge, experimental research has yet to tackle that question!
Was the underlying game a social dilemma or pure coordination? If participants knew the AI was programmed to make group-welfare-enhancing decisions, that might flip the outcome you got. But it is probably not just a trust issue.
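For readers who want that distinction made concrete, here is a stylized pair of two-player games (the payoffs are purely illustrative and not taken from the experiment; each cell lists the payoffs to Row, Column):

Pure coordination (players gain only by matching choices):

                     Column: A    Column: B
    Row: A             2, 2         0, 0
    Row: B             0, 0         2, 2

Social dilemma (each player is tempted to defect, though mutual cooperation is better for the group):

                     Column: Cooperate    Column: Defect
    Row: Cooperate        3, 3                0, 4
    Row: Defect           4, 0                1, 1

In the pure coordination game, a leader's message only has to serve as a believable focal point; in the social dilemma, followers must also trust that others will resist the temptation to defect. That is why the distinction matters for interpreting the human-versus-computer result.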