A Walk Through the Uncanny Valley
We cannot go a day without encountering artificial intelligence in some quadrant of our lives. As I have explained already, as a university professor during the last academic year, the first year of the full appearance of ChatGPT and of its use by my students in preparing and submitting their graded papers on ethics, I can honestly say that I have looked the AI beast in the eye, and I understand much of the angst it causes, especially among knowledge workers in today's society.

People seem to fall into two primary camps on the topic of AI. One camp thinks that all the AI hype is overblown and that it will take years and years for AI to have any real impact on our day-to-day lives. We will call that the Ostrich Camp. The other camp has chosen to embrace AI as a means of supporting its political and economic preferences, and feels that AI will replace large swaths of human labor, both improving productivity to the benefit of capitalists and leaving in its wake many people, mostly those at the low end of the socioeconomic totem pole, without gainful employment. Their theory is that AI will render labor (what Adam Smith said was the key component of value creation), especially immigrant labor, all but irrelevant to economic prosperity. We can call this the Neutron Camp.

It feels like the Ostrich Camp has its head in the sand and wants to whistle along in hopes of the threat passing it by. I tend to be more sympathetic to this camp, since its posture seems like a natural human response to something new, especially something not fully understood. The Neutron Camp (as in neutron bomb) leaves me cold because it seems like a refuge for short-term thinkers who are afraid of a world of 8 billion people and are in search of a rationale to justify their economic elitism while ignoring the reality of the masses.
For those in the Neutron Camp, I ask several simple and very revealing questions which almost no one is prepared to answer. The first is: what do you do with the displaced, unemployed, or unemployable 4 billion or so people in the world? Do you want to kill those people? Are you prepared to watch them die for want of resources, sprawled across your manicured lawn or just outside your walled gates? Do you not worry about a World War Z phenomenon where the hungry are driven to primordial revolution, not on ideological grounds, but on mere survival grounds? Obviously, those people try to avoid those sorts of questions and pretend that their responsibility and purview do not extend to those problems. Reality reminds us otherwise. Sooner or later the ovens of Auschwitz are always revealed for what they are. Even Dickens understood that one had to deal with the "surplus population." Rather than turning AI against the masses, perhaps we should figure out whether it can help solve the problems of the masses in some new-age Maslowian way.
I think we all have a lot to learn about AI and how it is most likely to reshape our world, in much the same way that we had to learn a few decades ago how information technology, and especially the Internet, was going to reshape our world. There may well have been people who correctly predicted what the Internet would do, and perhaps those are the people who are enjoying billionaire status today. But I am more inclined to believe that for every ounce of omniscience there were a million ounces of pure luck in correctly guessing how the information age would unfold, much less how to profit from it.
I have a friend, Soumitra Dutta, who is currently the Dean of the business school at Oxford University in England (the Saïd Business School, named after Wafic Saïd, a Syrian/Swiss/Saudi/Canadian billionaire benefactor who lives in Monaco) and who wrote his doctoral thesis a number of years ago on the topic of artificial intelligence. He was for many years a business school professor of technology in business at top-flight schools like INSEAD, and then transitioned into academic administration, first at Cornell and now at Oxford. It would be hard to imagine a person with a stronger résumé, or one better equipped to understand and explain where AI is most likely to go in the business world. I have known Soumitra since 2011 and I met with him recently while he was here in San Diego (I also follow him regularly through LinkedIn), so I am aware, more or less, of his thinking on the subject. Despite his deep familiarity with AI and its potential impact, I think he would be the first to admit that there is still much to understand and much to learn before one could easily predict all of the ways in which AI will affect us, both as a global economy and as individual human beings.
While reading one of the many articles that come across my desk about AI, I noticed one that had in its title the term "the uncanny valley." I had never heard that term, so I did some research. It turns out that for the last several decades, going back to the roboticist Masahiro Mori's work in 1970, researchers studying the psychological impact of robots and AI-driven devices on human beings have documented a phenomenon that relates specifically to how similar those machines are to humans. The phenomenon charts comfort, as in psychological comfort, against the similitude of the robot to humans. In general, the more human-like robots become, the more comfortable people become dealing with them. But as the robot approaches a very high degree of similitude to humans, there is suddenly a dramatic fall, or valley, in those results: a precipitous drop in human comfort when robots get too close to being like humans. This is a very interesting psychological issue that researchers have found worth digging into. There are a number of theories as to why this "uncanny valley," as it is called, occurs, and they reach down into primordial brain-stem responses over which we, as humans, have very little control.
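For readers who want to see the shape of that curve, here is a minimal illustrative sketch in Python. The numbers are invented purely to show the pattern described above (they are not taken from Mori's data or from any actual study): comfort rising steadily with human likeness, then dipping sharply in a narrow band just short of full human likeness.

    # Illustrative sketch of the uncanny valley curve.
    # All values here are made up to show the general shape only:
    # comfort rises with human likeness, then plunges in a narrow
    # "valley" just before full human likeness.
    import numpy as np
    import matplotlib.pyplot as plt

    likeness = np.linspace(0.0, 1.0, 200)  # 0 = industrial robot, 1 = human
    # Rising comfort minus a sharp dip centered near 85% likeness.
    comfort = likeness - 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.004)

    plt.plot(likeness, comfort)
    plt.axvspan(0.78, 0.92, alpha=0.15, label="uncanny valley")
    plt.xlabel("Human likeness")
    plt.ylabel("Psychological comfort (illustrative)")
    plt.title("Sketch of the uncanny valley (invented values)")
    plt.legend()
    plt.show()

The dip is the whole point: comfort does not simply level off as machines become more human-like; it collapses within a narrow band just before the machine becomes indistinguishable from a person.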
As an example, the more similar we perceive something to be to ourselves, the more our brain stem tells us to be careful about that entity, because it triggers the same kind of concern that our psyche has about mating with another human who is too similar to us (i.e., too close, perhaps, to our own genetic line to create a healthy and diverse being from the pairing). That gets into some very strange stuff, because it is effectively telling us that we as humans get very concerned when robots, thanks to AI or manufactured skins and such, become too much like us. This is perhaps because we somehow sense the potential for inbreeding, and because the value of diversity in nature is such that our brain stem wants or prefers us to associate with entities that are similar to us only to a point, but not too similar. The ramifications of that are very complex; they run to issues of racism and natural selection, and thus to the most fundamental drivers of the human psyche. What it tells me is that there is an awful lot for us to learn about AI and how we react to it before we summarily use it and introduce it into just any aspect of our lives.
I am sure that the uncanny valley is just the first of many human psychological issues that we will encounter as we delve deeper and deeper into the fourth industrial revolution of cyber-physical systems. In other words, when it comes to AI predictions, I would caution you to hold all bets.