Yanai Sened is a former programmer, former teacher, and philosophy PhD candidate at Fordham University, New York City.
We’re all going to die. Before that, many of us will experience, over the course of our lives, a ‘bell curve’ of our ability to impact the world around us. We were born, supported, nurtured, and raised. We built up independence as we came to understand the foundations of the world around us. We acquired skills and professions. We became proficient, developed expertise, established ourselves, and all the while we also grew older. A new generation will grow beside us, from us, by us, with new skills better adapted to their brave new world, as we slowly perish.
This eventual yet basic and essential understanding has been the source of many anxieties surrounding the arrival of “the next generation”. In Totem and Taboo (1913), for example, Freud recognizes the common human myth of patricide, the killing of one’s father. While Freud sees and focuses on the aggression of child against parent, it’s worth noting the two Greek myths of Oedipus and Cronus: Oedipus is cast out of his city in infancy by his father; Cronus, father of the Greek gods, tries to devour his own children. Both myths reflect the struggle between older and younger, but both begin with the violence of the father toward his child.
In the modern age, myths of paternal violence against offspring have faded in favor of Golems rebelling against their makers. Beginning with the original robot story, the theater play Rossum’s Universal Robots (Karel Čapek, 1921) (more about this in “What is a Robot?”, Uri Aviv’s article in this issue, and in Dr. Elana Gomel’s deep dive into R.U.R.), and continuing through prominent examples such as The Golem by Isaac Bashevis Singer (1969), Player Piano, Kurt Vonnegut’s first novel (1952), and, in a broader sense, the film Moon by film-maker Duncan Jones (2009), numerous speculative stories have dealt with the fear that our own extensive, hard work is precisely what generates the conditions of our replacement.
The robot is the ideal object for representing this fear because it is perceived as a mechanical object designed for, and capable of, performing a wide range of human actions.
Machines that replace human labor in specific jobs are nothing new, certainly not in work widely accepted as extremely difficult, dangerous, or monotonous to the point of boredom, or in positions where maximal efficiency would yield particularly high incomes or cost savings. Throughout history, windmills supplemented and even replaced the work of millers; steam engines of various types replaced factory workers; ATMs replaced bank tellers; and just a few decades ago, computers replaced people (mostly women) in accounting firms and insurance agencies, not to mention laboratories and R&D facilities.
While each of these profound changes raised justified concerns, and ultimately political and economic upheaval, the feature explored in this article, the one that distinguishes those changes from the current conversation about automation and robotics, is that all these previous machines did not undermine the relationship between workers and the means of production. A bank teller may well be fired because an ATM (Automatic Teller Machine) now fulfills their role, but the ATM will not raise their children; care for their partner (or make love!); wear the clothes in their closet as they go out with friends; eat breakfast, lunch, or dinner with their family; or go in their stead to that all-important local baseball game or the opera friendship association meeting.
The boundary that all other machines respect but robots flout is thus teleological, anchored in their purpose, their essence. The essence of the robot as conceived and created, as recognized and accepted in human society, especially insofar as it includes human replacement (in specific jobs or roles, ostensibly), is extremely complex and relies on the definition and essence of humanity itself: the essence of the robot depends on the essence of humanity.
We worry that we are creating machines that will deprive us of agency and take our place in deciding our fate. Not only would our jobs be done well, our metrics exceeded, our positions optimized; these are not merely places or “positions” but our contributions to our communities, our society. What if the robot had other utilities or benefits, perhaps notes for our next season, be it baseball or opera, or, god forbid, thoughts on the way we raise our children?
The fear expressed in Ex Machina (director: Alex Garland, 2015) is that sex robots will decide they wish to be released from their master. The fear expressed in Blade Runner (director: Ridley Scott, 1982), as well as in Do Androids Dream of Electric Sheep? (Philip K. Dick’s 1968 novel on which the cinematic masterpiece is based), is that the robots (androids in this case) decide they wish to live beyond their useful roles, choosing for themselves their lived experiences, their paths, their lives. The fear in Moon (director: Duncan Jones, 2009) is that robots (clones, to be precise) will fulfill all of our roles, professional but also social, personal, and most intimate, once we are already dead and gone. Not only will they fulfill our role in operating factory machines, but also our role as lovers, partners, parents, friends, and possibly our function as amateur baseball players or our position in the opera friendship association. Hence the fear that Vonnegut presents in the above-mentioned Player Piano: that after robots replace all workers, there will be no one left to decide on the purposes and essences of human society.
Where does this teleological fear, this anxiety of meaning, originate? Robots, much like any other artificial object, are manufactured with an external meaning, essence, teleology. Philosopher Jean-Paul Sartre used the following distinction to explain the special teleology he attributes to human beings: “Existence precedes essence”. A knife, Sartre observes, is created from a mold designed for a specific purpose; the knife therefore has an essence prior to its existence. When we think about the purpose of a specific knife, cutting cardboard or steak, or wounding and killing enemy soldiers, the essence emerges from the intended purpose, and only then is the object created. (Hu)man, according to Sartre, is not created with any such purpose, any essence or pattern, and therefore can only determine its own essence. The robot, unlike (hu)man, is designed for a specific use, according to an external purpose; why should its essence, its teleology, be any different?
The naive answer is technological: what if technology becomes (supposedly) so complex, creating such complex consciousness, that it can assert agency and autonomously object to, subvert, or deny its designed intent? While this possibility exists technically, science fiction stories that seriously study it tend to answer that under such conditions the difference between humans and robot machines loses all meaning. Robot-creatures may differ physiologically from humans, but they are not separate in their (human) rights, their equality, or their access. Philip K. Dick’s novel Do Androids Dream of Electric Sheep? challenges the human obsession with distinguishing between “artificial” and “natural”, and the movie Ex Machina invites us to empathize not with the exploitative humans but with their exploited (sex) robots.
A more complex answer emerges as we delve into the primordial fear all parents have of their children. One characteristic of this fear is its tendency not to be realized overtly as actual conflict. Parent-child relationships are of course notoriously complex, yet more often than not parents and children see each other, while separate from one another, as belonging to the same unit; they share interests through the majority of their lives. This fact is essential to human existence and is so entrenched in our political perceptions that one of the first texts to justify the political right of monarchs to rule, Patriarcha: Or the Natural Power of Kings (Sir Robert Filmer, 1680), argued that the right to rule comes down in direct succession from Adam himself (and Eve), inherited from father to son (through the genealogy of the fathers’ lines, not the mothers’, by his claim), reaching all the way to the monarchy of his age.
Thus, even though our myths identify a potential parent-child conflict, social mechanisms such as inheritance, education, and the family unit arrange matters so that the child’s interests align with the parents’. This arrangement is essential to human social existence, one of the factors that spur the continued birth and creation of further generations; no one in their right mind would choose to have children if it were commonly understood that the children would undermine their parents, as Laius, King of Thebes, feared his son Oedipus would. It’s worth keeping in mind that while Oedipus does indeed kill his father Laius, he does so unknowingly and accidentally, precisely because he does not know his own father and thus fails to recognize him (accidental patricide). Laius is thus “punished” in this myth precisely because he does not fulfill his parental duties, his obligations to the family unit.
Our robo-phobic anxieties are based, among other things, on the fear of potential conflict: the same conflict that usually goes unrealized between parents and their children yet continues to exist, an ever-present irrational dread that clouds and burdens our relationships with robots. What, then, are the social mechanisms that produce that conflict, that lead the teleological essence of robots to oppose ours, to oppose us? What are the forces that separate us, preventing us from being one (“familial”) unit with common interests and instead nurturing contrary purposes? I offer two explanations, though of course there are many other possibilities:
In his book To Save Everything, Click Here (2013), journalist and internet researcher Evgeny Morozov presents a concept called Solutionism: the idea that every problem can be solved by technology. Part of the problem Solutionism names is the tendency of the general public, certainly a decade or more ago, to accept with little to no criticism innovative technological solutions to seemingly difficult and substantial problems, or at least problems presented to the public as such. The problems are generational, the solutions nothing less than magic, and the opposition absent. Why would there be any? The public was hardly aware of the existence of these problems, their severity, or the acute and dire need for solutions, yet it is quite content and satisfied with the solutions offered and implemented. Even more, it accepts with little review and zero opposition the political, social, and economic changes that accompany those solutions, which are presented as desirable and welcome. Solutions to problems that few were actually aware of…
When we examine the marketing narratives and strategies of supposedly futuristic services and products, such as autonomous cars or the current Metaverse trend, we can see that they are presented as huge strides forward in their ability to solve massive social problems. The Metaverse (by Facebook/Meta, for example) will allow us to connect and communicate with each other better than ever. The autonomous vehicle (by Tesla, possibly) will be the end of car accidents, traffic jams, and, most importantly, the hopeless, agonizing, and endless search for a parking spot; everybody could use a car exactly when they need it, and not a millisecond more (1).
Because we live in a society where such processes have already taken place, where solutions have been implemented and we’ve gone through several cycles of extraordinary visions and reality checks in recent decades, we’ve adopted some degree of natural suspicion; we know that these solutions, as they become integral to our lives, often reveal themselves as somewhat hostile to us, possessed of a teleological essence that confronts our own. When we began using Facebook, we thought we could better stay in touch and catch up on the lives of our friends and loved ones. No one presented the possibility that a company like Cambridge Analytica would collect data about us and perform marketing and political manipulations with exemplary, frightening, and infuriating precision (2). When we started using YouTube, we thought we’d just watch a bunch of short videos. Cats. Bloopers. Cat bloopers. No one ever imagined that the algorithm would decipher our exact political stance and send us more and more videos of an increasingly extreme nature (3).
When a robot is marketed to us as a solution, we are immediately suspicious, and for good reason; we fear that behind the presented form, the overt essence of a presumably comfortable, elegant, and well-designed solution, hides an ulterior motive.
In a robot-Utopia, a society in which robots perform any and all work, who will have the right to reap the fruits of labor? We don’t need to imagine or speculate, as this has been happening in cyclical fashion for centuries. Russian anarchist Peter Kropotkin wrote in his book The Conquest of Bread (1892) about how all the marvelous technological achievements of any single entrepreneur, CEO, or capitalist are dwarfed by the marvelous technological achievements made over centuries and millennia of human existence, which have been laid at their disposal and were necessary for their success.
The industrial revolution is now hundreds of years old. Almost everyone who participated in it, who renovated, enhanced, optimized, and maximized our means of production, died long ago. But the factories that have filled the world and multiplied possible production volumes a thousandfold do not serve all humans on the face of the Earth equally. Ownership is divided according to rules that seem, from a broad enough historical perspective, completely arbitrary. These are the initial conditions of a society, our society, into which we worry about introducing robots, and for very good reason.
Robots manufactured by corporations in the substantial quantities that will make it possible to replace the human workforce will probably be privately owned. Anyone unable to afford and operate a robot (or, as it seems, a fleet of robots) will be outgunned and outnumbered, left behind, and stripped of any ability to fight for their place in the market, in society. This problem was anticipated in 1970 by philosopher Hannah Arendt in her book On Violence, in which she postulates that a society cannot be oppressed solely and entirely by violence, since oppressive violence must hold a certain justification; otherwise the people required to apply it (police, military) will refuse to do so. The caveat she raises against this argument is precisely the possibility that an army of robots will enable a small minority to hold a monopoly on violence with no need for justification, since it can go to (autonomous) war if necessary. If robots are to be at the heart of another phase, the next cycle of the industrial revolution, it is quite reasonable to fear that the distribution of capital and power will be even more centralized than in previous cycles.
Is absolute dread of any technological progress the only possible conclusion? We at Utopia believe otherwise. Just as parents can live in peace, love and harmony, have positive and supportive relationships with their children, so can humans bring robots into a world well-prepared and ready for them, a world that enables a life of reciprocity and mutual growth. For that to happen we first must understand that the most important step towards a mechanized heaven, a robotic utopia, is not technological at all, but a more just, fair and equitable organization of human society.
Footnotes and Other Stray Thoughts
1 // On the social, political, moral, legal, and behavioral obstacles to the implementation of autonomous private mobility, we recommend the talk “Dude, Where’s My Autonomous Vehicle?” by Uri Aviv, presented at the re:publica 2018 conference.
2 // The mining and collection of data and its use for marketing and political purposes have been well documented and published, as have, of course, the activities of Cambridge Analytica. See The New York Times, April 2018, by Nicholas Confessore: “Cambridge Analytica and Facebook: The Scandal and the Fallout So Far”.
3 // Rabbit Hole (2020), highly recommended, a unique 8-episode podcast project from the NYT. Journalist Kevin Roose dives into the depths of YouTube’s algorithm with the people who influenced it and were influenced by it: from YouTube’s CEO and the first senior engineer of the algorithm in its early days to a man who underwent indoctrination (from nationalist to white-nationalist racist to neo-Nazi) and then self-de-indoctrination, all through YouTube.