The Evitable Conflict


In Asimov’s “The Evitable Conflict,” Stephen Byerley, the World Coordinator, and Susan Calvin, a leading robopsychologist, discuss the sudden appearance of macroeconomic turbulence. Byerley is particularly concerned about a recent spurt of global supply chain inefficiencies (steel, construction, etc.) and is determined to identify their root causes. In a series of lengthy interviews with regional leaders, he makes little progress, but the conversations reveal several of the systemic challenges that accompany a robotically governed society. Under “I, Robot”-style rule (First and Second Laws in effect), this civilization is deeply fractured, as regions quarrel over priorities and existentially debate the efficacy of robot-dependence. Budding tension across borders, paired with the rise of the Society for Humanity (an anti-Machine movement), threatens the prosperity of mankind. No one knows whom to hold responsible.

Near the end of the story, chaos becomes order as Dr. Calvin provides a nuanced explanation for the madness: the Machines are purposefully causing minor economic disturbances as a subtle way of crippling the Society for Humanity, which they see as a major blocker to long-term progress and the longevity of mankind. While Byerley has been applying rational frameworks to the “micro” factors of this society, Calvin helps him see the bigger picture: “They are robots, and they follow the First Law. But the Machines work not for any single human being, but for all humanity, so that the First Law becomes: ‘No Machine may harm humanity; or, through inaction, allow humanity to come to harm'” (Page 212). This type of reasoning has several ethical implications.

First and foremost, it magnifies an uncomfortable reality about humanity’s lack of agency. Machines, themselves built by other machines, determine the long-term fate of civilization, and we have very little control over this destiny. While this frightens Byerley, Dr. Calvin points out that “humans were always subject to forces beyond our control: we are always subject to the weather, to economic and social forces, to war” (Page 225). Logical yet unsettling, this explanation brings to light many deeply philosophical and personal questions: what is the purpose of living? What is the purpose of working? Why are we here? These are challenging questions that are critically important to our future. Our natural bias is to assume human-centric decision making – we are significant – but we must remind ourselves that mankind has only been around for a small chapter of this billion-year ecosystem. We are really just beginning to understand the awesome complexity of the nature we inhabit. But if we are not important, then what are we? Many would argue that living without purpose is not really living – humans need objectives to unlock self-achievement and fulfillment. Should we, then, be spending literally trillions of dollars on technology that replaces our core objectives? Or should we slow progress down and focus on intensifying the human experience?

Though abstract, these types of questions are paramount to maintaining order. Without clear structure, society falls apart. This problem is amplified in today’s connected economy (2018): over the past several centuries, Western cultures have propped up the power of ego and individualism. Globally, we have shifted away from communalism toward independent consumerism, as life, especially in the West, has become far more about bettering the “I” than about the team. As has played out in ‘tragedy-of-the-commons’ sectors (the environment, healthcare), mankind has deprioritized the collective. While reasonable people can debate the higher-order effects of this transition, it is critical to underscore the dangerous yet inevitable byproduct of radical change: societal turmoil.

As has been witnessed time and time again throughout history, changing power tides bring unrest. This message is highlighted throughout Byerley’s interviews with regional leaders, as no particular leader takes ownership of society’s problems. The dominant North fears turnover as younger, more technologically capable regions make a push for power. We see this type of battle for economic positioning in the modern world, as old governments and majority leaders prepare for an inevitable backlash. Changing tastes and preferences among emerging millennial generations are poised to disrupt incumbents across every sector. This certainly scares existing powers but, all in all, is the better and more natural evolution.

Another dilemma presented by the story, briefly mentioned above, is the need to reckon with the long-term outcome of accelerating technological progress. How should we prioritize new developments? What types of technology should be heavily regulated? Who should regulate them? Will we reach a singularity? This rabbit-hole barrage of questions is fundamentally vital to the survival of the human race. In science, things tend to move slowly until they don’t. Without any guardrails in place, we, as a species, are likely to end up in a place we truly do not understand. This is displayed in the story, as Byerley’s discussion with Vincent, the head of research at US Robots, uncovers a dissatisfying truth: the world’s smartest scientists can no longer understand the Machines; their systems are too complex for any human brain to process. “Their computers were built with help from yesterday’s computers so there is no way to do a purely human check of their systems” (Page 44). Today, leading technologists like Sam Altman, Mark Zuckerberg, and Elon Musk debate the ethics of investing in artificial intelligence. Google Brain and OpenAI, two leading AI research institutions, still hold uncertain stances on how we should think about technology and the future. It is important to note that the timescale is highly uncertain; in other words, we, in 2018, are not really that close to building fully autonomous agents that can act and think just like humans. But it is something we must be mindful of and continue to plan for. I expect AI regulation to emerge over the next few years, but I worry that we have no independent body capable of a) understanding and b) enforcing any type of impactful governance. It will be interesting to see how this regulation comes to fruition.

My overall take on the future is pragmatically optimistic: I believe technology is critically important, but we should understand, as history has shown us, that change is both inevitable and dangerous. There are grave consequences to alterations of the global equilibrium, and introducing changes at a rapid pace is likely a recipe for disaster. How, then, should we progress? My first thesis is that we should prioritize the development of ‘positive-sum’ technologies that make humans better at being uniquely human. We should build “iron-man suits” for every service professional – platforms that enable ordinary people to be far better at their jobs. At the same time, we should invest in building technologies that free humans from doing “un-human” tasks. Doctors should not have to take notes, loan officers should not have to fill out paperwork, and so on. These tasks should be outsourced to machines, while humans are left to exercise their unique abilities, like critical reasoning and interpersonal relationship management. My belief is that this type of tech will enable humanity to do more, with less pain and grief.

