Overcomplicated




  CURRENT

  An imprint of Penguin Random House LLC

  375 Hudson Street

  New York, New York 10014

  penguin.com

  Copyright © 2016 by Samuel Arbesman

  Penguin supports copyright. Copyright fuels creativity, encourages diverse voices, promotes free speech, and creates a vibrant culture. Thank you for buying an authorized edition of this book and for complying with copyright laws by not reproducing, scanning, or distributing any part of it in any form without permission. You are supporting writers and allowing Penguin to continue to publish books for every reader.

  ISBN 9780698189195

  While the author has made every effort to provide accurate telephone numbers, Internet addresses, and other contact information at the time of publication, neither the publisher nor the author assumes any responsibility for errors or for changes that occur after publication. Further, the publisher does not have any control over and does not assume any responsibility for author or third-party Web sites or their content.

  Version_1

  For Abigail and Nathan,

  who will come of age in a world of wonders.

  Never stop being excited by it.

  Contents

  Title Page

  Copyright

  Dedication

  INTRODUCTION

  Chapter 1

  WELCOME TO THE ENTANGLEMENT

  Chapter 2

  THE ORIGINS OF THE KLUGE

  Chapter 3

  LOSING THE BUBBLE

  Chapter 4

  OUR BUG-RIDDEN WORLD

  Chapter 5

  THE NEED FOR BIOLOGICAL THINKING

  Chapter 6

  WALKING HUMBLY WITH TECHNOLOGY

  Further Reading

  Acknowledgments

  Notes

  Index

  INTRODUCTION

  On July 8, 2015, as I was in the midst of working on this book, United Airlines suffered a computer problem and grounded its planes. That same day, the New York Stock Exchange halted trading when its system stopped working properly. The Wall Street Journal’s website went down. People went out of their minds. No one knew what was going on. Twitter was bedlam as people speculated about cyberattacks from such sources as China and Anonymous.

  But these events do not seem to have been the result of a coordinated cyberattack. The culprit appears more likely to have been a lot of buggy software that no one fully grasped. As one security expert stated in response to that day’s events, “These are incredibly complicated systems. There are lots and lots of failure modes that are not thoroughly understood.” This is an understated way of saying that we simply have no idea of the huge number of ways that these incredibly complex technologies can go wrong.

  Our technologies—from websites and trading systems to urban infrastructure, scientific models, and even the supply chains and logistics that power large businesses—have become hopelessly interconnected and overcomplicated, such that in many cases even those who build and maintain them on a daily basis can’t fully understand them any longer.

  In his book The Ingenuity Gap, professor Thomas Homer-Dixon describes a visit he made in 1977 to the particle accelerator in Strasbourg, France. When he asked one of the scientists affiliated with the facility if there was someone who understood the complexity of the entire machine, he was told that “no one understands this machine completely.” Homer-Dixon recalls feeling discomfort at this answer, and so should we. Since then, particle accelerators, as well as pretty much everything else we build, have only increased in sophistication.

  Technological complexity has been growing for a long time. Take the advent of the railroads, which required a network of tracks and a switching system to properly route trains across them. The railroads spurred the development of standardized time zones in the United States in order to coordinate the many new trains that were crisscrossing the continent. Before this technology and the complexity it entailed, time zones were less necessary.

  But today’s technological complexity has reached a tipping point. The arrival of the computer has introduced a certain amount of radical novelty to our situation, to use the term of the computer scientist Edsger Dijkstra. Computer hardware and software are much more complex than anything that came before them, with millions of lines of computer code in a single program and microchips that are engineered down to a microscopic scale. As computing has become embedded in everything from our automobiles and our telephones to our financial markets, technological complexity has eclipsed our ability to comprehend it.

  In recent years, scientists have even begun to recognize the inextricable way that technology and nature have become intertwined. Geologists who study the Earth’s rock layers are asking whether there is enough evidence to formally name our current time period the Anthropocene, the Epoch of Humanity. Formal title or not, the relationship between our human-made systems and the natural world means that each of our actions has even more unexpected ramifications than ever before, rippling not just to every corner of our infrastructure but to every corner of the planet, and sometimes even beyond. The totality of our technology and infrastructure is becoming the equivalent of an enormously complicated vascular system, both physical and digital, that pulls in the Earth’s raw materials and emits roads, skyscrapers, large populations, and chemical effluent. Our technological realm has accelerated the metabolism of the Earth and done so in an extraordinarily complicated dance of materials, even changing the glow of the planet’s surface.

  We are of two minds about all this complexity. On the one hand, we built these incredibly complicated systems, and that’s something to be proud of. They might not work as expected all the time, but they are phenomenally intricate edifices. On the other hand, almost everything we do in the technological realm seems to lead us away from elegance and understandability, and toward impenetrable complexity and unexpectedness.

  We already see hints of the endpoint toward which we are hurtling: a world where nearly self-contained technological ecosystems operate outside of human knowledge and understanding. As a journal article in Scientific Reports in September 2013 put it, there is a complete “new machine ecology beyond human response time”—and this paper was talking only about the financial world. Stock market machines interact with one another in rich ways, essentially as algorithms trading among themselves, with humans on the sidelines.

  This book argues that there are certain trends and forces that overcomplicate our technologies and make them incomprehensible, no matter what we do. These forces mean that we will have more and more days like July 8, 2015, when the systems we think of as reliable come crashing down in inexplicable glitches.

  As a complexity scientist, I spend a lot of time preoccupied with the rapidly increasing complexity of our world. I’ve noticed that when faced with such massive complexity, we tend to respond at one of two extremes: either with fear in the face of the unknown, or with a reverential and unquestioning approach to technology.

  Fear is a natural response, given how often we are confronted with articles on such topics as the threat of killer machines, the dawn of superintelligent computers with powers far beyond our ken, or the question of whether we can program self-driving cars to avoid hitting jaywalkers. These are technologies so complex that even the experts don’t completely understand them, and they also happen to be quite formidable. This combination often leads us to approach them with alarm and worry.

  Even if we aren’t afraid of our technological systems, many of us still maintain an attitude of wariness and distaste toward the algorithms and technologies that surround us, particularly when we are confronted with their phenomenal power. We see this in our responses to the inscrutable recommendations of an Amazon or a Netflix, or in our annoyance with autocorrect’s foibles. Many of us even rail at the choices an application makes when it tells us the “best” route from one location to another. This phenomenon of “algorithm aversion” hints at a sentiment many of us share, which appears to be a lower-intensity version of technological fear.

  On the other hand, some of us veer to the opposite extreme: an undue veneration of our technology. When something is so complicated that its behavior feels magical, we end up resorting to the terminology and solemnity of religion. When we delight at Google’s brain and its anticipation of our needs and queries, when we delicately caress the newest Apple gadget, or when we visit a massive data center and it stirs something in the heart similar to stepping into a cathedral, we are tending toward this reverence.

  However, neither of these responses—whether from experts or laypeople—is good or productive. One leaves us with a crippling fear and the other with a worshipful awe of systems that are far from meriting unquestioning wonder. Both prevent us from confronting our technological systems as they actually are. When we don’t take their true measure, we run the risk of losing control of these systems, enduring unexpected and sometimes even devastating outcomes. Next time, the results of our failure to understand might not be as trivial as a frustrated Wall Street Journal reader being unable to access an article at the time of her choosing. The glitches could be in the power grid, in banking systems, or even in our medical technologies, and they will not go away on their own. We ignore them at our peril.

  Technology, while omnipresent, was not created by some perfect, infinite mind, and so it is neither pristine nor unfathomable. It is wonderfully messy and imperfect. And it is still approachable. We require a strategy to directly confront this situation.

  My goal is to help each of us navigate a path between the two extremes of fear and awe, laying out an orientation toward our technologies that will allow us to make progress in how we approach them. It is an optimistic orientation, one that involves changing the way we think about these systems without falling into paralyzing fear or reverence.

  This orientation will require us to meet our technologies halfway by cultivating a comfort with these systems despite never completely understanding them. This is the sort of humble comfort that dwells in ambiguity and imperfection, yet constantly strives to understand more, bit by bit. As we will see, this orientation involves, among other things, each of us thinking the way scientists do when examining the massive complexity of biology.

  Despite all the overcomplication of the systems we vitally depend on, I’m ultimately hopeful that humanity can handle what we have built.

  This book is why.

  Chapter 1

  WELCOME TO THE ENTANGLEMENT

  On a winter day early in 1986, less than a month after the Challenger disaster, the famous physicist Richard Feynman spoke during a hearing of the commission investigating what went wrong. Tasked with determining what had caused the space shuttle Challenger to break apart soon after takeoff, and who was to blame, Feynman pulled no punches. He demonstrated how plunging an O-ring—a small piece of rubber used to seal the joints between segments of the shuttle’s solid rocket boosters—into a glass of ice water would cause it to lose its resilience. This small piece of the spacecraft was sensitive to temperature changes, making it unable to provide a firm seal. These O-rings seem to have been responsible for the catastrophe that cost seven crew members their lives.

  Contrast this example with another failure, one in the automobile industry. In 2007, Jean Bookout was driving a 2005 Toyota Camry in a small town in Oklahoma when her car began accelerating uncontrollably. She attempted to brake, even using the emergency brake and leaving skid marks on the road. Her efforts did not stop the car, and it crashed into an embankment. Bookout was left terribly injured, and her friend and passenger, Barbara Schwarz, died.

  Bookout’s story is not a unique one. For a number of years, it seems that numerous vehicles manufactured by Toyota exhibited this strange and dangerous problem: they maintained or even increased speeds against the will and efforts of the driver. Multiple people died as a result of this “unintended acceleration.” Several potential causes were proposed, including driver error, floor mats that could jam the gas pedal, and even a sticky gas pedal. But there were too many cases these causes couldn’t account for: fewer than half of the affected models ever had any recalls for ill-fitting floor mats or sticky pedals, and there was no reason to believe drivers of Toyotas were so much more likely to err than drivers of other types of cars.

  Toyota granted access to its proprietary and closely held software code to an embedded-software expert named Michael Barr. With the assistance of half a dozen other experienced engineers, Barr endeavored to explain what went wrong. The computer scientist Philip Koopman has also examined publicly available features of the design in order to understand what might have occurred. Both experts concluded that the massive complexity and poor design of Toyota’s engine software was responsible for at least some of the unintended acceleration in these cars. No single piece or design could be pointed to as the clear-cut cause of this problem. Rather, there were several distinct problems that interacted, and the fault thus lay in the massive interconnectivity of the baroque structure of the computer code and the surrounding electromechanical systems in these cars. The complexity of this system made it difficult to understand the implications for these interacting pieces that, both individually and when combined, had deep issues and flaws. According to the evidence presented, Toyota could have been far more careful when building such a complex—in this case, unnecessarily complex—system.

  This diagnosis couldn’t feel more different from our familiar models of technical failure. Unlike the well-known story of Feynman’s demonstration of the reason for the Challenger disaster, there was no single smoking gun that we could comfortably point to as the cause of the problem with Toyota’s cars. Rather, a steady march of complicated components and failures of design, when combined, added up to a disaster for Toyota.

  In fact, even when we can find a single cause for a failure, it actually may be somewhat of a red herring in today’s complex systems. In 1996, an Ariane 5 rocket exploded, self-destructing thirty-nine seconds after launch. All four satellites on board were lost. Analysis of the failure revealed that the explosion was due to some older software code being used in the newer rocket under new conditions. But according to Dr. Homer-Dixon, no individual contractor was blamed. The explosion was less the fault of a single decision than of the incredible complexity of the entire system involved in launching this rocket into space. Other similar disasters, such as the Three Mile Island nuclear disaster, might also have an identifiable cause, but when it comes down to the real reason for the failure, it’s more accurate to say it was the system’s massive complexity, rather than any single component or choice.

  When we think of untangling massive complexity, we are drawn to the popular narrative of the Challenger. Even though the launch and operation of a space shuttle was an incredibly complex matter, we feel that by applying our ability to scrutinize sophisticated systems, break them down, and determine how they work and sometimes fail, we should be able to understand them. We owe this overconfidence to an idea that many of us are ensorcelled by: the unlimited potential of the human mind. We believe that if we just work hard enough, we can achieve a perfect understanding of everything around us, especially what we ourselves have built.

  We see this sentiment in what is termed the Whiggish view of progress, described by the writer Philip Ball as the belief that humanity is on “a triumphant voyage out of the dark ages of ignorance and superstition into the light of reason.” This view is found within science as well as among technophiles, as the historian Ian Beacock writes: “The tech industry tells a Whiggish tale about the digital ascent of humanity: from our benighted times, we’ll emerge into a brighter future, a happier and more open society in which everything has been measured and engineered into a state of perfect efficiency.” Surely, say those who adhere to such a viewpoint, this growth in efficiency and productivity presumes our continued ability to understand the phenomenal engineering we have constructed for such an uplifting purpose. This Whiggish perspective is related to the modern mind-set described by the sociologist Max Weber: a sense that “the world is disenchanted,” that “one can, in principle, master all things by calculation.”

  But more and more, this way of approaching complexity just doesn’t work. We are in a new era, one in which we are building systems that can’t be grasped in their totality or held in the mind of a single person; they are simply too complex. We are finding ourselves, expert or not, more often in the realm of Toyota’s “unintended acceleration,” where this old way of thinking will no longer suffice. These situations are not at the edges of our experience; they suffuse our lives.

  What Complex Systems Are—and Aren’t

  It’s worth taking a moment to talk about what we mean by “complicated” and “complex” systems. While in this book (and even in its title) I use these terms—complex and complicated—more or less interchangeably in their colloquial sense, there are important distinctions.

  Imagine water buoys, tied together, floating in the water. As a boat goes by, its wake generates small waves that begin moving one buoy, then another. But each buoy does not act alone. Its own motion, since it is connected by rope to other buoys of different weights and sizes, causes changes in the undulations of the others. These movements can even cause unexpected feedback, so that one buoy’s motion eventually comes back, indirectly, to itself. The boat’s simple wake has generated a large cascade of activity across this complex network of buoys. And if the boat had sped by in just a slightly different way—at another speed or angle—the motions of the buoys might have been entirely different.
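
  To make the buoy analogy concrete, here is a minimal, purely illustrative sketch in Python. It does not appear in the book, and every number in it (the coupling between buoys, the damping, the size of the wake) is an arbitrary assumption, chosen only to show how motion propagates along the ropes, feeds back, and diverges when the wake changes slightly.

def simulate(wake, n_buoys=5, steps=200, coupling=0.3, damping=0.05, dt=0.1):
    """Return final buoy displacements after a passing wake nudges buoy 0.

    All parameters are arbitrary; this is an illustration, not a physical model.
    """
    pos = [0.0] * n_buoys   # displacement of each buoy from rest
    vel = [0.0] * n_buoys   # velocity of each buoy
    vel[0] = wake           # the boat's wake nudges the first buoy
    for _ in range(steps):
        acc = []
        for i in range(n_buoys):
            # each buoy is pulled toward its roped neighbors and back toward rest,
            # so a disturbance spreads along the line and can feed back on itself
            neighbors = [pos[j] for j in (i - 1, i + 1) if 0 <= j < n_buoys]
            pull = sum(coupling * (p - pos[i]) for p in neighbors)
            acc.append(pull - damping * vel[i] - 0.1 * pos[i])
        for i in range(n_buoys):
            vel[i] += acc[i] * dt
            pos[i] += vel[i] * dt
    return pos

# two slightly different wakes produce noticeably different configurations
print(simulate(wake=1.00))
print(simulate(wake=1.05))

  The particular equations are only a stand-in; the point is the structure. Every buoy’s state depends on its neighbors’ states, so the system’s behavior cannot be read off from any single buoy in isolation.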