In our digital, networked world, metaphors that become memes (such as “cloud” computing, “smart” tech and the “Internet of Things”) can be powerful tools for techno-social engineering humans—meaning that they shape conversations and beliefs about reality; perpetuate illusions; and engineer complacency. As George Lakoff and Mark Johnson showed long ago in their classic Metaphors We Live By, metaphors are ubiquitous and fundamental to how humans communicate, understand and experience the world. Yet they’re always incomplete; they necessarily emphasize a partial view of something.
The problem with digital-tech metaphors is that what’s left out is usually what’s most important. They obscure more than they reveal and generate power by distorting conversations, expectations and understanding of the relationships between technology and humanity. A few examples demonstrate the point.
In the early days, a simple flowchart explained how the internet worked: People around the world connected to each other via local internet service providers and a puffy cloud. So, for example, person A would connect to ISP A and then to the cloud—and then out of the cloud there would be a connection to ISP B and then to person B.
What happened in the cloud? Various networks interconnected to exchange and route traffic, but for most people, most of the time, those details didn’t matter and, besides, were too complicated to explain. The cloud image served as an epistemological black box within which complexity was dumped and hidden.
But “cloud” is no longer just an image at the center of internet drawings. It is now used to describe a host of other services beyond network interconnection, routing and so forth. When people refer to cloud services, storage or computing, they often are attempting to black-box those services as if the details still didn’t matter and were still too complicated to explain.
Except they do matter, and they’re not hard to explain at all.
All you need to do is replace “cloud” with “on someone else’s computer.” To see the difference, stop and think about why, for example, storing all of your memories captured in photos, messages or other data more generally “on someone else’s computer” might matter. You might care about the trustworthiness of the service provider and the owner of the computers. They’re not necessarily the same!
You might care about whether those computers are secure, where they are located (e.g., what countries?), and who else can obtain access to them. Frankly, only with a more nuanced understanding of these services can we evaluate what it means to outsource memory to third-parties and question who is doing what thinking, who gains what power, and how such outsourcing may affect our basic human capabilities.
Much like “cloud,” “smart” is a buzzword used for a wide range of digital tech—smart phone, smart grid, smart car, smart clothing, smart toaster, etc. What these and many other tools labeled “smart” have in common is that the tools (supposedly) harness data and computational power to improve or add functionality. The metaphor appeals to our inclination to anthropomorphize tech. We attribute some intelligence to the tools, even if we understand that it is artificial. But the type of AI, how it works (or doesn’t), who owns or controls it and many other details that vary tremendously across examples are hidden inside an epistemological black box.
“Smart” conflates different forms of intelligence and makes it harder to evaluate differences in degree and kind. Smart for a toaster is radically different from smart for a car, yet as a metaphorical meme, smart is enough to preempt nuanced evaluation. This is a powerful way to attract investors, sell products, and smooth the path for rapid technological adoption. Smart seems unabashedly good, certainly better than dumb.
But that’s ridiculous if you stop and think about it. Dumb tech is sometimes better. Cash, for example, is a very useful dumb technology. The smart/dumb dichotomy is itself pretty dumb. Evaluation of “smartness” is almost always a matter of degree that depends on the technology, the people involved, and the context.
Insert “supposedly” in front of “smart” whenever the word is used. Don’t be complacent. Break open the black box and think critically about who gets smarter, how and for what purpose.
Internet of Things
The internet connects people. The incredible social value of the internet is attributable to humans communicating with each other, whether in economic transactions, social interactions, political debate or countless other innovative and creative activities.
The Internet of Things is a powerful metaphor that replaces people with things. These “things” include sensors, phones, devices, automobiles, homes, etc.—all components of supposedly smart techno-social environments. Where did the people go? They fade into the background, as passive consumers satiated by the supposedly smart techno-social system and hardly distinguishable from the devices and other programmed and managed artifacts.
Let me close with a final example: the pernicious use of “free” to describe online content and services. The metaphor works because online content and services are often free in the narrow sense that there is no monetary price to be paid. But keep in mind that there’s always a price. The myth of free has shaped the beliefs, preferences and expectations of millions, and that’s fueled surveillance capitalism.
Replace “free” with “paid for with data” and “possibly paid for with attention, labor, trust and even your mind.” Now you can begin to evaluate what’s hidden within the box.
In Re-Engineering Humanity, we dig into these examples and many more. One of our primary concerns is that fetishizing supposedly smart tech puts humanity itself at risk. As we collectively race down the path toward supposedly smart techno-social systems that efficiently govern more and more of our lives, we risk outsourcing too much of what matters about being human and becoming increasingly predictable—and, worse, programmable, like mere cogs in a machine.