
How Personalization Leads to Homogeneity

Tech companies are perpetuating a bleak view of humans as programmable cogs



The phrase "people are products” has become a cliché. That tells you something is wrong. It implies normalization of some rather heinous ideas. To buy and sell people rings of slavery. It reduces people to things, objects, resources, or mere means. As one of us (Deven Desai) wrote: “Treating a person like a resource is [a fundamental] error.” Somehow, magically to our minds, these negative associations fall away when the medium of exchange is digital data and human attention.

We need to examine the role of personalization in programming people as products. So, let’s consider how personalization of inputs—stimuli, choice architecture, messages, inducements—enables standardization of outputs—responses, behaviors, beliefs and even people.

A few familiar examples show how personalization can lead to homogeneous behavior. Suppose we’d like to induce a group of people to behave identically. We might personalize the inducements. For example, if we’re hoping to induce people to contribute $100 to a disaster relief fund, we might personalize the messages we send them. The same applies if we’re hoping to nudge people to visit the doctor for an annual check-up, or to get them to click on an advertisement. Effective personalized ads produce a rather robotic response: clicks. Simply put, personalized stimuli can be an effective way to produce homogeneous responses.
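To make the pattern concrete, here is a minimal, hypothetical sketch in Python of personalizing the input while standardizing the intended output. The motivator categories, message templates and names are illustrative assumptions, not a description of any real fundraising or advertising system.

```python
# A minimal, hypothetical sketch of the pattern described above:
# the *input* (the message) is personalized per person, while the intended
# *output* is identical for everyone (a $100 donation). The motivator
# categories, templates and names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Person:
    name: str
    motivator: str  # e.g. "social_proof", "empathy", or "urgency"


# One message template per presumed motivator.
TEMPLATES = {
    "social_proof": "{name}, most of your neighbors have already given $100 to the relief fund.",
    "empathy": "{name}, your $100 keeps one displaced family sheltered tonight.",
    "urgency": "{name}, only 24 hours left for your $100 gift to be matched.",
}


def personalized_inducement(person: Person) -> str:
    """Tailor the stimulus to the individual; the target behavior stays fixed."""
    template = TEMPLATES.get(person.motivator, TEMPLATES["empathy"])
    return template.format(name=person.name)


people = [
    Person("Ada", "social_proof"),
    Person("Ben", "empathy"),
    Person("Cleo", "urgency"),
]

for p in people:
    print(personalized_inducement(p))
# Three different inputs, one standardized output being optimized for:
# a $100 contribution from each recipient.
```

Each recipient sees a different message, but success is measured by a single, uniform action performed by everyone.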


This personalized-input-to-homogeneous-output (“PIHO”) dynamic is quite common in the digital networked environment. What type of homogeneous output would digital tech companies like to produce? Often, companies describe their objective as “engagement,” and that sounds quite nice, as if users are participating actively in very important activities. But what they mean is much narrower. Engagement usually refers to a narrow set of practices that generate data and revenues for the company, directly or via its network of side agreements with advertisers, data brokers, app developers, AI trainers, governments and so on.

For example, Facebook offers highly personalized services on a platform optimized to produce and reinforce a set of simple responses—scrolling the feed, clicking an ad, posting content, liking or sharing a post. These actions generate data, ad revenue, and sustained attention. It’s not that people always perform the same action; that degree of homogeneity and social control is neither necessary for Facebook’s interests nor required for our concerns. Rather, for many people much of the time, patterns of behavior conform to “engagement” scripts engineered by Facebook.

A very similar story can be told for many other platforms, services and apps. Of course, the business models, strategies and even the meaning of “engagement” vary, but PIHO remains a consistent logic. It’s a potent blend of Frederick Taylor’s “scientific management of human beings,” B.F. Skinner’s operant conditioning, and modern behavioral engineering methods, such as nudging.

PIHO requires personal data and sufficient means for translating such data into effective inducement and reinforcement. In other words, whatever intelligence is gleaned about a person from collected data must be actionable and must actually impact the person’s behavior.

Some types of data are more useful than others for inducing and reinforcing the desired response/behavior, and the relative utility of different data types may vary across people and contexts. Not surprisingly, many digital tech companies collect as much data as possible, either directly from consumers or indirectly from data brokers or partners with whom they have side agreements. They run experiments on consumers. They use various data processing techniques to identify patterns, test hypotheses, and learn about how people behave, whom they interact with, what they say and do, and what works best in shaping their behavior to fit the companies’ interests.
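The experimental logic is simple, even when the systems behind it are not. Here is a minimal, hypothetical sketch of that kind of experiment; the message variants and click rates are simulated assumptions, not data from any real platform.

```python
# A minimal, hypothetical sketch of the kind of experimentation described
# above: send several message variants to random groups of users, measure
# which variant best produces the desired response (a click), and keep the
# winner. The variants and click rates are simulated assumptions.

import random

random.seed(0)

VARIANTS = {
    "A": "Your friends are waiting for you.",
    "B": "You have 3 unread notifications.",
    "C": "See what you missed today.",
}

# Simulated per-variant click propensities, standing in for real user behavior.
TRUE_CLICK_RATE = {"A": 0.08, "B": 0.15, "C": 0.11}


def run_experiment(users_per_variant=1000):
    """Expose each variant to a random group and record the observed click rate."""
    observed = {}
    for variant in VARIANTS:
        clicks = sum(
            random.random() < TRUE_CLICK_RATE[variant]
            for _ in range(users_per_variant)
        )
        observed[variant] = clicks / users_per_variant
    return observed


results = run_experiment()
winner = max(results, key=results.get)
print(results)
print(f"Roll out variant {winner}: {VARIANTS[winner]!r}")
```

Whatever shapes behavior most reliably is what gets rolled out; the user’s interests enter the loop only insofar as they predict the desired response.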

For example, companies map users’ social graphs to learn about their relationships, including the strength of different influences on individuals. Just as a choice architect who wants to nudge people to vote or file tax returns might let them know how many neighbors have done so, a social media company can leverage social graph insights to induce people to log in, create posts, read posts, share posts, and more. The stimulus might be simple: a personalized e-mail telling a user that a friend tagged them in a post. The goal is to get the user to the site (or the app), so that the user comments, posts, or tags. The hope is that the user sees ads and clicks them, reads and shares posts, plays games. In short, the ideal is that the user does anything that deepens the habit of staying logged in and using the service. If that happens, the company has succeeded. It has induced and reinforced engagement.
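A minimal, hypothetical sketch of such a social-graph-driven nudge is below. The graph, tie-strength weights and message wording are illustrative assumptions, not any platform’s actual system.

```python
# A minimal, hypothetical sketch of a social-graph-driven nudge like the one
# described above. The graph, tie-strength weights and message wording are
# illustrative assumptions, not any platform's actual system.

# Friendship graph: edge weight stands in for tie strength
# (e.g. how often two people interact).
SOCIAL_GRAPH = {
    "dana": {"eli": 0.9, "fay": 0.4, "gus": 0.1},
    "eli": {"dana": 0.9, "gus": 0.7},
    "fay": {"dana": 0.4},
    "gus": {"dana": 0.1, "eli": 0.7},
}


def strongest_tie(user):
    """Pick the friend most likely to pull this user back onto the platform."""
    friends = SOCIAL_GRAPH[user]
    return max(friends, key=friends.get)


def nudge_email(user, event="tagged you in a post"):
    """Compose a personalized stimulus aimed at a standardized response:
    log in, view the post, keep engaging."""
    friend = strongest_tie(user)
    return f"To: {user} | {friend.capitalize()} {event}. See it now."


for user in SOCIAL_GRAPH:
    print(nudge_email(user))
```

The notification differs for every user, but it is engineered to elicit the same narrow set of behaviors from all of them.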

When digital tech companies deliver personalized services and content, there is always a feedback loop. They’re constantly collecting data and learning about you to fuel the service. But that’s just the first loop to be aware of. Additional feedback loops cross sectors and span the networked environment; digital tech companies often have side agreements with each other. Ever notice those Facebook, Twitter and other buttons on websites you visit? Ever use your social media credentials to log in to other sites? (If you really want to see feedback loops and how data flows, check out your advertising preferences on Facebook.)

What’s tricky about PIHO on digital platforms is the fact that the personalized stimuli do, to some degree, satisfy the interests of users. In other words, personalization benefits users directly because they’re getting news, status updates, videos, and other stimuli customized to their preferences. Inducing “engagement” requires that users get something they want, after all. But that doesn’t mean the benefits of personalization flow exclusively or even mostly to users. Personalization makes it cheaper and easier both to serve users and to script their behaviors.

It can even go deeper than that. Feedback effects and repeated, sustained engagement can, but don’t necessarily, shape beliefs, expectations and preferences. When those effects are coupled with design and engineering practices informed heavily by behavioral data (and not necessarily personalized), addiction, dependence and a host of other concerns arise. In the moment, people may be satiated, but that doesn’t mean they like who or what they’ve become if and when they reflect on their behaviors.

Engagement could mean something more, something great for humanity, and digital networked technologies could pursue such engagement, but that’s not really what we get in our modern digital networked world. Digital tech companies could, in theory, personalize goods and services in a manner geared toward your interests. Instead, they mostly pursue their own interests and cater to those on the other sides of the market—that is, those who pay the bills: advertisers, partners collecting data and training AI, governments, etc. It’s just capitalism at work, one might say, and that wouldn’t be wrong. But that doesn’t justify the practice. Nor does it excuse the potentially dehumanizing consequences of treating people as products to be standardized. Digital tech companies adopt—and worse, perpetuate—an impoverished view of humans as predictable and passive consumers who (can be trained to) behave according to standardized scripts. In that world, we are nothing more than programmable cogs.

We need to change the social script that not only permits but also enables and encourages such practices.

Brett Frischmann is the Charles Widger Endowed University Professor in Law, Business and Economics at Villanova University. His latest book is Re-Engineering Humanity (Cambridge University Press, 2018). His novel, Shephard's Drone, will be out on February 3, 2019.


Deven Desai is an associate professor at the Scheller College of Business at the Georgia Institute of Technology. Among other professional experience, he has worked for Google, Inc. as a member of the policy team. He has received research support in the form of unrestricted gifts to the Georgia Tech Research Institute made by Facebook and Google.
