Giant Tech Firms Plan to Read Your Mind and Control Your Emotions. Can They Be Stopped?

[Google. Amazon. Facebook. Apple. We live within the digital worlds they have created, and increasingly there’s little chance of escape. They know our personalities. They record whether we are impulsive or prone to anxiety. They understand how we respond to sad stories and violent images. And they use this power, which comes from the relentless mining of our personal data all day, every day, to manipulate and addict us.

University of Tennessee law professor Maurice Stucke is part of a progressive, anti-monopoly vanguard of experts looking at privacy, competition, and consumer protection in the digital economy. In his new book, Breaking Away: How to Regain Control Over Our Data, Privacy, and Autonomy, he explains how these tech giants have metastasized into “data-opolies,” which are far more dangerous than the monopolies of yesterday. Their invasion of privacy is unlike anything the world has ever seen but, as Stucke argues, their potential to manipulate us is even scarier.

With these four companies’ massive and unprecedented power, what tools do we have to effectively challenge them? Stucke explains why current proposals to break them up, regulate their activities, and encourage competition fall short of what’s needed to deal with the threat they pose not only to our individual wallets and wellbeing, but to the whole economy — and to democracy itself.]

❈ ❈

Lynn Parramore: The big firms that collect and traffic in data — “data-opolies,” you call them — why do they pose such a danger?

Maurice Stucke: People used to say that dominant companies like Google must be benign because their products and services are free (or low-priced, like Amazon) and they invest a lot in R&D and help promote innovation. Legal scholar Robert Bork argued that Google can’t be a monopoly because consumers can’t be harmed when they don’t have to pay.

I wrote an article for Harvard Business Review revisiting that thinking and asking what harms the data-opolies can pose. I came up with a taxonomy of how they can invade our privacy, hinder innovation, affect our wallets indirectly, and even undermine democracy. In 2018 I spoke to the Canadian legislature about these potential harms and I was expecting a lot of pushback. But one of the legislators immediately said, “OK, so what are we going to do about it?”

In the last five or six years, we’ve had a sea change in the view towards the data-opolies. People used to argue that privacy and competition were unrelated. Now there’s a concern that not only do these giant tech firms pose a grave risk to our democracy, but the current tools for dealing with them are also insufficient.

I did a lot of research, spoke before many competition authorities, and heard the proposals they were considering. I realized there wasn’t a simple solution. That led to the book. I saw that even if all the proposals were enacted, there would still be shortcomings.

LP: What makes the data-opolies even more potentially harmful than traditional monopolies?

MS: First, they have weapons that earlier monopolies lacked. An earlier monopoly could not necessarily identify all the nascent competitive threats. But data-opolies have what we call a “nowcasting radar.” This means that through the flow of data they can see how consumers are using new products, how those products are gaining scale, and how they’re expanding. For example, Facebook (FB) had, ironically, a privacy app that one of its executives called “the gift that kept on giving.” Through the data collected through the app, they recognized that WhatsApp was a threat to FB as a social network, because it was starting to morph beyond a simple messaging service.

Another advantage is that even though the various data-opolies have slightly different business models and deal with different aspects of the digital economy, they all rely on the same anti-competitive toolkit — I call it “ACK – Acquire, Copy, or Kill.” They have greater mechanisms to identify potential threats and acquire them, or, if rebuffed, copy them. Old monopolies could copy the products, but the data-opolies can do it in a way that deprives the rival of scale, which is key. And they have more weapons to kill the nascent competitive threats.

The other major difference between the data-opolies today and the monopolies of old is the scope of anti-competitive effects. A past monopoly (other than, let’s say, a newspaper company) might just bring less innovation and slightly higher prices. General Motors might give you poorer-quality cars or less innovation, and you might pay a higher price. In the steel industry, you might get less efficient plants, higher prices, and so on (and remember, we as a society pay for those monopolies). But with the data-opolies, the harm isn’t just to our wallets.

You can see it with FB. It’s not just that they extract more money from behavioral advertising; it’s the effect their algorithms have on social discourse, democracy, and our whole economy (the Wall Street Journal’s “Facebook Files” really brought that to the fore). There are significant harms to our wellbeing.

LP: How is behavioral advertising different from regular advertising? An ad for a chocolate bar wants me to change my behavior to buy more chocolate bars, after all. What does it mean for a company like Facebook to sell the ability to modify a teenage girl’s behavior?

MS: Behavioral advertising is often presented as just a way to offer us more relevant ads. There’s a view that people have these preconceived demands and wants and that behavioral advertising is just giving them ads that are more relevant and responsive. But the shift with behavioral advertising is that you’re no longer just predicting behavior, you’re manipulating it.

Let’s say a teenager is going to college and needs a new laptop. FB can target her with ads for laptops that fit her particular needs, lowering her search costs and making her better off as a result. That would be fine — but that’s not where we are. Innovations are focused on understanding emotions and manipulating them. A teenage girl might be targeted not just with ads, but with content meant to capture and sustain her attention. She will start to get inundated with images that tend to reinforce her sense of inferiority and make her feel less secure. Her well-being is reduced. She becomes more likely to be depressed. For some users of Instagram, there are increased thoughts of suicide.

And it’s not just the data-opolies. Gambling apps are geared towards identifying people prone to addiction and manipulating them to gamble. These apps can predict how much money they can make from these individuals and how to entice them back, even when they have financial difficulties. As one lawyer put it, these gambling apps turn addiction into code.

This is very concerning, and it’s going to get even worse. Data-opolies are moving from addressing preconceived demands to driving and creating demands. They’re asking: what will make you cry? What will make you sad? Microsoft has an innovation whereby a camera tracks which particular events cause you to have particular emotions, building a customized map of stimuli for each individual. It’s like tapping your leg in just the right spot to trigger a reflex. There’s a marketing saying: “If you get ’em to cry, you get ’em to buy.” Or, if you’re the type of person who responds to violent images, you’ll be delivered to a marketplace targeted to your psyche to induce the behavior to shop, let’s say, for a gun.

The scary thing is that these tools aren’t being confined to behavioral advertising; political parties are using similar tools to drive voter behavior. You get a bit of insight into this with Cambridge Analytica. It wasn’t just about targeting an individual with a tailored message to get them to vote for a particular candidate; it was about targeting other citizens who were not likely to vote for your candidate to dissuade them from voting. We’ve already seen from the Facebook Files that the algorithms created by the data-opolies are also causing political parties to make their messaging more negative, because that’s what’s rewarded.

LP: How far do you think the manipulation can go?

MS: The next frontier is actually reading individuals’ thoughts. In a forthcoming book with Ariel Ezrachi, How Big Tech Barons Smash Innovation and How to Strike Back, we talk about an experiment conducted at the University of California, San Francisco, where for the first time researchers were able to decode an individual’s thoughts. A person suffering from speech paralysis would try to say a sentence, and when the algorithm deciphered the brain’s signals, the researchers were able to understand what the person was trying to say. When the researchers asked the person, “How are you doing?” the algorithm could decipher his response from his brain activity. The algorithm could decode about 18 words per minute with 93 percent accuracy. First, the technology will decipher the words we are trying to say, identifying from our subtle brain patterns a lexicon of words and vocabulary. As the AI improves, it will next decode our thoughts.

It turns out that FB was one of the contributors funding the research — and we wondered why. Well, that’s because they’re preparing headsets for the metaverse that not only will likely transmit all the violence and strife of social media but can potentially decode an individual’s thoughts and determine how they would like to be perceived and present themselves in the metaverse. You’re going to have a whole different realm of personalization.

We’re really in an arms race whereby the firms can’t unilaterally afford to de-escalate because then they lose a competitive advantage. It’s a race to better exploit individuals. As it has been said, data is collected about us, but it’s not for us.

LP: Many people think more competition will help curtail these practices, but your study is quite skeptical that more competition among the big platform companies will cure many of the problems. Can you spell out why you take this view? How is competition itself toxic in this case?

MS: The assumption is that if we just rein in the data-opolies and maybe break them up or regulate their behavior, we’ll be better off and our privacy will be enhanced. There was, to a certain extent, greater protection of our privacy while these data-opolies were still in their nascent stages. When MySpace was still a significant factor, FB couldn’t afford to be as rapacious in its data collection as it is now. But now you have this whole value chain built on extracting data to manipulate behavior, so even if this became more competitive, there’s no assurance that we would benefit as a result. Instead of having Meta, we might have FB broken apart from Instagram and WhatsApp. Well, you’d still have firms dependent on behavioral advertising revenue competing against each other to find better ways to attract us, addict us, and then manipulate our behavior. You can see the way this has happened with TikTok. Adding TikTok to the mix didn’t improve our privacy.

LP: So one more player just adds one more attack on your privacy and wellbeing?

MS: Right. Ariel and I wrote a book, Competition Overdose, in which we explored situations where competition can be toxic. People tend to assume that if behavior is pro-competitive it’s good, and if it’s anti-competitive, it’s bad. But competition can be toxic in several ways, like when it’s a race to the bottom. Sometimes firms can’t unilaterally de-escalate, and by adding more firms to the mix, you just get a quicker race to the bottom.

LP: Some analysts have suggested that giving people broader ownership rights to their data would help control the big data companies, but you’re skeptical. Can you explain the sources of your doubts?

MS: A properly functioning market requires certain conditions to be present. When it comes to personal data, many of those conditions are absent, as the book explores.

First, there’s the imbalance of knowledge. Markets work well when the contracting parties are fully informed. When you buy a screw in a hardware store, for example, you know the price before purchasing it. But we don’t know the price we pay when we turn over our data, because we don’t know all the ways our data will be used or the attendant harm to us that may result from that use. Suppose you download an ostensibly free app, but it collects, among other things, your geolocation. No checklist tells you that this geolocation data could be used by stalkers, or by the government, or to manipulate your children. We just don’t know. We go into these transactions blind. When you buy a box of screws, you can quickly assess its value: you just multiply the price of one screw by the count. But you can’t do that with data points. A lot of data points together can be far more damaging to your privacy than the sum of each data point. It’s like trying to assess a painting by Georges Seurat by valuing each dot. You need to see the big picture; but when it comes to personal data, the only one who has that larger view is the company that amasses the data, not only across its own websites but by acquiring third-party data as well.

So we don’t even know the additional harm that each extra data point might do to our privacy. We can’t assess the value of our data, and we don’t know the cost of giving it up. We can’t really say: all right, here’s the benefit I receive — I get to use FB — and I understand the costs to me.
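[To make the “big picture” point concrete: below is a minimal, hypothetical Python sketch, with every name and record invented for illustration, showing how three attributes that each match several people can, in combination, single out exactly one person. It illustrates the well-known re-identification problem rather than anything specific from Stucke’s book.]

```python
# A toy illustration of the "Seurat" point above: each data point
# alone reveals little, but the combination can single you out.
# All names and records below are fabricated for this sketch.

public_roster = [  # e.g., a small public voter roll
    {"name": "A. Jones", "zip": "37916", "birth_year": 1968, "sex": "F"},
    {"name": "B. Smith", "zip": "37916", "birth_year": 1975, "sex": "M"},
    {"name": "C. Patel", "zip": "37920", "birth_year": 1968, "sex": "F"},
    {"name": "D. Lee",   "zip": "37916", "birth_year": 1968, "sex": "M"},
]

# An "anonymized" record: no name, just three innocuous attributes.
leaked = {"zip": "37916", "birth_year": 1968, "sex": "F"}

def matches(keys):
    """Names on the roster consistent with the leaked record on `keys`."""
    return [p["name"] for p in public_roster
            if all(p[k] == leaked[k] for k in keys)]

# Each attribute alone matches two or three people on the roster...
for key in ("zip", "birth_year", "sex"):
    print(key, "->", matches([key]))

# ...but the combination pins the record to exactly one person.
print("combined ->", matches(["zip", "birth_year", "sex"]))
```

[On this toy roster, each single attribute matches two or three people, while the combination matches only one, which is why the privacy cost of a data set can exceed the sum of its individual data points.]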

Another problem is that normally a property right involves something that is excludable, definable, and easy to assign, like an ownership interest in a piece of land. You can put a fence around it and exclude others from using it. It’s easy to identify what’s yours. You can then assign it to others. But with data, that’s not always the case. There’s an idea called “networked privacy,” and the concern there is that the choices others make about the data they sell or give up can then have a negative effect on your privacy. For example, maybe you decide not to give your DNA data to 23andMe. Well, if a relative gives up their DNA, that’s going to implicate your privacy: the police can look at a DNA match and say, OK, it’s probably someone within a particular family. The choice of one can impact the privacy of others. Or perhaps someone posts a picture of your child on FB that you didn’t want posted. Or someone sends you a personal message through Gmail or another service with few privacy protections. So even if you have a property right to your data, the choices of others can adversely affect your privacy.

If we have ownership rights in our data, how does that change things? When Mark Zuckerberg testified before Congress after the Cambridge Analytica scandal, he was constantly asked who owns the data. He kept saying the user owns it. That was hard for the senators to fathom, because users certainly didn’t consent to having their data shared with Cambridge Analytica to help influence a presidential election. FB can tell you that you own the data, but to talk with your friends, you have to be on the same network as your friends, and FB can easily say to you, “OK, you might own the data, but to use FB you’re going to have to give us unparalleled access to it.” What choice do you have?

The digital ecosystem has multiple network effects whereby the big get bigger and it becomes harder to switch. If I’m told I own my data, it’s still going to be really hard for me to avoid the data-opolies. To do a search, I’m still going to use Google, because if I go to DuckDuckGo I won’t get as good a result. If I want to see a video, I’m going to go to YouTube. If I want to see photos of the school play, they’re likely to be on FB. So when the inequality in bargaining power is so profound, owning the data doesn’t mean much.

These data-opolies make billions in revenue from our data. Even if you gave consumers ownership of their data, these powerful firms would still have a strong incentive to keep getting that data. So another area of concern among policymakers today is “dark patterns” — basically, using behavioral economics for ill. Companies manipulate behavior through the way they frame choices, setting up all kinds of procedural hurdles that keep you from learning how your data is being used. They can make it very difficult to opt out of certain uses. They make the desired behavior frictionless and pile friction onto the undesired behavior. They wear you down.

LP: You’re emphatic about the many good things that can come from sharing data in ways that do not threaten individuals. You rest your case on what economists call the “non-rivalrous” character of many forms of data — that one person’s use of data does not necessarily detract at all from other good uses of the data by others. You note how big data firms, though, often strive to keep their data private in ways that prevent society from using it for our collective benefit. Can you walk us through your argument?

MS: This can happen on several different levels. On one level, imagine all the insights across many different disciplines that could be gleaned from FB’s data. If the data were shared with multiple universities, researchers could glean many insights into human psychology, political philosophy, health, and so on. Likewise, the data from wearables could be a game-changer in health, giving us better predictors of disease or better identifiers of things to avoid. Imagine all the medical breakthroughs if researchers had access to this data.

On another level, the government can lower the time and cost to access this data. Consider all the data being mined on government websites, like the Bureau of Labor Statistics. It goes back to John Stuart Mill’s insight that one of the functions of government is to collect data from all different sources, aggregate it, and then allow its dissemination. What he grasped is the non-rivalrous nature of data, and how data can help inform innovation, help inform democracy and provide other beneficial insights.

So when a few powerful firms hoard personal data, they capture some of its value, but a lot of potential value is left untapped. This is particularly problematic when innovations in deep learning for AI require large data sets. To develop this deep learning technology, you need access to the raw ingredients. But the ones who possess these large data sets share them selectively with institutions, and only for the research purposes they want. This leads to the creation of “data haves” and “have-nots.” A data-opoly can also affect the path of innovation.

Once you see the data hoarding, you see that a lot of value to society is left on the table.

LP: So with data-opolies, the socially useful things that might come from personal data collection are being blocked while the socially harmful things are being pursued?

MS: Yes. But the fact that data is non-rivalrous doesn’t necessarily mean that we should then give the data to everyone who can extract value from it. As the book discusses, many can derive value from your geolocation data, including stalkers and a government surveilling its people. The fact that they derive value does not mean society overall derives value from that use. The Supreme Court held in Carpenter v. United States that the government needs a search warrant supported by probable cause before it can access our geolocation data. But the Trump administration said, wait, why do we need a warrant when we can just buy geolocation data through commercial databases that map our movements every day through our cellphones? So they actually bought geolocation data to identify and locate people who were in this country illegally.

Once the government accesses our geolocation data through commercial sources, it can put that data to different uses. Think about how this data could be used in connection with abortion clinics. Roe v. Wade was built on the idea that the Constitution protects privacy, which came out of Griswold v. Connecticut, where the Court formulated a right of privacy to enable married couples to use birth control. Now some of the justices believe that the Constitution really says nothing about privacy and that there’s no fundamental, inalienable right to it. If that’s the case, the concerns are great.

LP: Your book is critically appreciative of the recent California and European laws on data privacy. What do you think is good in them and what do you think is not helpful?

MS: The California Privacy Rights Act of 2020 was definitely an advance over the 2018 statute, but it still doesn’t get us all the way there.

One problem is that the law allows consumers to opt out of what’s called “cross-context behavioral advertising.” You can say, “I don’t want a cookie that tracks me as I go across websites.” But it doesn’t prevent the data-opolies or any platform from collecting and using first-party data for behavioral advertising, unless it’s considered sensitive personal information. So FB can continue to collect information about us while we’re on its social network.

And it’s actually going to tilt the playing field even more toward the data-opolies, because the smaller players don’t have much first-party data (data they collect directly) and so must rely on tracking across multiple websites and on data brokers to collect information.

Let’s take an example. The New York Times is going to have good data about its readers when they’re reading an article online. But without third-party trackers, they’re not going to have much data about what the readers are doing after they’ve read it. They don’t know where the readers went — what video they watched, what other websites they visited.

As we spend more time within the data-opolies’ ecosystems, these companies are going to have more information about our behavior. Paradoxically, opting out of cross-context behavioral advertising is going to benefit the more powerful players who collect more first-party data — and it’s not just any first-party data, it’s the first-party data that can help them better manipulate our behavior.

So the case the book makes is that if we really want to get things right, if we want to regain our privacy, our autonomy, and our democracy, then we can’t just rely on existing competition policy tools. Nor can we rely solely on the proposals from Europe or other jurisdictions. They’re necessary, but they’re not sufficient. To right the ship, we have to align privacy, competition, and consumer protection policies. There will be times when privacy and competition conflict. That’s unavoidable, but we can minimize the potential conflict by first harmonizing the policies. One way to do that is to make sure the competition we get is a healthy form of competition that benefits us rather than exploits us. Doing that really means going after behavioral advertising; none of the policy proposals to date have really taken on behavioral advertising and the perverse incentives it creates.

[Lynn Parramore is Senior Research Analyst at the Institute for New Economic Thinking. Courtesy: Institute for New Economic Thinking website.]

