Machines Like Us

Cory Doctorow on the True Dangers of Surveillance Capitalism

Episode Summary

Where does the tech industry’s power lie? Are these companies “mind-control” platforms capable of influencing elections through algorithmic muscle, or does their true threat to society lie in market concentration?

Episode Notes

Where does the tech industry’s power lie? Are these companies “mind-control” platforms, as some have described them, capable of influencing everything from consumer choices to election results, or does their true threat to society lie in market concentration?

In this episode of Big Tech, Taylor Owen speaks with Cory Doctorow, a science fiction author, activist and journalist. Doctorow’s latest book, How to Destroy Surveillance Capitalism, argues that big tech’s purported powers of manipulation and control, peddled to advertisers and based on an overcollection of our data, are essentially an illusion. “Being able to target cheerleaders with cheerleading uniform ads does not make you a marketing genius or a mind controller. It just makes you someone who’s found an effective way to address an audience, so that even though your ad may not be very persuasive, you’re not showing an unpersuasive ad to someone who will never buy a cheerleading uniform,” Doctorow explains. 

Doctorow’s view is that the threats big tech presents to society are far less sinister than tech critics such as Shoshana Zuboff and Tristan Harris make them out to be. Rather, the big five’s monopolistic practices are the real issue to wrestle with.

Episode Transcription

This transcript was completed with the aid of computer voice recognition software. If you notice an error in this transcript, please let us know by contacting us here: https://www.cigionline.org/contact

 

Taylor Owen: Hi, I'm Taylor Owen, professor of public policy at McGill and a CIGI Senior Fellow, and this is Big Tech.

 

So I have to admit, if I were you, I might not have started listening to this podcast. And before you unsubscribe and start listening to Joe Rogan or Kara Swisher or Ezra Klein, let me explain. There are now a lot of podcasts about tech and society, and the discourse in this space is getting a bit tiresome. It feels like we're having the same conversation over and over and over again. In the span of a little over a decade, we've gone from completely ignoring Silicon Valley to blaming big tech for what seems like all of our collective failures. Racism is on the rise, anti-vaxxers threaten our COVID recovery, American democracy is in decline; it's all big tech's fault. Now, I'm not saying there aren't a ton of issues with social media and the internet more broadly. There are. In fact, I've spent the last decade researching, writing about, and working on policy solutions for just these issues. Big tech is without a doubt a big problem. But what I've found is these are deeply complex challenges that go way beyond Silicon Valley. So maybe we need to start looking beyond Silicon Valley as well for the answers, which brings me to this podcast. It's called Big Tech, so of course we're going to talk about technology. But I want to dramatically expand this conversation. Are all of these tech problems or are some of these just old-fashioned capitalism problems? What sort of philosophical frameworks could help us navigate our post-truth era? What can we learn from those who study and understand other global crises: climate change, global migration, structural racism or inequality? How is our technology infrastructure intertwined with our politics, our colonial past, and with power? And what does it mean to be human in an emerging age of AI and biotech? The reality is that technology doesn't operate in a vacuum. Tech problems are rarely, if ever, just tech problems. They reflect issues with our social structures, our political systems, and our economic orthodoxies. It's a big, tangled web, and I'm excited to try and untangle it with you.

 

If you've read anything about social media in the past five years, you've definitely heard the term surveillance capitalism. It's a phrase that Shoshana Zuboff coined in her 2019 book, The Age of Surveillance Capitalism.

 

[CLIP] Shoshana Zuboff: It is a power based on an increasingly ubiquitous, pervasive digital infrastructure that has been commandeered by this economic logic to shape our behaviour in a way that is aligned with its interests, and it is radically indifferent to everything else.

SOURCE: Access Now YouTube Channel https://youtu.be/FX2g6xPeftA

“Real corporate accountability for surveillance capitalism with Shoshana Zuboff and Chris Gilliard”

July 28, 2020

 

Taylor Owen: Her theory is essentially that big tech is mining our data, usually without us realizing it, and then using those data along with machine learning to control and manipulate our behaviour. This could be to nudge us to buy a particular product, but it could also be to get us to join an extremist group on Facebook or to vote for a particular candidate, or just to keep scrolling. For Zuboff, mass data is the commodity that fuels the attention economy. And she argues that this is a new rogue form of capitalism. And here's the thing: in the last couple of years, Zuboff's theory has sort of become gospel.

 

[CLIP] Tristan Harris: So they've unleashed this civilization-scale mind control machine that they don't even know what thoughts it's pushing into two billion people's minds.

SOURCE: Bloomberg Technology YouTube Channel https://youtu.be/nay5w-FC08Q

“Tristan Harris Says Tech Companies Have Opened Pandora's Box”

January 17, 2018

 

Taylor Owen: Zuboff and other tech critics like Tristan Harris are convinced that big tech has developed a super weapon capable of literally changing the way we think and the way we act. And I must admit, I agree with much of her argument. I have testified to governments alongside her and have learned a tremendous amount from her work. Like all big, ambitious, and important academic ideas, it has challenged and enabled us to look at the world differently. But my guest today isn't so sure. Cory Doctorow is a science fiction author, activist, and journalist. He's just written a book called How to Destroy Surveillance Capitalism, which you can find online for free. Cory agrees that big tech poses an existential threat, but he doesn't buy into this idea that they can actually influence our behaviour. Instead, he argues that big tech's power doesn't really come from the tech part, it comes from the big part.

 

Cory Doctorow: The weapon they have is not an influence machine. I think it's a monopoly.

 

Taylor Owen: It's an argument with serious implications. A lot of the policy fixes in this space actually give tech companies more power, not less. For example, if we're trying to clean up the online discourse, many look to Facebook to start policing speech more seriously on their platform. But if Cory's right, that's the last thing we want to be doing because it'll only give big tech more power. This is exactly the kind of conversation I want to be having on this show. As you'll hear, I don't always agree with him, but Cory challenges a lot of the conventional wisdom about tech. He does so from a position of deep engagement in the tech community, decades of activism for the rights of citizens online, and a career writing speculative science fiction that has defined the zeitgeist of the internet age. And in taking a step back like he does in this new book, he makes a pretty compelling case that this isn't a tech problem, it's a capitalism problem. Here is Cory Doctorow.

 

Taylor Owen: Cory Doctorow, welcome to the podcast.

 

Cory Doctorow: Thank you very much. It is a pleasure to be on the podcast.

 

Taylor Owen: Thanks. So I wanted to focus this conversation on your response book to Shoshana Zuboff's concept of surveillance capitalism. But first, I was really struck by something you wrote in Slate recently, describing this scene outside a gun store and being concerned that somehow you were responsible for this idea that people were panic-buying guns in LA. Can you describe why you thought that? It felt, to a certain degree, like a merging of your science fiction, activism, and policy worlds. How did that play out?

 

Cory Doctorow: Well, I don't know that I am solely responsible for this, but I think that science fiction is a pulp literature, and that's great. I love pulp literature. Pulp literature is plot-forward, and that means that you're always looking for ways to make things exciting and keep them exciting. And with the two basic plots of man against nature and man against man, you can get a two-fer when you have man against nature against man: the tsunami blows your house down, your neighbour comes over to eat you. So we see that as a recurring motif in fiction, so much so that no one ever questions it. Like if I write a scene in which there is a blackout and then I write in the next scene that there are roving gangs in the street taking advantage of it, no one's going to go, "That's stupid. That's not what people do during blackouts." But it isn't what people do during blackouts. In fact, it happens so infrequently, and it is so noteworthy on those rare occasions when that kind of thing happens, that we make a big news deal out of it because it's rare. So I think that the story we tell ourselves in our fiction works like an intuition pump, like a Daniel Dennett intuition pump, in the sense of being a kind of rehearsal for an extreme situation. And in extreme situations, you don't have a lot of time for reflection. You just reach for your rehearsed response. So I worry that we, in our rush to make plot, have written the world a story that's not true and, not only that, is counterproductive, because the only way out of a crisis is cooperation, I mean, by definition, right? Even if you think that you're going to get one of the few lifeboats on the Titanic by elbowing everyone else in the guts and escaping, you still only get back to land if someone comes and picks you up, that is, unless there is some cooperation. The war of all against all does not produce a stable civilization. And it's not like I don't think that we'll have crises, bad ones. I don't think that makes me a dystopian or a pessimist. I think that the belief that we won't have a crisis makes you irresponsible. It makes you the person that decided that the Titanic didn't need lifeboats to begin with. Planning for things going wrong is good planning. It's stuff we need to rehearse. But what we need to rehearse is how we come together, how, in times when our structures fall apart, we can jury-rig alternative structures to carry us through while we rebuild them. And anything else just makes you into some Rambo-addled, terrified bed-wetter, getting in the way of the people trying to fix the sanitation. And I don't want to be that guy and I don't want to enable that stuff. So in the fiction that I'm writing these days, the conflict doesn't come from your neighbour coming to eat you. Although there are people in my stories who want things that I think are objectively bad, just as there are in the world, the real conflict comes from people who agree on what should happen and disagree on how to do it so irreconcilably that they can't work together. That's the most intense crisis you can have.

 

Taylor Owen: How do you, though, separate these roles as a science fiction writer, a nonfiction writer, someone engaged in 20 years of activism in this space? Do you think of them differently or are they all just one and the same?

 

Cory Doctorow: Well, they definitely are different, and they're different in some pretty obvious ways. Making art has a duty, which is to make art that is aesthetically pleasing and engaging, and that duty comes ahead of duties about the social role of art. But it's not cleanly divisible, because of course one of the things that can make art aesthetically pleasing is its social commentary. So they're intertwined, but they are different. I don't feel a need to be aesthetically pleasing when I do nonfiction activist work; maybe compelling, maybe interesting, maybe well-spoken or memorable, but not in the same way that I think about the art that I do.

 

Taylor Owen: So, I want to talk about one of those pieces. This, I guess, would fall in your journalism bucket: this new book, How to Destroy Surveillance Capitalism. And you certainly get the sense reading it that you do feel an urgency around this topic, and an emotion even. And I want to talk a bit about where you diverge from Shoshana Zuboff. But first, it seems to me there are some places where you really do agree and align, and one of the places that really struck me was around the idea of sanctuary. I recently interviewed Beeban Kidron about some of her work on kids' tech activism. And preparing for that, I watched her documentary InRealLife, which you're in. And this is a decade ago, I guess, now. And in it, you just sort of rail against the idea that we should be making our private lives public. And it feels like Zuboff is echoing that, right, that we still have this problem, that we're still forcing more of our lives out into the open when it should be providing us with this sanctuary. And I'm wondering why that's still so important, and has surveillance capitalism actually made it worse since?

 

Cory Doctorow: Yeah, that's a good framing. I think, first of all, that measured disclosure, having the discussion with the door closed so that you can figure out what you mean and what you feel, and then opening the door and shouting it, that's fine. That's actually a really good way to be. That's how we develop our ideas. That's how people with socially disfavoured ideas, like, say, the idea that Black Lives Matter or that gender is not a binary, get to the point where they take that position to the wider world: they have a discussion in private. A private discussion can be in service to a public discourse, but it needs a private space to incubate. That's really, really important. I think it's one of the most important pieces of our privacy discourse: our social progress depends on people having a quiet, private place where they can talk with one another before they talk to the world. And then, of course, some conversations never go live, but a lot of them do. And I think that big tech is a little like the big hedge funds, in that they are people who first and foremost believe in themselves. They really believe that if they get more data, they can make predictions of increasing fidelity about what we are going to do and, also, about what we would do in the presence of some stimulus, how they can change our behaviour. So I think that they vastly over-collect. I think that they over-collect beyond any articulable business need, except to the extent that maybe, privately, some tech companies would admit that having 10 or 15 years' worth of data just scares upstarts. If you're thinking about building a search tool or a social media tool or what have you, and your adversary says, "How will you ever match our 15-year corpus of data," you might say, "You know, you're right. I can't, so that means I can just never effectively compete with you." So it scares people off, even if it's not really conferring an articulable advantage. I mean, 15-year-old data might help you sell me a roof, right? If you know that 15 years ago I bought a roof, it might help you sell me another roof in about five years, but it's not going to help you do much else, unless maybe it's voice data that you might be able to use to train a machine learning classifier, and, again, that's pretty small potatoes in the grand scheme of things. It doesn't account for the difference between Google's market cap and Google's assets. If the intangible is that Google has a mild advantage in training speech recognition systems, that is a wildly overvalued intangible. So I think that they over-collect, and I think that there are people who believe big tech. And you asked me where I have similarities with Zuboff's views, but this is an important difference too: big tech is storing our data because they think it can help them manipulate us, and some people believe that too. I don't think it's true, and I think to the extent that it's true, it's both short-lived and overstated. But the thing that big tech does in its product design is try to incentivize us to make disclosures without explicit consent, without thoughtfulness, without measure. And those disclosures can harm us. They can harm us in lots of ways. They can harm us by being breached, they can harm us by being exploited and so on.
I just think that the manipulation story is grossly overstated, and that really the main harm is the harm of knowing you're being watched, and then, secondarily, the harm of what happens when people who don't want to make money but do want to hurt people do bad things with that data. So we've had the US government, in the form of its local police forces, serve reverse location warrants on Google where they say, "Tell me everyone who was present at a protest," by using location data from phones. And now we see search term warrants: "Tell me everyone who searched for these search terms." And that's not in service to making money, but it is an incredible risk. And you could imagine that risk if the temporal authority requesting it wasn't the US government, which, frankly, doesn't have a great human rights record, but was instead, say, the dictator of Belarus.

 

Taylor Owen: I guess let's just talk about what you think she gets wrong about big tech then, because you seem to be saying that she is vastly overstating the power these companies have to change our behaviour, and that it is in their interest for us to believe that, right? So-

 

Cory Doctorow: Let me rephrase that slightly: they vastly overstate their ability, and she believes them.

 

Taylor Owen: Right.

 

Cory Doctorow: I don't want to claim that she is the origin of this story. I think that she and many other people, both the critics and the customers and the investors of big tech, believe this story. I just don't think that there's good evidence for it.

 

Taylor Owen: Yeah. And this is of course a big debate, right, around Cambridge Analytica too and a lot of the academic work basically saying ... in particular in the political behaviour space, saying this stuff doesn't work. And you're seeing it in ad tech as well and political advertising where increasingly it looks like this is a bubble where we're propping up a whole ad complex that isn't really capable of doing what it's saying it's going to do.

 

Cory Doctorow: Well, as I said in the book, this is not new. I've worked in advertising; like everybody who went through the dot-com bubble, I ended up working with advertisers. So I've worked in advertising. They believe that they can do it, and their customers believe that they can do it. There is one group of people that ad agencies are really good at persuading, and that's people who want to buy ads, people who want to buy ad agency services. And I guess there's a little element of self-selection: all the people who can't be convinced by ad companies don't buy ads. But they are enormously persuasive at selling their services, in the same way that hedge fund managers are enormously persuasive at selling their services. And clearly, there are some major advantages to advertising with big tech, but they're mostly targeting advantages. The example that I use in the book and that I use all the time is selling a refrigerator. The median person buys one or fewer refrigerators in their life. It's really hard to figure out where to advertise to them. As someone who bought a house a couple of years ago, I can tell you the refrigerator salespeople are really, really hardcore into pitching people who just bought houses, because even though most people who've just bought a house can't afford a new kitchen, because they just bought a new house, they are far more likely to buy a refrigerator than the median person. And boosting your success rate from a millionth of a percent to a hundred thousandth of a percent is a ten-fold increase in efficacy. So why wouldn't you direct-mail the hell out of people who just registered a title deed? But it doesn't make the person who thought of that a Svengali. Being able to target cheerleaders with cheerleading uniform ads does not make you a marketing genius or a mind controller. It just makes you someone who's found an effective way to address an audience, so that even though your ad may not be very persuasive, you're not showing an unpersuasive ad to someone who will never buy a cheerleading uniform. You are only showing your unpersuasive ad to people who might buy a cheerleading uniform.
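To put rough numbers on the targeting story Doctorow tells, here is a minimal, purely illustrative Python sketch. All rates are hypothetical (they are not drawn from the conversation or from any real campaign data); the point is only that the ad's persuasion rate never changes, while audience selection alone multiplies efficacy.

```python
# Purely illustrative: hypothetical rates showing how targeting boosts
# efficacy without making the ad itself any more persuasive.

def expected_sales(impressions: int, prospect_share: float, persuasion_rate: float) -> float:
    # sales = impressions shown x fraction who could ever buy
    #         x fraction of those prospects the ad actually convinces
    return impressions * prospect_share * persuasion_rate

IMPRESSIONS = 1_000_000
PERSUASION = 0.001  # the ad sways 0.1% of genuine prospects, targeted or not

# Broadcast: almost nobody who sees the ad is in the market for the product.
broadcast = expected_sales(IMPRESSIONS, prospect_share=0.0001, persuasion_rate=PERSUASION)
# Targeted: same ad, but shown to an audience where prospects are 100x denser.
targeted = expected_sales(IMPRESSIONS, prospect_share=0.01, persuasion_rate=PERSUASION)

print(broadcast, targeted, targeted / broadcast)  # 0.1 10.0 100.0
```

The lift comes entirely from who sees the ad, not from any change in how persuasive it is, which is the distinction Doctorow is drawing.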

 

Taylor Owen: I wonder if it doesn't need to be all-powerful in order to still be damaging? I mean, it feels like ad targeting in particular is responsible for some of the problems you list around political division, maybe voter suppression, radicalizing certain segments of a population who might be more prone to radicalization. I mean, this ability to target people with messages based on finely-grained views of their beliefs is damaging, though, right, still?

 

Cory Doctorow: I don't know that you're getting finely-grained views of their beliefs, although sometimes you get that. But remember, sentiment analysis is another one of these pseudoscience things that just ... none of it replicates. It's just all really low quality, many false positives, many false negatives. So really, I mean, maybe you can get people's beliefs if they self-identify. If you've got a Facebook group called Jew-Hating Nazis, you can probably make a guess at the political views of the people who are most active in it. But just figuring out the words that people use to identify their sentiment is not grounded in evidence-based science; it's grounded in marketing materials and in non-replicating and largely discredited psych research. But I think we need to tease apart those different issues. One of the things that you talked about is radicalizing people. Another is disinformation, and then there were a few others. And they're really different kinds of harms, and they have different theories. So the idea that people are being radicalized because of their susceptibility to radicalization, I do talk about that a lot in the book. And what I say is that the primary thing that makes you susceptible to radicalization is trauma combined with the manifest untrustworthiness of institutions: if you've been harmed by institutional failures, or people you love have been harmed, then the conspiracy that says the institution should be torn down and nothing it says should be believed is credible. And you're right that maybe, if people who have been traumatized by, say, the opioid crisis and the lack of enforcement action by regulators that allowed it to kill more Americans than the Vietnam War had never encountered anti-vax, with its story that the pharma industry wants to kill you and the regulators would let it, maybe they wouldn't be anti-vaxxers, right? But they would eventually end up in some dysfunctional space, not because of the particular views that people promulgate to them, or the ability to target people who are in self-identified skeptical groups or whatever, but because they've got a huge hole punched in their cognitive immune system by their lived experience. And I think that focusing on stopping wounded people from believing bad things is laudable, but it is reactive and incomplete. And the only really meaningful intervention we could make that would really change the situation is to make interventions in the system that produces the trauma, to end monopolies. Monopolies are conspiracies, right? Monopolies are corruption, and corruption is conspiracy. So to reduce corruption is to reduce the extent to which people are traumatized by the world around them, which is to give them the resilience that they need to fight back against conspiratorial beliefs.

 

Taylor Owen: So let's talk about that. How does competition policy, or dealing with monopolies in a wide range of ways, empower individuals or protect individuals against that kind of tendency that exists in these monopoly platforms?

 

Cory Doctorow: Right. Well, to be safe and prosperous in the world requires that you correctly answer a bunch of very difficult technical questions: should I wear a mask, is a certain vaccine safe, is my kid's distance education curriculum adequate, can I go to the grocery store safely, should I fly on a 737 Max, when my doctor gives me 50 Oxys because I sprained my knee, should I take them? All of those questions are questions that, if you are a bright, educated person, you could probably find the answer to on your own. You could go out and read the literature and evaluate it, look at some meta-analyses, then look at the characteristics of the journals in which the work is published and their impact factor, and determine whether they're high-quality or low-quality journals. You could probably do that for any one of those questions. But if you lived to be 150 years old, you couldn't do it for all of them, or even a plurality of them. You are going to eventually have to delegate your trust to a consensus, and that consensus emerges from a regulatory process. At the end of the day, regulators are truth seekers. They gather people with competing truth claims, they adjudicate among them, they recuse themselves where they have conflicts, they show their work when they explain why they accepted one claim over another, they make a rule, and then they have a procedure by which that rule can be amended or repealed as new facts come to light. And that process requires trustworthiness, because although you probably can't evaluate the content of the regulatory proceeding, the form is legible. If you live in a country with three automakers or two giant brewers or three movie studios, you can tell that when the regulator keeps making choices that are really bad for you and really good for them, it's probably not a coincidence. And then when you find out that everyone qualified to work at the regulator used to work for one of those giants, it becomes really obvious what's going on. It's why the bad FCC chairman under Trump is a Verizon lawyer and the good one, under Obama, was a cable lobbyist: when they're all so concentrated, that's what you end up with. And concentrated industries have two really important assets when it comes to influencing policy. The first is monopoly rents. The reason firms and their investors prefer monopolies is that competition is, to quote Peter Thiel, wasteful, which is to say that if you have to compete with someone, then maybe they'll lower their prices and you'll have to lower your prices to meet them, until you start to approach the marginal cost. If you don't have to compete, you can extract monopoly rents, so you've got extra money. And then you've got a small enough group of people that you can figure out how to spend it. You can agree on what should happen with it. When we look at the photo of the tech leaders around the boardroom table at the top of Trump Tower after the election, many of us are aghast that these bastions of liberality are meeting with Donald Trump, but we should be far more concerned about the fact that they all fit around one goddamn table, right? So the combination of needing to have honest processes to know the truth and not die, and those processes inevitably being corrupted by monopoly, means that you are cast adrift in a state of epistemological terror where you cannot know why you know things.
If you asked me why I believe in vaccines, which I do, unequivocally, at this point, I couldn't tell you why. It used to be because I trusted the process. I don't trust the process anymore. If you ask yourself why you believe in vaccines, and you're really honest with yourself, and you ask yourself, "Does that basis for my belief in vaccines also militate for my taking OxyContin the next time my doctor overprescribes it to me," then you will find that you don't know why you believe in vaccines either, or you're going to end up eating far too much OxyContin.

 

Taylor Owen: Yeah. I mean, you talk about the epistemological crisis, and you ground it in the material conditions of the platforms, as opposed to just these underlying hesitancies or distrusts, or whatever they might be. So is that material condition the economic model of the platforms and the fact that they've emerged as these monopolies, or is it about the design of the platforms themselves, or are those intertwined?

 

Cory Doctorow: They're definitely intertwined. I mean, certainly, Facebook and Google and the rest of them spend a lot of time figuring out how to trick you into disclosing more information than you want to. And a lot of the time, that's just absolute kindergarten stuff, like having a giant blue button that says okay, and underneath, in eight-point, 90 percent grey-on-white type, it says, "Click here to learn more." Again, this does not make you like Derren Brown, snapping his fingers and convincing you that you're a chicken on stage or something. This is the most penny-ante sociopath stuff. It's just pure grift. So yeah, they spend a lot of time trying to figure out how to get you to cough up more information. And there are lots of ways to do it. And sort-by-controversial is probably one of the more effective sentiment analyses, like knowing where there's more heat than light. There are a bunch of metadata characteristics, formal characteristics of an argument, that are easier to detect than other kinds of discussion. There are still probably some false positives. Think about an argument in the real world: when you hear people getting really loud and shouty and waving their hands around, it's a good bet they're having an argument, but maybe they're just my large, brawling Jewish family around a dinner table, because we do that too. The first time I took my fairly reserved British wife to one of our family dinners, she was terrified that we were all about to kill each other, and we were just having a normal evening in the Doctorow household.

 

Taylor Owen: So the targeted ads and recommendation engine would have been flying if they were targeting your dinner conversation, right?

 

Cory Doctorow: Yeah, it would have said, "Oh, let's just do whatever it is we did before all of these interactions erupted," right, because interactions are really all they can count. Then they can take a few guesses at the character of the interaction. But, "Look at all these interactions. Let's just do more of what made this interaction." And it probably works reasonably often, and then sometimes it doesn't. And they pay a low penalty when it doesn't. If they don't manage to kick you and your friends into a rager that generates a lot of impressions for low-performing ads, then they just move on to the next one. And given that, if they make it odious enough, you might leave Facebook, that's a small price to pay, because the odds are very strong that if you leave Facebook, you'll end up on Instagram. So for Facebook, that's just a wash, especially since they merged the back ends. And again, this is where competition policy comes in, because when Facebook bought Instagram, it was an obviously predatory acquisition, and it was so obvious that even the sleeping, toothless watchdogs of the antitrust and anti-monopoly regulators on the other side of the ocean in the EU could see that this was bad news, and they made them promise that they would keep the back ends separate, that they wouldn't merge Facebook data and Instagram data. They just went ahead and did it. And then I think maybe they paid a fine, which is to say they paid a price, plus some extra code development. And now, hilariously, Facebook's leaked strategy memo to stop anyone from proposing that they be broken up is that they've put so much energy into merging the back end of Instagram and Facebook that it would be really hard to separate them. In other words, our broken promise means you can't punish us.
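As a minimal illustration of the counting logic Doctorow describes, here is a hypothetical Python sketch; the function and post names are invented for illustration and are not any platform's actual ranking code. A system that only counts interactions cannot tell a flame war from a loud family dinner.

```python
# Hypothetical sketch: an engagement-maximizing feed that only counts
# interactions, with no notion of whether they are joyful or furious.

def rank_for_feed(posts: list[dict]) -> list[dict]:
    # Score = raw interaction count; the character of the interaction is invisible.
    return sorted(
        posts,
        key=lambda p: p["comments"] + p["shares"] + p["reactions"],
        reverse=True,
    )

posts = [
    {"id": "calm-news", "comments": 3, "shares": 1, "reactions": 10},
    {"id": "flame-war", "comments": 120, "shares": 15, "reactions": 40},   # heated argument
    {"id": "family-dinner", "comments": 95, "shares": 2, "reactions": 60}, # false positive
]

for post in rank_for_feed(posts):
    print(post["id"])  # flame-war, family-dinner, calm-news
```

Both the argument and the boisterous dinner outrank the calm post, which is the "more of what made this interaction" dynamic, false positives included.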

 

Taylor Owen: Right. Well, and even worse than that, or more egregious than that, is their claim that "merging the back end is required to solve all of the problems you claim exist on our platform," right, whether it's fighting disinformation or whatever it might be, right?

 

Cory Doctorow: Right, and that's the other piece. So you asked me why these disagreements are salient, like are these quibbles or do they matter. And I'll tell you why I think they matter, and it's because if you think that we are at the end of history, that big tech has reached an apex state, then all we can hope for is to make peace with it. And Zuboff says this. She says that breaking them up will just fragment our ability to understand what they're doing, that they've got these super weapons, and at the very least we want them in a few hands, not lots of hands. It's like rebalancing the post-World War II nuclear landscape not by creating arms limitations but by giving everyone nukes. That is clearly not a good outcome. But I think we can disarm them, because I think that the weapon they have is not an influence machine, I think it's a monopoly. So you can disarm a monopoly by breaking it up or subjecting it to merger scrutiny, or making it divest of things that give it vertical power. So the problem is that if she's right and I'm wrong, then what I propose would create an existential risk for our species. And if I'm right and she's wrong, then all that happens is that big tech gets stronger and bigger because it has been deputized to serve as an arm of the state. We've got these social problems that big tech unequivocally plays a role in, but if we ask them to solve them, then anything we do that weakens their power potentially removes their capacity to solve that problem, and we can't afford that.

 

Taylor Owen: Yeah. And I guess I'm wondering why they're mutually exclusive, though, your two views on this. I mean, I work a lot in the platform governance conversation, which often has 20 different policies all being discussed simultaneously, one of which is a set of competition policy ideas, and another of which, which I think Zuboff would probably prioritize, would be around data protections and limits on how data can be stored and used and sold, and so on and so forth. Why are those mutually exclusive?

 

Cory Doctorow: Well, I don't think those prescriptions are mutually exclusive. I think data protection's a great idea. I also think that there are data practices we should prohibit rather than regulate. The policies that I imagine I disagree with Zuboff on, and that I know I disagree with other people in that orbit on, are policies about making platforms responsible for what happens on them, making them responsible for unlawful speech acts, or even odious speech acts that aren't unlawful, on their platform. I think that, on the one hand, they just won't do it well, partly because they're bad at it and partly because no one is good at it. The problem of Mark Zuckerberg being in charge of the rules governing the social lives of 2.6 billion people is only partly that Mark Zuckerberg is totally unsuited to that role, but it's also that nobody who was ever born-

 

Taylor Owen: Should have that power. Yeah.

 

Cory Doctorow: Yeah. So anything we do that says to Zuck, "You've got to come down harder on speech," is going to end up with him being both more vigorous and less accurate, erring on the side of caution. And the people whose speech I care about are not Nazis. It's the people whom Facebook had been routinely victimizing with its moderation policies long before anyone woke up and decided that Alex Jones being deplatformed was a great injustice. It's sex workers, it's trans rights activists, it's anti-pipeline activists, it's Black Lives Matter activists. Those are the people who have been deplatformed for years and years by aggressive moderation policies. And any system of civil justice, which this is, although it's a private system of civil justice, favours those with resources. If you are Alex Jones, you can afford to hire a bunch of lawyers or Facebook experts or consultants to help you. So that's one thing: it favours people with extra resources. And it also favours people with extra visibility: Alex Jones can go and piss and moan to his millions of followers about Facebook's moderation policies and maybe get a variance, as we saw Trump getting a lot of variances during the election. Whereas people who have no access to either social or capital power are not able either to invoke public opinion or to pay for experts to help them navigate these increasingly baroque moderation systems.

 

Taylor Owen: Isn't part of the answer to that more democratic governance of this space, not less, though?

 

Cory Doctorow: Well, in the sense of having an exalted council of Facebook arm's-length judicers who create precedent and follow it on Facebook, or-

 

Taylor Owen: I mean, certainly not a council chosen by Facebook or Zuckerberg himself or Facebook staff themselves. But it seems to me that part of the response to Zuckerberg having so much power over speech is to put that power in an institution that has a degree of accountability, however imperfect, built into it, which, if we're lucky enough to live in a democratic society, should be us?

 

Cory Doctorow: I strongly favour democratic control over Facebook, but the democratic institution I want to mobilize first is the Department of Justice, right?

 

Taylor Owen: Fair enough.

 

Cory Doctorow: It's not like the Department of Justice is a corporation. There are flaws with our democracy, but like the Competition Bureau in Canada and the Department of Justice and the competition department of the European Commission, they are democratic institutions, and I want to see them taming corporate might. And one of the ways that they might seek remedies against Facebook ... There are a couple. One, they might break it up. And even if they just unwound the unlawfully procured mergers like the WhatsApp and Instagram mergers where they lied to achieve them, that would give people places they could go when they didn't like Facebook. I mean, I think the research is pretty solid that the number one place for a Facebook refugee is Instagram.

 

Taylor Owen: Yeah, absolutely.

 

Cory Doctorow: So you could, at the stroke of a pen really, achieve an enormous amount of variance in how people moderate. But there are other ones. We saw the ACCESS Act introduced in the last Congress, which included some mandates for interoperability.

 

Taylor Owen: Yeah, I was going to say, so much of the competition policy debate gets stuck, I think, in this trust-busting mentality, which in countries like Canada...

 

Cory Doctorow: Well, that's not trust-busting.

 

Taylor Owen: No, I know. That's what I mean. That's why I loved your argument around interoperability because you don't often hear that being talked about in the antitrust conversation or the monopoly conversation. It gets put somewhere else.

 

Cory Doctorow: Yeah, and I think it's got a lot of legs, because it's the kind of thing that, if you're on the political left, you're like, "Oh yeah, that's self-determination: strip away the company's right to stop you from modifying their products or services, or procuring a modification of their products and services." I was on a call this morning with someone from the OECD who was like, "It's all well and good to talk about interoperability because you've got a PhD in computer science. Who is going to reconfigure the product for a random person?" Well, Facebook is currently threatening legal action against NYU researchers who modified the service so that everyday users can gather the ads that they see and put them in a repository that accountability journalists can mine.

 

Taylor Owen: Yeah, we work with that team and use that data regularly. It's-

 

Cory Doctorow: Yeah. I mean, there's tons of this stuff, right? youtube-dl, a tool made by a couple of, I believe, German kids, which is now facing enforcement action by the RIAA, allows people to download and modify and use YouTube videos in scholarly work. If you're a parent whose kid has been assigned a bunch of YouTube videos for homework and you live in a broadband desert because of the lack of competition, because the lack of competition is a problem for everyone in every way ... So you live in a broadband desert, and the way your kid does homework is you drive to the parking lot of a Taco Bell for two hours every night while your kid does their homework. You can use youtube-dl to download the videos that your kid needs to watch for their homework, so you don't have to sit in the Taco Bell parking lot for hours. And interoperability allows people to reconfigure services so that they suit their needs. And where their needs run counter to the needs of the service providers, then they get to win. They get to win the argument. So we had the ACCESS Act, which had some mandates. That's really good. I could totally go for mandates that were thoughtful and well-thought-out. But then there's the even more powerful thing, which is stripping companies of the legal right to stop people who want to interoperate with them.

 

Taylor Owen: So, explain how that would work. What's the holy grail there and what's the most powerful mechanism you could put in place there?

 

Cory Doctorow: So this is the thing we used to call adversarial interoperability, but for my German colleagues, it's like watching a German say squirrel. It's just impossible. So now we call it competitive compatibility, which is a phrase that is a little easier to pronounce.

 

Taylor Owen: I kind of like the adversarial aspect of the other one, though, but-

 

Cory Doctorow: I do too, but stressing the competition is good as well. So we're all familiar with interoperability: you can wear any socks with any shoes. That's interoperability. You don't need to buy the hotdogs and the buns from the same company. But adversarial interoperability is when you have someone who makes a product or a service and really doesn't want you to change how it works; it's against their interest for you to modify it, and you do it anyway. So this is like Apple facing a complete sidelining in the enterprise because Microsoft was deliberately foot-dragging on making Mac Office work. If you worked in an accounting firm with one designer on a Quadra in a corner and you gave them a Word file, and they opened and saved it in Word for Mac, no other device ever made could possibly read that file. It was just permanently corrupted. And rather than plead with Bill Gates to fix this, Apple just reverse-engineered all of the Microsoft file formats and released the iWork suite: Pages, Numbers, and Keynote. It just reads and writes Microsoft files, without Microsoft's permission and against Microsoft's wishes. And in every tech company's history there's a story like that. And every tech company that has attained dominance has done everything it could to prevent anyone from using the tactics on it that it used to gain dominance. So whether that's expanding software patents, expanding software copyright, or the spread of noncompete and confidentiality agreements within the workplace, all of these things together have conspired to produce an environment in which activities that were once absolutely routine, and that accounted for much of the dynamism of the tech landscape, are now legal suicide missions.

 

Taylor Owen: So often, when people talk about interoperability, they talk about data sharing or making social graphs public or whatever it might be. Do you think those are almost distractions from some of these more core ideas, or are they important too?

 

Cory Doctorow: That's in the realm of mandates, and I think that there's certainly a role for it. I am increasingly of the view that all of that stuff is a dead letter without a federal privacy law, though, because what you end up doing is writing a half-assed federal privacy law into your mandate rule, where you're like, "You must do it, but you mustn't do it if the following things are happening," and that becomes your federal privacy law, right? And the privacy invasions that you prohibit should just be prohibited, period. They shouldn't merely be prohibited for inter-operators. They should be prohibited for the firms that actually have the data. And it cuts against the data co-ops and paying people for their privacy and all of that other stuff that is often bandied about as a solution by people who think markets should be the first recourse for solving this, like maybe Facebook should just pay you a data dividend for your data. First of all, most people's data does not generate a lot of money for Facebook, around $44 a year for its most valuable users, right? It's not like ... I mean, sure, a bag of groceries will help a very poor person, but the other thing is that a very poor person's data is not worth much. The people whose data is worth much are the people who have the most disposable income. So the idea that a data dividend solves this is not sufficient, not least because the ownership of data is muddled in a way that makes it completely unamenable to market solutions. If you think about an element of the graph, like who your parents are: do you own the fact that your mother is your mother, or does she? That is a really profound question. The harder you think about it, the less clear the answer gets. Then you can complexify it: what if your mother was an abusive mother and she claims ownership of that fact, so that you can't disclose to people that your mother abused you? What if your boss was an abusive boss? Does your ex-boss own the fact that he used to employ you? And if so, can he use his ownership of that fact to silence you, because you are stealing his property? And this has a real-world example playing out right now. In the world of real-time auctions for behavioural ads, if you go to a major news site, if you go to The Washington Post, The Washington Post identifies you, sends your identity to a dozen ad brokers, and holds a spot auction to determine which ad you're going to see. One of those entities wins the auction. The other 11 lose, but they get to own the fact that you are a Washington Post reader. And then they go to some website, Taboola or something, that's just, "You will not believe the many infectious holes in this person's hand. Come and look at the photo. It will take you 20 clicks to see it." What they are selling on that website is, "Hey, do you want to advertise to Washington Post readers without paying The Washington Post rate card? I have a Washington Post reader who's about to make 20 clicks." So that is eroding The Washington Post's rate card every time it happens. And the reason it happens is that the fact that you visited The Washington Post is not owned by The Washington Post, nor is it owned by you, nor is it owned by any one of the people The Washington Post told about it; it's owned by all of you. And one of the things that markets are good at is finding out who will sell an asset the cheapest.
So anytime you're talking about a data dividend, you'll always get a race to the bottom.
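A schematic Python sketch of the spot-auction flow Doctorow describes, with invented broker names and bid values (this is a simplification, not any real RTB protocol or API): one broker wins the impression, but every broker that saw the bid request retains the behavioural fact and can resell it below the publisher's rate card.

```python
# Schematic sketch (hypothetical names and values) of the leak Doctorow
# describes: the auction has one winner, but the visit data has twelve owners.

import random

def spot_auction(user_id: str, site: str, brokers: list[str]):
    bids = {broker: random.uniform(0.5, 5.0) for broker in brokers}  # CPM bids
    winner = max(bids, key=bids.get)
    # Win or lose, every broker that received the bid request now knows
    # this user reads this site and can resell that audience segment
    # elsewhere, undercutting the publisher's own rate card.
    retained = {broker: (user_id, site) for broker in brokers}
    return winner, retained

winner, retained = spot_auction(
    "reader-123", "washingtonpost.com", [f"broker-{n}" for n in range(12)]
)
print(winner)         # one broker serves the ad...
print(len(retained))  # ...but all 12 walk away with the visit data
```

The race to the bottom follows directly: twelve parties hold the same fact, so whoever sells it cheapest sets the price.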

 

Taylor Owen: Absolutely. With all this in mind, why do you think tech critics have underplayed, as you argue, to a certain degree, the role of monopoly power? And why do you think this idea of surveillance capitalism as the core of this issue has had so much resonance in the last couple of years? I mean, it's been remarkable how it's caught hold, almost as an ideology describing this tech infrastructure. What explains that?

 

Cory Doctorow: I mean, I think tech people like to believe that they're geniuses. And if the only kind of geniuses they can be is evil geniuses, they'll settle for that. Nobody wants to think that they're just a low-rent, high-tech Rockefeller or Mellon or Carnegie or something. They want to think that they're of a new breed, that they're doing something novel.

 

Taylor Owen: I am really curious about both the reaction you've received to this and also why you think this concept has such resonance at the moment. And I think those two are almost related, right? It's resonating, so people are defensive of it.

 

Cory Doctorow: Yep. Well, the majority of the feedback I've gotten has been pretty positive. I have heard from some techies who want me to know that they sincerely, strongly believe that they are better at manipulating people than I think they are. But when I ask them how they know that's true, they say, "Well, I work for a big company and had to sign a nondisclosure and I can't tell you." And I think that extraordinary claims require extraordinary evidence. The fact that you think you can do it, and the fact that someone pays you to do it, is not in itself sufficient evidence that you can do it; going back to hedge fund managers. As to why the story of big tech as Svengali is so powerful: I think that if you're a technologist, it's nice to be a genius, even if the only kind of genius you can be is an evil one. And I think that if you are someone who is bewildered by the shifts in our discourse, by the rise of conspiratorialism, by the rise of odious racist beliefs, by the resurgence of eugenics and so on, then attributing this to evil geniuses manipulating public opinion, instead of to a collapse in the lived experience and material circumstances of the people around us, supposes that there might be an easier answer: maybe we can just turn off the mind control ray, or get it to beam better thoughts into people's heads, the Cass Sunstein thing. Maybe we can make people eat well and watch PBS instead.

 

Taylor Owen: Better nudges.

 

Cory Doctorow: Yeah. And the eat-well part's really interesting because, again, this is a piece of research that just didn't hold up and didn't replicate well. But also, if big tech were as good at manipulating us as they claim to be, they would all just be in the weight-loss business. There is so much money to be made if you can convince people to do what they want to do, which is to control their bad eating and exercise habits; trillions of dollars on the line, far more than you would get for merely manipulating an election. I just find it baffling that, if they can do it, they haven't. So the fact that they haven't seems to me at least good evidence that they can't. So yeah, it's very comforting to think that there's an exogenous source of bad ideas rather than a sickness in our culture. And I think that if you are someone who believes in markets, and in particular someone who believes in the orthodoxy of markets that has caused such forbearance in antitrust law, this consumer harm doctrine, and the belief that monopolies are efficient and that companies grow because they're run by really smart people, then there is a comfort in this story, because it doesn't locate the problem in an intrinsic problem with markets, in this [inaudible 00:53:18] view that markets produce returns to people who are wealthy, not returns to people who make amazing things that people love. And you can preserve your core belief system if you make this a rogue capitalism instead of just capitalism.

 

Taylor Owen: I want to end on one thing you said in a recent interview with The Guardian that I was just sort of cheering for and so profoundly agree with, and I'm just going to read what you said, if that's okay.

 

Cory Doctorow: Sure.

 

Taylor Owen: You said, "As a society, we have a great fallacy, the fallacy of the ledger, which is that if you do some bad things, and then you do some good things, we can talk them up. And if your balance is positive, then you're a good person. But no amount of goodness cancels out the badness, they coexist. The people you will hurt will still be hurt, irrespective of the other things you do to make amends. We're flawed vessels, and we need a better moral discourse." And you hear this so much. In fact, you hear it from people at tech companies themselves saying, "Well, we do more good than harm," as if it's this 50%-plus-one equation that as long as you're on the right side of that ledger, as you say, you should be able to continue doing what you're doing. And I guess is the way to get beyond that to have a moral conversation, like you're saying, and what does that look like?

 

Cory Doctorow: It's a really interesting question. I don't know that I have an answer for how we do it with companies. But I was thinking about it in the context of people. I'm a science fiction novelist. And the way that the generations of my forebears have worked out, we are at a point now where a lot of old science fiction writers are dropping dead every year, beloved, great men and women of the field. And when they die, because of where we are as a culture, their hagiography is inevitably disrupted by people who want to tell you about the stuff they did that caused real harm. And a lot of the time, these people did really bad things in their lives. Then other people come forward to talk about the really great things they did in their lives. And it seems to me that the only way we can resolve those two claims is to take them both at face value. And I think that anyone who's ever made their peace with someone who hurt them knows that this is how you do it. The parent that you love, who struggled with addiction and inflicted great harms on you, but whose love for you was true and unwavering, who sacrificed for you and made sure that you knew that you were loved even as they were abusing you, whom you come to forgive: you don't erase the bad things they did. That forgiveness is not about saying that the good outweighs the bad. It's about saying that the bad is the bad and the good is the good. This person is who they are. All things being equal, you would have preferred the bad stuff not to have happened, but the bad stuff happened and it cannot be removed. Time flows in one direction. It cannot be undone. Amends cannot be made.

 

Taylor Owen: Yeah. And that need to embrace that complexity, to bring it back to this technology conversation, seems to be how we have to look at these platforms. I mean, you end on this really optimistic note around collective action and technology. We have to be able to embrace that and acknowledge that that is a core part of moving forward and progressing while at the same time holding organizations and people accountable for the things that are harming us.

 

Cory Doctorow: Yeah. I mean, here's ... Like the history of medicine is terrible, right? And every medical discipline, when you look at it long enough, you find an abusive sociopath tormenting human beings for what amounts to their own pleasure. And it's just awful in every conceivable way. And no one should ever do it again, and we should learn that lesson from these people, that no one should ever do that again. And also, the science that we learn from those terrible things, we should not throw away. I think that's what it means to live in the contradiction.

 

Taylor Owen: That was activist and science fiction author Cory Doctorow. I hope you enjoyed our conversation as much as I did.

Big Tech is presented by the Centre for International Governance Innovation and produced by Antica Productions. 

 

Please consider subscribing on Apple Podcasts, Spotify, or wherever you get your podcasts. We release new episodes every other Thursday.