Column: Have tech companies like Facebook tricked us into abandoning our humanity?
Facebook, the world's social media siren, has taken a beating — billions of dollars in market value, an incalculable chunk out of its reputation for trustworthiness. Congress has summoned CEO Mark Zuckerberg to testify, and Zuckerberg has resorted to “legacy media” — newspaper ads — for a mea culpa. None of this may make any difference at all, and that's what worries the people at the Center for Humane Technology. These are no Luddites; they're Silicon Valley heavy-hitters themselves who have become Paul Reveres of the tech takeover.
Aza Raskin is a co-founder and chief strategy officer, and he sounds an alarm about where tech is taking democracy and humanity — if you can hear it through all the chaff and clamor on that phone in your pocket.
The mission statement of the Center for Humane Technology is, “Technology is hacking our minds and society.” Certainly in the last couple of weeks we've been given proof of that.
Our founder, Tristan [Harris], has been talking about this since 2014, when he became the design ethicist for Google and looked at all the ways in which our tech companies' business models are designed to addict us: We make money when people spend time on our sites, and that creates a whole bunch of bad incentives to keep people sticking around. And we've created the world's largest persuasion machine.
When 67% of the U.S. population gets its news from social media and from tech companies, they literally are deciding what we see and what we believe is true, and what our sense of reality is.
And we're now seeing that a couple of bad actors like Cambridge Analytica have used these platforms exactly as they were designed to be used — which is to say, to influence people in order to sell a product. And that's not so different from selling an ideology.
So like Capt. Renault in “Casablanca,” are we shocked, shocked that this is happening, when this was in fact part of the business model all along?
Many times, when we talk to people, it's like, Oh, this is nothing, we've had advertising for a long time, we've had propaganda for a long time. What's different this time — and it's hard to see when you're inside the machine — is that for the very first time, our connections with our friends are intermediated.
These companies intermediate our sense of consensus reality, which then gets hacked by Russia, which then gets hacked by Cambridge Analytica. So that's one thing that's different.
The second thing is that it is 24/7. The phone is the first thing people pick up and look at when they wake up, the last thing they use before they go to sleep, and they check it 150 times a day.
And the third thing is that these products are a little bit like tobacco. They're addictive. But it's a tobacco that gets smarter and better at addicting us the more we use it. So with just 150 of your likes analyzed on Facebook, a computer algorithm can predict your actions and know you better than your co-workers do, than your friends do, even better than your spouse does.
And you put these three things together, and we should be shocked at how easy it is to sort of drift a population in a direction that you want. This is one of the hardest problems to communicate.
Then if anyone is to talk about blame, where does blame get assigned?
There's always more that we as individuals can do, but this is not an individual problem. This is a community- and society-scale problem. Early on in the internet, there was Section 230 of the Communications Decency Act, part of the 1996 Telecommunications Act, which said that internet companies were not responsible for content that their users posted, which is a way of saying that the internet and software were creating a space of deregulation where there were no protections for users.
At the beginning that felt like a great thing. The web was this wild new world where creativity could be unleashed. You could connect with people, and groups could exist there that couldn't exist elsewhere.
But what we've seen is that if software is eating the world, as [software pioneer] Marc Andreessen likes to say, then deregulation is now eating the world. So if companies aren't responsible for the content that gets posted, at the very least they need to be responsible for the content that they promote.
There is a [former Google engineer and] YouTube researcher, Guillaume Chaslot, who worked on the recommendation engine that decides which YouTube videos get played next, and what he's discovered is that no matter where you start on YouTube, if you just let the recommended videos play four or five times, you always get pushed further and further down conspiracy roads, further and further toward radicalization. If you start on things about vegetarians or vegetarian food, you end up in chemtrails [conspiracy sites].
And YouTube seems to be acting as a kind of great radicalizer, as Zeynep Tufekci famously put it in her New York Times op-ed. And there, I think these companies are 100% responsible. Facebook knew that Cambridge Analytica was using this data as of 2015, and yet they didn't really do anything, because it wasn't convenient and it wasn't good for their business model.
So at the heart of this, of course, lies the business model. If we are the product and not the customer, then Facebook's whole business model relies on grabbing our data and selling it to people. And this is just one bad actor. But do we really want somebody with this much power over society to be unregulated?
If Facebook wanted to throw an election today I think they could. Of course they could.
We can feel our values being ripped from us, like we're losing our grasp on them as the discourse, our public discourse, gets more and more vitriolic.
You are a second-generation tech person; your father was interested in the computer-human interface. Did you grow up thinking about these things, or did you have a road-to-Damascus moment yourself?
I grew up immersed in this world. Something my father, Jef, talked about is that humans should not be subservient to computers. For him, what that meant is that we shouldn't have to learn the arcane ways of typing, holding down your left-shift-elbow-nose, to get the computer to do what you want. But what we're missing is the next level up. Computers can work pretty well for an individual, but they're working terribly for society.
And just like in the '80s, when we had computer-human interaction take off as a field, what we need now is society-technology interaction, understanding how societies and technology interact, because there are a lot of dark patterns that are breaking down democracy.
Many of these companies have taken a position that they're ethically neutral, that they're morally neutral.
Yes, I think that the tack of saying, we're a neutral platform, so we don't have to take responsibility — it feels convenient and it feels nice, but it's very much like taking your hands off the steering wheel and saying, Well, I'm not responsible for what happens if it crashes.
And so [companies] have taken the course of saying, we're a neutral third party. But that's not actually true because we've just outsourced our decisions to algorithms, and said, Hey, algorithms, just show users what is most popular, what gets most clicked on. But that's an editorial choice and those editorial choices can be hacked easily.
So I think it's important, moving forward, that companies be held responsible for the content they promote, especially through their recommendation systems.
When these systems are abused, as with the 126 million Americans who saw Russian propaganda or the 50 million users whose data was abused by Cambridge Analytica, those people should all be informed directly that they were targeted.
Because I don't think they can have it both ways, of trying to pretend to be neutral but then not taking responsibility when their platforms are used.
Brian Acton, a co-founder of WhatsApp, said, “Delete Facebook.” Is that even plausible?
I would love for that to be plausible, because it would set up the next round of companies to try to tackle this. But I think for most people, Facebook and these other platforms have become such a part of their lives that it's hard to just say, get off of it.
Monopoly law was set up when all products cost money. Now, many of these products are free to use, and so our monopoly laws don't even touch that. And there's a new kind of monopoly, which is the monopoly of attention, the monopoly of network effect.
It's hard to get off Facebook because all of your friends are on Facebook, all the events happen on Facebook, and even if you got off, Facebook would still be collecting data about you inferred from your friends. And so instead I think we have to think about other regulatory mechanisms here.
[Technology legal scholar] Tim Wu says the 1st Amendment was created at a time when speech was expensive and listening was cheap: it was hard to get your message out, but there wasn't that much content, so you could choose what to listen to.
And now the inverse is true: it's easy to get a message out, but it's hard to listen because there's so much out there. And that has changed the way the thing the 1st Amendment was trying to protect gets abused.
An example is that China often doesn't just do direct censorship. Instead, it pays 2 million people to post 480 million comments on its social media. And that diverts conversation or creates a false sense of what popular opinion is.
We need to be thinking at that society scale, because that's what we as Internet companies do — operate at society scale.
You and the people in your group are or were Silicon Valley moguls yourselves, and you saw some of these problems. But what about the people who may not see them? What, for example, is a Mark Zuckerberg thinking about retrenching, regenerating, restructuring his product to address some of them? Is that going on in Silicon Valley now?
I think what we need to be careful of and watch for are the PR moves, the too-little-too-late kind of stuff. Because to really fix these problems, you're going to have to take a hit on your revenue.
When the British Empire moved away from slavery, they took a hit of 2% of their GDP for 60 years. Doing right often comes at the expense of business models, and unless they show that they're willing to do that, I'm unconvinced there's going to be real change.
Can they do that? Will the shareholders start screaming and saying you're abdicating your responsibility to make us rich?
I think we can all be on Team Humanity here, shareholders included, in saying we see where this road is leading us. And it's not good. If we want to remain a functioning society, we need to change. So I hope the answer is “yes.”
What other remedies does your group suggest?
There are a couple of simple things that individual users can do just to fight the effects of digital addiction. One of our favorites is to turn your phone to black-and-white mode. What we've found is that just reducing the sugary colorfulness of your icons makes it a little easier for you to put down your phone.
Another is to turn off all notifications from non-humans. So: no apps, no likes, just stuff that real people said. And that immediately reduces the amount of buzzing in your pocket, and it reduces tech addiction.
I suppose there are a lot of Americans who say, I don't care if Facebook knows my kids' birth dates, for example. Is that the most alarming scenario for you — that people just don't care that the technology is taking them over, and that by extension it can take over the functions of democracy and consumerism as well?
That is indeed one of my biggest worries: it just doesn't feel like we're under attack by the Russians. They have said that they're in an information war with us, but it doesn't feel like we're at war. There are no bombs exploding. There's nothing in our “felt” sense that tells us that something is wrong.
And the same thing is true of tech addiction and the way we're being manipulated: it doesn't feel like there's somebody walking behind us, watching everything we do and then crafting messages to get us to do whatever that person wants. Even though that's what's happening.
You often hear people say, I don't have anything to hide, so why should it matter? But the truth is that, from the data points we willingly give up to Facebook, algorithms can predict your sexual orientation, the likelihood that you have a drug problem, whether you're more neurotic or open, and what kind of message will land with you most precisely to get you to vote for a particular politician.
Are you and your associates talking to some of the tech companies where you used to work to say, look, this is how stuff has to change?
We do try to talk to these companies, and in fact Tristan spent years on the inside at Google. [He] had a presentation in 2014 in which he started to talk about the vulnerabilities that our business models create.
And it became the most talked-about thing at the company, and [he] asked that it be brought up at the next all-hands meeting — which they didn't. [Center for Humane Technology adviser] Sandy Parakilas, who was at Facebook helping to run the team that handled privacy-policy violations on its application programming interface, brought this stuff up to them, I believe in 2012. So we've tried from the inside, and that just hasn't worked. Now it comes from being on the outside and applying public pressure.
Imagine if, tomorrow, Zuckerberg were to say: You know what? You're right. These are the ways my platforms are being abused. And instead of just optimizing for time on site and page views, which optimizes for outrage and filter bubbles, I'm going to set my thousand, ten thousand engineers to figuring out how to heal the divides between people. That could be really powerful. The question is, will they?
And what are the odds of that?
It doesn't seem great. The thing I really start worrying about is that right now, the level of trust and shared reality is incredibly diminished. Truth has decayed. But I think it's going to get much worse, because we're just at the brink of cheap AI and algorithmically generated fake video.
I think very soon we're going to have bots scraping the top news, taking the [people in them] and their words, and just putting them into every possible meme. So when scandals break, you'll be like, Oh, I've already seen everything. I've seen every possible scandal video; every possible sex tape has already been created.
[In this scenario] when the Trump “Access Hollywood” tape came out, you would say, Yeah, but I can listen to 100,000 other versions of this, with him saying other things. Anything that's within the research community now is going to be an app on your smartphone within a year.