How much would you need to be paid to give up your Facebook account for four weeks?

That was the question a group of researchers from Stanford asked thousands of Facebook users last year in an effort to better understand how the social network affected issues such as political polarization and mental well-being.

The study — which paid some users to abandon Facebook and encouraged others to give it up by using just their self-control — found that cutting Facebook out of your life has a number of consequences. Many of them are positive.

The study, which was published late last month, led to four key findings:

  1. People who gave up Facebook spent less time online — their Facebook time wasn’t just replaced by other apps and websites. People spent more time watching TV, but also more time with family and friends.
  2. People were less informed — but also less politically polarized.
  3. Giving up Facebook improved people’s health. The study found that, on average, those who gave up Facebook reported “small but significant improvements in well-being.” The study also found “little evidence to support the hypothesis suggested by prior work that Facebook might be more beneficial for ‘active’ users.” In other words, engaging on Facebook didn’t make people feel better, as Facebook has suggested.
  4. Those who left Facebook temporarily said they planned to spend less time on Facebook after the study concluded.

This isn’t the first study to explore the health effects of social networks. But in a world where Facebook is now used by more than 2.3 billion people per month, studying its impact on mental health, news distribution, and tech addiction has never been more important.

“We were having [those discussions] without really clear-cut causal evidence of what was the real effect,” said Matthew Gentzkow, a Stanford economics professor who is one of the study’s authors. “So we were hoping to provide that.”

Recode spoke with Gentzkow about his findings, and about how social media might be “fixed” down the line. You can read the full study here and an edited version of our conversation below.


Kurt Wagner: You timed this to coincide with the run-up to the midterm elections last year. Why was it important for you to study social media’s impact during that specific time period?

Matthew Gentzkow: I wouldn’t say it was essential. I think we could’ve done this at a different time. But for looking at things like how Facebook affects polarization, or whether people are learning any real information from it related to politics, it felt like it would be great to do this at a time when that was really at the front of people’s minds, and a central part of what was going on on Facebook.

The polarization element has been a knock on these social networks for a long time. Were you able to detect how much of the polarization is really caused by Facebook — their product, Facebook’s technology — and how much of it is a result of people simply surrounding themselves with like-minded people?

I don’t think we can really tease those things apart. You either have Facebook or you don’t. You can’t randomly assign people different flavors of Facebook. I think it’s notable that you see polarization go down at the same time that people’s overall news knowledge and news consumption goes down. How extreme people’s views are tends to be very correlated with how much they’re engaged in politics. For a lot of these people, being on Facebook led them to just be reading more, consuming more, talking more about politics, and that doesn’t have to be specific to anything about Facebook’s algorithm or to what it’s feeding people.

When I hear that deactivating Facebook impacts your news knowledge — on the surface that sounds like a bad thing, that people are less informed. Is there a benefit to being less informed?

I think in some ways that reduced polarization is the benefit. It’s a pretty deep question on some level: Would we rather have people who know less, are less engaged, and are also less upset? Or would we rather have people be talking about politics more, be more engaged with it, know more about it, and thereby also have deeper divisions in society? I don’t think there’s any simple formula either way.

The real goal is: How do we get more of the good information while maybe dialing down the extent to which everything gets weighted towards more extreme, more inflammatory [interactions]? We would need to recognize that fundamental trade-off — that people being upset and polarized, to some extent, is part and parcel of having a democratic society.

What is it about the internet, or Facebook in particular, that tends to create an environment that leads to extreme interactions that might not exist elsewhere?

Here I’m just speculating, but … we know from other research that people’s social networks are much more segregated by ideology than other media sources, say traditional print media or television. You’re much more likely, if you’re a conservative, to be watching the same cable TV station as a liberal, or reading the same online news sites as a liberal, than you are to have a liberal who’s your friend, or your co-worker, or your family member.

So the fact that social networks are very segregated by ideology means if we set something up, where now we’re going to filter all the political content people see through their social networks, that’s going to, in a very simple way, tend to make it more extreme.

The other thing I think is important is that [social media] weights what people share and what people like. What people share on social media can be very different from what they value, what they think is important — even what they think that their friends ought to be reading or would benefit from reading. The motivations for sharing stuff are just quite complicated.

The front page story on the Wall Street Journal that says, “The unemployment rate is up this month,” or that says, “Donald Trump has agreed to go have a summit in Vietnam with Kim Jong Un,” those stories, they’re not really exciting to share with people on Facebook, and those sorts of stories tend not to get shared that much. But that doesn’t mean people don’t think that they’re relevant, or important, or valuable.

Just because that’s not inflammatory enough? Or it’s not shocking enough?

What exactly are people’s motives when they click “Share”? That, I think, is something we need more research on. It’s not just that I’m choosing for my friends to see this. I’m also choosing for my friends to see that I shared it, and so that means content that kind of signals my identity is going to get shared a lot more. If we’re all on the blue team, kind of rah-rah cheerleading partisan content that favors the blue team might be something I’m going to tend to want to share in that sense, because I want everybody to see that I’m part of the team. If I read a story that I think is actually quite thought-provoking and interesting and important from the other side, I might be much more reluctant to share that, just because I don’t want people to misunderstand and think I’ve become some kind of red-team supporter.

I think it would be interesting to imagine an experiment where you asked people anonymously, “What would you like your friends to read more of?” I suspect that what people would say they would like their friends to read more of might be quite different, actually, than what they share on Facebook.

Facebook has always argued that people need to use their true identity because that cuts down on things like bullying or harassment. But you bring up an interesting point, which is that if I didn’t necessarily have to make everything I post a part of my online reputation, maybe I’d be a little more authentic in what I’m saying or what I’m sharing. Twitter obviously has anonymous accounts, and I think that you could argue that anonymity creates a lot more problems. But do you get a sense that true identity is better than anonymity, or do they both come with their own sets of problems?

I actually think the problems with anonymity are bigger. I would not at all recommend that Facebook move to some anonymous sharing system.

A real solution is something kind of old-fashioned — and I think we’ve seen some sort of move towards this in the last couple of years — which is just human curation. The basic premise of social media — that we can cut out the human curators and just have this crowd-sourced algorithmic determination of what people should see — I think we’ve learned that while that works great for certain things, it works fairly poorly for other things. It works fairly poorly specifically for things like news and politics in a way that gets pretty predictable.

For the whole rest of history, more or less, the way we have solved this problem is, basically, I pay somebody to think hard about what content I ought to read today and give me a recommendation. So that could be a newspaper editor or the person putting together the nightly news on TV. This has underpinned most media for most of history. The healthiest settings in which news and politics content can be consumed need to have a substantial element of that.

These companies have long argued, “Hey, we’re not media companies.” They don’t want to be media companies. But if they truly care about people’s health, and they truly care about people being informed, can they be a hands-off platform and accomplish those goals?

I think they have become media companies by accident, not at all by design. The new platforms were not designed for the purpose of being a news media vehicle. It hasn’t worked out so well. I don’t know what that means for what they should do going forward … But certainly if we were designing things from scratch, the platform we would design for the purpose of news consumption and political information would be quite different, and would look much more like traditional human-curated content.

One of the things I thought was really interesting was that you found that deactivating your Facebook account did not have a “detectable effect on fake news knowledge.” Can you explain what you mean by that? Does that mean that this whole notion of “fake news” is actually a smaller deal than we made it out to be?

I would be careful with that. We went through the most-shared, most-viral false stories on Facebook during that period, pulled out a few of them, and asked people, “To the best of your knowledge, is this claim true or false? Or are you not sure?” If it were right that lots and lots of people on Facebook were exposed to those things and were also being led to believe them, you might have thought the people who were still using their Facebook accounts would be more likely to believe those false claims, and those people who deactivated would be either more likely to say they were false, or more likely to say that they weren’t sure. We didn’t see any clear pattern in that direction.

However, the precision of those estimates is pretty low. We can’t rule out some meaningful number of people having been exposed to those [false stories]. Or maybe, on average, not that many people are persuaded, but small numbers of people are persuaded, and that still matters. If a tenth of a percent of people are convinced, say, not to give their kids vaccines, then we would care about that from a social perspective. Even though that sounds like a small number, it’s still a bunch of kids who are at risk of getting sick.

You found that people reported more positive social well-being when they deactivated their account. Do you think that was specific to Facebook use, or do you think that was reflective of our dependence on technology more broadly? Can you distinguish between the two?

So, we can’t distinguish between the two, and I think the results are entirely consistent with it being at least in some part the latter. People were using Facebook an average of an hour a day. So that’s for sure some big chunk of it. Contrary to what some people would have predicted, people use all digital stuff less when they deactivate their Facebook accounts. You might’ve thought that if you turn off Facebook, people will switch to using Twitter more, or switch to using Instagram more, or switch to reading Recode more. They don’t do any of that — they’re just on their phones less.

We asked a handful of people this question for Facebook’s 15th anniversary, which was just a couple weeks ago: Do you think that Facebook has been a net positive or a net negative for humanity?

[Chuckles] I don’t think I’m going to answer that one. The truth is, I really don’t know, and I think our research gives you a long list of positives and negatives, but it doesn’t really give you a metric to add them up. I think how you add them up depends on a lot of things, including your personal values: how much of a problem you think political polarization is, and how much of a benefit you think people having social connections is. Those are tough things to aggregate.

You asked how much people would need to be paid in order to deactivate their account for four weeks. When you asked that, what was the general feedback you got from people? Were you surprised by what they said?

The median valuation was around $100 a month. But there’s a lot of spread. There are a lot of people all the way from $0 to $100, and then a lot of people who gave really big numbers. Sort of like, “You couldn’t pay me enough to give up Facebook.” I can’t say conclusively, “Has Facebook proven good or bad for society?” [but] what is clear is that the people who use it value it a lot.