How you get addicted to apps (Tristan Harris, founder, Time Well Spent)
On a recent episode of Recode Decode, hosted by Kara Swisher, Tristan Harris, founder of Time Well Spent, talked about the ways that tech is designed to manipulate users and hold their attention for as long as possible. Tech companies, he said, need to question themselves about the consequences of the products they’re developing.
You can read some of the highlights from the interview at that link, or listen to it in the audio player above. Below, we’ve posted a lightly edited complete transcript of their conversation.
If you like this, be sure to subscribe to Recode Decode on iTunes, Google Play Music, TuneIn and Stitcher.
Kara Swisher: Today in the red chair I am happy to have Tristan Harris, the founder of Time Well Spent. It’s a movement to create apps and other tech products that value the time of their users. He previously co-founded Apture, which was acquired by Google in 2011, and then at Google he spent nearly three years studying the intersection of ethics, philosophy and tech. Tristan, welcome to Recode Decode.
Tristan Harris: Thanks for having me.
I have been interested in you for a long time and what you’ve been doing. I actually read this fantastic piece on you in the Atlantic on some issues that I’ve been thinking about a lot. I thought it was just astonishing that you’re thinking the way a lot of people are thinking now, about the impact of tech in a way that’s not so good for humanity. Why don’t you give us some background first. Give me your quick five-minute bio.
Well, I grew up actually in the Bay Area, in San Francisco. Since I was about 10 or 11, I thought I wanted to change the world and work for Steve Jobs and work for Apple, so I did when I was 18.
Why was that? You just saw what he was doing — think different?
Yeah. Just to sort of anchor this conversation, I think about how technology is empowering us or not. I was really inspired by the computer as a bicycle for our mind. I wanted to work on the next Macintosh. I worked at Apple when I was 18, when I was actually here at Stanford doing internships. I worked on this company called Apture which was … quickest way to explain it is a one-click explanation engine for things when you’re on the internet. Actually, related to this conversation, I first became aware that there’s a difference between our social mission, which I kept telling myself as CEO …
You are a very classic geek, that you want to change things, that it was more than just selling plastics or working at a Walmart or figuring out finance or something like that.
Yeah, yeah. The motivation for the company was how can we make people more curious and make it easy to learn about things. At the same time, we were selling our explanation product to publishers like the New York Times or TechCrunch or something. I faced this conflict because publishers wanted us to just increase engagement, to make people spend more time on their website.
Engagement, what a word.
Engagement, what a word, right. I’m very skeptical of words like this.
It used to be a nice thing, like you get engaged to be married. Now it’s just horrible.
I would keep telling myself that oh no, no, but we’re helping people learn about things. But then to sell the product successfully, I had to just increase time on their website or make them more money. I kept pretending that this was the same thing, and I started basically really questioning my own beliefs about what is this thing I’m calling educating people or helping them learn about something? When is that actually happening, and when am I just as a founder telling myself some positive story, which you have to do to inspire investors and employees?
Sure, absolutely, but when in fact you were just sort of … get them to keep eating.
Get them to stay on there. In terms of my background, the other thing was that actually very close to where we are right now at Stanford, literally a building down the street, I studied at the Persuasive Technology Lab with B.J. Fogg.
He studied habit formation. First guy to …
Many companies have used his work.
Oh absolutely. Him, and then later Nir Eyal, who authored the book “Hooked.” There’s essentially this … most people don’t know this. There’s this whole discipline and field of persuasive technology. There’s basically a playbook for how to persuade people’s minds to use products more successfully.
The nicotine of tech, essentially.
Yeah. That imposes a moral view on all these things. You could argue, as many do, these persuasion principles are happening all the time, and then these guys just want to use it consciously. I got really interested in the ethics of that. What is the ethics of persuasion, especially when the consequences in this case now affect billions of people?
To ground that conversation and that class: That year, in 2006, Mike Krieger, one of the founders of Instagram, and I were project partners. I saw many alumni of that class go on to be early in the ranks of Facebook and Instagram. I got really concerned about how do we, with this power, persuade people to spend time on things? How do we know that that’s good? How do we truly verify?
Do you even ask the question to start with?
Do you ask the question? In that class, we did ask the question, to be fair.
Right. Persuasive technology means how to use technology to be more persuasive, or to get people to do things. How did you look at it? You could define that in a number of different ways.
Yeah. Specifically, I think the way B.J. defines it is that persuasive technology is technology designed to change people’s attitudes and behaviors.
To get them to do whatever.
To get them to do things. In your product, you want them to finish signing up on an email form.
Click here. You want them to subscribe to a newsletter. You want them to scroll for longer. You want them to invite their friends. You want them to fill out a profile on LinkedIn. There’s just all of these things that products need to do to be successful, and this class was just reexamining what we traditionally call design, in terms of a different frame of persuasion. How do I persuade them to do something?
Which design does — like, come into this door. Isn’t this attractive, this product, the shape and feel of it?
Absolutely. None of this is really new. I mean, this has been going on in marketing or store design like Walmart for ages, but when you apply it to technology, suddenly the scale is totally different.
I’ll never forget, one time I was at Circuit City — this was a million years ago in the ’80s. I was covering … Circuit City at the time was a big retailer. There was a wall of televisions and I was walking through it with the CEO, he goes, “Ah, the wall of confusion.” I was like, “What?” They purposely put up so many televisions you don’t know what to buy, and then they move you to the one they want you to buy that particular day. Some of them were done badly, so that you wouldn’t want it. I was horrified. Then of course they have the race track, where you can’t escape. It was a similar kind of thing which reminds me a lot of what’s happening in technology, which is more effective, actually. You don’t even know it’s happening.
Totally. Most people have some sense that when you go to Walmart or just a grocery store, that the design is explicit. There’s probably some team of people somewhere who had to deeply think about all this and …
End caps and where things are placed on shelves.
Totally, the milk and … the two reasons people go to the grocery store most often are for milk and for the pharmacy. And so they put those …
In the back.
There’s a big debate about whether milk is at the back because it’s the closest place to load it in and keep it fresh, but needless to say, there’s still just all this persuasion there. What most people don’t know is that with technology, the common narrative about these tools is that these are just neutral platforms, that it’s up to people to choose how to use them, and that it’s just up to us to learn how to adapt to this new technology. That just ignores this conversation about persuasion.
Talk about that ethics of it, because you talked about it in the class. Did anyone care, or just say yeah, yeah, yeah?
It is part of the curriculum. Both then and now. B.J. likes to point that out. He’s right to. I think we had one three-hour class that was just about the ethics. Really ethics is just about asking the question. It’s just about really being honest with yourself — how would I know if this is actually good for someone, and what are the variables?
We can go into it, but it’s really a rich and complex field. Does the persuadee know what the persuader’s goals are? Are they aware? Do they know that the persuader has a huge amount of expertise in their methods? That there are methods even present? Does the persuader have respect for the persuadee? Does the persuadee get the ultimate thing that they wanted? What are the persuadee’s goals? Does the persuadee even know their goals? Most of the time we don’t know our goals when we’re using software. There’s just all of these rich variables.
I would say no to all those things. I think we’re being manipulated almost all the time.
Yeah, and words like manipulate, persuade, coerce: There’s official nomenclature for some of this stuff, but yeah. This is a rich topic, and this, I think, is the conversation: What is persuasion, and what is the ethics of persuasion?
All right. You do this class, and then what? You create your company, trying not to lie to yourself about the fact that you’re trying to get people to use the product more, which is the goal.
Yep. Then I justified it, obviously, by saying, well, people are benefiting from it. We had good intentions; I just became aware of this conflict. The attention economy demands … product design used to be about building a product that functions well, that helps people, and now design is sort of … I recognized, in working on the company, that design became subsumed into: How do I get people to use it more? How do I get people’s attention? How do I hold them here? Almost all designers are now in this totally different role of just getting people’s attention. So I did that with the company and started to realize that … our company got talent-acquired by Google and I was …
“Acqui-hired” by Google. And then, about a year into being at Google, I was actually working on the Gmail team, and we were working on Inbox, which is the successor to Gmail. I was part of conversations about how do we make it easier or delightful to use your email client, how do we build a better email client, and I felt like we were missing this deeper question, which is: How much time do we all spend on email? Just so much time.
Waste of time.
And how much of any of that is ever adding up to a positive contribution to your life?
Maybe two emails a week or something.
Such a small number, right? Here I am, part of this room of good, smart people trying to do a good job in building a great product, but there’s this really deeper kind of subtler question, which is how would we design this not just so it’s kind of cool and has some nice animations and makes it easy to do a few actions and make it simple to use and instead ask, how would you pivot this entire paradigm to actually be about those two emails a week, that actually make life better?
And that conversation wasn’t happening. And there’d be decisions being made about, for example, should we, by default, send you a push notification for new emails when you get them. And I’d hear some engineers and designers just quickly think through this question and say, “Yes, sure, why not?” And it’s like, wait a sec. In that one moment, as a consequence of this, a billion people would be interrupted at dinner, would not be fully with their kids …
Which they’re not thinking about at all.
I mean, I think it crosses the mind, but the question is whose job is it in that room, in space and time inside of one of these concrete structures somewhere within a couple miles of where we are right now? Whose job is it in those product design rooms to ask that question about consequences?
Well, I would say: Do they want to ask that question? Because one of the things you do get from them when they explain why they went to Google (I’ve heard this from a million people going to Google or Facebook, whatever) is, “I realized I had the possibility to impact a billion people, and that was so attractive.” And I would always ask, “How were you impacting them?” And that was never the second part … it was just, “I get to impact them.” And I’m like, well, that could be for bad or good, and believe me, when I moved into that, it was almost like I was speaking Latin to them. It was fascinating, because they never thought of that. There’s nothing within the companies to … they never think of that. It is just doing it for doing’s sake. To increase engagement, or whatever the goal is.
For sure. What scares me more is when there’s usually … because there is a positive intention in the tech industry to make things better …
Yes, that is the intention.
The assumption is that to impact them at all would be to impact them positively.
What scares me the most is again, how would we know if we’re wrong? How could you, inside the mind of someone who didn’t see something, know what you don’t see?
Or do we measure … what are our standards for measurement? So what did you do there? Were you being irritating to them by saying, wait a minute, shouldn’t we think about this? Or did you just go along with it?
I mean, when I worked on the team, I was participating in the conversations, but I just had this growing feeling of something is wrong. And at the same time I felt like … I was using at Google especially … You just get flooded with emails and flooded with calendar invites and you’re using technology a lot, all the time, and I was just feeling like, man, is any of this making life better? I kind of had, personally just on my own, enough and I started making this presentation, which is referenced in that Atlantic piece, which was …
When I say presentation, I mean it was just a deck, a slide deck. But I made it in a very high-impact way. Every slide was a big photo with four words, so you’d advance really quickly through it. It said, never before in history have basically 50 mostly male, mostly 20-to-35-year-old, mostly white engineer-designer types within 50 miles of where we are right now had control of what a billion people think and do when they wake up in the morning and turn their phone over and … look at this thing. We should basically …
We had enormous responsibility to get this right. And it had a whole bunch of stuff in there about cognitive biases, and how we play on people’s psychological vulnerabilities by accident and just made this case. So I made this presentation thinking this is kind of like a thing I just personally care about, right before I was planning to leave. And I sent it to about 10 people, and when I came in to work the next morning and I clicked on the Google slide deck that shows you in the top right how many people are looking at it, and there was something like 100 people looking at it right then. I thought, well, I only sent it to 10 people, so clearly it was starting to spread. I looked at it later that day, there was like 300, the next day there was 400.
I went to the Google bus in the morning, and many of the laptop screens on the Google bus had it open. So there was really this moment. It was the No. 1 post on Memegen, which is the internal company thing, and it had basically gone around the company, with people saying, whoa, yeah, we should really be thinking about this. And as a result of that, I was fortunate that, not that I’d done it in a way that would get me fired, but I was basically offered the chance to stay and generously given a space to think about the ethics of all of this stuff, and did so for the next couple of years.
To Google’s credit, they very generously let me do that. But it’s very hard to change these systems from the inside, because as anybody knows, trying to change any big system from the inside is very challenging. So I left actually in large part to have conversations like this one.
You stayed there for how long doing this?
A couple of years.
A couple of years. That’s how they do it. They keep you: “We should think of that.” I could see them patting themselves on the back: “We’re thinking of it.” It’s like an episode of “Silicon Valley” in a lot of ways.
Yeah, it’s complicated.
Yeah, yeah, yeah. What did you do? What was your impact there when you were there? You were the ethics guru apparently, or something.
Yeah. I mean, it’s not like I was the guy and then everybody came to me. I actually felt kind of the opposite. I felt like I was trying to figure something out, which is: How do we work out this question of when is email actually making life better? What does it mean? It’s a lot of philosophy. It’s design. It’s thinking. It’s looking at examples. It’s attention. It’s well-being. It’s life choices. It’s about free will. It is about when is someone making a choice. It’s about documenting all of the cognitive biases that exist, taking all the stuff from behavioral economics to saying like where do people’s minds get tripped up?
We’re making intersections between two maps. One map is where people’s minds get tripped up. Just like a magic trick: I can stun them, I can add a slot machine, I can give them social obligations they feel like they have to repay. It’s like: Where are all the things that trip up minds? And then, where are the products that we make doing that, or allowing that to happen, or encouraging that to happen? And when is it resulting in good or bad outcomes?
Did your stuff there result in any changes to products?
Not really, unfortunately, and so I made some proposals.
The reason I stayed is that there are really only two companies that can actually change this system.
It’s Apple and it’s Google. Because they’re the mediators between … there’s a billion minds through which then all of these apps and websites and all the stuff has access directly to your mind, through the vehicle, which is the smartphone or a web browser. And only two companies make those things. I mean, web browsers, sure, but really it’s just the smartphone that’s the dominant medium. So it was really about changing Android and iOS’s home screen, notifications and the web browser.
So for example, little things like … let’s imagine that you could mark websites that you wanted to be more mindful of your time. I don’t want to use Twitter for more than about 15 minutes. I know there’s a thousand engineers at Twitter whose job is to make you use it as long as possible. I want to use it for 15 minutes, so imagine, you could mark Twitter that way and it disappears from your new tab screens, so it doesn’t ever invite you to go there when you don’t want to.
Then when you hit T and it auto-completes, it shows you, inside of the auto-complete, the number of minutes spent there so far, which is fine. And then it changes color, let’s say, when you’ve gone past the 15 minutes or whatever and you’re about to go there, and then it says, “Are you sure?” And maybe some breathing thing comes up for half a second. There are just different ways you could do this. I don’t have exactly the right solution.
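The per-site budget Harris sketches here, mark a site, count the minutes, and change state as you pass a limit, amounts to a small state machine. A minimal illustration in Python (the class name, the 15-minute limit and the 80 percent warning threshold are all illustrative assumptions, not any actual browser feature or API):

```python
# Hypothetical sketch of the per-site "mindful time budget" described above.
# The thresholds and names are illustrative assumptions; no real browser
# or OS API is involved.

class TimeBudget:
    """Tracks minutes spent on one marked site and reports a UI state."""

    def __init__(self, limit_minutes: float, warn_fraction: float = 0.8):
        self.limit = limit_minutes
        self.warn_at = warn_fraction * limit_minutes
        self.spent = 0.0  # minutes used so far today

    def record(self, minutes: float) -> None:
        """Called whenever the browser notices time passing on the site."""
        self.spent += minutes

    def status(self) -> str:
        """What the autocomplete entry would signal to the user."""
        if self.spent >= self.limit:
            return "over-limit"   # change color, ask "are you sure?"
        if self.spent >= self.warn_at:
            return "near-limit"   # start surfacing the minute count
        return "ok"               # stay quiet


twitter = TimeBudget(limit_minutes=15)
twitter.record(10)
print(twitter.status())  # "ok": 10 of 15 minutes used
twitter.record(6)
print(twitter.status())  # "over-limit": past the 15-minute budget
```

The point of the sketch is only that the intervention is mechanically trivial; as the conversation goes on to note, what was missing was the incentive and the UI access, not the engineering.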
And the answer I got back was, well … actually, it’s interesting. One of the common arguments in the tech industry is, “Look, users have free choices. They can make their own choices.” And it’s really just not 100 percent true. People’s minds are being shaped in these moments of manipulation. That is very much a Google ethos, by the way, Google much more so even than Apple, and Facebook certainly. These three companies have very strong libertarian values: People make their own choices. And also, in this case, there are opportunities for extensions, Chrome extensions, to do some of these things. Of course, it’s a long story, but some of the extensions don’t give you access to the places in the UI you’d actually need to really work on this problem.
I would say they just didn’t want to do that. It’s not in their interest to make you aware of how much time it is sucking out of your life.
Well, so it starts to point the finger at maybe a growing problem. I don’t think any of these companies are evil or that the people are evil. I think there’s some bad incentives. We measure something that’s fundamentally opposed to what’s good for people. The question is how honest can we be about that when we discover it?
Well, we’re going to talk about that next, because I’ve had some real big run-ins with people at all these companies recently about fake news and alternative facts, and all kinds of things that I think they’re absolutely responsible for, and they absolutely do not think they are. We’ll talk about that more and what you’re doing with Time Well Spent.
We’re here today with Tristan Harris, the founder of Time Well Spent. He’s actually a geek who is thinking about the impact of the things he makes in technology. We’ve been talking about his time at Google, where he was trying to get them to think about the uses of the technology that Google, such a powerful player in the industry, makes, but then you left. You were doing some of the ethics around how we get our technology and what the responsibility of the platforms are. Talk about what you did after you left and then I want to get to this idea of who’s responsible, and the fact that most platforms that I encounter want to abrogate their responsibility almost totally for what they’re doing.
And now it seems like the chickens are coming home to roost, as they say. Right now especially in this election, and things like that, where social media has become weaponized, where there’s real-world impact. And one of the things I was thinking about was that they all talk about how they want to change the world, but now that they actually did, they don’t want to take responsibility for it. So talk about what you did after Google. So you leave: You sold your company, you stayed there and then you left. Typical journey of …
Yeah, yeah. Before I left, I gave a TED talk to introduce this concept of Time Well Spent.
Yes, which got a lot of attention.
And that helped kind of launch this … I mean, what it was meant to do is to say, you need something to fix this. You have essentially a perverse incentive in a market, and you need to be able to say, how do we get out of this? And part of it, just like I learned from inside of Google, is that Google wasn’t going to do something until consumer demand was there. It’s like some of the things you said, that they didn’t want to do it. I would say, if there was evidence that a million people just said, “God, I am distracted all the time. I feel like this stuff is manipulating my mind, and I want the technology companies to help me with it,” that consumer demand could speak for something different.
Would that ever happen?
Well, it doesn’t happen on its own. Not that I arrogantly thought I could make all that happen, but I wanted to create a conversation about consumer demand as well. So the TED talk, in part, was about saying: Just like with the organic food movement, Walmart had no incentive to do anything other than race to the bottom for the cheapest apple at the cheapest price, or for the lettuce, unless consumers spoke up. First of all, there’s education: Hey, the cheapest thing isn’t actually good for us, because the chemicals that are put on there to get that price don’t make it good for us. But then people have to be able to define and articulate what that thing is that’s worth paying for. And with technology right now, we think it’s free. When it’s free, how much are we paying for our Facebook accounts? Nothing.
It means that while they want to benefit us, that’s not who they’re beholden to. They are beholden to the advertiser. We need something like an organic food movement for the tech industry. That’s one market-intervention style. Other ways involve regulation; other ways involve taxes, which are more on the punitive side. So the one collaborative solution is to say, let’s charge more for something that’s actually good for people, or aligned with people’s interests. But then that gets into a whole other conversation about admitting when parts of the product today are not aligned with people’s interests.
Meaning you’re giving the poor people the bad food while the rich people enjoy the fruits of Whole Foods, or something like that.
Right, and how do you, if it’s the same company that has to provide that, for example, because you can’t fork it into two companies. Imagine if Safeway said, “Well, we have Safeway, but then we have Safeway Plus.” You’d be like, “Hold on a second.”
Whole Foods is different. It’s a separate company. So how do you basically segment that out? I was going to mention the cigarette industry. I remember when there were conversations, in discovery, that maybe this is not good for people.
Right, “maybe.” All these dead people here.
But most people don’t know, the cigarette industry actually spent over a billion dollars developing a safe cigarette. And one of the reasons they didn’t release it is: If you release a “safe” cigarette, what does that say about regular cigarettes?
So there’s a really important, subtle conversation, which is how do you navigate to the better territory. It’s like when you’ve created a whole economy on something, and then you realize, “Uh-oh, maybe it’s based on something that’s not totally good.” Slavery was another interesting example. For a while, people thought that was okay. And then it wasn’t, and then you realize, “Uh-oh, we can’t actually just say this is bad. We have to get out of it. Too much of our economy is dependent on it.” So it’s much subtler the way …
It’s a good example.
It’s much subtler the way that selling attention is bad. It’s not like it’s evil or it’s slavery, or it’s killing people. It’s much more subtle. It’s much more diluted.
And if they stop, others might just rush in.
Right, exactly. So it’s a game theory thing. If I don’t do it, the other guys still do. I’ll just die. I can’t kill myself for my shareholders.
Just because I feel better.
Just because it’s the right thing to do or something like that. So the question is how do you lift out of a repugnant situation into a better terrain?
So how do you do that?
First of all, it involves actually honest conversation: the recognition of what’s good, what’s bad, how do we talk about that, how do we know? With slavery, I like to give this example sometimes. Historically, a lot of people don’t realize, obviously, that took place over a long, long period of time. That transition involved a civil war. The British government had to give up slavery, too. When they decided they were going to do it, they calculated that it was the equivalent of losing 2 percent of their GDP every year for 60 years to basically give up slavery.
Once you discover that something’s not good, sometimes you can charge extra and make up for the difference. You kind of blend between these two worlds. And sometimes you might just have to take a small [hit].
It’s just a cost.
And it’s an externality. I think a good metaphor for what’s happening right now is pollution. When we all got cars, it was great. We could now go to all these new places and totally transform civic life, and we could do things we couldn’t do before, but it also added pollution to our environment, and it isolated us. It put us in individual transportation units. So it’s not that cars are bad, but the first versions were really polluting, and they isolated us. Cellphones are kind of similar. They’re taking us to all these interesting new places. They’re doing lots of amazing positive things. It’s not a one-sided conversation.
Sure. That is the problem.
People get tripped up when something is so good and offers them so many amazing things; they then say, “Well, then it must not be bad,” because our minds tend toward this black-and-white thinking. So the question is, can we not get rid of the cars or get rid of the cellphones or get rid of Facebook, and instead say: Can we build the green version, the one that’s both empowering and doesn’t pollute our internal environment or our social environment?
So you’ve created this organization to do what? Just raise awareness of these things?
We’ve done a few things. I think there are a few different activities that need to exist. One is we’ve taught design workshops, to try and also just put forth, “This is a way to think about what a Time Well Spent design experience is.” And we’ve posted a few of them. We’ve gotten designers from different companies. They’re all independent, though.
We’d like to actually elevate it to a much higher level. I think there also needs to be a convening force in the industry. Because right now, essentially, Apple, Google and Facebook are kind of like these private companies who collectively are the urban planners of a billion people’s attentional landscape. We kind of all live in this invisible city.
Which they created.
Which they created, and the question is … Unlike a democracy, where you have some civic representation and you could say, “Who’s the mayor and should there be a stoplight there?” Or a stoplight on our phones, or blinker signals between the cars or these kinds of things. We don’t have any representation, except if we don’t use the car or we don’t buy it. And that’s not really representation, because the city itself …
Attention taxation without representation.
Maybe, yeah. But I think there’s this question of how do we create that accountability? First of all, who are the people that ask the questions about what’s that balance between what’s good for the business and commerce in that city, which we need to function well? Which is to say, yeah, people need to get to websites, buy stuff, buy games, buy things that they want. How do we make commerce successful? How do we make app developers successful? And also, how do we think about human values? Who’s the Jane Jacobs of this attention city?
Right. I was just reading her book, actually.
“The Death and Life of Great American Cities.”
There’s been a whole bunch of new books on her, and I find her fascinating. Especially where she started out, which is the very opposite of where she ended up, which is interesting. She’s a fascinating person. This is Jane Jacobs, who was an urban-planning thinker, who really talked a lot about human cities.
What makes a great city great from a human perspective? Eyes on the streets, sidewalks with the stoops in New York. She studied the West Village.
And what was eliminated in so much of the urban renovation that created these wastelands and crime, and how it leads to stuff like that. We can see a lot of it today. It’s a very similar thing.
Totally. It’s almost a different way of talking about it, of inspiring people. Like, who at Apple and at Google are the Jane Jacobses of this attentional city? Who has the time? Who’s given the mandate and the explicit carve-out?
It’s our job to think about what’s really good for people. In terms of Time Well Spent, we want to bring together the leadership of those people who really care about having that conversation, to talk about it. And the question is, you just mentioned one of the tricky things: There are a lot of upset people in the world who are mad at these tech companies, for many good and bad reasons. When you start to talk about it in public or talk to each other, there are some NDA and confidentiality issues. If we start talking like this, suddenly we bear the responsibility entirely; everyone is going to be upset at us. I think there’s a question of how do we make it safe and cooperative to acknowledge that this is our role: We are designing what a billion people think and believe every day.
I think the only thing is, I don’t think they believe it, first of all, nor do they think about it. And then when you challenge them — I was just telling you this before we started — I’ve been recently driving Facebook executives crazy over this fake news thing. Every time they release something, it’s like, this is ridiculous, once again you’re saying something completely stupid. And I was getting into it with one executive, I won’t say who it was, pretty high up. This person said, “Oh, you have so much vitriol.” And I said, “That’s a thing you say to a woman to get her to shut up, but that doesn’t work with me.”
And what was interesting is, it was not vitriol. I care about the civic life of our digital world. I really do, and I feel like you’re polluting it in a way that’s really dangerous, and you have real-life examples of the impact. And you want to pretend that you have no culpability, and that there’s nothing you could do about it. And then I outlined six or seven things they could do. And of course, it’s all like, “Well, we could …” No, it’s like, no, you actually could, if this concerned you. And then they’re like, “Well, we don’t want to weigh in.” But they will always have some sort of [impact]. “We don’t want to impact!” I said, “But you created it, you’re making money off of it. And you’ve killed off media, pretty much.”
“No, we haven’t …” “You kind of have.” Maybe consumers wanted it? That’s true, and therefore now you inherit the responsibility. So how do you get them to talk about it? Because this is a group of people that literally thinks it has almost no responsibility. Very few people within these companies think they have responsibility for this.
I think in part that’s right. I think many people don’t think they have responsibility.
I know their leadership doesn’t.
Yeah. I wonder sometimes though about — there’s that which people internally believe and that which is allowable to say to the public-facing audience. People don’t like to feel lied to, but sometimes, if you don’t know where information is going to go, people at companies almost have to say, because of their shareholders, “No, we don’t have responsibility.” It would be impossible for us to take on the liability of the elections of the free world as our responsibility, even though obviously, in the case of Facebook and Twitter, it’s shaping that. So I do wonder to what extent that responsibility is there.
We definitely need to change the consciousness to the point where people accept that that is happening. There’s no way around it. If I design a product where there’s a variable reward that shows up when I hit return (sometimes I get something and sometimes I don’t), it’s going to act on my mind the way a slot machine does. And if it does that, whether the company intended it or not, they still designed it that way. And it’s not that they’re suddenly evil because they’re doing this. It’s just that they have a responsibility, in knowing that that impact is there, to devote some engineering resources to fix it.
Talk a little bit about … Let’s use the example of fake news. I understand a lot of this has been hyped. There are so many factors that came into this election, which has been so ugly and has really divided this country, especially on social media. Twitter’s just become a hellscape at this point in terms of wading in there. It is an angry mob of different sides going at each other. But it’s strangely addictive. Give me your take on the fake news thing, because people have figured out that fear works.
Yeah, right. So the way to reframe a lot of these conversations, just to sort of name it, is again the attention economy. If you’re a news website, or a meditation app, or a game, or whatever, Netflix, you’re always competing for attention. You don’t exist, even as a meditation app, if you don’t get 10 minutes of that person’s attention.
It’s not about good people, bad people, bad companies, good companies. It’s just about everyone needs attention. Now the question is, how do you get people’s attention? You become persuasive. You evolve like an organism that’s mutating a new kind of arm that’s shaped in a new way, some new thing that’s just more persuasive than every other organism that’s out there.
So to make that concrete, let’s say you’re YouTube. You evolved this new arm called “autoplay the next video,” and it works, and suddenly people watch, let’s say, 5–10 percent more videos, just because you autoplay the next video, and you create inertia. Okay, so you do that. The other organisms, the other Netflixes and Facebooks out there, they’ll die if they don’t also do the autoplay the next video arm thing. So everyone is just evolving these persuasive strategies, and I call it the race to the bottom of the brainstem to basically seduce people’s psychological vulnerabilities.
Back to fake news. So what is fake news? Well, say you’re some guy (I think a lot of these guys are in Arizona or Macedonia) and you figure out, okay, I can get people’s attention by writing something that confirms their underlying suspicions, conspiracy theories.
Hillary Clinton is an alien lizard.
That’s the extreme example, and there are much subtler examples that are just using extreme verbs on existing truths. Let’s call it gray news instead of fake news or something. But you do that and you can generate outrage. So if I figure that out, outrage, bam, I’ve just got a new persuasive weapon. I can generate outrage and I’ll get the attention. Well, then you’ve just got Breitbart.
And outrage from the people who believe it and those who don’t like it, too.
Exactly. It works on everybody. That’s the thing about this. That’s what this conversation is actually about: a species, us, waking up to the fact that things persuade us even when we know they’re persuading us. I know that outrage persuades me. It works on me.
“I can’t believe they said that.”
And it’s a reaction that is so strong and overpowering in me, that I’ll still click on it.
And then you also have a social media ability to react so quickly. One of the things I’ve been thinking about a lot is, a lot of the stuff Trump does with his tweets, and the press reacts so quickly to them. But the 29th time someone said, “I can’t believe he said that,” I was like, why not? You heard it the last 29 times. Of course he’s going to say that. Everyone treats each one like a fresh hell, like they’d never experienced him saying something idiotic before, except the last 300 times. And it reminds me of the Maya Angelou quote: “When someone shows you who they are, believe them the first time.”
The thing I realized about relationships is that when you finally break up, it’s not because of something new, but because of the first thing that you saw.
The first thing that was a problem. So back to this idea, that outrage clearly works?
Now let’s say you’re Facebook. So far, your ranking algorithm says, “What should I put at the top of the News Feed?” Now keep in mind, Facebook’s business model is, it needs to get as much attention from you as possible. So in a totally value-neutral world where it doesn’t actually care what gets your attention, then it just says, “Okay, whatever my News Feed algorithm is looking at and says gets the most clicks or the most shares or the most comments, that’s driving the most engagement, so I should literally just put that stuff at the top.”
The problem is, if you don’t have values, you don’t know what human values are at stake. Like, is it true? Does it help me? Does it generate honest conversation or divisive conversation? There are so many different things we could be caring about besides whether it gets clicks, shares or comments. And those are hard to discern from an algorithm.
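To make that engagement-only ranking concrete, here is a toy sketch in Python. It is purely illustrative: the scoring weights and sample posts are invented, and this is not Facebook’s actual algorithm.

```python
# Toy sketch of a purely engagement-driven feed ranker.
# Illustrative only; the weights and sample posts are invented.

def engagement_score(post):
    # The algorithm only sees engagement signals: clicks, shares, comments.
    # Nothing here asks "is it true?" or "does it help me?"
    return post["clicks"] + 2 * post["shares"] + 3 * post["comments"]

def rank_feed(posts):
    # Whatever drives the most engagement goes to the top of the feed.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"title": "Long, reflective piece", "clicks": 40, "shares": 5, "comments": 2},
    {"title": "Outrage bait", "clicks": 90, "shares": 50, "comments": 80},
]
# The outrage bait wins, not because anyone chose it, but because the
# metric can't tell honest conversation from divisive conversation.
print([p["title"] for p in rank_feed(posts)])
```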
And they don’t want to.
The question is, do they?
No. They do not, because they’re always like, “We don’t want to be the one deciding.” I’m like, “But you should be, because you’re putting it up there.”
That is the stop sign for all of them. Except for Google, which is like, “We can tweak behind the scenes. Nobody knows what we’re doing.” I think the problem with something like Facebook is it has an inherent promise to its users that they can put up anything they want, and Google does not. So Google can easily tweak things where you’re not looking.
The funny thing about it, I call it Heisenberg morality. There’s still an algorithm today, and it actually does have a bias. It’s currently biased toward a certain set of factors, like whatever gets the most clicks or comments. And so long as that’s accidental or unconscious, they can say, no, that’s good. It’s good for people.
But as soon as you say, “No, we should explicitly, consciously choose,” what other values [do you choose]? Is it true? Does it do this? Does it have high reputational integrity? What’s its business model? How fast does it publish? Do we want to reward websites that publish the fastest, or do we want to reward websites that do these long, deep, evergreen, reflective pieces?
As soon as you get conscious about it, they say, “Whoa, whoa, whoa, whoa, whoa. Who are we to say what’s good for people?” And that’s why this goes back to philosophy. How can you agree on human values in a secular world, when businesses are implicitly putting certain values first?
Which they’re not doing in this case.
They’re implicitly putting the value of whatever gets the most attention. But they’re not seeing that as a moral decision.
Yes, they don’t, but it is in fact.
It is in fact.
When we get back, we’re going to talk about what we can do about this and where we’re going to go from here, because I think a lot of these social media networks especially are becoming cesspools. It was interesting: In one of the debates I had with the Facebook people, I said, fine, if you don’t want to have a choice in it, or you feel like it’s not your responsibility, your suburb is becoming … Suddenly there’s broken glass and trash and crap, and so your business isn’t as good. And that seemed to perk them right up. “What? Wait a minute. We don’t want a trashy place.” And I’m like, “Well, that’s what it is, and I don’t want to be there.” It’s an interesting question. We’ll talk about that and more when we come back with Tristan Harris, who is leading a movement called Time Well Spent.
We’re here with Tristan Harris. He is a really compelling young man who’s talking a lot about the implications of not just social media but all the technology we use, and how some of it can become very damaging as it’s woven more and more into our lives: what we should do about it, what consumers should do about it, what companies should do about it. So let’s talk about that first. What do you think the responsibility of companies is? Because most of them don’t feel that they have very much responsibility. They’re just making these things. “We don’t know how people use them.”
That just needs to change. And I don’t know how many interviews we need to do, but it’s so clearly … It is impossible to hold that stance.
They’re good at it.
I would love to bring those conversations to the forefront whenever they’re necessary, because it is just not true. Now, there are instances when it’s true and I believe it, but you can’t keep saying it. You’d first have to admit that it’s a problem before you can try to fix it. One of the background problems is fundamentally advertising. The business model of advertising. Specifically, engagement-based advertising, meaning I have an unbounded interest in keeping more eyeballs there for longer. So long as that’s true, I can’t get off this racehorse. Everyone is there — again, no evil people, just companies that, to succeed and survive, need to …
And if outrage works, we’ll use outrage. If fear works, we’ll use fear.
Right. I keep in mind a couple of quotes from the industry that I think are representative. The CEO of Netflix, Reed Hastings, saying, “Our biggest competitors are YouTube, Facebook and sleep.” Because if you get more sleep, you’re watching less Netflix. So if they can eat into your sleep budget, do they value that human need, or do they say, let’s blow it away? Or Leslie Moonves from CBS saying …
Les Moonves, sorry. “Donald Trump’s probably not good for America, but he’s been great for CBS,” and CNN said something similar. Again, it comes back to: Just because this is what works at getting attention or money, is it good? First is, can we change how we get our money? Could it not be advertising? And obviously this story with Medium has been going around: Ev Williams chose to fire — what is it? — one third of his … Not fired, but let go of one-third of his staff, because he’s fundamentally accepting that this business model of advertising in a media business has inherent conflicts, problems, externalities — again, using that business model generates pollution. One thing is, we need a different business model, and we need the conversation to not be what it’s been for the last 10 years.
For the last 10 years people were saying, “Oh, we need to pay for things instead of getting them for free.” But now we have evidence. Once we can draw the lines from fake news and the ruining of civic discourse back to advertising, we have the strongest rationale yet. We’re ready to pay.
Now the question is, how much are we willing to pay? And do the economics work out? And if everybody has to pay, does that create inherent inequalities? Let’s have that conversation. The bad background news is that most of the world’s defaults, the free settings, usually aren’t good for people anyway. So even if there’s an inequality, that’s a longer problem to solve.
So, business models: We’re ready to pay, and there are different examples of this. Google launched something called Contributor — and this is not a pro-Google stance or something like that. Do you know it?
I know about a lot of efforts like this. It’s been like the tip jars.
Exactly, these kinds of experiments. So what you need is someone with a lot of distribution. Contributor, just for people out there who don’t know, was this thing where you set up a budget. Google has a decent idea of how much you consume on the internet, in a strange way, and it could show you that for this much per month, you could pay to not see any ads. You’d be bidding on top of what everyone else bids. When you land on a web page, there’s this instant auction where all these technology companies basically bid … for your eyeball in that one moment.
And basically you just outbid them by one penny, or one micropenny, and suddenly you pay for what ads get shown there, which is to say none. It turns out, and I don’t know exactly what the numbers are, but from what I’ve heard, people actually value their own attention more than advertisers do. Once that’s made clear, and there’s a way to see that, the question becomes: Would you pay $7–8 a month for a version of Facebook that is entirely aligned with helping you live your life, and that gets rid of fake news or something?
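The instant auction described here can be sketched in a few lines. This is a hypothetical simplification with made-up bid values, just to show the idea of outbidding advertisers for your own attention.

```python
# Simplified sketch of the per-pageview ad auction described above.
# Bid values are invented; real ad exchanges are far more complex.

def run_auction(advertiser_bids, user_bid=None):
    # Every page load, bidders compete for one impression;
    # the highest bidder wins the right to fill the ad slot.
    bids = dict(advertiser_bids)
    if user_bid is not None:
        # Contributor-style move: the user bids on their own attention.
        bids["you"] = user_bid
    return max(bids, key=bids.get)

advertisers = {"brand_a": 0.0042, "brand_b": 0.0051}  # dollars per impression

print(run_auction(advertisers))                   # an advertiser wins the slot
print(run_auction(advertisers, user_bid=0.0052))  # you outbid them: no ad shown
```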
And there’s been no move by Facebook to say, “Hey, would anybody pay … Would 10 million people out there pay for this?”
It’s too hard for them, because they are doing very well in their current system.
They’re doing well in the current system. It grows and scales differently. That’s important. Advertising allows you to get better and more efficient and make more money over time. If you fix the price at a certain subscription rate, it’s different. But the last numbers I heard, at least a couple of years ago, the average revenue per user on Facebook was something like $7–8 per year. That’s an average, of course, globally.
They used to say that at AOL, too. I remember a speech: “Our users are worth $20 a year to us.” And I said, “When am I going to get my 10 of that for having … ?”
Right, right. But imagine you can basically pay for your … It’s almost like — and I hate to invoke the slavery metaphor, but it’s like a self-purchase agreement. You’re basically saying, I’m going to pay whatever I’m worth to you currently as an attentional slave. I’m going to pay to basically have the ad-free version.
Or a better version where you actually take time to not make it a disgusting mess essentially.
Yeah. Again, I think it’s not so much that there’s an intention behind the disgusting mess they create. The question is, how do they budget in all the cleanup activities? Who funds the sustainability department, the one that cleans up the broken glass and makes the cities and streets safe? How do we fund that? Because it’s got to come from somewhere.
I used to say when I was at Google: Just look at the Google Calendar of any Google employee; it’s just packed. And you ask, where is the meeting that asks, is our product actually good for people? Where is the broken glass? How do we make that better? That’s not going to fit in their Google Calendar until there’s actually an explicit …
And also a business model for it.
And a business model for it. So here’s a concrete way this could happen. Here’s an example from a different industry where there is a similar perverse incentive: energy. If I’m an energy company, I make more money the more energy you use, but that’s not good for our environment. California in the 1970s set up this regulated hybrid model, where they basically set a target for how much energy they’d ideally like people to be using.
And if energy companies hit that target, the rest of the money they make gets reinvested, explicitly carved out, for sustainable energy programs, programs that help consumers save energy and use less, and research into new forms of energy. It basically prevented something like 24 power plants from being built in California. Super successful; it scaled across the United States. There’s something similar here. What if there’s a target where companies, instead of maximizing how much you use a product, which is to say how much time they take from your life, again a perverse incentive, set a target that’s aligned with what’s good for people, and the rest gets reinvested in the departments that are currently underfunded to do this kind of work?
To Facebook’s credit, they have this. I think it’s called the compassion team, or the trust and safety team, that did this great work where [they] basically embedded nonviolent-communication principles into the photo-tagging feature so that teens wouldn’t bully others and abuse it. Now, I know people who come from that team, and they’ve asked me, “When you were at Google doing that ethics stuff and trying to do the good-for-people stuff, how do you do that? How do you prioritize that?”
And we all have the same question, which is: For those of us working more explicitly on the human values side, on what is good for people, how does that get prioritized and funded? You can imagine something like this California energy model working for technology companies.
It could also, though, be — I’m a cynical person, obviously — like those ads that Exxon does with a lot of trees, where everything looks like, “Oh, we’re giving back. We’re planting trees!” And they’re still an awful, rapacious oil company.
So this is super important, because there’s this sort of moral offsetting. Basically I’m polluting over here, but I’m doing …
Or like one of them said. I remember him saying, “I’m buying carbon offsets.” I said, “Stop flying a friggin’ private plane. How about that? Because that seems to be what would save energy, in my opinion.”
Totally, or Pepsi Cola creating diabetes and then funding whatever other programs over here.
I think there’s this question of, it’s really admirable when really wealthy tech people do these huge philanthropic projects, but the bigger question again goes to the morality of what we’re doing now. How do we make our existing, everyday impact on people’s lives good? And how do we investigate that? And to your point, how do we avoid greenwashing? How do we avoid companies telling themselves some false, self-fulfilling narrative that allows their employees to feel good about themselves when in fact the actual outside world is still suffering?
Actually, Facebook’s done some of those ads, like, “Oh, it brings you together.” When you know it isolates you. It’s like, actually, it makes you lonely. It’s an interesting thing to watch them start to do that. Google has a series of ads that are very emotional and fantastic, when in fact it’s nothing like what they do. Well, it is, to an extent.
The one company that’s actually been super effective at this, and I think is actually accurate, is Apple in some of its ads. You do have a good feeling about that company, because the things you get from the products don’t feel quite so grabby. I guess because they don’t believe in that model. They don’t do advertising; they’re just selling products. So they have a different incentive. So you do feel good with your camera. It’s an interesting thing.
Well, and companies are always going to do the marketing thing, the greenwashing thing. Let’s just acknowledge that. You mentioned Apple and their business model. Again Apple and Google … So Google’s business model, a big part of it is advertising.
Almost all of it.
But a lot of that advertising revenue comes from search, which is actually a pretty aligned business model in terms of the needs of everybody. The problem again is the engagement-advertising side of it. So in this position, Google’s business model is search, and they make a mobile phone. Android specifically isn’t designed to get you to use every app for as long as possible.
But it’s also not explicitly designed to help you spend your time well, in the Time Well Spent sense. Apple’s and Google’s business models are mostly aligned here, which is the phone specifically, where they could, if we demanded it, next year, instead of a flashy new phone and a bigger camera …
Throw in a toilet, whatever.
… make a phone that is just entirely designed to help people spend their time well, or care for these human values. I don’t want to call this a thing. We call our thing Time Well Spent, because that’s a good encapsulation …
Talk about the actual physical thing. So what could it be? You’ve talked about products at the beginning of this, like ones that tell you you’ve used Twitter too much. There have been things that I’ve tried; they just don’t work very well.
They don’t work well because, first of all, they need to be integrated where most people are …
It’s also like steps. I don’t know why that’s good. Here, you’ve done 15,000. What does that mean? It’s all like, you’ve done this much on Twitter. Is that too much or too little? The Fitbit’s the same thing. They’re not actionable. You know what I mean? They just tell me the number, and I don’t know what 15,000 means. It seems good, but what does that mean for my heart? What I’d like, for example, on those things is, “Hey, you ate a donut this morning. That was a mistake, because here’s what happened to your blood sugar. Get up and eat this or do this or see this.” It’s the same thing with your phone: “Hey, you’ve been sitting there looking at Facebook for a little bit too long, maybe you should go and try …”
Let me make this super concrete, because I’ve been part of … it’s so hard. There’s so much material here, and there’s so much to talk about. So what specifically could Apple and Google do to make your smartphone give you back your agency in the face of an increasingly persuasive world? If companies are essentially competing to implant or install a habit in your mind, they are competing to just create this unconscious process to come back every day and do this thing. So if that’s what they’re actually competing for, that’s the currency of what they’re competing for, let’s imagine that there’s this little page in iOS or Android called Your Habits, and it reflects back to you just like in a given week, what are the things that you’re doing, what are the kind of surfaced habits?
Like when you wake up in the morning, the first thing you do is you check Twitter, and your average time on Twitter is about 30 minutes. And then we just ask you, is that what you want to be doing, or is there something else you want to be doing? And depending on what you’d want to do, let’s say it’s just less time or maybe you want to start with some silence first, it would just make sure that no notifications come during that time, and that when you say you want to use it less, it’s now on your team.
So when you’re in the Twitter app in the morning, the status bar at the top could basically, when it gets close to whatever your limit is, 10–15 minutes, put up a little timer, like when you’re on a phone call and it flashes at the top. You could do something like that for your time when you get near your limit for the morning.
How would Twitter feel about that?
Twitter wouldn’t feel very good about that. But we’re also talking about shifting everybody from a world where they’re trying to maximize attention to a world where you pay some small amount for your Twitter account. I think a lot of people would be happy to do that, if they actually understood the trade-off.
And one of the things that’s hard to think about is, what is the benefit? Because a lot of these things are addictive. Like, actually addictive. I know there are tons of people out there [who find it hard to] turn off the phone. Put it in the other room. I have a bed for mine. It’s actually addictive. It’s actually like cigarettes. So it’s not so easy to turn it off. Not so easy.
Which is again why everyone’s basically competing to be better at addicting you than someone else. So let’s just say, here’s your addiction management panel. What are the things you’re happy to see? Maybe you’re super happy with your addiction to exercise in the morning and you feel great about that, or you’re happy with your addiction to Headspace. Great, wonderful. I’m not trying to say morally that you shouldn’t be addicted, but how do we give you more agency? So again, this habit management thing could be built into your phone.
My question is, can you know that you’re addicted? Because I am not an addictive person. I never smoked, I never drank, but I know I have a problem with Twitter. I just do. I can’t stop.
And it’s actually “can’t stop.” It’s not “won’t stop.” I wonder what happens. There’s lots of brain science on this, and on where we’re going with focus and things like that, but it’s such a dopamine … It must be dopamine.
I’ve written a lot about this. There’s a great woman named Natasha Schüll, who wrote a book called “Addiction by Design,” showing how slot machines work. In the TED talk, I talked about the slot machine thing. Here’s one little example. A lot of people don’t know about a deliberate design element inside of products that makes them work like a slot machine. So let’s use the Twitter example, as we’re talking about it. You know how you land on Twitter and then there’s this extra delay? A couple of seconds, one second, two seconds, and then the number shows up of how many notifications you have. Okay, so they could have just shown you the number. But that extra delay is exactly how a slot machine works. It’s a variable reward schedule. You don’t know when it’s going to come back, and the anticipation is what generates the large release of dopamine. Dopamine is actually released at anticipation, not at the moment of reward. So that’s an example of a design element that could be just subtly shifted, and it would have a slightly different feeling to it.
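That delayed notification count can be sketched like this. It is a hypothetical illustration with invented timing values, not Twitter’s actual code.

```python
import random
import time

# Sketch of the "variable reward schedule" delay described above.
# Hypothetical illustration with invented timing values.

def show_notification_count(count, variable_delay=True):
    if variable_delay:
        # The number could render instantly, but an unpredictable wait
        # builds anticipation, and anticipation is what drives the
        # dopamine release, like a slot machine's spinning reels.
        time.sleep(random.uniform(0.5, 2.0))
    return count

# The "subtle shift" described above is just dropping the delay:
# show_notification_count(n, variable_delay=False)
```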
Which they don’t want.
Which again they don’t want until there’s …
They used to do that with … I remember when I did my book on AOL, Steve [Case] was like, “Oh, the way we DM you is like you’re a mouse.” Like, grab food, grab food. And they used to talk about it explicitly. One of the things that was fascinating at AOL, when you opened up the AOL screen — and this is for people who are not old enough and don’t remember this — they would have a picture there that was slightly blurry. But it was always of a girl or someone pretty, and so you’d lean in and want to click on it, thinking it was your computer’s problem when in fact it wasn’t. And it made you click, because it was a major lean-in.
That’s someone deploying a persuasive mechanism.
They knew. I was like, “Why is this not clear?” “Oh, we do that on purpose. We make it fuzzy.”
So again there’s a really big question here, which is, let’s zoom out and say: We’ve been unconsciously building this attention economy city that a billion people live inside of, as a result of these designers at Apple and Google. We need to say to them, stop. We are the Jane Jacobses. Let’s get groups of people together to think really hard about human values and how to design that city so it works for people. People will still be competing for attention.
Your internal thought processes, your thinking about your future, your friends, Facebook and all these other things will ultimately fit into a finite pool of attention that you have. The question is, how do we organize that city better? We have to invent zoning laws, coordination between people, so you’re not in the infinite loops: I send you a message and I don’t know when you’re going to send it back, so I sit there and wait the 10 seconds or the minute it takes you to respond.
Texting is devil’s work, as far as I can tell.
In the TED talk, there are some solutions presented, about focus modes, about better ways of coordinating our interpersonal communications so that we’re responding at the same time or looking at the same time.
Or not responding. What’s interesting is people do feel like they have to respond. One of the things is, things that used to go unsaid now immediately get said. It’s really interesting. What I’m telling the younger people a lot is, you don’t actually have to text back or say something. But again, it’s the same kind of thing.
But I think this has to be a conscious conversation. We’re all now citizens of this city. Part of that citizenship is, yeah, how should we be using it? What consumer awareness of the psychology should we have? What are some norms we can develop?
Sort of like, maybe this big glass of soda isn’t so good for us. Look at what happened with Mayor Bloomberg; he got such pushback on that.
How do we know that something’s not good for us? How do we talk about that in a neutral way? How do we get past the old arguments of, well, it gives me so many good benefits along with the bad things, so I guess it’s good. Who’s to say? It must be fine. They’re not responsible.
Who’s to say?
All that needs to end. We know that there are certain things that are causing problems. We can just make a list and say, okay, how can we keep more of the good that we’re currently getting, which is wonderful, and can we reduce some of the bad? Is there an equilibrium where we get more of the good and less of the bad? There definitely is.
We have to finish up now, but two things I want to finish on. One is, do you feel hopeful about this? Because it seems like we now have a president who’s the perfect example of this. He’s twitchy. He’s reactive. He’s president. Which is disturbing. And everyone seems even more so. And then you have the tech companies sort of rolling over and saying nothing about a lot of things these days. Are you hopeful?
I don’t want to get into the political side. I’m definitely not hopeful on that front.
But in general, that move … we’re getting more twitchy than ever.
I think that is happening, and I think it’s a crisis. We need to just acknowledge that it’s a crisis and say, how can we get together and help solve it? I’m literally here at the service of whatever these companies can do, once we have admitted some of these problems, to say, how can we actually fix it?
Once you admit you have a problem. Last question. What are three things people can do to begin to remove themselves from this? What could people do if they really care about this? Because I do, for sure.
You mean just better habits? One thing that's super obvious but most people may not do: Turn off every single notification, except for notifications that come from other human beings who are trying to reach you. The number of things on your phone that are basically a machine or a growth team, even things like Apple News, all of this stuff …
It suddenly went on again on my phone. I don’t know how that happened.
Yeah, I noticed that actually today, that Apple News turned its notifications back on. Just turn off everything that's not a human being trying to reach me through a messaging app or something. Another thing, and this is in the Atlantic article: I keep my home screen to just in-and-out tools, things that, even if I used them unconsciously, would never suck me in. Google Maps is on my home screen. I never get accidentally bothered …
Yeah, I need to find this. It’s a utility.
So just in-and-out utilities. Keep your home screen to just that, and then learn to launch everything else by typing its name.
So you have to find it.
If you think about it, what we want to do is imagine this new filter you’re going to put between you and your phone, which is basically a filter that tosses out unconscious uses of your phone and keeps only conscious uses. That filter looks like what’s called typing, because you can’t actually move your fingers …
You’ve got to go find Instagram.
You've got to go find it. But you've got to find it by typing the name, not by swiping to the folder with the muscle memory of, I swipe left and I do this, right? So I don't do that. I literally just type it. That's something you can do. I keep my second folder … so basically I only have two pages of apps. A lot of people keep like 10 pages of apps, and then what they do is this weird habit I used to do. People unlock their phone, swipe through all the pages of apps, swipe back through all the pages of apps, and turn off the phone. It's kind of this OCD-like thing. If you just have two pages of apps, that stops some of that, because you don't end up getting pulled in just through this behavior …
So what should be on? Uber, maps?
I've got Uber, maps, notes. I also put my aspirations on there. You mentioned audiobooks; I do podcasts. I love that. My phone and messages. That's basically it, nothing else. Everything else on the second screen is in folders. I never even go into the folders. Like I said, I just type things.
That is a fantastic idea.
And my folders are all gray, because I don't want any color triggering my mind to do something. So since I came from the … The last thing is the Google micro kitchens. There was a study done at Google. They have the snacks and everything, and they're delicious and people love them, but they also started eating …
Started to get fat. I remember him telling me.
So they actually did this thing where they said, let's not take away the M&M's, but we're noticing that the M&M's company spends millions of dollars on the trigger: the wrapper, that color, that look. When you see it, it triggers wanting it, or the color of the M&M's themselves does. So instead, let's put them in opaque porcelain jars, so you don't see them, and the label isn't an M&M's wrapper stuck to the outside; they specifically used Comic Sans, a very neutral font, that just says "M&Ms." You still see that there are M&M's. They're still there. You can still get them, but they don't trigger you. So again, that's a conscious choice …
My son was doing this the other day. He was eating a lot of junk, and I asked, why are you doing it? "Well, if I see it, I have to have it. Make it so I don't see it." And I said, "All right. That's very reasonable." That's a reasonable thing.
Totally, but these are all subtle things that are small. I mean, imagine if companies built this into their choice architectures. They are building choice architectures that a billion people live by. They can and they should. And we should talk about it.
Absolutely, Tristan. This has been riveting. This is an area, I think, that is super important, and it also goes to the question of responsibility on lots of things, the impact these companies actually have. There are a lot of companies here that still see themselves as small little things that don't have any impact, when all they do is impact people all day. So it's really important to think about what that means and the responsibilities that go with it. I really enjoyed you being here. Thank you so much for coming by.
Originally published at www.recode.net on February 7, 2017.