
Microsoft President Brad Smith on consumer privacy

caption: "Tools and Weapons: The Promise and the Peril of the Digital Age" by Brad Smith and Carol Ann Browne.
Enlarge Icon
"Tools and Weapons: The Promise and the Peril of the Digital Age" by Brad Smith and Carol Ann Browne.
KUOW Photo/Alison Bruzek

Brad Smith's new book is "Tools and Weapons: The Promise and the Peril of the Digital Age," co-authored with Carol Ann Browne.

This is a full transcript of Bill Radke's interview with Brad Smith, president of Microsoft.

You know, there is so much anti-Amazon feeling in Seattle, but Microsoft -- you're a tech giant. Your workers live in homes most people can't afford. Why don't more locals hate Microsoft?

Well, I think we all go through our days of favorability or otherwise. And so...

Yes, Microsoft has had its day.

Yeah, we've had our days, too. And I was at Microsoft in the late 1990s. But I would step back more broadly than one company or another. I think we're living at a time when people expect more of the tech sector. They expect more of governments to address the issues related to the tech sector. We're trying to step forward. We're trying to address real problems in significant ways. It doesn't mean we have all the answers or that we even do everything right.

The New York Times said you're trying the role of moral leader.

Yeah, that's never a label I would apply to ourselves. I don't think that people who work at other companies are looking to their competitors to be their moral leaders. I do think we're trying to think broadly and we're trying to figure out what the right thing is to do. And I think that's actually a good thing for everybody. It's certainly what we're striving for.

One of those thorny issues has to do with privacy invasion. And the Washington state legislature, as you well know, is debating a new data privacy law (sponsored by Reuven Carlyle). Will Microsoft support such a law?

Absolutely. We have actually supported that kind of law at the federal level in the United States since 2005.

That might surprise some people.

Yeah, it was 15 years ago I went to Washington, D.C. I gave a speech on Capitol Hill. I said we would support this kind of legislation basically for two reasons. First, we want people to have confidence in the technology they use. Second, we've long believed that if we want to serve the industry best, if the industry wants to be healthy on a sustained basis, it too needs to sustain the public's trust. And the public ultimately will trust technology only if their rights are protected and their rights are protected only if they're protected under the law. So that does take new legislation. And California did this in 2018. I think we will see Washington do it in 2020. And I think the Washington public will benefit.

One of the critiques of Washington state's attempt at such a law is how involved technology companies have been in the drafting of it. What is Microsoft's involvement in drafting this state law?

Well, we've been one of many companies and groups that have offered ideas. But I think one of the virtues of the work that Senator Carlyle is really driving in the state Senate this year is that he's spent an enormous amount of time with privacy groups, with nonprofit groups. I think if you compare what he has put forward with the kinds of legislation we've seen in other parts of the world where privacy protection indisputably is very strong, as in Europe, I think you'd find that his Washington bill is very comparable. And I think that people will say this is a pro-privacy piece of legislation.

As you said, Europe's got such a law. California does. What has Microsoft already had to do differently because of California's law?

Well, it's interesting. The California law in many respects is similar to the European law. It's different in some ways. It fundamentally says that companies cannot sell your data to somebody else. We have said that we are extending those rights and benefits not only to our customers who live in California, but to our customers who live everywhere in the United States. A simple example: anybody can go to Microsoft.com and do a search on our website. You can find where your own data is stored and you can access it. You can see what data Microsoft has, if any, about you -- if you're an email customer, a Bing user, a browser user, a Windows user. And if it's wrong, you have tools to make changes. And interestingly, although this came from Europe in 2018, we have found that more Americans have used that ability than people in Europe.

Will so many people opt out of data collection that Microsoft can't make Cortana, your digital assistant, smarter, for example? That you can't make Bing search results as personalized as your customers want?

That has not been the trend to date. Many people continue to opt in. We continue to have access to enough data to make our services better. I have confidence that if we have strong privacy protections, if we have a strong value proposition and we communicate clearly to our customers, enough of them will want to continue to make it possible for us to make our products better.

What's the difference between you and Jeff Bezos (CEO of Amazon) in this regard?

You see Microsoft lean in, in two respects. One is that when a law takes effect in one place, we're obviously much more likely to apply its benefits in many places, not just the jurisdiction where the law took effect. Amazon's not doing that, at least not today. The second is that we do tend, in my view at least, to identify areas where we go beyond the law. That's not a statement necessarily about privacy, or privacy alone. Amazon does a very good job of analyzing the issues. They've been much more supportive of new laws and regulation, but they're not as likely as we are on the East side of Lake Washington to proactively apply principles that go beyond the law.

Don't people give up their data voluntarily? Why not let us do that?

I think it's good that we live in a world where people can use services that in part involve the use of data. But I think people's rights need to be protected. We are sharing enormous amounts of information with the services that we rely on in our daily lives. Look at the fact that you can buy technology that looks at everyone who comes to your front door, that knows exactly how your house is laid out. If you happen to buy devices that provide audio services, they may be able to listen to everything that is said. There's a lot of convenience that can come from that. But do we really want to open up our homes to a company in that way? I believe that people should have more choices. And I think we absolutely believe that there should be some legal limits that will fully protect people's privacy, because at the end of the day, each of us as consumers doesn't fundamentally have individual bargaining or negotiating power. It's not as if we as consumers can call up, whether it's Microsoft or Google or Facebook or Amazon, and say, you know what, I'd like to buy your service, but I want to negotiate over how much data I'm going to share. Ultimately, if we want people's rights to be protected, it will take laws, and companies that are principled, to protect them.

But does that mean that we're all going to get long agreement messages we don't understand and we'll click 'yes' so we can keep using our device and nothing changes?

Well, it's interesting that you raise that, because it's a problem. Originally, privacy law was put together with this premise that if every service you use gave you notice, your ability to say yes or no would actually be something that meant something. And what we found was, because the Internet exploded, each of us ended up with a choice: we could either spend our time reading long privacy notices, or we could have a life. And we all chose to have a life. And yeah.

'I agree,' Brad, over and over I click, 'I agree.'

Yeah. And that's why I think we're going to see privacy protection and privacy laws continue to evolve. I think where we're really going to go over the course of this decade is more absolute rules, rules that say these kinds of practices are OK, these others are not.

The new Washington state law would allow facial recognition technology; it doesn't outlaw it. But facial recognition has all kinds of problems that you describe: accuracy, privacy problems, potential for government surveillance. So why should we allow facial recognition at all? Why not put a moratorium on it until it improves?

Well, I think that the best path for the future is to encourage the types of facial recognition advances that will serve society well, that will solve real problems for real people.

Like what problems?

I'll give you what was, to me, a very powerful example. I saw it in Brazil earlier this year. Brazil has a problem that you see everywhere in the world: people go missing, children go missing. And so there's a nonprofit in Brazil that created an application with Microsoft's facial recognition technology. A parent, a sibling, a spouse can provide a photo of their loved one, or more than one photo. They then work with, say, the police department, hospitals, God forbid, morgues. And if there is a person who is unidentified, who has no I.D. on them, a photograph can be taken and it can be determined whether this person who is in the hospital emergency room appears to be a match for someone who is missing.

Does that require you to completely trust the Brazilian government, not to mention the Chinese, the Russian government?

No, and I think that's exactly the right point. You are making the right point. It is -- let's identify those uses that we believe are societally beneficial and encourage them. Let's at the same time identify the problems, the risks, the abuses, and let's pass laws that prevent them. And let's ask and even expect tech companies that create these services to be principled and actually restrain themselves so they don't send this technology in the wrong direction. I think we should all be concerned about the risks of discrimination that can come if a police department uses facial recognition and falsely identifies someone as a criminal suspect. And we know from the data that facial recognition today, if used that way, will lead to discrimination against people of color and against women.

You refused to sell it to California police.

And we have said that is a use that is not going to advance the public good. That's why we won't provide it. And that is why we will support a law to prohibit it. And if you want to continue the conversation about where Microsoft is different from Amazon: Amazon shares the view that there should be laws. But from what I can see on the East side of Lake Washington, Amazon on the West side of the lake is prepared to sell anything to anybody if it's legal. They're not applying the same controls that we or other responsible companies are signing up to apply, so that we can act voluntarily before governments pass laws to prevent these services from being used in ways that could abuse people's rights.

What about Microsoft's work with immigration authorities? Some of your employees are telling you, stop doing that.

It's a fair question. We don't provide facial recognition services to immigration authorities. We've taken a strong stand on certain immigration practices that we think are in many instances unlawful and in other instances just harmful. We've gone to court. We were the only company to stand before the Supreme Court as a plaintiff, as a direct litigant, in the DACA case to defend the rights of this nation's 600,000 Dreamers, including the 66 employees at Microsoft who are Dreamers. But at the same time, we don't believe that the best way for us to advance the needs of immigrants is to stop ICE, for example, from using Microsoft Word or Microsoft Office or e-mail or Excel or our databases. The fact of the matter is, if you really want to cause chaos for immigrants, take away the databases that record whether people are family members and therefore need to be kept together. If you really want to undermine safety in the country, take away the technology from people who are responsible for stopping child trafficking or stopping the importation of biological or nuclear weapons. The immigration authorities are engaged in some steps that so many of us object to, but they also have employees, people who are doing work that is vitally important to the country.

How is that different from the California police example where you didn't want to work with the police there?

What we do is we look at particular uses of technology. In California, we had a law enforcement agency that asked if they could use our facial recognition service specifically to try to match every single person who was pulled over for any kind of traffic stop at all. And our conclusion was that, given the state of this technology in the world today, it would lead to innocent people being taken downtown in the back of a police car because of a faulty match. Now, at the same time, we did agree to provide facial recognition technology to that same police department for a different use -- use in a jail, so that the jail would know where all of the prisoners were at any particular moment in time. Something that we felt would both keep the prisoners safer and was not going to infringe people's rights, because it was a small population, the technology could accurately identify each person, and they are, after all, in jail already. And I think that's sort of the point. In this kind of situation, we think we're going to get to the right answer a higher percentage of the time if we focus not on cutting off customers entirely, but really thinking through the uses of technology -- thinking about those that are most sensitive and then being responsible with respect to the decisions we make regarding them.

You're not ruling out selling facial recognition technology to immigration authorities, but you're just saying under certain circumstances?

Yeah, we do business around the world and there may be instances -- and I'm not making a comment about any individual country in particular -- but the truth is, you go through a checkpoint at a border, you have your picture taken. So there may be particular scenarios where facial recognition technology could be used to protect against terrorists crossing the border.

You have outlined, as I said, the potential for misuse and abuse of facial recognition technology. So my question a little while ago was, why not have a moratorium? You gave just one example of finding missing people. There are some good [uses], but why do those outweigh the world-changing fears, the possibilities of massive abuses?

Look, you can try to solve a problem with a meat cleaver or a scalpel. And, you know, if you can solve the problem in a way that enables good things to get done and bad things to stop happening, that actually, in our view, is the best approach. And that does require a scalpel. And that is, I think, something that is possible here. The other thing I will note is that some of the challenges with facial recognition, especially when it comes to the risks of bias and the like, really in part reflect the youth of the technology itself. This is young technology. It will get better. But the only way to make it better is actually to continue developing it. And the only way to continue developing it actually is to have more people using it.

We touched on Brazil, Russia, China. Are you completely sanguine about the United States administration right now?

Well, we're never sanguine about everything in any country. You know, we have sued the United States government multiple times now.

Including the Obama Administration.

Yeah. Four times. That's one of the sets of stories we share in our book: why we sued the Obama Administration -- that was about surveillance and privacy. We have sued this current administration both on immigration issues and on the ongoing surveillance and privacy issues. I don't think anyone's ever sanguine about everything. We certainly have, you know, the issues on which we have concerns. And we try to be direct and explicit, but also constructive, as we raise issues. But we don't shy away from them. At least we seek not to.

Some other topics now. Microsoft just announced it's going carbon negative by 2030. What does that mean?

It means that by the year 2030, we as a company will be removing more carbon from the environment than we emit.

How?

Well, first of all, I want to underscore what a big goal this is. No global company has ever signed up for a commitment like this. No American company, to our knowledge, has ever embraced a commitment like this. It means that we're going to have to make a huge number of changes, because we're saying that this isn't just about the carbon that we emit directly, this is about all the carbon for which we're responsible -- including from our supply chain, including from our so-called value chain. If you buy an Xbox, if you buy a Surface, you plug it into the wall, you use electricity -- we're saying that we're counting that in the carbon for which we are responsible. We're saying that by the year 2025, we'll be 100 percent reliant on renewable energy for all of our data centers, our buildings, our campuses worldwide. We have a carbon fee that today is 15 dollars a metric ton. Every part of Microsoft pays it. And that's not the case for most companies. Most companies either have no such fee, or it's a shadow fee, meaning they calculate it but they don't actually charge people for it.

Is it one of these voluntary carbon offsets? Somebody's going to plant trees somewhere?

In part, but that's only in part. The best and almost the only way to really remove a large amount of carbon from the environment is through so-called nature-based techniques. So if we plant trees that wouldn't otherwise be planted ...

That's important.

Exactly. If we're just planting trees that somebody else was going to plant anyway, then, you know, somebody might feel good, but they're not actually doing anything that's going to make a difference. We have to ensure that the trees are not cut down. We need to try to encourage trees to be planted in the parts of the world where they will have the most positive impact on removing carbon. But what the world needs, where we need to go, is to develop new technology -- technology that will remove carbon from the atmosphere, so-called direct air capture. You run the air through a machine. The machine removes the carbon from the air. The carbon needs to be deposited back into the earth in a place where it will stay, not just for years, but for centuries and millennia. One of the real...

Microsoft's not building that, are you?

No, we're not.

That's somebody else's company.

But, one of the other commitments that we have now made as a company is quite literally unprecedented. We are the first, and as of this particular moment, the only company in the world to say this: By the year 2050, we will remove from the environment all of the carbon that Microsoft has emitted, either directly or as a result of our use of electricity, since the company was founded in 1975.

All the worthy initiatives you've been describing -- are you depriving your Microsoft shareholders of money, in order to be a better citizen?

I think it's a great question. And I don't believe that we are. I believe that the kinds of steps that we are taking to build a better and more socially responsible business are, from a long-term perspective, going to build a business that is more valuable as well. If you want to buy a stock on Wednesday or Thursday and sell it next Monday, go buy somebody else's stock. If you want to buy a stock in 2020 that is going to be more valuable in 2022 or 2025 or 2030, bet on a company that's responsible. Bet on a company that's finding ways to do the right thing. These are the businesses that will last, and that's what we want to be.
