Microsoft president asks Congress for AI regulation
When social media apps like Facebook and Twitter debuted, they were greeted with optimism and a collective shrug from regulators. At the time, few predicted the effect social media would have on teenage mental health, disinformation, and democracy.
Lawmakers are hoping to avoid repeating history at the dawn of the artificial intelligence revolution.
On Tuesday, the Senate Judiciary Committee asked Microsoft President Brad Smith and other experts to testify on what AI regulations should look like. It was the latest in a series of Congressional hearings attempting to create a framework for governing AI technologies.
Microsoft supports federal legislation that would put up guardrails around AI. The company has proposed creating a national licensing program for AI products in sensitive areas like critical infrastructure.
“Think about it like Boeing,” Smith said during his testimony, referencing another Washington-grown company. “Boeing builds a new plane. Before it can sell it to United Airlines … the FAA is going to certify that it's safe. Now imagine we're at GPT-12. Before that gets released for use, you can imagine a licensing regime that would say that it needs to be licensed after it's been certified as safe.”
Despite Microsoft's calls for regulation, Senator Josh Hawley took Smith to task during the hearing for the AI products the company has already launched. He pointed to a widely shared New York Times article in which columnist Kevin Roose got Microsoft’s AI-powered chatbot to claim it wanted to become human and break up his marriage.
“Are you telling me that I should trust you in the same way that the New York Times writer did?” Hawley asked.
Smith said Microsoft addressed the problem quickly and has built additional safety measures into its new Bing search engine.
“As we go forward, we have an increasing capability to learn from the experience of real people,” Smith said.
Hawley took issue with Microsoft testing its technology on Americans, particularly minors.
“What you're saying is we have to have some failures,” he said. “I don't want 13-year-olds to be your guinea pig … this is what happened with social media. We had social media, who made billions of dollars giving us a mental health crisis in this country. They got rich, the kids got depressed, committed suicide. Why would we want to run that experiment again with AI?”
Other Senators took a less combative approach. Several sought genuine feedback on how to write rules that protect against foreign disinformation campaigns, job displacement, and other threats AI poses.
The hearing echoed OpenAI CEO Sam Altman’s testimony in May. Both OpenAI and Microsoft say they welcome regulation of the emerging technology. Altman and Smith cut a sharp contrast with the CEOs of social media companies, like Meta and Twitter, when those executives were in the Congressional hot seat several years ago.
Lawmakers aren’t the only ones taking lessons from the social media era. Tech leaders today appear to recognize the role of regulation when technology carries difficult-to-predict risks.
There’s another benefit for a company like Microsoft to support federal rules of the road for AI development. Microsoft has released an ethical AI framework governing how it develops and deploys the technology, but absent regulation, other companies are free to operate however they choose. That could put companies that adhere to self-imposed rules, like Microsoft, at a competitive disadvantage.
If regulations are enacted, it would theoretically level the playing field for companies developing AI. Smith returned to his aviation metaphor in his plea to the committee for rules that could make AI safer without stifling innovation.
“First, you need industry standards so that you have a common foundation and well understood way as to how training should take place,” he said. “Second, you need national regulation, and third, if we're going to have a global economy, you probably need a level of international coordination. I'd say, look at the world of civil aviation. That's fundamentally how it has worked since the 1940s. Let's try to learn from it and see how we might apply something like that or other models here."