"Meaningful harm" from AI necessary before regulation, says Microsoft exec
As lawmakers worldwide attempt to understand how to regulate rapidly advancing AI technologies, Microsoft chief economist Michael Schwarz told attendees of the World Economic Forum Growth Summit today that "we shouldn't regulate AI until we see some meaningful harm that is actually happening, not imaginary scenarios."
The comments came about 45 minutes into a panel called "Growth Hotspots: Harnessing the Generative AI Revolution." In response, fellow panelist and CNN anchor Zain Asher stopped Schwarz to ask, "Wait, we should wait until we see harm before we regulate it?"
"I would say yes," Schwarz said, likening regulating AI before "a little bit of harm" is caused to passing driver's license laws before people died in car accidents.
"The first time we started requiring driver's licenses, it was after many dozens of people died in car accidents, right?" Schwarz said. "And that was the right thing," because "if you would've required driver's licenses when there were the first two cars on the road," then "we would have completely screwed up that regulation."
Seemingly, in Schwarz's view, the cost of regulations (perhaps the loss of innovation) should not outweigh the benefits.
"There has to be at least a little bit of harm, so that we see what is the real problem," Schwarz explained. "Is there a real problem? Did anybody suffer at least a thousand dollars' worth of damage because of that? Should we jump in to regulate something on a planet of 8 billion people when there is not even a thousand dollars of damage? Of course not."
Lawmakers are racing to draft AI regulations that acknowledge harm but don't threaten AI progress. Last year, the US Federal Trade Commission (FTC) warned Congress that lawmakers should exercise "great caution" when drafting AI policy solutions. The FTC regards harms as instances where "AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance." More recently, the White House released a blueprint for an AI Bill of Rights, describing some outcomes of AI use as "deeply harmful," but "not inevitable."
To help avoid preventable harms, the Biden administration provided guidance to stop automated systems from meaningfully impacting "the public's rights, opportunities, or access to critical needs." Just today, European lawmakers agreed to draft tougher AI rules in what could become the world's first comprehensive AI legislation, Reuters reported. These rules would classify AI tools by risk levels, so that countries can protect civil rights without harming important AI innovation and progress.
Schwarz seemed to discount the urgency of lawmakers' rush to prevent harms to civil rights, though, suggesting instead that regulations should aim to prevent monetary harms and that, so far, no such need has arisen.
"You don't put regulation in place to prevent a thousand dollars' worth of harm, where the same regulation prevents a million dollars' worth of benefit to people around the world," Schwarz said.
Of course, there have already been lawsuits seeking damages from makers of AI tools, such as a class-action copyright lawsuit against image generators Stability AI and Midjourney, along with other lawsuits causing a legal earthquake in AI. OpenAI's ChatGPT data leak caused Italy to temporarily ban the text generator over concerns about Italian users' data privacy, and an Australian mayor threatened a defamation lawsuit after ChatGPT falsely claimed he had gone to prison.
Schwarz does not appear to be totally against AI regulations, but he says that, as an economist, he likes efficiency and would want laws to balance the costs and gains of AI. About six minutes into the panel, Schwarz warned that "AI will be used by bad actors" and "will cause real damage," saying that companies should be "very careful and very vigilant" when it comes to developing AI technologies. He also said that "if we can impose a regulation that causes more good than harm, of course, we should impose it."
Microsoft is an investor in OpenAI, and during the panel, Schwarz was asked how Microsoft's and OpenAI's visions overlap.
"I think both companies are really committed to making sure that AI is safe, that AI is used for good, and not used for bad," Schwarz said. "We do have to worry a lot about safety of this technology, just like with any other technology."
Microsoft and OpenAI did not immediately respond to Ars' request to comment. [Update: A Microsoft spokesperson told Ars that as AI technology advances, increased regulatory scrutiny is appropriate, noting that until laws are passed, Microsoft is working to address the most high-risk and sensitive AI uses. "We are optimistic about the future of AI, and we think AI advances will solve many more challenges than they present, but we have also been consistent in our belief that when you create technologies that can change the world, you must also ensure that the technology is used responsibly," Microsoft's spokesperson said. "Microsoft has long said that we need laws regulating AI and as we move into this new era, all of us building, deploying and using AI have a collective obligation to do so responsibly."]