Microsoft is calling on Congress to pass new laws that make it illegal to use AI-generated voices and images to defraud people, especially seniors and children.
The call to create a “deepfake fraud statute” was part of a 52-page white paper the tech giant released Tuesday, which laid out its vision for how governments should approach AI. The company proposes that the government make it illegal to use voice- and image-generation tools to impersonate someone, whether a political candidate or a friend or family member.
“While the tech sector and nonprofit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” wrote Brad Smith, Microsoft president and vice chair, in a blog post. “One of the most important things the U.S. can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”
While some tech lobbyists have argued that existing anti-fraud laws are sufficient to address AI-based fraud, Microsoft is taking the opposite stance, saying that AI-specific fraud laws would help law enforcement protect vulnerable members of society.
“We find ourselves at a moment in history when anyone with access to the Internet can use AI tools to create a highly realistic piece of synthetic media that can be used to deceive: a voice clone of a family member, a deepfake image of a political candidate, or even a doctored government document,” Smith wrote. “AI has made manipulating media significantly easier—quicker, more accessible, and requiring little skill.”
AI fraud is already spreading at a dizzying rate. Days ago, Elon Musk reposted a deepfake video of Vice President Kamala Harris that appeared to use AI to clone her voice, making it seem as though the candidate was insulting herself. Meanwhile, a growing number of fraudsters are using AI to impersonate family members supposedly in crisis, convincing parents and grandparents to hand over money. In 2022, consumers lost $2.6 billion to this sort of fraud, up from $2.4 billion in 2021.
Smith argues in the white paper that change needs to happen soon if these sorts of abuses are not to become inescapable.
“The greatest risk is not that the world will do too much to solve these problems,” he wrote. “It’s that the world will do too little. And it’s not that governments will move too fast. It’s that they will be too slow.”
Microsoft also called on Congress to require AI companies to build tools into their products that would show whether content is AI-generated or manipulated, saying such labeling is “essential to build trust in the information ecosystem.”
The company also urged state governments to update laws to address AI-generated child sexual exploitation imagery, as well as artificially generated sexually explicit or nude images of people created without their consent. Meta has struggled to deal with such content: two high-profile explicit deepfakes appeared on its platforms earlier this year, and the company’s Oversight Board said last week that Meta fell short in its response, calling on it to update its policies.
Microsoft is not fully in line with other tech companies in its thinking about AI regulation. Last year, it suggested the government create a stand-alone agency to regulate the technology, something other tech companies have argued is unnecessary.
The government has already taken a number of steps to curb deepfakes. Recently, the Senate passed a bill that would allow victims of sexually explicit deepfaked images to sue their creators for damages. And the FCC has banned robocalls that use AI-generated voices, which have been on the rise over the past year, especially in the political arena.
“As swiftly as AI technology has become a tool, it has become a weapon,” Smith wrote. “It is imperative that the public and private sectors come together to address this issue head-on.”