Microsoft showed off two new computer programs to combat deepfakes — photos, videos or audio files that have been manipulated in ways that are hard to detect. One tool analyzes media to identify signs of manipulation; the other is designed to tell people whether the media they’re looking at is likely authentic.
Microsoft says the new technology will be built into its Azure cloud service for businesses. Content makers will be able to register “hash” fingerprints of their videos with Microsoft, which viewers can then use to check whether the media they’re seeing has likely been manipulated.
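The fingerprint idea can be illustrated with a minimal sketch. Microsoft has not published the details of its system, which likely relies on more robust media hashing than this; the example below uses a plain SHA-256 cryptographic hash purely to show the register-then-verify pattern, and all names in it are hypothetical.

```python
import hashlib


def fingerprint(media_bytes: bytes) -> str:
    # Hypothetical stand-in for the real system: digest the raw media bytes.
    # A cryptographic hash changes completely if even one byte is altered.
    return hashlib.sha256(media_bytes).hexdigest()


def matches(media_bytes: bytes, published_fingerprint: str) -> bool:
    # The media is considered unmodified only if its digest equals
    # the fingerprint the content maker registered.
    return fingerprint(media_bytes) == published_fingerprint


# The creator registers a fingerprint of the original file...
original = b"raw bytes of the original video"
published = fingerprint(original)

# ...and a viewer later checks the copy they received against it.
tampered = b"raw bytes of a manipulated video"
print(matches(original, published))  # True
print(matches(tampered, published))  # False
```

A real deployment would need to tolerate benign changes such as re-encoding, which is why production systems favor perceptual hashes over exact cryptographic ones.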
“Disinformation comes in many forms, and no single technology will solve the challenge of helping people decipher what is true and accurate,” Tom Burt, corporate vice president of customer security & trust, and Eric Horvitz, chief scientific officer, said in a blog post Tuesday.
Microsoft’s efforts mark the latest among tech experts to raise alarm over the danger of deepfakes. The term is shorthand for videos or audio recordings manipulated by a computer to make someone appear to say or do anything their creator wants. The technology has become increasingly easy to use and harder to spot, creating an opportunity for potentially catastrophic meddling in politics and elections.
The Massachusetts Institute of Technology dramatized that concern in July, with a deepfake video and audio of President Richard Nixon giving a speech he’d never actually delivered. “This project shows the dangers of misinformation,” MIT’s Center for Advanced Virtuality said at the time. “By creating this alternative history the project explores the influence and pervasiveness of misinformation and deepfake technologies in our contemporary society.”
To combat these concerns, Microsoft launched its Defending Democracy Program. The company’s earlier work under that program includes ElectionGuard, voting machine software designed to quickly identify hacking attempts.
Microsoft said its new deepfake programs were developed within its Microsoft Research division, along with its Responsible AI team and its Ethics and Effects in Engineering and Research Committee.