'Deep fake' videos that can make anyone say anything worry U.S. intelligence agencies
NEW YORK (FOX 5 NY) - A video of a seemingly real news anchor reading a patently false script, saying things like "the subways always run on time" and "New York City pizza is definitely not as good as Chicago's," gives a whole new meaning to the term fake news.
But that fake news anchor is a real example of a fascinating new technology with frightening potential uses.
I was stunned watching the Frankenstein mix of Steve Lacy's voice coming out of what looks like my mouth.
The video is what is known as a deep fake: a computer-generated clip using an algorithm that learned my face so well that it can recreate it with remarkable accuracy.
My generated face can be swapped onto someone else's head (like that original video with Steve) or it can be used to make me look like I'm saying things I've never said.
For this piece, I worked with computer science professor Siwei Lyu and his team at the College of Engineering and Applied Sciences at the University at Albany.
For many people, seeing is believing.
"I would say it's not 100% true anymore," Lyu said.
Their deep fake research is funded by the Defense Advanced Research Projects Agency, or DARPA, which acts as the research and development wing of the U.S. Defense Department. They're working to develop a set of tools the government and public can use to detect and combat the rise of deep fakes.
"What we're doing here is providing a kind of detection method to authenticate these videos," Lyu said.
What's more, deep fakes technically aren't that hard to make. All it takes is a few seconds of video of someone, a powerful computer, and some code, which Lyu and his team don't release publicly.
"The real danger, I believe, is the fact that the line between what is real and what is fake is blurred...