By FLORO MERCENE
A CONFERENCE held recently on ‘news reporting through the Internet’ discussed ways to deal with the manipulation of information using new technology. Some participants expressed concern about deepfake technology, which employs artificial intelligence (AI) to make fictitious videos that are almost indistinguishable from the real thing.
Deepfakes are so named because they utilize deep learning, a form of AI. They are made by feeding a computer algorithm, or set of instructions, lots of images and audio of a certain person. The computer program learns how to mimic the person’s facial expressions, mannerisms, voice and inflections. If you have enough video and audio of someone, you can combine a fake video of the person with fake audio and get them to say anything you want.
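To make the idea concrete, here is a minimal sketch in Python (using PyTorch) of the shared-encoder, two-decoder autoencoder design popularized by early face-swap tools. The network sizes, the 64x64 face crops and the random tensors standing in for training data are illustrative assumptions, not any particular tool’s implementation; real systems train on thousands of aligned face images for many hours.

```python
# Sketch of the shared-encoder / two-decoder autoencoder behind early
# face-swap deepfakes. Random tensors stand in for real face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()     # shared: learns face structure common to both people
decoder_a = Decoder()   # learns to reconstruct person A's face
decoder_b = Decoder()   # learns to reconstruct person B's face

loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-3,
)

faces_a = torch.rand(8, 3, 64, 64)  # stand-in for aligned crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for aligned crops of person B

for step in range(5):  # a real run takes many thousands of steps
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode frames of person A, then decode with B's decoder,
# which renders A's pose and expression with B's identity.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The trick is in the last line: because both decoders share one encoder, a frame of person A can be decoded with person B’s decoder, rendering A’s pose and expression with B’s face.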
The term deepfakes wasn’t familiar to many people until late 2017. While deepfakes gained notoriety when Reddit users began swapping celebrity faces onto porn stars, the potential for the technology’s use in misinformation campaigns has generated a fair amount of concern. Forged videos, images or audio could be used to target individuals for blackmail or for other nefarious purposes. Lawmakers and intelligence officials worry that deepfakes could be used to threaten national security or interfere in elections.
Deepfake technology still has a few hitches. For instance, people’s blinking in fake videos may appear unnatural. But the technology is improving, and within a year or two it may become very hard for an ordinary viewer to distinguish a real video from a fake one. Realizing the implications of the technology, the U.S. Defense Advanced Research Projects Agency is already two years into a four-year program to develop technologies that can detect fake images and videos. It is unclear whether new ways to authenticate images or detect fakes will keep pace with deepfake technology.
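The blinking cue mentioned above has in fact been turned into a simple screening test by researchers. As a hedged illustration, the Python sketch below computes the eye aspect ratio (EAR) of Soukupová and Čech from per-frame eye landmarks and counts blinks; the threshold, the landmark ordering and the synthetic input are assumptions for demonstration, and a real pipeline would first extract the landmarks with a library such as dlib or MediaPipe.

```python
# Blink-counting sketch: the eye aspect ratio (EAR) drops sharply when an
# eye closes; a long video whose EAR almost never dips is a red flag.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6x2 array of (x, y) landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ears, threshold=0.2, min_frames=2):
    """Count runs of consecutive frames where EAR stays below threshold."""
    blinks, run = 0, 0
    for ear in ears:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Toy usage: 300 frames (about 10 s at 30 fps). People blink roughly every
# 2-10 seconds, so zero or very few blinks over that window is suspicious.
rng = np.random.default_rng(0)
ears = 0.3 + 0.02 * rng.standard_normal(300)  # open-eye baseline
ears[100:104] = 0.1                           # one simulated blink
print("blinks detected:", count_blinks(ears))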