Researchers turn biometric authentication technology into deepfake detection


A team of researchers from the Federico II University of Naples and the Technical University of Munich has developed a new deepfake detection system that they believe could turn the tide in the fight against fraud. Unlike other deepfake detection systems, the new POI-Forensics system is not trained on any deepfake videos. Instead, it views only real videos of a subject and then uses those videos to create a biometric profile of that person.

POI-Forensics can then apply this profile to other videos to distinguish legitimate footage from deepfakes, similar to biometric authentication. This approach differs from more traditional deepfake detection systems, which study deepfake videos to learn to spot signs of digital tampering.

The problem, say the researchers, is that such a system is vulnerable to new manipulation techniques that the detection algorithm has not yet encountered. POI-Forensics, on the other hand, simply asks how well a new video matches verified footage of a subject, and flags any video where something seems off. To beat such a system, scammers would need to create a true biometric impersonation, one that covers everything from a person's movement tics to their specific voice and speech patterns. That technology is still a long way off, and would probably be prohibitively expensive for most scammers.

POI-Forensics can evaluate video and voice separately, or both together, to determine whether a video has been faked. The system requires 10 verified videos to create a profile, and once that profile is created it does not need to be retrained to accommodate new deepfake methods.
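The verification logic described above can be sketched as a standard embedding-comparison loop: build a profile from the embeddings of verified reference videos, then accept or flag a new video based on its similarity to that profile. The function names, the averaging step, and the similarity threshold below are illustrative assumptions, not the actual POI-Forensics implementation (which relies on contrastively trained audio-visual identity embeddings).

```python
import numpy as np

def build_profile(reference_embeddings):
    """Average identity embeddings from verified videos into one profile vector.

    `reference_embeddings` is a list of 1-D vectors, one per verified video
    (POI-Forensics uses 10 such videos). This averaging step is an assumption
    made for illustration.
    """
    profile = np.mean(reference_embeddings, axis=0)
    return profile / np.linalg.norm(profile)

def verify(profile, query_embedding, threshold=0.7):
    """Flag a video as suspect if its embedding strays too far from the profile.

    Returns (accepted, similarity). The cosine-similarity threshold of 0.7
    is a placeholder; a real system would calibrate it on held-out data.
    """
    q = query_embedding / np.linalg.norm(query_embedding)
    similarity = float(np.dot(profile, q))
    return similarity >= threshold, similarity

# Hypothetical usage with toy 2-D "embeddings":
refs = [np.array([1.0, 0.1]), np.array([1.0, -0.1])]
profile = build_profile(refs)
accepted, score = verify(profile, np.array([1.0, 0.0]))   # close to profile
flagged, score2 = verify(profile, np.array([0.0, 1.0]))   # far from profile
```

Because the decision depends only on distance to the subject's own profile, nothing here references any particular forgery method, which is why such a system needs no retraining when new deepfake techniques appear.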

In terms of performance, the researchers claim that their solution was more accurate than the leading deepfake detection systems, especially when applied to low-quality videos. It also did a better job of separating real videos from fakes in several active attack scenarios. The researchers believe their solution will be particularly useful for celebrities and other public figures, who are more likely to be the subject of a fake, although ordinary people could potentially use it to prove that they have been the victim of a deepfake attack.

Naturally, deepfakes have become one of the biggest digital security threats in recent years. Scammers have been able to use deepfake technology to hack China's tax system, while South Korean researchers are using it to fool many of the world's top facial recognition APIs. This has created a demand for effective detection systems, and it will be interesting to see if the POI-Forensics approach can help fill this gap.

Source: Unite.AI

April 8, 2022 – by Eric Weiss
