New AI Tool Improves Audio Deepfake Detection

Researchers from CSIRO, Federation University Australia, and RMIT University have developed a new method to detect audio deepfakes. The technique, called Rehearsal with Auxiliary-Informed Sampling (RAIS), identifies whether an audio clip is real or artificially generated. It can adapt to new deepfake types while remembering older ones.

Audio deepfakes are increasingly used in cybercrime. They can bypass voice-based security systems, impersonate people, or spread false information. In Italy this year, an AI-cloned voice of the Defence Minister was used to request €1 million from business leaders, illustrating the real-world risk posed by audio deepfakes.

RAIS uses a selective rehearsal strategy. It stores a small set of past examples in a memory buffer and attaches auxiliary labels to each audio sample, capturing acoustic traits humans may not notice. By mixing these labelled past examples with new data during training, the model learns new deepfake styles without forgetting old ones.
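The core idea, keeping a small rehearsal buffer that stays diverse across auxiliary labels, can be sketched in a few lines of Python. This is a simplified illustration, not the published RAIS implementation: the function name, the `(clip_id, aux_label)` input format, and the round-robin balancing rule are all assumptions made for the example.

```python
from collections import defaultdict
from itertools import cycle

def diverse_rehearsal_buffer(samples, capacity):
    """Pick up to `capacity` samples, balancing across auxiliary labels.

    `samples` is a list of (clip_id, aux_label) pairs; the auxiliary
    label stands in for a machine-extracted audio trait. Balancing the
    buffer across labels keeps examples of many generation styles, so
    retraining on the buffer resists catastrophic forgetting.
    """
    groups = defaultdict(list)
    for clip_id, aux_label in samples:
        groups[aux_label].append((clip_id, aux_label))

    buffer = []
    group_iters = {label: iter(clips) for label, clips in groups.items()}
    labels = cycle(list(group_iters))
    exhausted = set()
    # Round-robin over label groups so every trait stays represented.
    while len(buffer) < capacity and len(exhausted) < len(group_iters):
        label = next(labels)
        if label in exhausted:
            continue
        try:
            buffer.append(next(group_iters[label]))
        except StopIteration:
            exhausted.add(label)
    return buffer

# Example: three generation styles, but room for only three samples.
samples = [("r1", "real"), ("r2", "real"),
           ("s1", "tts"), ("s2", "tts"),
           ("v1", "voice-conv")]
buffer = diverse_rehearsal_buffer(samples, capacity=3)
```

With a plain random draw, a dominant class could crowd out rare deepfake types; the label-balanced draw guarantees each observed style keeps at least one representative while the buffer stays small.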

RAIS outperforms comparable continual-learning methods, achieving an average error rate of 1.95 percent across multiple tests while working with only a small memory buffer. The code is available on GitHub, and the low memory footprint makes the approach suitable for real-world deployment as deepfake attacks evolve.

The system improves continual learning for AI. It captures the full diversity of audio signals and maintains detection accuracy over time. RAIS helps reduce risks from evolving audio deepfakes and sets a new standard for reliability and efficiency.
