A recent study shows that deepfaked audio can deceive human listeners approximately 25 percent of the time.

It’s part of a growing wave of AI-generated deepfakes causing concerns about distinguishing fact from fiction.

Listening to audio. Flashpop / Getty Images

“The notion of identity is important in society,” Memon said. “Identity is essential for communicating and collaborating with other people, and we associate identity with audio features. Now, with deepfakes, that is no longer true.”

Deepfakes That Fool Users

Audio deepfakes are apparently getting harder to detect.

Incoming unknown call. Calvin Chan Wai Meng / Getty Images

Some participants in the study had prior exposure to deepfaked voice samples to aid identification.

On average, participants correctly identified the deepfakes 73 percent of the time, regardless of their training.

Another danger of audio deepfakes is political interference, such as destabilizing or derailing elections, Memon noted.

“Nixon was essentially impeached based on voice recordings, for example,” he added.

Audio deepfakes can be very convincing, and they’re improving every day.

Deepfake Audio Might Not Doom Us

Not all deepfake audio is necessarily ill-intentioned.

Google Play and Apple Books use AI-generated voices for their audio content.

He said deepfake audio will likely become common online, especially on social and interactive media.

But there’s hope that deepfake audio might not doom us to a future of false information.

He said that even when deepfakes do spread, they may be confined to online forums where the narrative already fits.