AI Voice-Cloning Technology Appears In Sudan Civil War

On TikTok, a campaign impersonating Omar al-Bashir, the former president of Sudan, has garnered hundreds of millions of views, adding to the online chaos in a nation already torn apart by civil war.

Since late August, an unidentified user has been uploading what they claim are “leaked recordings” of the former president. The channel has posted numerous clips, but the voice is not real.

Bashir, who was overthrown by the military in 2019 and has been accused of orchestrating war crimes, hasn’t been seen in public for a year and is thought to be gravely ill. He denies the war crimes charges.

His disappearance adds to the uncertainty in a nation already in turmoil since fighting broke out in April between the military, which is currently in power, and the rival Rapid Support Forces paramilitary group.

Campaigns like this matter because they demonstrate how quickly and easily fake news can now be spread on social media, according to experts.

“It is the democratisation of access to sophisticated audio and video manipulation technology that has me most worried,” says Hany Farid, who researches digital forensics at the University of California, Berkeley, in the US.

“Sophisticated actors have been able to distort reality for decades, but now the average person with little to no technical expertise can quickly and easily create fake content.”

The recordings are posted on the Voice of Sudan channel. Several “leaked recordings” attributed to Bashir are interspersed with old clips from press conferences held during coup attempts. The posts frequently claim to be recordings of meetings or phone calls, and the audio is grainy, as you might expect from a weak phone connection.

To verify the recordings, we first consulted a group of Sudan specialists at BBC Monitoring. Ibrahim Haithar told us they were probably not recent.

“The voice sounds like Bashir but he has been very ill for the past few years and doubt he would be able to speak so clearly.”

But that alone does not prove the voice is not his.

We considered other explanations: this is not an old clip that has resurfaced, and it is unlikely to be the work of an impressionist.

The most convincing evidence came from a user of X, formerly known as Twitter.

They recognised the first Bashir recording, posted in August 2023. It appears to show the former leader condemning General Abdel Fattah Burhan, the head of the Sudanese army.

A graphic comparing the two recordings shows that the audio waves of the fake Bashir recording, in blue, closely resemble those of the authentic tape, in red.

The Bashir recording matched a Facebook Live broadcast given two days earlier by Al Insirafi, a well-known Sudanese political commentator.

He is thought to live in the US, though he has never appeared on camera.

Although the two voices do not sound especially similar, the scripts are identical, and when both clips are played at once they are flawlessly synced.

Comparing the audio waves reveals similar patterns of speech and silence in both clips, says Farid.
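The kind of comparison Farid describes can be illustrated with a toy sketch. This is not the forensic method actually used on the recordings; it simply shows, under invented data, how two clips that share the same timing of speech and silence produce highly correlated energy envelopes even when the "voices" (here, different random noise) differ.

```python
import numpy as np

def rms_envelope(signal, frame=1024):
    """Frame-wise RMS energy: high during speech bursts, near zero in silences."""
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def envelope_similarity(a, b):
    """Pearson correlation between two clips' energy envelopes."""
    ea, eb = rms_envelope(a), rms_envelope(b)
    m = min(len(ea), len(eb))
    return float(np.corrcoef(ea[:m], eb[:m])[0, 1])

# Two toy "recordings" sharing the same speech/silence pattern
# (bursts of noise separated by quiet) but with different "voices"
# (independent noise), plus a clip with shifted timing for contrast.
rng = np.random.default_rng(0)
pattern = np.repeat([1, 0, 1, 1, 0, 1, 0, 0], 4096).astype(float)
clip_a = pattern * rng.normal(size=pattern.size)
clip_b = pattern * rng.normal(size=pattern.size)                 # same timing
other = np.roll(pattern, 9000) * rng.normal(size=pattern.size)   # shifted timing

print(envelope_similarity(clip_a, clip_b))  # close to 1.0
print(envelope_similarity(clip_a, other))   # noticeably lower
```

A matching speech/silence envelope is exactly what you would expect if a voice converter re-rendered an existing recording in another voice, since the timing of the original speech is preserved.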

The evidence points to the use of voice conversion software to replicate Bashir’s speech. Such software is a powerful tool that lets a user upload an audio file and re-render it in a different voice.

Further investigation revealed a pattern. We found at least four more Bashir recordings taken from the same commentator’s live broadcasts. There is no evidence that he is involved.

Who benefits from this campaign is debatable, but the TikTok account is purely political and shows detailed knowledge of events in Sudan. One recurring theme is criticism of the army’s commander, Gen Burhan.

The goal could be to trick audiences into believing Bashir has re-emerged to play a part in the conflict. Alternatively, the channel could be using the former leader’s voice to lend legitimacy to a particular political stance, though it is unclear what that stance is.

The Voice of Sudan denies misleading the public and claims to be unaffiliated with any organisations. When we contacted the account, we received a text message that said, “I want to communicate my voice and explain the reality that my country is going through in my style.”

According to Henry Ajder, whose BBC Radio 4 series investigated the emergence of synthetic media, an effort of this magnitude to resemble Bashir is “significant for the region” and has the potential to deceive audiences.

Artificial intelligence researchers have long warned that fake video and audio could fuel a wave of disinformation, potentially stirring unrest and disrupting elections.

“What’s alarming is that these recordings could also create an environment where many disbelieve even real recordings,” says Mohamed Suliman, a researcher at Northeastern University’s Civic AI Lab.

How can you identify audio-based misinformation?

As this case shows, users should consider whether a recording sounds plausible before sharing it.

Checking to see if it was released by a reliable source is critical, but authenticating audio is challenging, especially when content circulates through chat apps. It’s considerably more difficult during a period of societal instability, such as the one currently underway in Sudan.

The technology for training algorithms to detect synthetic audio is still in its early stages, while the technology for mimicking voices is already extremely sophisticated.

TikTok removed the account after being alerted by the BBC, citing violations of its policy against “false content that may cause significant harm” and its rules on the use of synthetic media.

(Adapted from BBC.com)