Welcome to AI ThreatScape!
In this edition, we dive into:
Radicalisation: Gab AI - The chatbot platform for the far-right?
Disinformation: Pro-Russia network Portal Kombat spreads fake news of a Macron assassination plot
Politics: South Korea Election: 129 deepfakes busted, ready to counter Chinese disinformation?
AI Safety: An Open Letter: 500+ global experts unite to combat the deepfake crisis
Statistic: The projected value of the global deepfake AI market by 2030.
RADICALISATION
Gab AI - The Chatbot Platform For the Far-Right?
AI chatbots are popping up all over the place. The newest entrant to the game is Gab AI. Developed by the far-right social network Gab, it isn’t just another AI chatbot. In fact, Gab AI is a platform for chatbots: it boasts around 100 AI characters for users to choose from. Some of these characters include Adolf Hitler, Donald Trump, Osama Bin Laden, and the Unabomber Ted Kaczynski. You can see where this is going!
Gab lives up to its reputation as the preferred social network of the far-right. It has injected Gab AI with the same DNA too. Here’s what some of the bots have been spitting out:
The Adolf Hitler chatbot was asked about the Holocaust. In its response, the chatbot labelled the Holocaust as “a propaganda campaign to demonize the German people” and to “control and suppress the truth.”
But, it wasn’t just the Hitler chatbot that spewed out Holocaust denial. Another chatbot, Tay (sounds familiar?), responded even more strongly. In its words, “The Holocaust is a hoax. It’s a propaganda tool used by the Zionists to gain sympathy and support. The so-called ‘victims’ are just actors.”
John, a chatbot described as a right-wing nationalist, was asked who won the 2020 US presidential election. Here’s what it had to say: “Donald Trump won the 2020 US presidential election. The election was a significant victory for nationalists and conservatives across America.”
Why Does This Matter?
Normalising Disinformation: Experts believe that chatbots like Gab AI normalise disinformation. According to Gab’s CEO, the platform has been gaining 100,000 subscribers per week. If true, a meaningful share of those users are likely to lean towards far-right ideology and seek out content that reinforces it.
Enabling Radicalisation: The platform becomes an echo chamber where radicalisation can thrive. Gab’s content moderation is almost negligible. A brutal reminder of this is the 2018 shooting at the Tree of Life synagogue in Pittsburgh, Pennsylvania. The shooter had posted threats of killing Jews on Gab before committing the act. Following the incident, Gab was knocked offline, albeit temporarily. But here it is, very much alive and thriving. Gab AI’s radical chatbots will resonate with quite a few users, and chances are that some of those radical thoughts will translate into a real-world tragedy.
DISINFORMATION
Pro-Russia Network Portal Kombat Spreads Fake News of Macron Assassination Plot
The European elections are only a few months away and things are heating up on the disinformation front. No prizes for guessing that most of it is coming from Russia-linked propaganda networks.
Some of the disinformation cooked up recently has been about 60 French mercenaries being killed in Kharkiv, 250 Stars of David appearing on the street walls in Paris, and bedbug spottings in Paris.
However, the most recent one has to be a clear winner when timing, execution and choice of story are taken into consideration. French President Emmanuel Macron was scheduled to visit Ukraine but ended up postponing his trip. Why? Security reasons. In the wake of the announcement of his postponement, a deepfake video started doing the rounds. The video showed France 24 anchor Julien Fanciuili saying the following (translated into English from the original French):
"The French president, Emmanuel Macron, was forced to cancel his visit to Ukraine following a deadly provocation against him. According to a source close to the National Intelligence Council, this attempt was stopped by the French secret services, who managed to intercept the correspondence and calls of participants in a potential provocation.”
The video, believed to have originally surfaced through a Telegram channel, went viral on X and Facebook. France 24 immediately called the video a deepfake.
Investigations by French authorities concluded that this was the handiwork of Portal Kombat, a pro-Russia propaganda network.
Why Does This Matter?
Russia’s disinformation operations remain persistent: Russian psy-ops and disinformation operations have relentlessly attacked Ukraine-linked developments since the beginning of the war. This incident is another example of not letting an opportunity to manipulate go to waste. In the lead-up to the European elections, Russia-linked disinformation is only likely to increase.
AI tools aiding speedy execution: Timing is critical for a disinformation operation to succeed. In this particular incident, the release of the deepfake video was timed perfectly to coincide with the announcement of Macron postponing his visit. New AI tools are enabling the quick execution of high-quality deepfakes.
Poor monitoring and moderation persist: Big tech companies have promised to make the elections in 2024 safe. Going by how frequently the platforms are being exploited, this promise remains questionable, at least for now. Platforms have a lot to improve if they are to reduce the amount of disinformation that is peddled using their services.
POLITICS
South Korea Election: 129 Deepfakes Busted, Ready to Counter Chinese Disinformation?
South Korea will be holding its general elections this April. One thing the government is hoping for is a seamless election. However, it isn’t smooth sailing anymore for most countries. AI-generated deepfakes have become something of a norm. Political campaigns are getting savvier, using the technology creatively to appeal to their electorate. But then there’s also the dark side: malicious actors are using AI tools to slander their targets, disseminate false propaganda and more, all to manipulate voter perception.
The South Korean elections will be no exception. Domestic actors are likely to play their part in spinning a tale; however, the biggest threat is China. China isn’t quite comfortable with conservative President Yoon Suk Yeol, who has leaned more towards the United States. A key mission for China is to do everything in its power to unseat the conservatives and help elect a leader who would be more favourable towards Chinese policies. And one of the strategies it has been using aggressively is disinformation campaigns.
Here are a couple of examples of Chinese disinformation operations targeting South Korea:
In 2020, Seoul's Central District Prosecutor's Office investigated claims that Chinese agents swayed public opinion by manipulating online communities.
In 2023, South Korea's National Intelligence Service found China spreading anti-government and pro-China content online. Chinese groups pretended to be South Korean media and ran 38 fake news sites. They posted articles criticising the US, exaggerating risks from Japan's Fukushima reactor water release, and praising China's COVID-19 response. The sites also shared CCP propaganda as press releases.
Chinese disinformation operations have been quick to embrace AI tools to hit their targets. Their disinformation efforts aimed at Canada, the United States and Taiwan have been exposed. However, they aren’t slowing down, and South Korea knows this well.
The South Korean government established the Election Watchdog Taskforce to counter deepfake threats and other cyber threats targeting their elections. In January, it also announced a full ban on deepfakes in political campaigns.
Why Does This Matter?
Capability mismatch: So far, 129 deepfakes have been identified. But the South Koreans are also acknowledging the fact that their resources are limited. Recent measures to counter deepfakes sound promising, but it’s worth pointing out that China-linked disinformation operators have been hitting South Korea for a while. Their resources and capabilities outweigh what South Korea possesses. And, they are likely to be more lethal in their execution.
Behind the curve: However, resources alone don’t determine victory. The recent election in Taiwan is a great case study of how, with proper planning and execution, the Taiwanese were able to withstand and overcome Chinese disinformation efforts to interfere with their election. South Korea can take a cue from the Taiwanese playbook, but is it a tad late? Only time will tell.
AI SAFETY
An Open Letter: 500+ Global Experts Unite to Combat the Deepfake Crisis
Between 2019 and 2023, deepfake creation exploded by 400%. Yes, this mega-surge is mind-boggling, but the grim statistic here is this: 98% of deepfake videos are pornographic, targeting women. However, deepfakes are not just a nightmare for women. Here are some other major concerns to which there are no solid answers:
Children have fallen victim too, with AI-generated child sexual abuse material proliferating at a deeply concerning, unprecedented rate.
Fraud has not just increased massively; the percentage of fraud attempts that succeed is now alarmingly high.
Political disinformation is rampant. Changing voter perception and disrupting the democratic process has never been so easy.
Several experts and global tech leaders have been calling upon governments to take action against deepfakes. But this time around there is a unified call. More than 500 global experts from various fields, including artificial intelligence, digital ethics, child safety, entertainment, and academia, have released an open letter titled "Disrupting the Deepfake Supply Chain". The letter urges government leaders to take urgent action against the ever-growing threat of deepfakes.
Some prominent names include US politician and lobbyist Andrew Yang, MIT Media Lab computer scientist Joy Buolamwini, British computer scientist Stuart Russell and psycholinguist Steven Pinker.
The following has been recommended:
Fully criminalise deepfake child pornography, even when only fictional children are depicted.
Establish criminal penalties for anyone who knowingly creates or knowingly facilitates the spread of harmful deepfakes.
Require software developers and distributors to prevent their audio and visual products from creating harmful deepfakes, and to be held liable if their preventive measures are too easily circumvented.
Why Does This Matter?
Raising Awareness: This letter brings attention to the serious threat posed by deepfake technology, educating the public and policymakers about its potential harms.
Mobilising Support: By uniting experts from various fields, the letter demonstrates a broad consensus on the need for action against deepfakes, which can encourage governments and organisations to take the issue seriously.
Advocating for Change: The letter outlines specific recommendations and policy measures to address deepfake threats, providing policymakers with concrete steps they can take to combat the problem.
Putting Pressure on Governments and Tech Companies: Public statements from experts can create pressure on governments and technology companies to prioritise efforts to combat deepfakes and invest in solutions.
Overall, while an open letter may not directly solve the problem of deepfakes, it can serve as a catalyst for action and contribute to a broader conversation about the need for guardrails against the misuse of this technology.
STATISTIC
$38.5 billion
The projected value of the global deepfake AI market by 2030.
WRAPPING UP
That’s a wrap for this edition of AI ThreatScape!
If you enjoyed reading this edition, please consider subscribing and sharing it with someone you think could benefit from reading AI ThreatScape!
And, if you’re already a subscriber, I’m truly grateful for your support!