5 Key Takeaways From Monitoring AI-Enabled Threats For 150+ Days: A Review of 2023 & A Forecast for 2024
Welcome to the year-end special edition of AI ThreatScape!
We have been monitoring AI threats for a little over 150 days now. Seventeen editions of AI ThreatScape have been released before this one, covering macro and micro developments in the evolution of AI-related threats.
Monitoring AI threats over these 150+ days has presented a unique opportunity to review the major threats that manifested during this time. Interestingly, some of these threats will not only spill over into 2024 but will most likely snowball into severe threats. This allows us to forecast what may be in store for 2024.
So in this edition, we put together 5 key takeaways from monitoring AI-enabled threats for 150+ days and deliver a forecast for 2024.
5 Key Takeaways & Forecasts in a Nutshell
China’s use of AI tools for disinformation has become increasingly sophisticated. The US will continue to be a prime target.
Generative-AI has been used aggressively to mould public opinion during major wars. Its usage will continue to see an uptick.
Some political parties have already tasted the impact of AI-generated material. Its full force will be felt in the upcoming major elections of 2024.
Deepfake porn and nudes have haunted women and children. With no foolproof solution in sight, the situation is unlikely to improve.
Public figures did their best to fire-fight against deepfakes; expect no respite in 2024.
Takeaways & Forecasts Unpacked
#1: China’s use of AI tools for disinformation has become increasingly sophisticated. The US will continue to be a prime target.
Targeting US Elections Using Generative-AI
In September, Microsoft released a report exposing how CCP-linked actors were using generative-AI to sow discord among American voters.
By using AI to generate high-quality visuals, this influence operation focussed on divisive topics such as gun violence and on denigrating US public figures and symbols.
The Full Article: AI ThreatScape, Edition 6
Exploiting the Maui Fires
Again in September, China-affiliated actors were found to be executing another disinformation campaign. This time around they targeted the Maui fires.
Various social media platforms saw a deluge of false reports, claiming that the Maui disaster was the outcome of a “weather weapon” being tested by the United States.
To add credibility, the reports carried AI-generated photos to support the narrative.
The Full Article: AI ThreatScape, Edition 7
Targeting Canadian Politicians
In October, the Australian Strategic Policy Institute unmasked a China-linked disinformation campaign targeting around 50 Canadian MPs, including the Prime Minister.
The campaign used sophisticated deepfake technology to:
Generate a deepfake video of a popular political vlogger accusing the Canadian politicians of corruption, racism and philandering.
Spread a viral deepfake image on social media of Canada’s environment and climate change minister being arrested.
The Full Article: AI ThreatScape, Edition 12
YouTube Network Targeting the US
This December, the Australian Strategic Policy Institute released a report exposing a pro-China YouTube network (with 30 channels and over 4,500 videos) pushing narratives favouring China.
AI-generated voices, videos and a believable human character were all used to sway global opinion. Part of the narrative was to showcase China as a tech giant surpassing the US as a responsible world leader.
The videos generated through this network amassed 120 million views and 730,000 subscribers over a year.
The Full Article: AI ThreatScape, Edition 17
Forecast for 2024
China-backed influence campaigns will become increasingly sophisticated and widespread, leveraging AI to target the 2024 US presidential elections.
The use of generative-AI, combined with the intent and financial muscle of the CCP, will help its ongoing influence operations achieve a much greater scale.
The impact will become clearer closer to the elections. However, what cannot be discounted is that some percentage of American voters are likely to be influenced.
#2: Generative-AI has been used aggressively to mould public opinion during major wars. Its usage will continue to see an uptick.
Governments & Political Leaders Targeted In Israel-Hamas War
As the Israel-Hamas war escalated, a deluge of disinformation hit social media, with government and political leaders being prime targets. Generative-AI was used to produce authentic-looking and sounding videos and documents.
Some of the interesting narratives floated at the time included the US funding Israel, the US backing Iran, the US embassy in Beirut being evacuated, deepfake videos showing Trump making an anti-Israel statement, and Putin and Erdogan expressing their support for Palestine in separate videos.
The Full Article: AI ThreatScape, Edition 10
Moulding Public Opinion
Amid the Israel-Hamas war, both sides resorted to the aggressive use of generative-AI tools. At least one of the primary objectives was to sway public opinion.
From using AI-generated images showing a baby in the rubble, to using deepfake videos showing celebrities making pro/anti statements, the Israel-Hamas war took usage of AI tools to another level.
Something less conventional, although not rare, is the use of lulz to amplify hate. The Israel-Hamas war has also seen AI-fuelled subliminal hate - the sort of hate which is subtle, often masked in the form of humour.
The Full Article: AI ThreatScape, Edition 11, Edition 13 & Edition 14
Forecast for 2024
Generative-AI has made it easier to produce authentic-looking and sounding content. Malicious actors have been exploiting the tech to its fullest to target governments, leaders or influential individuals whose opinions matter. This tactic will continue to be applied in situations where it’s deemed effective.
Social media platforms have been unable to contain the deluge of disinformation during high-impact events such as wars. Their future ability to effectively arrest AI-enabled disinformation remains questionable.
Given that the average social media user does not attempt to verify facts, quality AI-generated disinformation and misinformation will likely receive significant engagement.
#3: Some political parties have already tasted the impact of AI-generated material. Its full force will be felt in the upcoming major elections of 2024.
Slovakia’s Election Hit by AI-Generated Disinformation
The pro-Russian leader Robert Fico emerged as the winner of this year's Slovakian elections. He ran a pro-Russia, anti-American campaign which resonated deeply with the far right.
An interesting development two days before the election was the emergence of a deepfake audio recording of Michal Simecka, the leader challenging Fico. In the audio, which surfaced on Facebook, Simecka was heard discussing election rigging and vote-buying tactics.
The Full Article: AI ThreatScape, Edition 9
Zambian President Decides Against Re-Election in 2026
In October, a video of Zambian President Hakainde Hichilema emerged, in which he is seen announcing that he will not run for the presidency in 2026.
Experts analysed the video and labelled it a deepfake. Hichilema’s party claimed it was the work of the opposition; although that was never proven, the video did seem to be an attempt to sow mistrust and cause confusion.
The Full Article: AI ThreatScape, Edition 12
AI-Generated Disinformation Hits Bangladesh
Bangladesh will be going into elections in the first week of January 2024. In the lead-up to the elections, pro-government news outlets and influencers have been using AI tools to generate disinformation.
Deepfake videos showing the opposition leader speaking unfavourably against Gazans (a faux pas in a Muslim-majority nation) and an AI-generated avatar of a news anchor criticising the US (a stance adopted by the current government), are just a couple of examples of how AI is being used to influence the voter.
The Full Article: AI ThreatScape, Edition 17
Forecast for 2024
Smaller markets, especially those with little influence over big tech, tend to be neglected by the platforms. This creates room for malicious actors to exploit the situation to its fullest. Given this neglect and the absence of solid laws to counter the negative aspects of AI, the use of AI tools to generate content that moulds and manipulates public opinion is likely to see an uptick.
More political parties and leaders will be seen using AI tools to support their political agendas, including the dissemination of disinformation and misinformation against their opposition.
The possibility that threat actors will take lessons learnt in smaller markets and deploy them in the more significant elections of 2024, especially the US presidential elections, cannot be ruled out.
#4: Deepfake porn and nudes have haunted women and children. With no foolproof solution in sight, the situation is unlikely to improve.
Deepfake Porn, Revenge Porn, CSAM & Undressing Imagery
2023 has been the year when easily accessible and relatively cheap generative-AI tools such as Midjourney became a nightmare for women and children. Countless women have fallen victim to deepfake porn and revenge porn, while the proliferation of child sexual abuse material saw a major uptick.
Such has been the impact that the FBI issued a public service announcement in June. In it, the FBI described how explicit content was being created using generative-AI, noted the explosion in cases of sextortion and harassment, and provided recommendations on how to deal with this threat.
To contain the growing threat of revenge porn, New York even introduced a law in September that banned the dissemination of AI-generated pornographic images made without the subject’s consent.
But, at the moment, the law seems to be playing catch-up. Take, for example, the world of synthetic non-consensual intimate imagery (NCII), which is now a full-fledged industry. According to a recent study, there are now at least 34 players in this space. Creators, often referred to as ‘undressers’, use AI to manipulate real photos and videos, making people appear nude without their permission.
The Full Article: AI ThreatScape Edition 1, Edition 9, Edition 16
Forecast for 2024
While laws are being put in place to contain the threat posed by the malicious use of generative-AI, they are unlikely to make a significant impact in 2024. Malicious users have found loopholes and ways to circumvent countermeasures.
Providers of synthetic NCII appear to be growing in number, which means the risk of online harm will also continue to spike. Think non-consensual nudes, targeted harassment, sextortion and even child sexual abuse material.
Until stringent measures hold providers and consumers legally accountable, synthetic NCII will continue to witness explosive growth. Unfortunately, 2024 will only see more cases of women and children being targeted.
#5: Public figures did their best to fire-fight against deepfakes; expect no respite in 2024.
If You’re Famous, You Must Have Been Deepfaked
Be it a popular political leader, an actor, a media personality or a social media influencer - 2023 has been a year in which almost everyone famous has seen a deepfake of themselves. In most cases, it hasn’t been pleasant.
Elon Musk, MrBeast, Tom Hanks, Gayle King and a host of others discovered deepfake videos of themselves promoting products or services that they had no clue about.
Barack Obama talked about his personal chef’s death. Donald Trump called for Israel to be wiped off the face of the earth. Putin sided with the Palestinians. And Zelensky was found belly-dancing in a viral deepfake.
In some cases, though, people deepfaked themselves. Take, for example, the Mayor of New York, who used his AI-generated audio clone in multiple messages to connect with New Yorkers. And in Pakistan, the former Prime Minister Imran Khan, who happens to be serving jail time, used a deepfake video of himself to address his supporters.
2023 has truly been the year of deepfakes!
The Full Article: AI ThreatScape, Edition 8, Edition 9, Edition 10, Edition 11, Edition 12, Edition 15, Edition 17
Forecast for 2024
Impersonation of influential individuals is not new. But never before have deepfake videos been produced at this rate.
While some deepfake videos are produced purely to embarrass the target or for fun, a significant percentage are fraudulent or are being used to manipulate public perception.
A significant upsurge in such deepfakes should be expected in 2024. Although governments will try to introduce controls, a positive impact is unlikely to be witnessed in 2024.
Wrapping Up
That’s a wrap for this edition of AI ThreatScape!
If you enjoyed reading this edition, please consider subscribing and sharing it with someone you think could benefit from reading AI ThreatScape!
And, if you’re already a subscriber, I’m truly grateful for your support!