Welcome to AI ThreatScape!
In this edition, we dive into:
Politics: Indonesia Elections: How Did AI Reshape Prabowo’s Election Campaign?
Bots: AI-Bots - A Game-Changer in the 2024 Election Landscape
Privacy: The Risks of AI Romantic Chatbots - What You Need to Know
POLITICS
Indonesia Elections: How Did AI Reshape Prabowo’s Election Campaign?
Indonesia, the world’s third-largest democracy, went to the polls on February 14 to elect its next President. While the official results are not yet out, Prabowo Subianto, Indonesia’s Defence Minister, has already declared himself the winner.
Going by preliminary numbers, it appears that Prabowo will be the next Indonesian President. But this isn’t Prabowo’s first rodeo as a presidential candidate. He ran for President in 2014 and 2019 and lost.
What changed this time?
Being a military man, Prabowo projected himself as a strong nationalist in the past. That strategy clearly did not resonate with a vast majority of the electorate.
Consider this: Millennials and Gen Z make up more than 50% of Indonesia’s 205 million eligible voters. They also make up the majority of Indonesia’s 167 million social media users.
This generation of voters isn’t fully aware of Prabowo’s past human rights abuses or his role in the kidnapping and torture of political opponents. They are interested in a leader who connects with them, and that is exactly what Prabowo focused on this time around.
An Image Rebranding
Prabowo and his team understood that a strongman image would repel young voters. They were also mindful that, at 72, Prabowo was the oldest of the three presidential candidates.
Keeping this in mind, Prabowo’s image underwent a complete makeover. He was recast as a warm, cuddly and fun leader. AI-generated avatars of Prabowo were all over billboards, social media, sweatshirts, stickers and more!
The AI Impact
Launching the PrabowoGibran Generative AI App: In December, the PrabowoGibran.ai generative platform was launched. Users of the app could do some cool things, such as immersing themselves in AI-generated scenarios like a jungle hike or a safari with Prabowo. The app also helped Prabowo’s campaign team understand user (voter) sentiment.
Creating Campaign Art: Text-to-art tools such as Midjourney, Leonardo AI, Microsoft Bing and Pika Labs were used to support the creative aspects of the campaign. Midjourney was used to create Prabowo’s doppelganger.
Resurrecting a Former (Dead) Leader: One of the most controversial instances saw a video of Indonesia’s late strongman, Suharto, urging voters to elect Prabowo. The video was created using Midjourney and Leonardo AI, while his voice was crafted using ElevenLabs.
Why Does This Matter?
The Indonesian election is a strong case study in how AI tools can and will be used to achieve campaign goals. For better or worse, some of the areas where the use of AI tools will be very effective are:
Image Rebranding and Messaging: As seen in Prabowo’s case, AI will enable candidates to tailor their messaging and image to resonate with specific voter demographics.
Targeted Outreach and Personalisation: AI-driven analytics will help campaigns identify and target specific voter groups with personalised messages and adverts.
Predictive Analytics and Sentiment Analysis: AI tools will enable campaign teams to analyse vast amounts of data, identify emerging trends and gauge public sentiment. This will provide campaigns with valuable insights for strategic decision-making (a toy sketch of sentiment tallying follows this list).
Innovative Engagement Strategies: The AI-generated immersive experience offered by the PrabowoGibran.AI app attracted young voters. Breaking away from traditional methods, it offered the Prabowo campaign a novel way to engage with voters. More such case studies will be seen in future elections.
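To make the sentiment-analysis idea concrete, here is a minimal, hypothetical sketch (not anything from the Prabowo campaign) of how a team might tally sentiment across a batch of social-media comments with an off-the-shelf classifier from the Hugging Face transformers library. The sample comments and the English-language default model are illustrative assumptions; a real campaign would need an Indonesian-language model and far larger data volumes.

```python
from collections import Counter
from transformers import pipeline  # Hugging Face transformers

# Hypothetical sample of social-media comments about a candidate.
comments = [
    "Love the new campaign videos, so relatable!",
    "Still not convinced by his record from the 1990s.",
    "The AI avatar stickers are everywhere, hilarious.",
]

# Off-the-shelf English sentiment classifier (library default model); purely illustrative.
classifier = pipeline("sentiment-analysis")

results = classifier(comments)                 # one {"label", "score"} dict per comment
tally = Counter(r["label"] for r in results)   # e.g. Counter({'POSITIVE': 2, 'NEGATIVE': 1})
print(tally)
```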
Bottom line: With the right strategy (which AI can produce) and high-quality execution (which AI can enable), even a strongman with a violent past can be transformed into a warm and cuddly leader who is loved by the masses and elected to power.
BOTS
AI-Bots: A Game-Changer in the 2024 Election Landscape
AI bots have emerged as a formidable force in shaping election narratives. Hate speech, political propaganda, and misinformation have found new channels of dissemination through these automated social media accounts, exacerbating the challenges of election integrity.
The PNAS Nexus Study's Findings
A recent study published in PNAS Nexus serves as a stark warning of the impending AI onslaught in the 2024 elections. Researchers project that AI-driven disinformation campaigns will permeate social media platforms on a daily basis, influencing public opinion and potentially swaying election outcomes across more than 50 countries.
Rise of the Bots
As generative AI technology becomes more accessible, the sophistication of AI bots is set to soar. No longer confined to poorly constructed messages, AI bots now wield the power to craft convincing narratives that blur the lines between fact and fiction.
Generative AI lies at the heart of the AI bot menace. By harnessing the power of large language models, bad actors are able to produce text that mimics human speech patterns with uncanny accuracy. This newfound capability elevates AI bots from mere messengers to master manipulators of public discourse.
Mapping the Threat Landscape
The study's lead author, Neil Johnson of George Washington University, highlights the intricate web of connections between bad actor groups across various online platforms. Extremist factions thrive in smaller, less moderated online communities, amplifying their toxic messages with alarming efficiency.
Why Does This Matter?
Detection Challenges and the Arms Race: Identifying AI-generated content poses a formidable challenge for current detection tools. As AI bots evolve to mimic human behaviour more convincingly, detection methods must keep pace; for now, at least, they are struggling to do so (a toy example of one naive heuristic, and why it is fragile, is sketched after this list).
The Implications for Election Integrity: The widespread deployment of AI bots threatens to undermine the very foundations of democratic elections. With the potential to sway public opinion and manipulate online discourse, AI bots represent a clear and present danger to the integrity of electoral processes worldwide.
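For a sense of why detection is so hard, here is a minimal, hypothetical sketch of one widely discussed heuristic: scoring how statistically predictable a text is (its perplexity) under a small reference language model and flagging unusually predictable text as possibly machine-generated. The choice of GPT-2 and the threshold are assumptions made purely for illustration; in practice, newer models and light paraphrasing routinely defeat this kind of check, which is exactly the arms race described above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small reference model used only to score how "predictable" a text is.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more statistically predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

THRESHOLD = 40.0  # arbitrary illustrative cutoff, not a validated value

def looks_machine_generated(text: str) -> bool:
    # Naive heuristic: very low perplexity is treated as a weak signal of AI-generated
    # text. Human text can score low too, and paraphrasing raises the score, so this
    # produces both false positives and false negatives.
    return perplexity(text) < THRESHOLD

print(looks_machine_generated("The election will be held on schedule next month."))
```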
DATA PRIVACY
The Risks of AI Romantic Chatbots: What You Need to Know
New research by the Mozilla Foundation exposes the alarming privacy and security risks associated with AI romantic chatbots. These chatbots, touted as "AI girlfriends" or "AI boyfriends," have amassed over 100 million downloads on Android devices, raising significant concerns about data privacy and user security.
The Key Findings
An analysis of 11 popular romance and companion chatbots reveals that:
These apps collect extensive personal data.
They employ trackers that transmit information to tech giants like Google and Facebook, as well as to companies in Russia and China.
They lack transparency regarding ownership and the AI models driving them.
Concerning Practices
Many AI romantic chatbots encourage users to share intimate details and engage in role-playing scenarios. However, these apps often fail to provide clear information about data-sharing practices, encryption protocols, or the handling of user information. Weak password requirements further exacerbate security vulnerabilities, putting users' personal data at risk of exploitation by malicious actors.
Crazy Stats
The Mozilla study shows that AI romantic chatbots put user information at serious risk of leak, breach or hack. Here are some mind-boggling stats related to AI romantic bots:
One chatbot, Romantic AI, unleashed a staggering 24,354 ad trackers within just one minute of use.
73% haven’t published any information on how they manage security vulnerabilities.
64% haven’t published clear information about encryption and whether they use it.
45% allow weak passwords, including the weak password of “1” (a minimal policy check that would reject this is sketched after this list).
90% may share or sell your personal data.
54% won't let you delete your personal data.
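On the weak-password point, here is a minimal, hypothetical sketch of the kind of baseline password policy Mozilla found many of these apps lacking. The specific rules below are illustrative assumptions, and a real app should also rate-limit login attempts and check candidate passwords against breached-password lists.

```python
import re

def password_is_acceptable(pw: str) -> bool:
    # Illustrative baseline: a minimum length plus basic character diversity.
    return (
        len(pw) >= 12
        and re.search(r"[a-z]", pw) is not None
        and re.search(r"[A-Z]", pw) is not None
        and re.search(r"\d", pw) is not None
    )

print(password_is_acceptable("1"))                  # False: the single-character password Mozilla flagged
print(password_is_acceptable("Correct-Horse-42!"))  # True under this sketch policy
```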
Why Does This Matter?
Privacy Invasion: AI romantic chatbots pose a significant privacy threat, collecting extensive personal data without user consent. Lack of transparency increases the risk of privacy breaches.
Security Vulnerabilities: Users face security vulnerabilities with weak passwords and data tracking by chatbots. Trackers transmit user information to third parties, including tech giants and companies in Russia and China.
Lack of Transparency: Opacity surrounding ownership and data usage policies leaves users uninformed about how their data is handled. Concerns about accountability and trustworthiness persist.
Intimate Engagement: Chatbots encourage intimate conversations and sharing of personal secrets, potentially leading to exploitation and manipulation by malicious actors.
Overwhelming Surveillance: Chatbots are deploying overwhelming ad trackers and data collection methods, subjecting users to invasive surveillance practices.
Wrapping Up
That’s a wrap for this edition of AI ThreatScape!
If you enjoyed reading this edition, please consider subscribing and sharing it with someone you think could benefit from reading AI ThreatScape!
And, if you’re already a subscriber, I’m truly grateful for your support!