China Ups Its Anti-US Propaganda, TikTok's Rewards Program Supercharges Misinformation, U.S. House Bans Copilot, Key Insights from U.S. Treasury Report
31 March 2024 | Edition 29
Welcome to AI ThreatScape!
In this edition, we dive into:
Propaganda: China ups its anti-US propaganda with The Fractured America Series
Misinformation: TikTok’s Creator Rewards Program supercharges AI-driven misinformation
Technology: Why is Congress banning staff from using Microsoft’s AI Copilot?
Cybersecurity: Navigating AI cybersecurity risks in finance: Key insights from the Treasury report
PROPAGANDA
China Ups Its Anti-US Propaganda With The Fractured America Series
AI-Enabled Propaganda
Chinese state broadcaster CGTN recently released a new animated video series called “The Fractured America.”
The series features several videos aiming to depict America in terminal decline. Examples of the anti-American narratives in the series include:
American workers are in dire straits owing to political dysfunction and a failing economy
The American military-industrial complex is the “real” threat
The American dream is long dead
What makes The Fractured America series unique is that every element of the videos, from the animation to the voice-over, is AI-generated.
Upping the Ante
China has been using AI-generated content since March 2023. This new series, however, shows that its anti-US messaging is getting increasingly creative, with new tactics learned and deployed rapidly.
This video series, for instance, has drawn solid engagement, including from genuine social media users.
What’s even more interesting is that the narratives in these videos mirror some of the common grievances voiced by Americans themselves.
Why Does This Matter?
When it comes to anti-US propaganda, China has previously tried to exploit divisive issues and stir controversy along racial, economic or ideological lines.
This video series serves as another example of China’s ongoing efforts to closely study American sentiments and capitalise on them. Effective and extensive use of AI tools is perhaps only going to supercharge that going forward!
MISINFORMATION
TikTok’s Creator Rewards Program Supercharges AI-Driven Misinformation
Basics of the Program
TikTok launched its new Creator Rewards Program around March 18. The program pays creators based on qualified views and RPM (average revenue per 1,000 views).
The program also rewards originality and longer, engaging videos optimised for search.
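To make the payout math concrete, here’s a minimal sketch of how RPM-based earnings scale with views. The $0.50 RPM figure and the payout formula are illustrative assumptions; TikTok’s actual rates and qualification rules vary and aren’t public here.

```python
# Rough sketch of RPM-based payout math. The $0.50 RPM figure is an
# illustrative assumption, not TikTok's actual rate.
def estimated_payout(qualified_views: int, rpm_usd: float = 0.50) -> float:
    """RPM = average revenue per 1,000 qualified views."""
    return (qualified_views / 1000) * rpm_usd

# A video with 2 million qualified views at an assumed $0.50 RPM:
print(f"${estimated_payout(2_000_000):,.2f}")  # -> $1,000.00
```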
To join, a creator needs to meet the following:
Must be at least 18 years old
Must have a U.S. account
Must have at least 10,000 followers
Must have at least 100,000 video views in the last 30 days
Exploiting the Algorithm
The TikTok algorithm is a no-brainer: the wilder the video, the more buzz it gets. And this is precisely where generative AI becomes a creator’s best friend, stepping in to simplify tasks like boosting engagement, optimising for search, and producing longer content.
For creators unconcerned about history, facts, bias, copyright issues and the like, the new gen-AI tools are just perfect: they make it possible to produce videos quickly and at scale.
Here’s the deal: AI tools handle everything from scripts to voiceovers. With tools like ElevenLabs, a ChatGPT-written script can be turned into a voiceover, freeing creators from recording their own voices. And apps like CapCut, owned by TikTok’s parent company ByteDance, use AI to edit videos seamlessly.
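To illustrate just how little glue code that script-to-voiceover pipeline takes, here’s a minimal sketch. The model names, the placeholder voice ID, and the ElevenLabs endpoint parameters are assumptions based on the public APIs as of early 2024; check the current documentation before relying on them.

```python
# Minimal sketch of a script-to-voiceover pipeline. Model names, the voice ID,
# and the ElevenLabs endpoint/parameters are assumptions; set OPENAI_API_KEY
# and ELEVENLABS_API_KEY in the environment before running.
import os
import requests
from openai import OpenAI

# 1. Draft a short video script with an LLM.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user",
               "content": "Write a 60-second video script about deep-sea creatures."}],
)
script = resp.choices[0].message.content

# 2. Turn the script into a voiceover via ElevenLabs' text-to-speech API.
voice_id = "YOUR_VOICE_ID"  # placeholder: pick a voice in the ElevenLabs dashboard
tts = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
    headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
    json={"text": script, "model_id": "eleven_monolingual_v1"},  # assumed model ID
    timeout=120,
)
tts.raise_for_status()
with open("voiceover.mp3", "wb") as f:
    f.write(tts.content)  # audio bytes returned by the API
```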
So, is there any reason creators won’t use AI tools to game the TikTok algorithm? Nope, none at all!
Why Does This Matter?
TikTok’s creator program, paired with lax content rules, has already led to a surge in conspiracy videos made with free AI tools. Distinguishing harmless jokes from dangerous misinformation can be tricky on TikTok, where videos often blur the line between humour and falsehoods.
Now with the enticing Creator Rewards Program dangling dollars out there, a surge in AI-generated misinformation on TikTok shouldn’t come as a surprise.
TECHNOLOGY
Why is Congress Banning Staff From Using Microsoft’s AI Copilot?
Not Taking Chances
Amid all the AI hype, one nightmare has plagued companies: the risk of data leakage through popular AI tools. The U.S. House is no different; it has exactly the same reasons to worry. And that’s why, in a move to mitigate the possibility of its data being exposed, the House has banned its staff from using Microsoft’s AI chatbot, Copilot.
The Assessment and Implication
In a statement, the office of the House’s Chief Administrative Officer (CAO) said: “The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services.”
The doors aren’t fully shut on Copilot, though. The ban applies to the commercial version; when a government version of Copilot is released, the House will evaluate it closely and take a decision accordingly.
But for now, Copilot will be removed from and blocked on all House Windows devices.
ICYDK: What’s Copilot?
Copilot is Microsoft's AI assistant. It’s built on top of technology from ChatGPT creator OpenAI.
Microsoft's Copilot operates as a standalone chatbot for the web and mobile devices. Paid versions can also integrate directly into Office apps such as Word, Excel, Outlook, and PowerPoint.
Why Does This Matter?
The U.S. House assessing Copilot as a data-leakage risk indicates that the chatbot cannot be trusted with sensitive information; it possibly lacks sufficient controls to protect its users.
While the average user may not share the House’s concerns, there’s much to learn here. So, the next time you decide to share details of that sensitive project you’re working on with Copilot, think again!
CYBERSECURITY
Navigating AI Cybersecurity Risks in Finance: Key Insights from the Treasury Report
The U.S. Department of the Treasury has published a comprehensive report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.
Spearheaded by the Treasury’s Office of Cybersecurity and Critical Infrastructure Protection (OCCIP), the report sheds light on significant opportunities and challenges stemming from the integration of AI into financial operations.
Here's an overview of the report's key points and recommendations:
Bridging the AI Adoption Gap: The report highlights a notable disparity in AI adoption between large and small financial institutions. While larger players possess the resources to develop in-house AI systems, smaller institutions often lack the necessary data infrastructure to embark on similar initiatives.
Addressing Data Discrepancies in Fraud Prevention: A significant data divide exists in fraud prevention efforts, with insufficient data sharing hindering the development of effective AI models across the sector. This discrepancy disproportionately affects smaller institutions, which struggle to access the volume of historical data required to build robust anti-fraud AI models.
Enhancing Regulatory Coordination: The report underscores the importance of collaborative efforts between financial institutions and regulatory bodies to address AI-related oversight concerns. However, concerns linger regarding regulatory fragmentation, as different regulators at state, federal, and international levels navigate AI regulations independently.
Expanding the NIST Framework: There's a call to broaden the National Institute of Standards and Technology (NIST) AI Risk Management Framework to include more tailored guidance on AI governance and risk management specific to the financial services sector.
Promoting Data Transparency and Mapping: Establishing best practices for mapping data supply chains and implementing “nutrition labels” for AI systems is crucial to ensure data accuracy, privacy, and transparency across the financial sector (a hypothetical sketch of such a label follows this list).
Advancing AI Explainability: Solutions for enhancing the explainability of AI models, particularly complex ones like generative AI, are deemed essential for accountability and trustworthiness in financial operations.
Closing the Human Capital Gap: With a growing demand for AI expertise, the report emphasises the need for role-specific training programs to equip non-technical staff with the skills required to effectively navigate AI risks.
Standardising AI Terminology: The development of a common AI lexicon is identified as a priority to promote clarity and consistency in discussing AI-related concepts and applications.
Strengthening Digital Identity Solutions: Robust digital identity standards are seen as critical tools for combating fraud and bolstering cybersecurity measures across financial institutions.
Fostering International Collaboration: The Treasury aims to engage with global counterparts to address AI-related risks and opportunities in the financial services sector through collaborative efforts and knowledge-sharing initiatives.
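To make the “nutrition label” idea in the data-transparency recommendation concrete, here’s a hypothetical example of what such a label might contain. The report doesn’t prescribe a schema, so every field below is an illustrative assumption, loosely modelled on model cards.

```python
# Hypothetical "nutrition label" for an AI system. Field names and values are
# illustrative assumptions; the Treasury report calls for best practices but
# does not define a schema.
ai_nutrition_label = {
    "model_name": "fraud-screen-v2",  # hypothetical fraud-scoring model
    "intended_use": "Transaction fraud scoring for retail banking",
    "training_data": {
        "sources": ["internal transaction logs", "shared consortium fraud data"],
        "date_range": "2019-2023",
        "contains_personal_data": True,  # flags privacy obligations
    },
    "known_limitations": [
        "Lower accuracy on low-volume merchant categories",
        "Not validated for cross-border payments",
    ],
    "data_supply_chain": ["vendor A (data enrichment)", "in-house feature store"],
}
```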
WRAPPING UP
That’s a wrap for this edition of AI ThreatScape!
If you enjoyed reading this edition, please consider subscribing and sharing it with someone you think could benefit from reading AI ThreatScape!
And, if you’re already a subscriber, I’m truly grateful for your support!