Influence Operations in the Age of AI: Covert Influence or Open Manipulation?

  • Writer: Clay Mobley
  • 7 days ago
  • 2 min read

Updated: 5 days ago

In the Intelligence Community, we always understood that the tools of influence were evolving faster than most policymakers could keep up. But even during my time in the community, we rarely forecast how quickly artificial intelligence would become the great equalizer in influence operations, turning what once required teams of trained officers into something any actor with a laptop and a motive can scale globally.


AI isn’t the future of influence. It’s the present.


AI as an Influence Force Multiplier


Today, state and non-state actors are exploiting AI to generate synthetic content, impersonate trusted figures, and micro-target vulnerable audiences, all at a velocity that traditional counter-influence methods struggle to match.


In 2025, Reuters reported that malicious actors are already using AI to impersonate senior U.S. officials, creating plausible deepfakes and voice clones that inject false narratives into information environments where speed beats accuracy (Reuters).


This isn’t just about deepfakes anymore. Meta disclosed in late 2024 that it dismantled over 20 coordinated covert influence operations using AI-generated personas and content, spanning from Iran to Russia to domestic extremist groups (The Guardian).


Perhaps most concerning, the Washington Post uncovered how Russian-aligned actors are grooming AI chatbots: seeding them with specific ideological biases designed to subtly shift user opinions over prolonged engagements (Washington Post). This is not overt manipulation; it is influence at the subconscious level, achieved through everyday conversations.


Implications for Private Sector and National Security


This shift has profound implications not just for governments, but for enterprises operating in contested markets or sectors vulnerable to disinformation. Whether it’s shareholder activism, reputational warfare, or market manipulation, the tools once reserved for nation-state propaganda are now accessible to competitors, activists, and criminal networks.


Organizations must stop thinking of influence operations as abstract geopolitical threats and start viewing them as operational risks to their brands, reputations, and bottom lines.


What to Do Next


Countering these campaigns requires more than social media monitoring. It demands an intelligence-led influence resilience strategy that combines OSINT, threat-actor tracking, AI model monitoring, and narrative risk mapping.


At Cheshire Institute, we approach influence threats the way we approached them in the IC: surgically, quietly, and with the discipline to distinguish between overreaction and strategic counteraction.


Because in the AI era, if you wait until you can see the threat, it is already too late.
