Online Exclusive

How Autocrats Weaponize AI — And How to Fight Back

Artificial Intelligence has become autocrats’ newest tool for surveilling, targeting, and crushing dissent. Activists must learn how to harness it in the fight for freedom.

By Albert Cevallos

March 2025

Artificial Intelligence (AI) is transforming societies around the globe, ushering in new possibilities for innovation and advocacy. But it has also become a battleground between autocrats and activists. Authoritarian regimes, armed with vast resources and cutting-edge AI tools, have gained a significant upper hand in surveilling, targeting, and suppressing dissent. Meanwhile, activists often lack the resources and training they need to leverage AI and fight back.

This resource gap leaves activists vulnerable, excludes them from shaping the future development of AI, and hinders their ability to counter oppression. Closing the gap is essential for protecting human rights and ensuring that AI evolves in ways that uphold transparency, justice, and freedom.

The Autocrats’ New Tool

Autocrats and oppressive governments are increasingly using AI to monitor, target, and silence activists; undermine democratic processes; and consolidate power. Through mass surveillance, facial recognition, predictive policing, online harassment, and electoral manipulation, AI has become a potent tool for authoritarian control.

AI-powered facial-recognition systems are the cornerstone of modern surveillance. The Chinese Communist Party has implemented vast networks of AI-driven cameras capable of identifying individuals in real time. The technology is often used to monitor public gatherings, protests, and even day-to-day activities, making it nearly impossible for activists to operate anonymously. China has also used AI to target the Uyghur community under the guise of counterterrorism. Protesters in Hong Kong employed tactics like wearing masks, shining lasers at cameras, and using umbrellas to thwart facial recognition during antigovernment demonstrations in 2019, but reports emerged of individuals still being arrested based on AI-assisted identification. In Russia too, AI surveillance tools monitor antigovernment protesters. In 2021, Moscow’s expansive facial-recognition network was reportedly used to track and detain individuals participating in anti-Putin demonstrations.

The chilling effect of such technologies cannot be overstated: They deter activism and dissent through fear of retribution. What is worse, the technology is being exported and shared around the world.

Predictive policing presents a growing threat to activists. Powered by AI that analyzes data from sources such as police records, surveillance footage, social-media activity, and public and private databases, these tools forecast potential crimes or unrest. While the technology has legitimate uses, it has been widely criticized for perpetuating systemic bias and enabling authoritarian control. Activists often find themselves unjustly flagged as threats based on biased algorithms or intentionally manipulated data. In Egypt, the government has used AI to monitor social media for signs of dissent: AI systems analyze keywords, hashtags, and online activity to predict and preemptively suppress protests. Similarly in Bahrain, activists have been targeted using spyware and AI-driven monitoring systems, leading to arrests and harsh penalties.

AI technology can also help autocrats sow confusion. Sophisticated algorithms can generate deepfake videos, fake social-media accounts, and synthetic content at a dizzying rate to spread propaganda, discredit activists, or divide opposition groups. During protests in Burma following the 2021 military coup, AI-driven bots harassed activists and flooded social media with pro-junta narratives. These campaigns aimed to drown out dissenting voices and fracture solidarity among protesters. Activists face an uphill battle against such coordinated efforts, which undermine trust and amplify fear.

AI can also censor dissenting voices online. In countries such as Iran and Saudi Arabia, advanced AI systems monitor and automatically delete content deemed critical of the regime. In some cases, activists’ accounts are flagged, suspended, or “shadow banned” — when posts are blocked from other users’ feeds without the creator’s knowledge or consent — thus limiting activists’ ability to organize and spread awareness. During the 2022 Woman, Life, Freedom protests in Iran that were sparked by the death of Mahsa Amini, activists reported widespread internet blackouts and algorithmic suppression of protest-related content on social-media platforms. AI-driven censorship tools make it harder for activists to document and share human-rights abuses.

AI has been weaponized to supercharge online harassment, creating hostile digital environments that deter people from democratic engagement online. AI-driven bots and algorithms bombard activists, journalists, and opposition figures with harassment, trolling, and false information. The Belarusian government has systematically deployed state-sponsored online trolls to harass independent media outlets, creating a climate of fear and self-censorship that lets the government control the narrative. These tactics, ongoing since at least 2011, not only intimidate activists and journalists but also deter ordinary citizens from public discourse for fear of retribution, eroding trust in democratic institutions.

Targeted harassment campaigns driven by AI actively undermine democratic processes. In Zimbabwe’s 2018 election, reports indicated that AI-powered bots were used to spread false information about voter-registration deadlines, leading to voter suppression in opposition strongholds. Similarly in Russia, AI has been used to manipulate public opinion by amplifying state-sponsored narratives while silencing critics, as seen in the 2021 parliamentary elections when bots and trolls discredited opposition leaders and fabricated narratives to justify election outcomes. In Venezuela, the government has allegedly used AI to analyze voter data, gerrymander districts, and inundate individuals with pro-regime messaging to maintain control.

AI Is for Activists Too

Despite these challenges, activists and movements worldwide are beginning to harness AI as a force for good. From encryption tools to AI-driven human-rights documentation, innovative uses of AI help activists counter repression and protect their communities.

As surveillance intensifies, activists are using AI-powered tools to enhance their digital security and privacy. Encrypted-messaging apps such as Signal protect activists from government surveillance with end-to-end encryption, which makes it nearly impossible for third parties to intercept or decipher communications. AI, in turn, is being used to detect spyware and malicious attacks: Tools such as Amnesty International’s Mobile Verification Toolkit help activists identify and mitigate risks from spyware such as Pegasus, which has targeted journalists, activists, and human-rights defenders worldwide.

Activists are also leveraging AI to debunk false information and promote factual narratives. Fact-checking platforms such as Full Fact and Logically use AI algorithms to analyze and verify claims, helping activists to counter propaganda and build trust in their messages. During the covid-19 pandemic, AI-driven fact-checking tools helped combat false information about vaccines and public-health measures. By identifying false narratives early, activists were able to provide accurate information and hold governments accountable.

Increasingly, AI is playing a crucial role in documenting human-rights abuses and gathering evidence for accountability. HURIDOCS uses AI to organize, analyze, and verify evidence of human-rights violations. Platforms like this one help activist organizations build robust cases against perpetrators. In Syria, AI-driven tools have been used by human-rights groups to analyze satellite imagery and social-media content to document war crimes. And during the Rohingya crisis in Burma, particularly following the 2017 mass displacement, AI was employed to analyze patterns of violence, corroborate survivor testimonies, and aid international advocacy efforts. In what was believed to be the first comprehensive AI analysis of the situation, Carnegie Mellon University used AI to examine over 250,000 YouTube comments to detect hate speech.

AI is transforming how activists engage with audiences. Machine-learning algorithms analyze social-media trends and help movements tailor their messages for maximum impact. Chatbots and AI-driven platforms automate responses, provide resources such as toolkits and contact information, and engage supporters. In Venezuela, a group of Latin American media organizations created two AI-generated newscasters to deliver updates on the deteriorating political situation following the stolen presidential election in July 2024; the AI avatars helped keep real reporters safe from government retribution. In Belarus, an AI candidate was created for the February 2024 parliamentary elections to raise awareness about the risks opposition and rights activists faced in the country.

Why Autocrats Have the Upper Hand

While activists are increasingly experimenting with and using AI, the stark resource imbalance between oppressive regimes and grassroots movements still leaves activists at a serious disadvantage. Autocratic governments often have access to vast financial and technological resources that allow them to develop, deploy, and refine AI tools at scale. These regimes partner with private tech firms, fund cutting-edge research, and integrate AI into state security apparatuses with little oversight or transparency.

In contrast, activists and human-rights defenders frequently operate with limited funding, outdated tools, and insufficient training in emerging technologies. The lag in support is critical: It often takes a year or more after new technologies become widely available for activists to receive the resources needed to counteract their misuse. This delay allows autocrats to consolidate their advantage and stifle dissent before activists can adapt. But the need for AI is palpable: In a recent Centre for Applied Nonviolent Action and Strategies (CANVAS) survey of activists and partners around the world, 97.1 percent of respondents said that they want to learn more about how to use AI for their work and how AI can be used to strengthen civil society and democratic engagement. And 91 percent of respondents want continuous education opportunities to learn about AI.

The delay in providing activists with AI training and resources has profound implications. Frontline activists are left out of critical conversations about how AI should be developed and deployed. AI systems are therefore rarely designed with human rights, transparency, or fairness as priorities. And without early access to tools and training, activists struggle to counter new forms of surveillance and censorship, leaving them vulnerable to emerging threats. Further, activists with inadequate AI literacy and resources cannot leverage technology as effectively for advocacy, outreach, and movement-building. This limits their ability to inspire and mobilize international support, and reduces global impact.

Leveling the Playing Field

The global community must prioritize providing activists with the tools, training, and resources they need to protect themselves and harness the power of AI. Activists need comprehensive training programs to understand AI technologies, identify threats, and adopt best practices for digital security. Organizations including Access Now, Witness, and Tactical Tech are already making strides in this area, but these efforts need to scale globally; international donors should include such training in all their programs, especially those that support grassroots activists.

Governments, NGOs, and philanthropic organizations should also offer grants to fund activist-led projects that develop AI tools for human-rights advocacy, including tools for documenting abuses, countering false information, and evading surveillance. Donors should encourage activists and movements to explore, create, and experiment with emerging AI tools. Activists targeted by AI-driven repression also need access to emergency funding and technical assistance, which could include legal support, access to secure encryption technologies, or relocation assistance for those at risk.

Partnerships between AI developers, human-rights defenders, and civil society groups are crucial for accelerating the development of AI solutions to real-world challenges. To this end, CANVAS partners with the University of Virginia to organize the People Power Academy, where experts and leaders in the fight against the authoritarian use of technology share their insights into cutting-edge advocacy tools. Activists must also be included in policy discussions about AI governance to ensure that AI systems are designed with transparency, accountability, and human rights in mind.

By providing activists with early access to AI tools, training, funding, and collaboration opportunities, the global community can better equip them to counter repression and ensure that AI serves as a force for liberation rather than oppression.

A Contest of Skills over Conditions

The interplay between AI and activism underscores a fundamental truth: Technology is neither inherently good nor inherently bad — it is a reflection of the values and intentions of those who wield it. While autocratic regimes use AI to suppress dissent and consolidate power, activists are finding innovative ways to turn the tide and leverage the same tools to fight for justice, equality, and human rights.

No amount of resources can ever fully level the playing field between authoritarians and grassroots movements. States will always have significant advantages: more money, more data, more computing power, and more institutional control, plus police, military, and judicial systems at their disposal. Yet history is full of examples of less-resourced, underdog movements using the tools available to them to outmaneuver and outwit autocrats — even those who seemed invincible. AI is simply another tool activists can use.

This suggests another fundamental truth: The real battleground is not raw technological capability, nor is it about using AI for AI’s sake. The true test will be understanding AI and strategically integrating it into a movement’s broader goals. AI is not an arms race between activists and authoritarians; rather, it is a contest of skills over conditions — one where adaptability, creativity, and strategic application matter more than sheer power.

What makes AI so powerful is its ability to enhance efficiency, allowing activists to do more, faster, and at scale. And in asymmetric struggles where governments have superior resources, efficiency can often be the deciding factor. Activists can harness AI for agility and disruption — automating security, evading censorship, amplifying resistance, and strategically undermining authoritarian pillars of support. AI doesn’t just help activists fight back — it allows them to outmaneuver repression in ways that were previously impossible.

Ultimately, AI will not determine the outcome of struggles between repression and freedom — people will. The activists who understand how to wield AI strategically, leveraging its strengths while mitigating its risks, will be better positioned to challenge authoritarian power and drive social change. The key is not to match the scale of authoritarian AI but to outthink, outpace, and outmaneuver it.

Albert Cevallos helps lead the CANVAS Activist Intelligence program, which empowers activists and movements to learn about AI, defend themselves against it, and put it to use.


Copyright © 2025 National Endowment for Democracy

Image credit: DAVID MCNEW/AFP via Getty Images
