Generative AI is poised to be the free world’s next great gift to authoritarians. The viral launch of ChatGPT — a system with eerily human-like capabilities in composing essays, poetry and computer code — has awakened the world to the transformative power of generative AI to create unique, compelling content at scale.
But the fierce debate that has ensued among Western industry leaders on the risks of releasing advanced generative AI tools has largely missed where their effects are likely to be most pernicious: within autocracies. AI companies and the US government alike must institute stricter norms for the development of tools like ChatGPT in full view of their game-changing potential for the world’s authoritarians — before it is too late.
So far, concerns around generative AI and autocrats have mostly focused on how these systems can turbocharge Chinese and Russian propaganda efforts in the United States. ChatGPT has already demonstrated generative AI’s ability to automate Chinese and Russian foreign disinformation with the push of a button. When combined with advancements in targeted advertising and other new precision propaganda techniques, generative AI portends a revolution in the speed, scale and credibility of autocratic influence operations.
But however daunting Chinese and Russian foreign disinformation efforts look in a post-GPT world, open societies receive only a small fraction of the propaganda that Beijing and Moscow blast into their own populations. And whereas democratic powers maintain robust communities of technologists dedicated to combating online manipulation, autocrats can use the full power of their states to optimize their propaganda’s influence.
In 2019, China’s Xi Jinping demanded just that when he ordered his party-state to leverage AI to “comprehensively increase” the ability of the Chinese Communist Party to mold Chinese public opinion. Russia’s Vladimir Putin has similarly doubled down on AI-enabled propaganda in the wake of his Ukraine invasion, including the circulation of a deepfake video of Ukrainian President Volodymyr Zelensky calling on Ukrainians to surrender. These efforts are buttressed by a dizzying array of Chinese and Russian agencies tasked with thought control, cultivating a competitive ecosystem of digital propaganda tools underwritten by multibillion-dollar annual budgets.
China and Russia are, in other words, fertile ground for generative AI to usher in a historic breakthrough in brainwashing — a recipe for more international catastrophes, greater human rights abuses, and further entrenched despotism. As China refines and exports its techno-authoritarianism, would-be tyrants the world over are likely to cash in on the propaganda revolution.
Luckily, companies in the United States and allied nations have largely led the advance of generative AI capabilities. As the technology matures, this advantage will be increasingly important in giving open societies time to understand, detect and mitigate potential harms before autocratic states leverage the technologies for their own ends. But the free world risks squandering this advantage if these pioneering tools are easily acquired by authoritarians.
Unfortunately, keeping cutting-edge AI models out of autocrats’ hands is a tall order. On a technical level, generative AI models lend themselves to easy theft. Despite requiring enormous resources to build, once developed, models can be copied and adapted at minimal cost. That’s especially bad news given China’s long record of stealing American corporate technology.
American tech companies may also be tempted to sell generative AI capabilities, just as they inadvertently helped lay the foundations for China’s Great Firewall, its ubiquitous surveillance apparatus, and the genetic profiling of its Muslim minorities through commercial ventures.
Additionally, Chinese or Russian AI researchers can simply exploit several companies’ efforts to keep generative AI open source. Meta has begun “democratizing” access to its OPT-175B language model, just as the AI company Hugging Face helped launch BLOOM, an open-access, multilingual model. Well-intentioned as such efforts may be, they are a boon to propagandists.
Companies instead need to treat generative AI development with the caution and security measures appropriate for a technology with immense potential to fuel despotism, and refrain from open-sourcing technical specifics of their cutting-edge models.
The US government should clarify the strategic importance of generative AI and restrict the export of cutting-edge generative AI models to untrustworthy partners now, building on similar measures that restrict the export of American surveillance tech. Federal research funding for generative AI should be limited to trusted recipients with strong security practices. The US and its allies should also invest aggressively in counter-propaganda capabilities that can mitigate the coming waves of generative AI propaganda — both at home and within autocracies.
The alternative is a well-trod path: American tech companies bolstering techno-authoritarianism through a combination of profit incentives and naivete. It’s time to do better.
Bill Drexel is an associate fellow at the Center for a New American Security (CNAS), where he researches AI and national security. He studied Chinese authoritarian technologies at Tsinghua University, Beijing, as a Schwarzman Scholar.
Caleb Withers is a research assistant at CNAS, focusing on AI safety and stability.