Getting Ahead of Digital Repression: Authoritarian Innovation and Democratic Response

In April 2024, the Hoover Institution’s project on China’s Global Sharp Power, Stanford University’s Global Digital Policy Incubator, and the National Endowment for Democracy’s International Forum for Democratic Studies held a closed-door expert workshop to map the expanding frontiers of digital authoritarianism and to discuss the diffusion of authoritarian technologies. The workshop posed the following questions: How might emerging tech tools fit into future models of data-driven authoritarianism? How is digital authoritarianism spreading, and what strategies will ensure that digital advances do not yield democratic backsliding?

These summary reflections identify key themes that surfaced in our discussion regarding authoritarian actors’ current and potential use of digital currencies, AI-powered predictive tools, and other emerging technologies for social control. They also present participants’ insights into promising strategies for the global democratic community to push back.

Digital Authoritarianism in the PRC

As technological capacities advance, governments are collecting more data than ever, and authoritarian states are using that data to further repression.

The People’s Republic of China (PRC) presents the starkest example: The recent five-year plan of the ruling Chinese Communist Party (CCP) articulates a clear vision for technological development that systematically aids its state surveillance apparatus and geopolitical ambitions. At all levels, public as well as private digital technologies already feed into these ambitions. At home, authorities utilize a wide range of digital systems—from “smart kindergartens” that monitor children’s attendance and behavior to “city brains” that crunch data for urban governance—as part of an integrated tech-powered approach to social control. These technologies enable officials not only to collect enormous volumes of data and create detailed maps of society, but also to incentivize regime-compliant behavior and penalize dissent.

The PRC views data as a national strategic resource, and this approach has implications beyond as well as within the country’s borders. Legal provisions enable authorities to scoop up information from private platforms such as WeChat or e-commerce giant Temu, meaning that when global users engage with apps from PRC companies, data-related provisions in the terms of service that would seem innocuous in other contexts may actually pose serious risks. Private data also frequently ends up in the hands of the Propaganda Department through a vast web of associated companies, likely feeding into information operations.

In tandem with its current deployments of tech tools for social monitoring and control, PRC authorities are prioritizing development in areas such as quantum computing, satellite systems, AI, and brain-computer interfaces, with obvious potential to tighten this control. The PRC’s intentional, state-guided advances in some of these domains raise questions about whether we can safely continue to assume that centralized systems will automatically fall behind freer-thinking ones when it comes to innovation.

Exporting Digital Authoritarianism

The decreasing cost and increasing supply of mature illiberal technologies—above all from the PRC, though countries such as Russia also play a role—are facilitating digital authoritarianism across a wide range of nations. China’s role in the global AI surveillance market is well known, but PRC-based actors are also supplying other tools that could help governments integrate data and create behavioral incentives.

Venezuela’s “homeland” system is one example. Developed using technology from the PRC vendor ZTE, it integrates a person’s national ID number with other information such as car registration, voter rolls, and social media handles. This system gives ruling-party coordinators opportunities to leverage detailed profiles of citizens for mobilization in regime-controlled elections, and to condition access to welfare services and medical treatments on behavior.

A new set of risks comes from China’s central bank digital currency (CBDC), the e-CNY, which makes transactions immediately legible to and controllable by PRC authorities. The e-CNY is becoming a contender for settling international transactions in the wider Asian region. Insofar as they make it easier to freeze assets, control spending, or integrate financial data into other systems, CBDCs like the e-CNY could become extremely dangerous in authoritarian settings. The e-CNY also illustrates a second challenge to democracy posed by China’s ascendance in tech: PRC technologies can present an appealing alternative for regimes seeking a way to work around Western sanctions.

The Impact of Frontier Technologies

Frontier technologies not only intensify the trend toward invasive data collection, but may enable authorities to leverage the data they collect in new ways. For instance, the PRC is building the world’s largest DNA database, with authorities no longer collecting genetic information only from target groups (e.g., criminals, dissidents, or ethnic minorities), but instead moving toward general population data collection. As we enter an era of personalized genomics, information of this kind could allow autocratic actors to leverage medical information and control of therapeutics as tools of coercion.

Immersive technologies, such as augmented or virtual reality headsets, also hold out new possibilities for data surveillance. These interfaces collect body-based data through methods such as eye tracking, and they can infer individual attitudes towards people or objects, personal medical information, or whether someone is telling the truth (particularly significant as some jurisdictions have experimented with holding trials in the metaverse). Having emerged primarily in the gaming sphere, they are now used in education and beyond. PRC cities are developing metaverse “action plans,” and authoritarian regimes in the Middle East and North Africa region are also actively seeking the advantage in augmented and virtual reality.

Predictive technologies, which leverage AI in combination with data surveillance to anticipate future trends, already figure prominently in digital authoritarian practices. In technologically advanced settings, including many democracies, law enforcement agencies increasingly use predictive policing tools to flag risks and guide the deployment of resources, despite concerns about opacity and discriminatory impacts.

These trends have advanced to the next level in the PRC, where proactive monitoring leveraging big data is an integral part of policing. Law enforcement officials surveil key spots and conduct simulations using digital twins to anticipate troublesome events and respond, for instance, before too many people arrive at a protest site. Key people (who might be political dissidents, people with mental health issues, or other identified threats to social stability) are also monitored, with systems flagging deviations from their usual travel patterns. Such information in turn feeds into city brains, which use AI-assisted cloud computing to integrate information streams within cities. In some cases, systems in multiple urban centers are linked. The anticipated future endpoint of these systems, though still far out, is a self-regulating entity in which AI not only flags events of concern but also provides and potentially automates response options.

The impact of generative AI on digital authoritarian systems remains a question mark. On the one hand, recent AI advances make censorship less laborious, particularly when it comes to previously challenging media such as videos. At the same time, PRC authorities view external systems like ChatGPT as threats to “ideological security.” Further complicating the equation, authoritarians’ copious production of propaganda—which, unlike many reliable sources, is generally not paywalled—may pose an asymmetric challenge by influencing the output of Western AI models, which are trained on data scraped from the internet.

The Limits of Data Surveillance

While the data collection options available to governments continue to multiply, officials are still not always able to make meaningful use of all the information they gather. Even in the highest-capacity surveillance apparatuses, such as those in Xinjiang, there are reports of extreme strain on personnel.

In settings where technical and bureaucratic capacity are limited, the practical obstacles are greater. Grand plans for integrated digital ID and surveillance systems may outpace the deployment of basic infrastructure such as CCTV systems. Even where authorities succeed in establishing pervasive systems of data collection—as in the Venezuelan context—the resulting data may simply be stored up for occasions when authorities wish to target a particular individual or neighborhood, rather than used to draw population-level conclusions. Different law enforcement cultures or greater popular resistance might also impede techno-authoritarian projects.

In some cases—as with Iran’s bluster about the use of facial recognition to monitor women’s compliance with hijab laws—advanced technologies may function partly as a form of security theater. “Safe city” systems, increasingly common across a range of contexts including Central Asia, can be alluring in part because they allow leaders to claim to fix issues like poor and corrupt policing without comprehensive policy reforms.

Strategies for Pushing Back

At the political level, established democracies facing down the digital authoritarian threat must find ways to bring to the table an alternative, rights-respecting, positive vision for technological development, rather than simply telling other states what not to do. Democracies should identify common threads of interest with one another, and bases for cooperation with the private sector. Clear-headed thinking about data governance and data security is also critical, given the centrality of data collection to emerging digital authoritarian practices.

When it comes to the tech space itself, active engagement in technical standard setting, for instance around principles such as privacy in the design of CBDCs, can help mitigate the proliferation of tech with authoritarian affordances. The democracy community should also continue identifying opportunities to actively leverage the benefits of frontier technologies, for instance by using stablecoins (a form of digital currency) to transfer funds to activists in repressive settings. Integrating prodemocratic technologies such as internet freedom tools into the broader tech ecosystem (such as browsers or content apps) can help to normalize their use, both countering censorship and potentially acting as a deterrent for governments.

Finally, on the research and awareness-raising front, scholars of digital authoritarianism should develop communications strategies to better disseminate research and break down the silos that separate them from the technical-scientific community. Researchers must initiate conversations on the risks of collaborating with the PRC, especially the impacts of the CCP’s surveillance and censorship apparatus. Funding initiatives that support collaboration between think tanks and institutions doing technical research could help to improve knowledge-sharing on these pressing challenges.

All three of our organizations—the Hoover Institution project on China’s Global Sharp Power, Stanford University’s Global Digital Policy Incubator, and the National Endowment for Democracy’s International Forum for Democratic Studies—are committed to long-term research and knowledge dissemination on this critical challenge for the future of democracy.