Buffett Brief - The Rise of AI and Deepfake Technology
Researchers have identified several possible use cases of deepfake technology with ramifications for the security sector.

Then, in March 2022, shortly after Russia began its invasion of Ukraine, the Ukrainian public was surprised to see a video of their president, Volodymyr Zelenskyy, urging the military to lay down their weapons and surrender to the invading forces. As the video spread on social media and gained traction in the news, Zelenskyy's office quickly disavowed its authenticity. Indeed, the video had been generated using deepfake technology by Russian propagandists—the first high-profile example of a deepfake being weaponized in an armed conflict.

RESEARCHERS INVESTIGATE SOLUTIONS AND GUARDRAILS FOR CYBERDECEPTION

The Northwestern Security and AI Lab (NSAIL)—jointly housed by Northwestern University's Roberta Buffett Institute for Global Affairs and McCormick School of Engineering—is dedicated to performing fundamental research on AI technology, including deepfakes, that is relevant to questions of cybersecurity and international security, as well as to when the use of AI or deepfakes is ethically or politically warranted.

NSAIL was launched in October 2022 as a partnership between the Northwestern Buffett Institute of Global Affairs and the Northwestern McCormick School of Engineering. Led by V.S. Subrahmanian, a Buffett Faculty Fellow at the Northwestern Buffett Institute and the Walter P. Murphy Professor of Computer Science, the new lab is currently working on over 20 distinct research projects related to AI, examining issues ranging from how to protect cities from drone attacks by terrorist organizations, to detecting deception in videos, to the implications of deepfakes for international conflicts.

The lab's Terrorism Reduction with AI Deepfakes (TREAD) project—developed by Northwestern University Ph.D. candidate Chongyang Gao, undergraduate Alex Feng and Subrahmanian—specifically investigates the implications of deepfakes for international conflict and terrorism mitigation while raising a central question for the global security community: Can deepfakes be used to counter terrorists and destabilize terror groups? NSAIL researchers have been at the leading edge of developing systems to generate realistic deepfake videos for countering terrorist groups, while also recommending extreme caution, the sparing use of deepfake technology and a deepfake code of conduct for governments.

[Figure: Screenshot of TREAD used to put words in the mouths of two dead terrorists, Anwar al-Awlaki and Mohammad al-Adnani, with trainers in English and Arabic, respectively.]

EARLY FINDINGS AND RECOMMENDATIONS

NSAIL head V.S. Subrahmanian, in collaboration with Dan Byman of Georgetown University and Chris Meserole of the Brookings Institution, recently published a paper on the implications of deepfakes for international conflict, including the threats of "falsifying orders from military leaders, sowing confusion among the public and armed forces and lending legitimacy to wars and uprisings."

In addition to highlighting the ways in which deepfakes can be used to foster dissent, confusion and distrust among an adversary's public, military and media, the report also outlines a series of policy recommendations for liberal democracies, which must balance a vested interest in the accuracy of public information with strong incentives to deploy deepfakes against their adversaries, particularly in the context of armed conflict. Considering those incentives, Subrahmanian and his collaborators suggest that the U.S. and its democratic allies develop a code of conduct for deepfake use by governments, called a Deepfakes Equities Process, based on the federal government's existing Vulnerabilities Equities Process, which guides decisions on whether newly discovered cybersecurity vulnerabilities are publicly disclosed or kept secret for offensive use against government adversaries. An "inclusive, deliberative process is the best way to ensure deepfakes are used responsibly," they wrote, and a Deepfakes Equities Process would determine when the benefits of leveraging deepfake technology against high-profile targets outweigh the risks "by incorporating the viewpoints of stakeholders across a wide range of government offices and agencies."

DEVELOPMENTS TO WATCH

Deepfake technology has wide-ranging implications beyond international security, including the fabrication of criminal evidence, new forms of sexual harassment such as deepfake pornography, and privacy violations by employers trying to prevent deepfakes from happening in the first place. In the months and years ahead, governments, international organizations, institutions and businesses are likely to pay greater attention to the growing role of AI and the guardrails needed for risk mitigation through new and evolving technologies, policies and strategies. For example, in June 2023, the European Union took the first step toward regulating how businesses can use artificial intelligence. Undoubtedly, the EU will not be the only international body to take such steps in the near future and beyond.

If every piece of media content becomes suspect because of AI and deepfakes, Western democracies in particular will become ever more concerned about the potential erosion of the trust needed for democracy to function. In response, expect an increase in the number of organizations, think tanks and research groups like NSAIL devoting time and resources to studying the evolving capabilities and threats of cyberdeception, developing technical countermeasures to detect such deceptions and providing risk mitigation guidance and solutions.

NSAIL researchers and founder Subrahmanian will continue to advance the world's knowledge about AI technology across many domains while presenting their findings globally. For example, in February 2023, the Dutch government hosted the first global Summit on Responsible Artificial Intelligence in the Military Domain (REAIM 2023). The summit convened stakeholders from around the world for dialogue on key opportunities, challenges and risks associated with military applications of AI. Subrahmanian was an invited speaker, arguing for bringing insight on deepfake technology to the fore of current debates on responsible uses of AI within militaries.

buffett.northwestern.edu