AT LAWS Primary


Lethal Autonomous Weapons Systems – Negative
***Case Answers***
Inherency
LAWS are more efficient and make war more humane. They also reduce civilian casualties, which solves terror recruitment
Kyle Hiebert (researcher and analyst formerly based in Cape Town and Johannesburg, South Africa, as
deputy editor of the Africa Conflict Monitor) “Are Lethal Autonomous Weapons Inevitable? It Appears
So” January 27, 2022 https://www.cigionline.org/articles/are-lethal-autonomous-weapons-inevitable-
it-appears-so/

More Just War — or Just More War? Rapid advances in autonomous weapons technologies and an
increasingly tense global order have brought added urgency to the debate over the merits and risks of
their use. Proponents include Robert Work, a former US deputy secretary of defence under the Obama
and Trump administrations, who has argued the United States has a “moral imperative” to pursue
autonomous weapons. The chief benefit of LAWS, Work and others say, is that their adoption would
make warfare more humane by reducing civilian casualties and accidents through decreasing “target
misidentification” that results in what the US Department of Defense labels “unintended engagements.”
Put plainly: Autonomous weapons systems may be able to assess a target’s legitimacy and make
decisions faster, and with more accuracy and objectivity than fallible human actors could, either on a
chaotic battlefield or through the pixelated screen of a remote-control centre thousands of miles away.
The outcome would be a more efficient use of lethal force that limits collateral damage and saves
innocent lives through a reduction in human error and increased precision of munitions use. Machines
also cannot feel stress, fatigue, vindictiveness or hate. If widely adopted, killer robots could, in theory,
lessen the opportunistic sexual violence, looting and vengeful razing of property and farmland that often
occurs in war — especially in ethnically driven conflicts. These atrocities tend to create deep-seated
traumas and smouldering intergenerational resentments that linger well after the shooting stops,
destabilizing societies over the long term and inviting more conflict in the future.
Solvency
Solvency – No Modeling/Fails

International coalition fails – LAWS use inevitable.


Kyle Hiebert (researcher and analyst formerly based in Cape Town and Johannesburg, South Africa, as
deputy editor of the Africa Conflict Monitor) “Are Lethal Autonomous Weapons Inevitable? It Appears
So” January 27, 2022 https://www.cigionline.org/articles/are-lethal-autonomous-weapons-inevitable-
it-appears-so/

Other risks are that autonomous weapons technology could fall into the hands of insurgent groups and
terrorists. At the peak of its so-called caliphate in Iraq and Syria, the Islamic State was launching drone
strikes daily. Despotic regimes may impulsively unleash autonomous weapons on their own populations
to quell a civilian uprising. Killer robots’ neural networks could also be susceptible to being hacked by an
adversary and turned against their owners. Yet, just as the debate intensifies, a realistic assessment of
the state of the killer robots being developed confirms what the Swiss ambassador to the CCW feared —
technological progress is far outpacing deliberations over containment. But even if it weren’t, amid a
splintering international order, plenty of nation-states are readily violating humanitarian laws and
treaties anyway, while others are seeking new ways to gain a strategic edge in an increasingly hostile,
multipolar geopolitical environment.

Can’t ban; LAWS meet the International Humanitarian Law threshold and therefore
are fair game
Ettinger, 20 (Jay Ettinger, 2020, accessed on 6-29-2022, 30 Minn. J. Int'l L. 153, "NOTE: Overcoming International Inertia: The Creation of War Manual for Lethal Autonomous Weapons Systems, 30 Minn. J. Int'l L. 153", file:///C:/Users/ncb12/Downloads/Nathan%20Boyle%20Overcoming%20International%20Inertia_%20The%20Creation%20of%20War%20Manual%20for%20Lethal%20Autonomous%20Weapons%20Sy.PDF) (NB)

When developing new military technology, States must evaluate whether the technology can be operated in compliance with core [International Humanitarian Law] IHL principles. Under Article 36 of Geneva Convention Protocol I ("Protocol I"), in order to develop, acquire, or adopt a new "weapon, means or method of warfare," the State must first conduct a review to determine whether the technology would violate IHL in "some or all circumstances."25 The weapons review obligation of Article 36 has likely not reached the status of customary international law.26 Indeed, only a few States currently utilize systematic approaches to weapons review.27 [*158] However, States are still bound by the underlying IHL obligations of weapons law and targeting law, even if they do not conduct a formal review prior to implementation.28 The first step is to review whether the weapon is already prohibited or restricted by existing treaty or customary law.29 Next, the State must evaluate whether the weapon can be used in accordance with the core IHL obligations.30 The State is not required to evaluate all possible misuses of the weapon, but rather the evaluation should relate to the "normal or expected use" of the weapon.31 Finally, if no existing treaty or customary law would prevent the weapon's use, the State should evaluate whether its use would violate the 'Martens Clause'.32 The Martens Clause is a catch-all provision that is intended to provide baseline level protections to civilians, requiring States to act from the "principles of humanity, and the dictates of public conscience," even in the absence of positive treaty law.33 According to the ICRC Commentary on Protocol I, the purpose of the Martens Clause is to "prevent[] the assumption that anything which is not explicitly prohibited by the relevant treaties is therefore permitted" and to protect the core principles of IHL "regardless of subsequent developments of types of situation or technology."34 Many have argued that the Martens Clause is particularly important in ensuring adequate protection of civilians when weapons technology has developed faster than IHL can adapt to the technology.35

In order for a new weapons technology to be compliant with IHL during its Article 36 review, the weapon must meet the obligations of "weapons law" and "targeting law".36 Weapons law [*159] evaluates whether a weapon is per se unlawful.37 Targeting law evaluates whether the weapon can be operated within a military environment in a lawful manner.38 These core IHL obligations are codified in Protocol I and have also been recognized as customary international law.39 The first obligation of "weapons law" is that a weapon cannot be "of a nature to strike military objectives and civilians or civilian objects without distinction."40 To be legal, the weapon must have the capacity to target legitimate military objectives and must not create disproportionate harm to civilians and other noncombatants in at least some battlefield contexts.41 Examples of weapons that fail to meet this standard under customary IHL include incendiary weapons, cluster bombs, and biological weapons.42 The second "weapons law" obligation is that the weapon cannot cause unnecessary suffering or superfluous injury.43 The purpose is to prevent inhumane or needless injuries to combatants.44 The International Court of Justice ("ICJ") has previously defined "unnecessary suffering" as "a harm greater than that unavoidable to achieve legitimate military objectives."45 Historical examples of weapons that violate this obligation include lasers used to blind soldiers and "dum-dum" bullets (bullets that expand on impact causing large and painful wounds) — as these weapons caused extreme suffering without progressing any legitimate military purpose.

After a weapon is found not to be per se illegal under "weapons law", "targeting law" evaluates whether the weapon can be lawfully operated in a specific battlefield context.47 Two key principles of "targeting law" are (1) distinction and (2) proportionality.48 The requirement that actors distinguish between combatants49 and civilians, and between military and civilian objects, is a "cardinal" principle of IHL.50 The goal of distinction does not demand perfect results but instead requires that actors make decisions using reasonable judgment given the military context in which they are operating.51 Changes to the character of warfare, such as the increased presence of non-state insurgent and terrorist groups, have made it increasingly difficult for military actors to distinguish between combatants and non-combatants in real-time combat operations.52 Next, the principle of proportionality requires that actors balance the extent and risk of collateral harm against the military advantage of the operation or action.53 This is a highly context-specific assessment and has led international courts, military manuals, and others to adopt a "reasonable military commander" standard.54 For example, the International Criminal Tribunal for the Former Yugoslavia described its standard for proportionality as follows: "In determining whether an attack was proportionate it is necessary to examine whether a reasonably well-informed person in the circumstances of the actual perpetrator, making reasonable use of the information available to him or her, could have expected excessive civilian [*161] casualties to result from the attack."55 Gross failures to account for the collateral damage of military operations can amount to a war crime.56 A weapon is not IHL compliant if it does not allow an operator to use their judgment when making targeting decisions within the given battlefield context.57 New weapons technology such as LAWS can only be lawfully used if the State can assure through an Article 36 review that the technology can adhere to IHL obligations within the battlefield context in which it is being used. LAWS technology raises a number of complex issues, ranging from the technical feasibility of IHL compliance, to legal and ethical questions over whether non-human intelligence should be allowed to make life-and-death decisions on the battlefield, to the significant public policy implications of wars conducted by robots.58 The novelty of LAWS technology and these complex legal, moral, and practical considerations has resulted in a highly unsettled and contentious international legal status for LAWS development and use.59

LAWS are not likely to be per se illegal under "weapons law" due to being indiscriminate in their impact or causing unnecessary suffering or superfluous injury.60 The evaluation of these factors is based on the nature of the weapon in the uses for which it is designed.61 LAWS are not likely to be per se illegal [*162] under weapons law because, in theory, the systems can be designed to discriminate and not cause unnecessary suffering.62 LAWS are able to be designed with the ability to strike specific targets.63 This ability to control the impact of the weapon is an important point of distinction as compared to other weapons previously deemed indiscriminate, such as cluster bombs, chemical weapons, incendiary weapons, and anti-personnel landmines.64 And if LAWS use conventional forms of lethal force (e.g. standard bullets or explosives), the presence of autonomous functionality would not likely affect considerations of whether the weapon causes "unnecessary suffering" or "superfluous injury".65 The more significant legal questions are whether LAWS can be designed to comply with IHL "targeting law" and the degree of human involvement that is legally necessary.66

In order for LAWS to be compliant with IHL "targeting law," they must be able to reliably and predictably distinguish between combatants and non-combatants, as well as make rapid judgments on the proportionality of an attack against its potential collateral harms.67 First, there is a question as to whether computer algorithms will be able to gauge the complex, context-dependent, and humanistic clues that soldiers must use to distinguish combatants and non-combatants in the modern battlefield where combatants often attempt to conceal their identities.68 Second, even if such a distinction is technically [*163] feasible, there is a question as to whether these systems can reliably make sound decisions given the vast array and often rapidly changing nature of battlefield contexts.69 For example, one potential risk to the system's reliability is the introduction of bias to decision-making originating in the data sets used to train the AI system.70 Even if both of these technical feasibility questions can be adequately addressed, there is a further question as to whether lethal decision-making inherently requires human involvement under IHL.71 And if there is such a requirement, what degree of human involvement is sufficient to meet IHL obligations must also be determined.
Solvency – Fights over Definitions

Plan is ineffective; no clear definition of what LAWS are – aff is unable to solve
Ettinger, 20 (Jay Ettinger, 2020, accessed on 6-29-2022, 30 Minn. J. Int'l L. 153, "NOTE: Overcoming International Inertia: The Creation of War Manual for Lethal Autonomous Weapons Systems, 30 Minn. J. Int'l L. 153", file:///C:/Users/ncb12/Downloads/Nathan%20Boyle%20Overcoming%20International%20Inertia_%20The%20Creation%20of%20War%20Manual%20for%20Lethal%20Autonomous%20Weapons%20Sy.PDF) (NB)

There is no universally accepted definition of what qualifies as LAWS. There is even a fundamental debate over whether LAWS should be defined as (1) the broader category of autonomous technology systems, some of which may be legally problematic, or as (2) the problematic subset of a broader category of systems with some autonomous functions.12 As an [*156] example, the International Committee of the Red Cross ("ICRC") defines an autonomous weapon system as "[a]ny weapon system with autonomy in its critical functions."13 A weapon system that is able to "select (i.e. search for or detect, identify, or track) and attack (i.e. use force against, neutralize, damage, or destroy) targets without human intervention."14 The United States Department of Defense formulates its definition for LAWS slightly differently, defining it as "weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator."15 The degree of autonomy in a system exists on a spectrum that is commonly simplified for clarification into three categories based on the degree of human involvement in the system.16 These categories are: (a) Human-in-the-Loop Weapons: robots that can select targets and deliver force only with a human command; (b) Human-on-the-Loop Weapons: robots that can select targets and deliver force under the oversight of a human operator who can override the robots' actions; and (c) Human-out-of-the-Loop Weapons: robots that are capable of selecting targets and delivering force without any human input or interaction.
Solvency – LAWS Inevitable

LAWS are inevitable; in many ways, they’re already here


Ettinger, 20 (Jay Ettinger, 2020, accessed on 6-29-2022, 30 Minn. J. Int'l L. 153, "NOTE: Overcoming International Inertia: The Creation of War Manual for Lethal Autonomous Weapons Systems, 30 Minn. J. Int'l L. 153", file:///C:/Users/ncb12/Downloads/Nathan%20Boyle%20Overcoming%20International%20Inertia_%20The%20Creation%20of%20War%20Manual%20for%20Lethal%20Autonomous%20Weapons%20Sy.PDF) (NB)

Systems with some degree of autonomy have been implemented on the battlefield.18 Notable examples include missile defense systems such as Israel's Iron Dome, which uses radar to identify, track, and shoot down incoming missiles, rockets, mortars, and drones,19 and sentry robots used by South Korea in the Demilitarized Zone, which use heat and motion sensors to identify people and can shoot machine gun rounds or [*157] grenade launchers if granted approval by a human operator.20 As of now, "[t]here are no autonomous weapons systems [("AWS")] in use today that directly attack human targets without human authorization."21 However, future developments in AI could enable systems to use image, facial and behavior recognition to independently identify targets and make lethal decisions in real time.22 Some military experts predict that these types of fully autonomous weapons systems could be created in the coming decades.23 The U.S. Air Force anticipates "by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems and processes."
Solvency – LAWS Safe Now
High bar to deploy LAWS and current policies stop their development – the affirmative
is fearmongering about what types of weapons are used
Gregory C. Allen (director of the Artificial Intelligence (AI) Governance Project and a senior fellow in the
Strategic Technologies Program at the Center for Strategic and International Studies in Washington,
D.C.) “DOD Is Updating Its Decade-Old Autonomous Weapons Policy, but Confusion Remains
Widespread” June 06, 2022 https://www.csis.org/analysis/dod-updating-its-decade-old-autonomous-
weapons-policy-confusion-remains-widespread

In general, this is good news. The DOD’s existing policy recognizes that some categories of autonomous
weapons, such as cyber weapons and missile defense systems, are already in widespread and broadly
accepted use by dozens of militaries worldwide. It also allows for the possibility that future technological
progress and changes in the global security landscape, such as Russia’s potential deployment of artificial
intelligence (AI)-enabled lethal autonomous weapons in Ukraine, might make new types of autonomous
weapons desirable. This requires proposals for such weapons to clear a high procedural and technical
bar. In addition to demonstrating compliance with U.S. obligations under domestic and international
law, DOD system safety standards, and DOD AI-ethics principles, proposed autonomous weapons
systems must clear an additional senior review process where the chairman of the Joint Chiefs of Staff; the undersecretary of defense for policy; and the undersecretary of defense for acquisition, technology, and logistics certify that the proposed system meets 11 additional requirements, each of which requires
presenting considerable evidence. Getting the signatures of the U.S. military’s highest-ranking officer
and two undersecretaries in a formal senior review is no easy task. Perhaps the strongest proof of the
rigor required to surpass such a hurdle is the fact that no DOD organization has even tried.

Status quo solves – regulations on the use of weapons stop escalation


Congressional Research Service “Defense Primer: U.S. Policy on Lethal Autonomous Weapon
Systems” November 17, 2021 https://crsreports.congress.gov/product/pdf/IF/IF11150

Role of human operator. DODD 3000.09 requires that all systems, including LAWS, be designed to “allow
commanders and operators to exercise appropriate levels of human judgment over the use of force.” As
noted in an August 2018 U.S. government white paper, “‘appropriate’ is a flexible term that reflects the
fact that there is not a fixed, one-size-fits-all level of human judgment that should be applied to every
context. What is ‘appropriate’ can differ across weapon systems, domains of warfare, types of warfare,
operational contexts, and even across different functions in a weapon system.” Furthermore, “human
judgment over the use of force” does not require manual human “control” of the weapon system, as is
often reported, but rather broader human involvement in decisions about how, when, where, and why
the weapon will be employed. This includes a human determination that the weapon will be used “with
appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules,
and applicable rules of engagement.” To aid this determination, DODD 3000.09 requires that
“[a]dequate training, [tactics, techniques, and procedures], and doctrine are available, periodically
reviewed, and used by system operators and commanders to understand the functioning, capabilities,
and limitations of the system’s autonomy in realistic operational conditions.” The directive also requires
that the weapon’s human-machine interface be “readily understandable to trained operators” so they
can make informed decisions regarding the weapon’s use. Weapons review process. DODD 3000.09
requires that the software and hardware of all systems, including lethal autonomous weapons, be tested
and evaluated to ensure they function as anticipated in realistic operational environments against
adaptive adversaries; complete engagements in a timeframe consistent with commander and operator
intentions and, if unable to do so, terminate engagements or seek additional human operator input
before continuing the engagement; and are sufficiently robust to minimize failures that could lead to
unintended engagements or to loss of control of the system to unauthorized parties. Any changes to the
system’s operating state—for example, due to machine learning—would require the system to go
through testing and evaluation again to ensure that it has retained its safety features and ability to
operate as intended. Senior-level review. In addition to the standard weapons review process, a
secondary senior-level review is required for LAWS and certain types of semi-autonomous and human-
supervised autonomous weapons that deliver lethal effects. This review requires the Under Secretary of
Defense for Policy, the Chairman of the Joint Chiefs of Staff, and either the Under Secretary of Defense
for Acquisition and Sustainment or the Under Secretary of Defense for Research and Engineering to
approve the system “before formal development and again before fielding in accordance with the
guidelines” listed in Enclosure 3 of the directive. In the event of “urgent military operational need,” this
senior-level review may be waived by the Deputy Secretary of Defense “with the exception of the
requirement for a legal review.” DOD is reportedly in the process of developing a handbook to guide
senior leaders through this review process; however, as the United States is not currently developing
LAWS, no weapon system has gone through the senior-level review process to date.
Solvency – Accuracy

The technology used in LAWS is getting more accurate by the day. The affirmative is
trying to reject a lifesaving technology based on flawed analysis
Hess 2020. Eric Hess is Senior Director of Product Management for Face Recognition & Security Solutions at SAFR from RealNetworks.
“Top Five Misconceptions about Face Recognition” April 28, 2020. <https://www.securitymagazine.com/articles/92242-top-five-
misconceptions-about-face-recognition> Accessed 6/30/20. ARJH

Providers and operators of any emerging technology must be held to high standards to ensure the
technology is developed and deployed in a manner consistent with human and consumer rights. Face recognition is a powerful tool, but not a substitute for human surveillance operators making deliberate and actionable decisions based on corroborating evidence. Its job is to present data in real time that can be used to help alert security staff to potential issues or investigate incidents post-event. Yes, some facial recognition algorithms do currently show unacceptable levels of racial bias; however, the technology is far too valuable to be banned outright. Why continue to advocate
use of face recognition when results aren’t always perfect? Some facial recognition systems exhibit lower levels of bias
than others: A recent National Institute of Standards and Technology (NIST) study found that Asian and African-
American faces had false-positive match rates 10 to 100 times higher than white faces across many tested algorithms. While these levels of bias
are clearly unacceptable, the study also identified
several algorithms that were “important exceptions.” These algorithms had
fairly consistent results across racial groups with accuracy variances as low as 0.19%. This shows that facial
recognition systems don't inherently have high rates of bias and can be improved. Rather than ban this
technology outright, facial recognition providers should be held accountable for reducing bias in their algorithms. A labeling system — akin to
nutrition labels on foods — based on results from an independent evaluator such as NIST could provide transparency. Systems that don’t meet
minimum consistency requirements across race, age, and gender should not be considered by purchasing committees. Using face
recognition can reduce bias in real-world situations: The core function of face recognition — attempting to identify a face
based on a knowledge bank of known faces — is something humans do all the time. Take an eyewitness to a crime, a police officer looking for a
suspect based on a surveillance image, or a store clerk watching for shoplifters. Each has inherent bias informed by past interactions, the media,
and amplified by the cross-race effect. All people have inherent bias — the bias found in facial recognition algorithms stems from bias in the
humans developing them. There is no way to completely eradicate bias in humans or algorithms, but facial
recognition technology is already as good as, or better than, humans at finding correct matches when
comparing images of faces. It can also do so exponentially faster. Facial recognition algorithms are getting better:
It’s much easier to train an AI model to reduce bias and eliminate the cross-race effect than it is to
eradicate bias in every security guard, law enforcement officer, and witness to a crime. Face recognition has
come a long way since it was first developed, and the technology continues to improve with regards to accuracy
across skin tone and gender. Bias thresholds could evolve over time as the technology improves to ensure false-match rates continue
to drop for all users as the technology becomes more widespread. A complete ban on face recognition would prevent this
continual improvement of a technology that could be an equalizer. Accurate, low-bias systems leveraged by humans
who are educated on how they work — and how to account for their limitations — have the potential to dramatically reduce
bias across a range of security, law enforcement, and criminal justice use cases.

Facial recognition software, which is used to identify targets, is already accurate and
getting better
William Crumpler (Research Assistant, Technology Policy Program) “How Accurate are Facial
Recognition Systems – and Why Does It Matter?” April 14, 2020
https://www.csis.org/blogs/technology-policy-blog/how-accurate-are-facial-recognition-systems-%E2%80%93-and-why-does-it-matter

Facial recognition has improved dramatically in only a few years. As of April 2020, the best face
identification algorithm has an error rate of just 0.08% compared to 4.1% for the leading algorithm in
2014, according to tests by the National Institute of Standards and Technology (NIST).[1] As of 2018,
NIST found that more than 30 algorithms had achieved accuracies surpassing the best performance
achieved in 2014. These improvements must be taken into account when considering the best way to
regulate the technology. Government action should be calculated to address the risks that come from
where the technology is going, not where it is currently. Further accuracy gains will continue to reduce
risks related to misidentification, and expand the benefits that can come from proper use. However, as
performance improvements create incentives for more widespread deployment, the need to assure
proper governance of the technology will only become more pressing.
Solvency – Humans Worse – Turn
LAWS are faster than humans; can strike even in unprecedented circumstances
Etzioni and Etzioni, 17 (Amitai Etzioni and Oren Etzioni, PhD, PhD in AI, May 2017, accessed on 6-27-
2022, Army University Press, "Pros and Cons of Autonomous Weapons Systems",
https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/) (NB)
An example of a dull mission is long-duration sorties. An example of a dirty mission is one that exposes humans to potentially harmful
radiological material. An example of a dangerous mission is explosive ordnance disposal. Maj. Jeffrey S. Thurnher, U.S. Army, adds, “[lethal
autonomous robots] have the unique potential to operate at a tempo faster than humans can possibly
achieve and to lethally strike even when communications links have been severed.” 3 In addition, the long-term
savings that could be achieved through fielding an army of military robots have been highlighted. In a 2013 article published in The Fiscal Times, David Francis cites
Department of Defense figures showing that “each soldier in Afghanistan costs the Pentagon roughly $850,000 per year.”

LAWS cut costs – military robots are far cheaper than human soldiers
Etzioni and Etzioni, 17 (Amitai Etzioni and Oren Etzioni, PhD, PhD in AI, May 2017, accessed on 6-27-
2022, Army University Press, "Pros and Cons of Autonomous Weapons Systems",
https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/) (NB)

In addition, the long-term savings that could be achieved through fielding an army of military robots have been highlighted. In a 2013 article published in The Fiscal Times, David Francis cites Department of Defense figures showing that "each soldier in Afghanistan costs the Pentagon roughly $850,000 per year."4 Some estimate the cost per year to be even higher. Conversely, according to Francis, "the TALON robot—a small rover that can be outfitted with weapons—costs $230,000."5 According to Defense News, Gen. Robert Cone, former commander of the U.S. Army Training and Doctrine Command,
suggested at the 2014 Army Aviation Symposium that by relying more on “support robots,” the Army eventually could
reduce the size of a brigade from four thousand to three thousand soldiers without a concomitant reduction in
effectiveness.

LAWS are more accurate on the battlefield; they lack the restrictions of human-driven military forces
Hiebert, 22 (Kyle Hiebert, Researcher and analyst based in Cape Town and Johannesburg, South
Africa, as deputy editor of the Africa Conflict Monitor, 1-27-2022, accessed on 6-27-2022, Centre for
International Governance Innovation, "Are Lethal Autonomous Weapons Inevitable? It Appears So",
https://www.cigionline.org/articles/are-lethal-autonomous-weapons-inevitable-it-appears-so/)

Put plainly: Autonomous weapons systems may be able to assess a target’s legitimacy and make
decisions faster, and with more accuracy and objectivity than fallible human actors could, either on a
chaotic battlefield or through the pixelated screen of a remote-control centre thousands of miles away. The outcome would be a
more efficient use of lethal force that limits collateral damage and saves innocent lives through a
reduction in human error and increased precision of munitions use. Machines also cannot feel stress,
fatigue, vindictiveness or hate. If widely adopted, killer robots could, in theory, lessen the opportunistic sexual
violence, looting and vengeful razing of property and farmland that often occurs in war — especially in
ethnically driven conflicts. These atrocities tend to create deep-seated traumas and smouldering intergenerational resentments that
linger well after the shooting stops, destabilizing societies over the long term and inviting more conflict in the future.
Solvency – Moral Justification
LAWS are morally acceptable
Etzioni and Etzioni, 17 (Amitai Etzioni and Oren Etzioni, PhD, PhD in AI, May 2017, accessed on 6-27-
2022, Army University Press, "Pros and Cons of Autonomous Weapons Systems",
https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/) (NB)

Moral justifications. Several military experts and roboticists have argued that autonomous weapons systems should not only be regarded as morally acceptable but also that they would in fact be ethically preferable to human fighters. For example, roboticist Ronald C. Arkin believes autonomous robots in the future will be able to act more "humanely" on the battlefield for a number of reasons, including that they do not need to be programmed with a self-preservation instinct, potentially eliminating the need for a "shoot-first, ask questions later" attitude.21 The judgments of autonomous weapons systems will not be clouded by emotions such as fear or hysteria, and the systems will be able to process much more incoming sensory information than humans without discarding or distorting it to fit preconceived notions. Finally, per Arkin, in teams comprised of human and robot soldiers, the robots could be more relied upon to report ethical infractions they observed than would a team of humans who might close ranks.22 Lt. Col. Douglas A. Pryer, U.S. Army, asserts there might be ethical advantages to removing humans from high-stress combat zones in favor of robots. He points to neuroscience research that suggests the neural circuits responsible for conscious self-control can shut down when overloaded with stress, leading to sexual assaults and other crimes that soldiers would otherwise be less likely to commit. However, Pryer sets aside the question of whether or not waging war via robots is ethical in the abstract. Instead, he suggests that because it sparks so much moral outrage among the populations from whom the United States most needs support, robot warfare has serious strategic disadvantages, and it fuels the cycle of perpetual warfare.23

Reduce civilian casualties and unintended engagements


Hiebert, 22 (Kyle Hiebert, Researcher and analyst based in Cape Town and Johannesburg, South
Africa, as deputy editor of the Africa Conflict Monitor, 1-27-2022, accessed on 6-27-2022, Centre for
International Governance Innovation, "Are Lethal Autonomous Weapons Inevitable? It Appears So",
https://www.cigionline.org/articles/are-lethal-autonomous-weapons-inevitable-it-appears-so/) (NB)

Rapid advances in autonomous weapons technologies and an increasingly tense global order have brought added urgency to the debate over
the merits and risks of their use. Proponents include Robert Work, a former US deputy secretary of defence under the Obama and Trump
administrations, who has argued the
United States has a “moral imperative” to pursue autonomous weapons.
The chief benefit of LAWS, Work and others say, is that their adoption would make warfare more humane by
reducing civilian casualties and accidents through decreasing “target misidentification” that results in
what the US Department of Defense labels “unintended engagements.”

LAWS can be programmed to be humane and ethical; removes unethical human element
Kovic, 18 (Marko Kovic, Co-founder and President of ZIPAR, February 2018, accessed on 6-27-2022, ZIPAR Policy Brief, "The strategic paradox of autonomous weapons", file:///C:/Users/ncb12/Downloads/Policy_Brief___Autonomous_weapons.pdf) (NB)
War has always created suffering. Over the centuries, however, the morality of warfare has improved. For example, spoils of war for the victors, including raping women and pillaging, used to be common military practice. Today, there are jus in bello rules that define morally acceptable and morally unacceptable behavior during warfare. These rules are the rules that comprise international humanitarian law, both as formal treaties and as customary law [13]. The broad strokes of humanitarian law, such as the Geneva Conventions, are almost universally accepted. However, humans make errors, and we can be irrational, emotional, sadistic, psychopathic, and so forth. Morally acceptable behavior in war is difficult to monitor and almost impossible to enforce. Autonomous weapons could make a significant positive impact in this area: Since autonomous weapons achieve their goals with the help of artificial intelligence, that artificial intelligence can be designed in such a way that moral principles of humanitarian law are at the top of the autonomous weapons' utility function [14, 15, 16]. All the errors and biases that we humans are prone to do not exist in artificial intelligence (unless we design the artificial intelligence in such a faulty way). In addition, the rules of humanitarian law can be very explicitly and formally implemented in autonomous weapons, without the possibility that the autonomous weapon can override those rules. After all, artificial intelligence is simply a utility-maximizing apparatus that seeks to achieve the goals it is designed to achieve. This is a very simple point, but one that might run counter to a more intuitive understanding of morality in the context of artificial intelligence. We might feel that artificial intelligence is intrinsically «bad» or «evil» because it is only a machine, not a human. In reality, however, artificial intelligence is merely a tool that performs (some) tasks better than humans, precisely because it does not suffer from our human limitations. If our goal is to wage war in a morally acceptable manner, then artificial intelligence could help us do so better. I believe that the probability for this outcome is very high, somewhere between 0.8 and 1.

US has 'moral imperative' to develop AI weapons, says expert panel for Congress
Reuters, 21 (Reuters, international news agency, 1-26-2021, accessed on 6-28-2022, the Guardian, "US has 'moral imperative' to develop AI weapons, says panel", https://www.theguardian.com/science/2021/jan/26/us-has-moral-imperative-to-develop-ai-weapons-says-panel)

The US should not agree to ban the use or development of autonomous weapons powered by artificial intelligence
(AI) software, a government-appointed panel has said in a draft report for Congress. The panel, led by former Google chief
executive Eric Schmidt, on Tuesday concluded two days of public discussion about how the world’s biggest military power should consider AI for national security
and technological advancement. Its vice-chairman, Robert Work, a former deputy secretary of defense, said autonomous
weapons are expected to make fewer mistakes than humans do in battle, leading to reduced casualties
or skirmishes caused by target misidentification. “It is a moral imperative to at least pursue this
hypothesis,” he said. The discussion waded into a controversial frontier of human rights and warfare. For about eight years, a coalition of non-governmental
organisations has pushed for a treaty banning “killer robots”, saying human control is necessary to judge attacks’ proportionality and assign blame for war crimes.
Thirty countries including Brazil and Pakistan want a ban, according to the coalition’s website, and a UN body has held meetings on the systems since at least 2014.
While autonomous weapon capabilities are decades old, concern has mounted with the development of AI to power such systems, along with research finding
biases in AI and examples of the software’s abuse. The US panel, called the National Security Commission on Artificial Intelligence, in meetings this week
acknowledged the risks of autonomous weapons. A member from Microsoft for instance warned of pressure to build machines that react quickly, which could
escalate conflicts. The panel only wants humans to make decisions on launching nuclear warheads. Still, the panel prefers anti-proliferation work to a treaty banning
the systems, which it said would be against US interests and difficult to enforce. Mary Wareham, coordinator of the eight-year Campaign to Stop Killer Robots, said
the commission’s “focus on the need to compete with similar investments made by China and Russia … only serves to encourage arms races.” Beyond AI-powered
weapons, the panel’s lengthy report recommended use of AI by intelligence agencies to streamline data gathering and review; $32bn (£23.3bn) in annual federal
funding for AI research; and new bodies including a digital corps modelled after the army’s Medical Corps and a technology competitiveness council chaired by the
US vice-president. The commission is due to submit its final report to Congress in March, but the recommendations are not binding.
Lethal autonomous weapons will be more ethical — fewer casualties
Andrew Keane Woods 2022 — (ANDREW KEANE WOODS* (Winter, 2022). ARTICLE: ROBOPHOBIA. University of Colorado Law Review, 93, 51. https://advance.lexis.com/api/document?collection=analytical-materials&id=urn:contentItem:64SK-1TR1-JSJC-X1MT-00000-00&context=1516831.)
Weapons systems are increasingly automated, meaning that many tasks formerly done by humans are now being done by machines. This includes tasks like flying aircrafts, detecting incoming fire, and even deciding how to respond, including firing weapons. In addition to the military advantage these weapons pose, there is a case to be made that they are more ethical than human combatants. As Ronald Arkin notes, lethal autonomous weapons systems may be imperfect, but they promise to be better than human soldiers at reducing casualties and adhering to the laws of war. In addition to never being fatigued or upset, robot weapons need not have a self-preservation instinct. And in the event that the laws of war are broken, robots will be more likely to report the abuse than a human soldier.

Clear bias between humans and autonomous weapons — autonomous weapons have better performance
Andrew Keane Woods 2022 — (ANDREW KEANE WOODS* (Winter, 2022). ARTICLE: ROBOPHOBIA. University of Colorado Law Review, 93, 51. https://advance.lexis.com/api/document?collection=analytical-materials&id=urn:contentItem:64SK-1TR1-JSJC-X1MT-00000-00&context=1516831.)
In other words, international law and domestic regulations demonstrate a clear bias in favor of human
deciders over robot deciders, despite the real promise of autonomous weapons to reduce civilian
casualties. Robot performance is capped at human performance; human performance is a ceiling, not a
floor, for robot performance. 
Terrorism Advantage
Terrorism – Recruiting Internal Link
LAWS are more efficient and make war more humane. They also reduce civilian casualties, which solves terror recruitment
Kyle Hiebert (researcher and analyst formerly based in Cape Town and Johannesburg, South Africa, as
deputy editor of the Africa Conflict Monitor) “Are Lethal Autonomous Weapons Inevitable? It Appears
So” January 27, 2022 https://www.cigionline.org/articles/are-lethal-autonomous-weapons-inevitable-
it-appears-so/

More Just War — or Just More War? Rapid advances in autonomous weapons technologies and an
increasingly tense global order have brought added urgency to the debate over the merits and risks of
their use. Proponents include Robert Work, a former US deputy secretary of defence under the Obama
and Trump administrations, who has argued the United States has a “moral imperative” to pursue
autonomous weapons. The chief benefit of LAWS, Work and others say, is that their adoption would
make warfare more humane by reducing civilian casualties and accidents through decreasing “target
misidentification” that results in what the US Department of Defense labels “unintended engagements.”
Put plainly: Autonomous weapons systems may be able to assess a target’s legitimacy and make
decisions faster, and with more accuracy and objectivity than fallible human actors could, either on a
chaotic battlefield or through the pixelated screen of a remote-control centre thousands of miles away.
The outcome would be a more efficient use of lethal force that limits collateral damage and saves
innocent lives through a reduction in human error and increased precision of munitions use. Machines
also cannot feel stress, fatigue, vindictiveness or hate. If widely adopted, killer robots could, in theory,
lessen the opportunistic sexual violence, looting and vengeful razing of property and farmland that often
occurs in war — especially in ethnically driven conflicts. These atrocities tend to create deep-seated
traumas and smouldering intergenerational resentments that linger well after the shooting stops,
destabilizing societies over the long term and inviting more conflict in the future.

LAWS are more effective and reduce the risk of collateral damage
Zachary Kallenborn (research affiliate with the Unconventional Weapons and Technology Division of
the National Consortium for the Study of Terrorism and Responses to Terrorism, a policy fellow at the
Schar School of Policy and Government, and a U.S. Army Training and Doctrine Command “Mad
Scientist.”) “Applying arms control frameworks to autonomous weapons” October 05, 2021
https://www.brookings.edu/techstream/applying-arms-control-frameworks-to-autonomous-weapons/

At the same time, militaries see great value in the development of autonomous weapons. Autonomous
weapons offer speed. A typical human takes 250 milliseconds to react to something they see. An
autonomous weapon can respond far faster—Rheinmetall Defense’s Active Defense System can react to
an incoming rocket-propelled grenade in less than one millisecond. According to General George Murray, head of the U.S. Army's Futures Command, that speed may be necessary to defend against massive
drone swarms. These weapons may be the difference between survival and defeat. Giving them up in an
arms control treaty would be foolish. Militaries also dispute the risk of error. Humans get tired,
frustrated, and over-confident. That creates mistakes. Autonomous weapons have no such emotion, and
advocates of military AI applications argue a reduced error-rate makes pursuing the technology a moral
imperative. Plus, artificial intelligence can improve aiming. That reduces collateral harm. For example,
Israel reportedly used an artificial intelligence-assisted machine gun to assassinate an Iranian nuclear
scientist without hitting the scientist’s wife inches away. So, in their view, what are arms control
advocates really afraid of? 

Drones don’t cause radicalization—recruitment is a complex process


Aqil Shah, 6-10-2018, "Drone Blowback: Much Ado about Nothing?", Lawfare,
https://www.lawfareblog.com/drone-blowback-much-ado-about-nothing <Aqil Shah is Wick Cary
Assistant Professor of South Asian Politics in the David L. Boren College of International Studies at the
University of Oklahoma and a non-resident scholar at the Carnegie Endowment for International
Peace.>, ke

Targeted killings of suspected Islamist militants by armed drones have become the mainstay of U.S.
counterterrorism campaigns in non-traditional conflicts in countries like Pakistan, Yemen, and Somalia. Analysts, human rights organizations, and former U.S. officials claim that drone strikes produce blowback: Rather than reducing the terrorist threat, drone strikes increase it by providing terrorist groups with fresh recruits. According to two prominent experts, David Kilcullen and Andrew Exum, "every one of these dead noncombatants [in Pakistan] represents an alienated family, a new desire for revenge, and more recruits for a militant movement that has grown exponentially even as drone strikes have increased." Although intuitive, the blowback argument lacks empirical support. The Central Intelligence Agency (CIA) has launched an estimated 430 drone strikes in Pakistan since 2004 (roughly 75 percent of its known total strikes worldwide). My research there shows that drone blowback may be much ado about nothing. Drawing on interviews with 167 well-informed adults from North Waziristan Agency (NWA), the most heavily targeted district in Pakistan's Federally Administered Tribal Areas (FATA), extensive interviews with respected experts on terrorism, and an official Pakistani police survey of 500 detained terrorists from southern Sindh Province, I find no evidence of a direct link between drone strikes and radicalization or the recruitment of militants, either locally or nationally. Instead, my data and secondary sources suggest that militant recruitment is a complex process driven by a variety of factors, such as political grievances, state sponsorship of militancy as a tool of foreign policy, state repression, weak governance, and coercive recruitment by militant groups—not drone strikes. At the local level,
finding evidence of blowback from well-informed locals should be relatively easy given the dense social and kinship ties of NWA inhabitants. Virtually every family in NWA has been affected by
the conflict, whether through the death of a relative, the destruction of property, or displacement resulting from a military offensive. Yet these people have largely been left out of debates
about the threats they face and the effect these threats have on their lives. Moreover, the inhabitants of FATA identify themselves as members of a particular Pashtun qabail or qaum (tribe)
divided into khels (sub-tribes), each of which consists of extended clans or families. Inhabitants are therefore enmeshed in dense social networks, which makes them uniquely informed about the effects of drone strikes on their community. Most respondents claimed to personally know or be aware of someone in their clan or village who had been involved in militant activity or who had been indirectly linked to militants, but none believed that the reason was the loss of a relative in a drone strike. As one tribal elder from Dande
Darpa Khel in Miranshah, the drone-targeted headquarters of the Haqqani Network, explained: “We hear rumors that this or that man joined the Taliban or al-Qaeda because of anger over
drone strikes. It is possible. But I know almost every family in my area, and I do not know of a case where a local man or boy joined the Taliban as direct result of death or injury to a close
relative in a drone strike. In fact, most of the Taliban fighters were already radicalized, or inclined toward militancy for various reasons, or forced to join these groups.” Comments like this
came up frequently in my sample. Even local leaders of Islamist and other right-wing parties acknowledged that the ability of drone strikes to spawn militants is exaggerated. The views of this
informed group of NWA residents rest on pragmatic calculations. Most were resentful of the coercive tactics the Taliban used to terrorize and control the local population, especially after the
Pakistani military struck a peace deal with them in September 2006. Taliban tactics included taxation and harsh penalties for minor offenses. Local resentment is also rooted in state repression.
Until recently, FATA was governed under the colonial-era Frontier Crimes Regulation (enacted under British rule in 1901), which prescribes harsh penalties for crimes without the right of
appeal in a court of law, including collective responsibility for offenses committed by one or more members of a tribe or those committed by anyone in its area. During its operations, the
Pakistani military has used collective punishments, including economic blockades, to punish families or clans whose members they suspected of harboring foreign militants. The state’s
application of these draconian and often indiscriminate measures against the local population, and its appeasement of militants through peace agreements, has only compounded local
alienation stemming from counterinsurgency operations that, some studies show, benefit insurgents. But in North Waziristan, many locals are alienated from both the Pakistani state and the
militants.
Terrorism – Impact Defense

No nuke terror

Mueller 21 – John, Adjunct Professor of Political Science and Senior Research Scientist at the Mershon Center
for International Security Studies at Ohio State University, Senior Fellow at the Cato Institute, and member of the
American Academy of Arts and Sciences. “The Stupidity of War: American Foreign Policy and the Case for
Complacency, Chapter 4: Al-Qaeda and the 9/11 Wars in Afghanistan, Iraq, Pakistan”, Cambridge University Press,
03-04-2021

The exaggeration of terrorist capacities has been greatest in the many much overstated assessments of their ability to develop nuclear weapons or devices. It has been widely predicted for two decades now that, because al-Qaeda operatives used box cutters so effectively on 9/11, they would, although under siege, soon apply equal talents in science and engineering to fabricate nuclear weapons and then detonate them on American cities. A popular estimate was that such a disaster might well happen by 2014.29 Given the decidedly limited capabilities of terrorists, this concern seems to have been substantially overwrought: thus far, terrorist groups seem to have exhibited only limited desire and even less progress in going atomic. That lack of action may be because, after a brief exploration of the possible routes, they – unlike generations of alarmists – have discovered that the tremendous effort required is scarcely likely to be successful.30

No nuke terror NOR retal


McIntosh and Storey 18 – Christopher, visiting assistant professor of political studies at Bard
College. Ian, a fellow at the Hannah Arendt Center for Politics and Humanities at Bard College. “Between
Acquisition and Use: Assessing the Likelihood of Nuclear Terrorism”, International Studies Quarterly, Vol.
62, No. 2, pg. 289-300, https://academic.oup.com/isq/article/62/2/289/4976557, 04-19-2018

This last point is worth emphasizing. Even in the remote case where an actor successfully acquires a nuclear weapon and primarily seeks raw numbers of casualties—whether due to outbidding or audience costs—other forms of WMDs are likely to be more appealing. As Aum Shinrikyo indicates, this is particularly the case for the group that overcomes the inevitable political and technological hurdles (Nehorayoff et al. 2016, 36–37). For these groups, chemical, biological, and radiological weapons (CBRW) are considerably easier to acquire, use, and stockpile. This is especially true when considered over time, rather than a single operation.18 While there are certainly downsides to CBRWs vis-à-vis nuclear weapons (delivery may paradoxically be easier and the maintenance risks comparatively smaller), they are undoubtedly easier to procure and produce (Zanders 1999). More importantly, CBRWs are perceived as easier to produce and thus likely to be viewed by targets as iterable. Unlike a nuclear attack, CBRW threats are more credible because a single CBRW attack can likely precipitate an indefinite number of follow-ups.

In addition to the problem of iterability, a terrorist organization must always worry about the possible ratchet effect of an attack—a problem Neumann and Smith (2005, 588–90) refer to as the "escalation trap." A terrorist organization is different than a state at war because it manipulates other actors primarily through punishment. Campaigns are a communicative activity designed to convince the public and the leaders that the status quo is unsustainable. The message is that the costs of continuing the target state's policy (such as the United States in Lebanon, France in Algeria, or the United Kingdom in Northern Ireland) will eventually outweigh the benefits. Once an organization conducts a nuclear attack, it lacks options for an encore. Not even the most nightmarish scenarios involve an indefinite supply of weapons. If a single attack plus the threat of one or two others does not induce capitulation, the organization might unwittingly harden the target state's resolve. The attack could raise the bar such that any future non-nuclear attack constitutes a lessening of costs vis-à-vis the status quo.

There are also heavy opportunity costs involved in pursuing, developing, and maintaining a nuclear capacity, let alone actually deploying and delivering it. As Weiss puts it, "even if a terror group were to achieve technical nuclear proficiency, the time, money, and infrastructure needed to build nuclear weapons creates significant risks of discovery that would put the group at risk of attack. Given the ease of obtaining conventional explosives and the ability to deploy them, a terrorist group is unlikely to exchange a big part of its operational program to engage in a risky nuclear development effort with such doubtful prospects" (Weiss 2015, 82).
Organizational Survival

Terrorist organizations are not monolithic entities, nor are they wholly self-sufficient actors. Historically speaking, these groups consider the public reception of their attacks in a complex manner. As Al Qaeda, the Palestine Liberation Organization (PLO) of the 1970s, the IRA, and anarchist
groups of the nineteenth and twentieth centuries all demonstrate, these groups’ thinking about public reception is nuanced and complex, regardless of time or place. We focus on two types of audiences that would be affected by decisions to attack: those internal to the group itself, and
their own broader public.

While many claim that terrorists are undeterrable, the argument misconstrues the relational dynamics between a terrorist organization, target state, international community, and the internal dynamics of the organization itself (Talmadge 2007). It is undoubtedly the case that deterring a
terrorist organization in the traditional sense is difficult (Whiteneck 2005; Mearsheimer and Walt 2003). Many lack a recognized territorial base, work on the fringes of the global economy, and are internally structured to be difficult to combat directly. Nearly all possess some permutation
of these factors. Combined with the symbolic importance of even relatively small terror attacks—especially given the role of international media—physically denying a group the ability to conduct attacks is uniquely challenging. It is minimally a vastly different proposition than precluding a
state’s ability to successfully invade its neighbor or conduct ongoing missile strikes.19 Despite these concerns, there are important reasons deterrence can and empirically does work in the case of terrorist organizations. This is especially possible when the state-terrorist relationship is not
zero-sum and the target retains some influence over the realization of the group’s eventual goals (e.g., by denying the group access to territory or withholding international recognition) (Trager and Zagorcheva 2006, 88–89). Nuclear attack presents two significant threats to the
organization’s continued existence: internal threats of disintegration and external threats to their continued operations and survival.

Terrorist organizations are not unitary, homogenous organizations. This is especially true for groups possessing the size and competence likely necessary for operational nuclear capacity. As many have noted, the terrorist organizations of the present are vastly different from those
Marxist-Leninist groups that terrorized Europe and the United States in the 1970s and early 1980s. There is a well theorized psychological value of the organization to individual terrorists themselves (Post 1998), but there is more to the organizational valuation of survival than captured in
this atomistic picture. Modern, large-scale terrorist organizations are typically heavily intertwined with the social fabric of the groups from which they originate (Cronin 2006; Hoffman 2013). Beyond significant networks of financial connections, accounts, and moguls (Hamas, for example,
draws funding from a massive international system of mosque-centered charities, while the IRA’s extensive connections to the Irish diaspora in the United States were well documented), many terrorist organizations build extensive networks of sub-organizations that tie them to the
communities in which they are based. Hezbollah, like the IRA, is internally divided between a military arm and a political arm and has run an extensive network of community schools, medical care centers, and religious outreach groups. Together they are designed to embed the
organization in the social life of (predominantly southern) Lebanon’s Muslim population and provide Hezbollah with fresh recruits (Parkinson 2013). The group’s persistence as a dominant political force in southern Lebanon nearly two decades after the initial Israeli decision to withdraw
demonstrates terrorist organizations grow to exceed their initial military objectives. The spread of Al Qaeda and its affiliates has followed a similar path.

Maintaining the continued support of these multiple audiences is therefore a crucial consideration for these organizations. While these audiences could conceivably be more casualty-acceptant than the individuals deciding the group’s operations, the broader public will usually moderate
extreme behavior. The literature assessing so-called “radicalization” and violence by individual actors emphasizes that there isn’t a one-to-one relationship between ideological extremism and acceptance of extraordinary violence in pursuit of those goals (McCauley and Moskalenko
2014; Jurecic and Wittes 2016). It is important to resist the assumption that a politically extreme ideology automatically corresponds to shared assumptions regarding casualty-acceptance.

Some argue that the move toward “mass-casualty” terrorism obviates these concerns. Aside from the fact that the trend line is either flat or receding in terms of the death toll of individual attacks (even if campaigns themselves might be becoming deadlier), there is an orders of magnitude distinction in casualties between a nuclear attack and even the 2001 attack in the United States. While the psychological restraints on nuclear use among states do not translate precisely to this context, there is good reason to believe that transgressing the longstanding nuclear taboo would have dramatic and negative effects on broader public support. In an urban environment, the media would inevitably capture the attack and its gruesome after-effects in photography or video. This imagery would be inconceivable, ubiquitous, and inescapable. Even if supporters accept a highly retributive mentality, or as Hamid (2015) argues about the Islamic State, actively accept the potential of death, this would pose a severe problem for all but the most extreme supporters.20

Beyond these supporters, a nuclear attack affects the internal dynamics of the terrorist organization in
multiple ways. There could be divisiveness regarding the most effective use of the weapon. This would be
magnified by the scale of the opportunities and perceived opportunity costs. Such debates have the
potential to splinter the organization as a whole (Cronin 2009, 100–02). Factional conflict in terrorist organizations appears
frequently over questions of goals and tactics (Crenshaw 1981; Chai 1993). A decision to attack with a nuclear weapon risks
considerable internal alienation over a variety of issues—targeting decisions, method of attack,
campaign goals, potential deaths of supporters, and the domestic and international response (Mathew
and Shambaugh 2005, 621–22). Finally, a nuclear attack would exponentially raise the threat to each individual who composes the extended
organization. Post-nuclear attack, the greatest strengths of a terrorist organization—its lack of material
territory, economy, or overt institutions and reliance on individuals—could turn into its greatest
weaknesses (Eilstrup-Sangiovanni and Jones 2008). Currently, a wealthy financier found to have ties to a
terrorist group would be monitored for intelligence, arrested, and brought up on criminal charges. Post-nuclear attack,
the consequences would be immediate and rather worse.

Terrorism is becoming less frequent

Romero 18 – Luiz, reporter for Quartz, citing the University of Maryland’s Global Terrorism Database. “New data shows terror attacks are becoming less frequent and much less deadly”, Quartz, https://qz.com/1346205/the-global-terrorism-database-shows-attacks-are-becoming-less-frequent-and-deadly/, 08-01-2018

New data shows the world is becoming safer, at least by one measure. The latest update of the Global Terrorism Database, compiled by the
University of Maryland and published Wednesday (Aug. 1), shows terrorism has retreated for the third consecutive year. Attacks
around the world dropped from about 17,000 in 2014 to about 11,000 in 2017. The number of fatal victims fell by almost half in
the same period. But as the chart shows, the total number of attacks last year was still considerably higher than in the
1990s and 2000s. Though ISIL remained the most active terrorist group in 2017, it committed 10% fewer attacks, and caused 40%
fewer deaths compared to 2016. The terror group reached its peak around 2014, when it made advances in the Middle East, conquering the cities of
Raqqa, Syria and Mosul, Iraq. It also took credit for a series of deadly attacks in Europe and beyond. Since then it has lost much of its territory. Despite an
improvement at the global level, some specific countries registered more terrorism than before. In the US, the number
of lethal attacks almost tripled, with the deadliest case being the Las Vegas shooting in October.
NATO Cohesion Advantage
NATO Cohesion Fails
Disagreements within NATO undermine cohesion – NATO members say no
Melissa Heikkila (Politico) “NATO wants to set AI standards. If only its members agreed on the basics.”
March 29, 2021 https://www.politico.eu/article/nato-ai-artificial-intelligence-standards-priorities/

On paper, NATO is the ideal organization to go about setting standards for military applications of
artificial intelligence. But the widely divergent priorities and budgets of its 30 members could get in the
way. The Western military alliance has identified artificial intelligence as a key technology needed to
maintain an edge over adversaries, and it wants to lead the way in establishing common ground rules
for its use. “We need each other more than ever. No country alone or no continent alone can compete
in this era of great power competition,” NATO Deputy Secretary-General Mircea Geoană, the alliance’s
second in command, said in an interview with POLITICO. The standard-setting effort comes as China is
pressing ahead with AI applications in the military largely free of democratic oversight. David van Weel,
NATO’s assistant secretary general for emerging security challenges, said Beijing's lack of concern with
the tech's ethical implications has sped along the integration of AI into the military apparatus. "I'm ... not
sure that they're having the same debates on principles of responsible use or they're definitely not
applying our democratic values to these technologies,” he said. Meanwhile, the EU — which has pledged
to roll out the world's first binding rules on AI in coming weeks — is seeking closer collaboration with
Washington to oversee emerging technologies, including artificial intelligence. But those efforts have
been slow in getting off the ground. For Geoană, that collaboration will happen at NATO, which is
working closely with the European Union as it prepares AI regulation focusing on “high risk”
applications. The pitch NATO does not regulate, but “once NATO sets a standard, it becomes in terms of
defensive security the gold standard in that respective field,” Geoană said. The alliance's own AI
strategy, to be released before the summer, will identify ways to operate AI systems responsibly,
identify military applications for the technology, and provide a “platform for allies to test their AI to see
whether it's up to NATO standards,” van Weel said. The strategy will also set ethical guidelines around
how to govern AI systems, for example by ensuring systems can be shut down by a human at all times,
and to maintain accountability by ensuring a human is responsible for the actions of AI systems. “If an
adversary would use autonomous AI powered systems in a way that is not compatible with our values
and morals, it would still have defense implications because we would need to defend and deter against
those systems,” van Weel said. “We need to be aware of that and we need to flag legislators when we
feel that our restrictions are coming into the realm of [being detrimental to] our defense and
deterrence,” he continued. The problem is that NATO's members are at very
different stages when it comes to thinking about AI in the military context. The U.S., the world's biggest
military spender, has prioritized the use of AI in the defense realm. But in Europe, most countries —
France and the Netherlands excepting — barely mention the technology’s defense and military
implications in their national AI strategies. “It’s absolutely no surprise that the U.S. had a military AI
strategy before it has a national AI strategy," but the Europeans "did it exactly the other way around,"
said Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations. That echoes
familiar transatlantic differences — and previous U.S. President Donald Trump's complaints — over
defense spending, but also highlights the different approaches to AI regulation more broadly. The EU's
AI strategy takes a cautious line, touting itself as "human-centric," focused on taming corporate excesses
and keeping citizens' data safe. The U.S., which tends to be light on regulation and keen on defense,
sees things differently. There are also divergences over what technologies the alliance ought to develop,
including lethal autonomous weapons systems — often dubbed “killer robots” — programmed to
identify and destroy targets without human control. Powerful NATO members including France, the
U.K., and the U.S. have developed these technologies and oppose a treaty on these weapons, while
others like Belgium and Germany have expressed serious concerns about the technology. These
weapons systems have also faced fierce public opposition from civil society and human rights groups,
including from United Nations Secretary-General António Guterres, who in 2018 called for a ban.

International law fails—it lacks properties required for a legitimate system


Steinberg 5 (Gerald M., Academic and political scientist, PhD in government from Cornell University,
“The Myth of International Law” October 15, 2005,
http://www.zionismontheweb.org/myth_of_international_law.htm)

In this reality, the principles that are said to constitute “international law” lack the two central properties
required for any legitimate legal system: the consent of the governed, and uniform and unprejudiced
application. International law and the claims made in its name fit neither criteria. In a democratic
framework, the legal system gains legitimacy through the consent of the citizens, and is accountable to
democratic procedures. We accept the limitations placed on us by the system of laws and the role of the police in enforcing these laws
as part of the requirements for justice and order in any functioning society. But we do not accept limitations imposed from the outside, without
our consent. Thus, the claims of the UN, the International Court of Justice, the International Criminal Court, and
campaigns run by obsessed extremists in Europe, lack any legitimate moral foundation or standing in democratic
societies with their own legal system. Similarly, when judges sitting on the Israeli High Court base decisions on international law,
they are attempting to impose an external framework which lacks the legitimacy provided by the consent of the governed. THE OTHER
problem with the use of international law is the absence of equitable implementation. No legal system
that focuses its attention selectively can be considered legitimate. Thus, the routine condemnations of
Israeli or American policy by the UN, the ICJ, and accompanying NGOs have no moral or legal validity when the principles
are not applied uniformly. In contrast to these destructive polemics, in order to promote a meaningful universal
moral code, it is necessary to recognize the need for the consent of the governed and for consistent and
universal enforcement. International law based on justice, and not ideology, remains a worthy objective. But the
substitution of political rhetoric that invokes the myths and rhetoric for the real thing is entirely counterproductive.
NATO Cohesion – Warming Defense

Extinction from warming requires 12 degrees, far greater than their internal link, and
intervening actors will solve before then
Sebastian Farquhar 17 leads the Global Priorities Project (GPP) at the Centre for Effective Altruism, et
al., 2017, “Existential Risk: Diplomacy and Governance,”
https://www.fhi.ox.ac.uk/wp-content/uploads/Existential-Risks-2017-01-23.pdf

The most likely levels of global warming are very unlikely to cause human extinction.15 The existential
risks of climate change instead stem from tail risk climate change – the low probability of extreme levels of
warming – and interaction with other sources of risk. It is impossible to say with confidence at what point global warming
would become severe enough to pose an existential threat. Research has suggested that warming of 11-12°C would render most
of the planet uninhabitable,16 and would completely devastate agriculture.17 This would pose an extreme threat to human
civilisation as we know it.18 Warming of around 7°C or more could potentially produce conflict and instability on such a scale that the indirect
effects could be an existential risk, although it is extremely uncertain how likely such scenarios are.19 Moreover, the
timescales over
which such changes might happen could mean that humanity is able to adapt enough to avoid extinction in
even very extreme scenarios. The probability of these levels of warming depends on eventual greenhouse gas concentrations.
According to some experts, unless strong action is taken soon by major emitters, it is likely that we will pursue a
medium-high emissions pathway.20 If we do, the chance of extreme warming is highly uncertain but appears non-negligible.
Current concentrations of greenhouse gases are higher than they have been for hundreds of thousands of years,21 which means that there are
significant unknown unknowns about how the climate system will respond. Particularly concerning is the risk of positive feedback loops, such as
the release of vast amounts of methane from melting of the arctic permafrost, which would cause rapid and disastrous warming.22 The
economists Gernot Wagner and Martin Weitzman have used IPCC figures (which do not include modelling of feedback loops such as those from
melting permafrost) to estimate that if
we continue to pursue a medium-high emissions pathway, the probability of
eventual warming of 6°C is around 10%,23 and of 10°C is around 3%.24 These estimates are of course highly
uncertain. It is likely that the world will take action against climate change once it begins to impose large
costs on human society, long before there is warming of 10°C. Unfortunately, there is significant inertia in the climate system:
there is a 25 to 50 year lag between CO2 emissions and eventual warming,25 and it is expected that 40% of the peak concentration of CO2 will
remain in the atmosphere 1,000 years after the peak is reached.26 Consequently, it is impossible to reduce temperatures quickly by reducing
CO2 emissions. If the world does start to face costly warming, the international community will therefore face strong incentives to find other
ways to reduce global temperatures.

Climate change is inevitable – even ending emissions won’t solve


Zing Tsjeng, 19, 2-27-2019, executive editor and the author of the Forgotten Women book series, "The
Climate Change Paper So Depressing It's Sending People to Therapy",
[https://www.vice.com/en_us/article/vbwpdb/the-climate-change-paper-so-depressing-its-sending-
people-to-therapy], AVD
"Deep Adaptation" is quite unlike any other academic paper. There's the language ("we are about to play Russian Roulette with the entire human race with already two bullets loaded").

, there's the stark conclusions


There's the flashes of dark humor ("I was only partly joking earlier when I questioned why I was even writing this paper"). But most of all

that it draws about the future. Chiefly, that it's too late to stop climate change from devastating our world—
and that "climate-induced societal collapse is now inevitable in the near term ." How near? About a decade. Professor Jem
Bendell, a sustainability academic at the University of Cumbria, wrote the paper after taking a sabbatical at the end of 2017 to review and understand the latest climate science "properly—not

The evidence before us suggests that we are set for


sitting on the fence anymore," as he puts it on the phone to me. What he found terrified him. "

disruptive and uncontrollable levels of climate change, bringing starvation, destruction, migration,
disease, and war," he writes in the paper. "Our norms of behavior—that we call our 'civilization'—may also degrade." " It is time," he adds, "we
consider the implications of it being too late to avert a global environmental catastrophe in the lifetimes
of people alive today." Even a schmuck like me is familiar with some of the evidence Bendell sets out to prove his point. You only needed to step
outside during the record-breaking heatwave last year to acknowledge that 17 of the 18 hottest years
on the planet have occurred since 2000. Scientists already believe we are soon on course for an ice-free
Arctic, which will only accelerate global warming. Back in 2017, even Fox News reported scientists'
warnings that the Earth's sixth mass extinction was underway . Erik Buitenhuis, a senior researcher at the Tyndall Centre for Climate Change
Research, tells me that Bendell's conclusions may sound extreme, but he agrees with the report's overall assessment. "I think societal collapse is indeed inevitable," he says, though adds that

The important thing, Buitenhuis says, is to realize that the negative effects
"the process is likely to take decades to centuries."

of climate change have already been with us for some time: "Further gradual deterioration looks much
more likely to me than a disaster within the next ten years that will be big enough that, after that,
everybody will agree the status quo is doomed." "Jem's paper is in the main well-researched and supported by relatively mainstream climate science,"
says Professor Rupert Read, chair of the Green House think-tank and a philosophy academic at the University of East Anglia. "That's why I'm with him on the fundamentals. And more and more

I think it's hubris to think that


people are." Read's key disagreement with Bendell is his belief that we still have time to snatch victory from the jaws of defeat, saying, "

we know the future." But that doesn't mean Bendell's premise is wrong: "The way I see it, deep
adaptation is insurance against the possibility—or rather, the probability—of some kind of collapse," says
Read. "'Deep Adaptation' is saying, 'What do we need to do if collapse is something we need to realistically plan for?'" When I speak to Bendell, he tells me he

thinks of "Deep Adaptation" as more of an ethical and philosophical framework, rather than a prophecy
about the future of the planet. "The longer we refuse to talk about climate change as already here and screwing with our way of life—because we don't want to think
like that because it's too frightening or will somehow demotivate people—the less time we have to reduce harm," he says with deliberation. What does he mean by harm? "Starvation is the
first one," he answers, pointing to lowering harvests of grain in Europe in 2018 due to drought that saw the EU reap 6 million tons less wheat. "In the scientific community at the moment, the
appropriate thing is to say that 2018 was an anomaly. However, if you look at what's been happening over the last few years, it isn't an anomaly. There's a possibility that 2018 is the new best

case scenario." That means, in Bendell's view, that governments need to start planning emergency responses to
climate change, including growing and stockpiling food. He minces his words even less in his paper: "When I say starvation, destruction,
migration, disease, and war, I mean in your own life. With the power down, soon you won't have water coming out of your tap. You will depend on your neighbors

for food and some warmth. You will become malnourished. You won't know whether to stay or go. You
will fear being violently killed before starving to death." Should people start building bunkers and buying bulletproof vests? "There's no way of
getting through this unless we try together," he says. "We need to help people stay fed and watered where they live already to reduce disruption and reduce civil unrest as much as we can." Of
the Silicon Valley financiers prepping for the apocalypse in New Zealand, he says: "Once money doesn't matter anymore and the armed guards are trying to feed their starving children, what

Bendell wasn't always this gloomy about the state of the world. He
do you think they'll do? The billionaires doing that are just deluded."

once worked for WWF, one of the biggest environmental charities in the world, and in 2012 founded the
Institute for Leadership and Sustainability (IFLAS) at the University of Cumbria. The World Economic
Forum named him a Young Global Leader for his work. So how did he end up writing a paper that
determined that civilization—and environmental sustainability as we currently understand it— is
doomed? "Since the age of 15, I've been an environmentalist," he tells me. "I've given my life professionally and personally. I'm a workaholic, and it was all about sustainability." Once
he sat down with the data, however, he realized that his field was quickly becoming irrelevant in the face of oncoming climate catastrophe. "It would mean not getting super excited about the
expansion of your recycling program in a major multinational," he says. "It's a completely different paradigm of what we should be looking at." What he didn’t expect was for the paper to take
off online. "It was aimed at those people in my professional community and why we're in denial," he says. "When I put it out there, I didn’t expect 15-year-olds in schools in Indonesia to be

. "Someone in the
reading it with their teachers." He says that "Deep Adaptation" has been downloaded over 110,000 times since it was released by IFLAS as an occasional paper

alternative economics and bitcoin crowd told me, 'Oh, everyone's talking about deep adaptation in
London at all the dinner parties,'" he laughs. Researchers from the Institute for Public Policy Research (IPPR), an established progressive think-tank,
consulted Bendell's paper in the process of writing its new report, "This is a crisis: Facing up to the age of environmental breakdown." Laurie Laybourn-Langton, its lead author, told me via
email: "I appreciated the frankness of the report in facing up to issues that so many in research and policy communities seem unwilling to. We don't subscribe to the view that social collapse is

"This is partly because it's so hard to predict the outcomes of the complex and
inevitable, however." He explains:

uncertain process of environmental shocks interacting with social and economic systems. We simply
don't know. That said, they shouldn’t be disregarded as a potential outcome, and so we are calling for
greater levels of preparedness to these shocks." Not everyone was so taken with the paper. Bendell submitted it to a well-respected academic journal
for publication, with little success. Sustainability Accounting, Management and Policy Journal (SAMPJ) told me that the paper was in need of "major revisions" before it would be ready for
publication. Bendell ended up publishing it through IFLAS and his blog. "The academic process is such that I took that as an effective rejection," he explains, saying that the reviewers wanted
him to fundamentally alter his conclusions. "I couldn't completely rewrite the paper to say that I don't think collapse is inevitable. It was asking for a different paper." Emerald, the scholarly

the study on collapse they thought you


publisher that owns SAMPJ, says it takes issue with how Bendell frames its reception of its paper on his blog: "

should not read—yet." A spokesperson told me: "The decision was arrived at based on the merit of the
submitted article and the double blind peer review process integral to academia and the advancement
of knowledge. SAMPJ, and [editor Carol Adams] are proud members of the Committee on Publication Ethics (COPE) and adhere to the highest ethical standards in publishing. We
see no evidence that the decision of Major Revision was politically motivated. "Emerald requested the author correct their blog post to reflect the facts. This request was unfortunately
ignored. The post continues to imply the paper was rejected because it was deemed too controversial. The paper was not rejected, and was given a Major Revision due to the rigorous
standards of the scholarly output of the journal." Bendell says he did reply to Emerald's request to amend his blog post—but only if they would consider telling him the decisions of those who
reviewed his paper. (Under the double blind peer review, reviewers' decisions are anonymous.) "That title can be read in a number of ways," he says. "It is a paper that the reviewers didn't
want you to read. They didn't want it published." Climate gloom and doom is nothing new—doomsday preppers have been stockpiling their freeze-dried food rations for decades now. But
Bendell's paper appears to have hit a unique nerve, especially given that the average scientific paper is estimated to be read by only three or so people. Rupert Read tells me that he was sent it

. It hasn't been pushed by a celebrity. It was briefly


simultaneously by three other academics when it was published. But it hasn’t trended on Twitter

mentioned in a Bloomberg Businessweek article, but that's it. "Deep Adaptation" is that unique social
phenomenon: an academic paper that has gone viral through word of mouth. Nathan Savelli, a 31-year-
old high school life coach from Hamilton, Canada, was recommended the paper by a local environmental
activist. Reading it sent him spiraling into depression. "I guess in some ways it felt like I was diagnosed
with a terminal illness," he tells me. "If I'm being honest, it was a mix of heartbreaking sadness and
extreme anger." Savelli felt so low that he sought help from a climate grief support group organized by 350.org, the global grassroots climate movement. "I had attended
counseling in the past for other issues, but never a group session, and thought it might be something helpful for me," he tells me. Did it help? "I'm not sure I'd say it alleviated my grief, but it
was definitely comforting to be around people who understood what I was feeling." And therein lies the problem with "Deep Adaptation:" if you accept that the paper is entirely correct in its
prediction of collapse, how do you move on with your life? How do you even get out of bed in the morning? "I'm aware of what difficult emotions it triggers," Bendell acknowledges. "I do
believe that if you’ve come across this [paper], then absolutely some grief and despair is very natural. Why isn't that OK? We all die in the end. Life is about impermanence." On his blog, he
lists several sources for psychological support, including several groups on Facebook and LinkedIn that discuss collapse and offer help to those struggling to come to terms with the conclusions
of his paper. But, Bendell adds, reading the paper has been "transformative" for some. "People find a new boldness about living life on their own terms—actually connecting to their heart's
desire. How do they wish to live, and why don't they live that way now rather than postponing it?" In one case, it even helped prompt one high-ranking academic to quit her job and the city.

In December of 2017, Dr. Alison Green left her post as the pro vice-chancellor of Arden University. She
had read the IPCC report warning that the world is nowhere near averting global temperature increases,
as well as the 1,656-page National Climate Assessment on how climate change is now dramatically affecting our lives—and then she read Bendell's paper.
War Escalation
LAWs necessary to prevent great power competition with Russia and China. Plan
makes conflict inevitable
Jay Ettinger, 20, (JD & Legal Intern, UN High Commissioner for Human Rights, Fall 2020, “Overcoming International Inertia: The Creation of War Manual for Lethal Autonomous Weapons Systems,” Minnesota Journal of International Law, https://minnjil.org/wp-content/uploads/2021/09/Ettinger-MACRO.pdf, accessed 6-27-2022) SCade

Second, LAWS have the potential to offer a much greater military advantage than previously banned weapons.152 Historically,
the major military powers have been cautious to limit their military options through international
treaties, particularly when the treaty involves strategically important weapons.153 A number of experts
believe that AI functionality will have a revolutionary impact on warfare.154 LAWS technology has the
potential to create a significant competitive asymmetry for the State that can first successfully
develop the technology.155 In fact, the U.S. Military has explicitly stated that its goal for AI development is to
create “an enduring competitive edge that lasts a generation or more.”156 This significant first-mover advantage has created a high degree of competition in LAWS development, in particular between the
United States, Russia, and China.157 All three are heavily invested in LAWS development and view it “as fundamental to the
future of armed conflict.”158 Some conceptualize this competition as an “AI arms race.”159 Each major military power is
concerned that attempts to limit or outright ban LAWS development may put them at a competitive
disadvantage if their peer States are not party to the treaty or do not abide by the treaty obligations.160 Even if all major military States
were party to the treaty, the prohibition on LAWS development would be challenging to enforce due to
the dual-use (civilian and military) functionality of AI technology.161 This would make it difficult for an overseeing treaty body
to ensure that States were complying with their treaty obligation not to develop AI technology for LAWS purposes.

LAWS reduce casualties, both troops and civilians. Also key to U.S. military superiority
Michael T. Klare (professor emeritus of peace and world security studies at Hampshire College and
senior visiting fellow at the Arms Control Association) “Autonomous Weapons Systems and the Laws of
War” March 2019 https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-
laws-war

The Navy is not alone in exploring future battle formations involving various combinations of crewed
systems and swarms of autonomous and semiautonomous robotic weapons. The Air Force is testing
software to enable fighter pilots to guide accompanying unmanned aircraft toward enemy positions,
whereupon the drones will seek and destroy air defense radars and other key targets on their own. The
Army is testing an unarmed robotic ground vehicle, the Squad Multipurpose Equipment Transport
(SMET) and has undertaken development of a Robotic Combat Vehicle (RCV). These systems, once
fielded, would accompany ground troops and crewed vehicles in combat, trying to reduce U.S. soldiers’
exposure to enemy fire. Similar endeavors are under way in China, Russia, and a score of other
countries.1 For advocates of such scenarios, the development and deployment of autonomous weapons
systems, or “killer robots,” as they are often called, offer undeniable advantages in combat.
Comparatively cheap and able to operate 24 hours a day without tiring, the robotic warriors could help
reduce U.S. casualties. When equipped with advanced sensors and artificial intelligence (AI), moreover,
autonomous weapons could be trained to operate in coordinated swarms, or “wolfpacks,”
overwhelming enemy defenders and affording a speedy U.S. victory. “Imagine anti-submarine warfare
wolfpacks,” said former Deputy Secretary of Defense Robert Work at the christening of Sea Hunter.
“Imagine mine warfare flotillas, distributed surface-warfare action groups, deception vessels, electronic
warfare vessels”—all unmanned and operating autonomously.

LAWS key to military readiness – plan collapses U.S. leadership


Mary Wareham (advocacy director in the arms division at Human Rights Watch) “Summary: Stopping
Killer Robots” August 10, 2020
https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-
autonomous-weapons-and

United States. At the Human Rights Council in May 2013, the United States said that lethal autonomous
weapons systems raise “important legal, policy, and ethical issues” and recommended further discussion
in an international humanitarian law forum.[263] A 2012 Department of Defense policy directive on
autonomy in weapons systems was renewed without substantive amendments in 2018 for another five
years.[264] The policy permits the development of lethal autonomous weapons systems, but the US
insists that “it neither encourages nor prohibits the development of such future systems.”[265] The US is
investing heavily in military applications of artificial intelligence and developing air, land, and sea-based
autonomous weapons systems. In August 2019, the US warned against stigmatizing lethal autonomous
weapons systems because, it said, they “can have military and humanitarian benefits.”[266] The US
regards proposals to negotiate a new international treaty on such weapons systems as “premature” and
argues that existing international humanitarian law is adequate.[267] The US participated in every CCW
meeting on killer robots in 2014-2019.

LAWS key to military superiority


Kyle Hiebert (researcher and analyst formerly based in Cape Town and Johannesburg, South Africa, as
deputy editor of the Africa Conflict Monitor) “Are Lethal Autonomous Weapons Inevitable? It Appears
So” January 27, 2022 https://www.cigionline.org/articles/are-lethal-autonomous-weapons-inevitable-
it-appears-so/

Most important of all, mass production of killer robots could offset America’s flagging enlistment
numbers. The US military requires 150,000 new recruits every year to maintain its desired strength and
capability. And yet Pentagon data from 2017 revealed that more than 24 million of the then 34 million
Americans between the ages of 17 and 24 — over 70 percent — would have been disqualified from
serving in the military if they applied, due to obesity, mental health issues, inadequate education or a
criminal record. Michèle Flournoy, a career defence official who served in senior roles in both the
Clinton and the Obama administrations, told the BBC in December that “one of the ways to gain some
quantitative mass back and to complicate adversaries’ defence planning or attack planning is to pair
human beings and machines.” Other, smaller players are nurturing an affinity for LAWS too. Israel
assassinated Iran’s top nuclear scientist, Mohsen Fakhrizadeh, outside of Tehran in November 2020
using a remote-controlled, AI-assisted machine gun mounted inside a parked car, and is devising more
remote ways to strike back against Hamas in the Gaza Strip. Since 2015, South Korea has placed nearly
fully autonomous sentry guns on the edge of its demilitarized zone with North Korea, selling the
domestically built robot turrets to customers throughout the Middle East. Speaking at a defence expo in
2018, Prime Minister Narendra Modi of India — the world’s second-largest arms buyer — told the
audience: “New and emerging technologies like AI and Robotics will perhaps be the most important
determinants of defensive and offensive capabilities for any defence force in the future.”

LAWS are more efficient than traditional war methods


Etzioni and Etzioni, 17 (Amitai Etzioni, PhD, and Oren Etzioni, PhD in AI, May 2017, accessed on 6-27-
2022, Army University Press, "Pros and Cons of Autonomous Weapons Systems",
https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/
Pros-and-Cons-of-Autonomous-Weapons-Systems/#:~:text=First%2C%20autonomous%20weapons
%20systems%20act,areas%20that%20were%20previously%20inaccessible.) (NB)
Those who call for further development and deployment of autonomous weapons systems generally point to several military advantages. First,
autonomous weapons systems act as a force multiplier. That is, fewer warfighters are needed for a given
mission, and the efficacy of each warfighter is greater. Next, advocates credit autonomous weapons
systems with expanding the battlefield, allowing combat to reach into areas that were previously
inaccessible. Finally, autonomous weapons systems can reduce casualties by removing human warfighters
from dangerous missions.1 The Department of Defense’s Unmanned Systems Roadmap: 2007-2032 provides
additional reasons for pursuing autonomous weapons systems. These include that robots are better suited than humans
for “‘dull, dirty, or dangerous’ missions.”2 An example of a dull mission is long-duration sorties. An example of a dirty mission is
one that exposes humans to potentially harmful radiological material. An example of a dangerous mission is explosive ordnance disposal.

LAWS are better equipped to perform strategic operations


Etzioni and Etzioni, 17 (Amitai Etzioni, PhD, and Oren Etzioni, PhD in AI, May 2017, accessed on 6-27-
2022, Army University Press, "Pros and Cons of Autonomous Weapons Systems",
https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/
Pros-and-Cons-of-Autonomous-Weapons-Systems/#:~:text=First%2C%20autonomous%20weapons
%20systems%20act,areas%20that%20were%20previously%20inaccessible.) (NB)

According to DeSon, the physical strain of high-G maneuvers and the intense mental concentration and situational awareness required of fighter pilots make them very prone to fatigue and exhaustion; robot pilots, on the other hand, would not be subject to these physiological and mental constraints. Moreover, fully
autonomous planes could be programmed to take genuinely random and unpredictable action that
could confuse an opponent. More striking still, Air Force Capt. Michael Byrnes predicts that a single unmanned aerial vehicle with
machine-controlled maneuvering and accuracy could, “with a few hundred rounds of ammunition and sufficient fuel reserves,” take out an
entire fleet of aircraft, presumably one with human pilots.8
Democracy

Democratic peace only works with globalized democracy; absent that, it fails and states default to realism and conflict
Mearsheimer 2018. John Joseph Mearsheimer is an American political scientist and international relations scholar and R. Wendell
Harrison Distinguished Service Professor at the University of Chicago. “Chapter 7: Liberal Theories of Peace” Great Delusion: Liberal Dreams and
International Realities. 2018 Yale University Press. <https://www.jstor.org/stable/j.ctv5cgb1w> Accessed 6/30/20. ARJH

Consider, for instance, that none of the liberal theories was relevant to the superpower competition during the Cold War. The Soviet Union was
not a democracy, the two sides had little economic intercourse, and few international institutions had both sides as members. Or think about
how most liberals talk about the prospects of China’s rising peacefully. China
is not a democracy today and shows little
prospect of becoming one. One rarely hears the argument that democratic peace theory can provide the basis for peace in Asia. But
one frequently hears that economic interdependence theory can explain why China’s rise will be peaceful. China’s economy is tied to the
economies of its rivals, and this linkage means not only that China and its trading partners depend on each other to keep prospering, but also
that prosperity depends on their peaceful relations. A war involving China would be tantamount to mutual assured destruction at the economic
level. Hence, economic interdependence will keep the peace in Asia as China rises. It
is possible to hypothesize a world in
which one or more of the liberal theories apply universally, and one where none of them applies at all.
But those are not our world. In our world, those theories are likely to cover certain situations but not others. Consider, for example,
how democratic peace theory would apply to a scenario in which the United States removes its military forces from Europe and NATO
disappears. There would then be three major powers on the Continent: France, Germany, and Russia. According to the theory, France and
Germany would not fight each other, because they are both liberal democracies and thus would not compete with each other for power. But
they would have a fundamentally different relationship with non-democratic Russia: they would
be guided by realist logic, with
its emphasis on the survival motive. In that situation, all three countries would end up trying to maximize their
positions in the global balance of power. Let us assume that Russia becomes a democracy. Democratic peace theory
would then apply to relations among all three major powers. Yet democratic Russia would have to fear a rising China, which is not a democracy, on its southern border, and so would have to act according to balance-of-power logic in its dealings with China. France and Germany do not share a border with China, but they would still have to worry about a possible threat if China became a superpower. As long as there is one powerful non-democracy in the system, no democracy can escape from acting according to realist logic. As Alexander Wendt notes, “One predator will best a hundred pacifists because anarchy provides no guarantees. This argument is powerful in part because it is so weak: rather than making the strong assumption that all states are inherently power-seeking . . . it assumes that just one is power-seeking and that the others have to follow suit because anarchy permits the one to exploit them.”6 This logic applies even though the democracies in the system would still behave peacefully toward each other, at least according to the theory.

Democratic peace fails: authoritarianism is on the rise and the tenets of DPT are fundamentally disproven
Mearsheimer 2018. John Joseph Mearsheimer is an American political scientist and international relations scholar and R. Wendell
Harrison Distinguished Service Professor at the University of Chicago. “Chapter 7: Liberal Theories of Peace” Great Delusion: Liberal Dreams and
International Realities. 2018 Yale University Press. <https://www.jstor.org/stable/j.ctv5cgb1w> Accessed 6/30/20. ARJH

Democratic peace theory was remarkably popular in the two decades after the Cold War ended. Michael Doyle introduced it to the academic
and policy worlds in a pair of seminal articles published in 1983.9 When the superpower rivalry ended in 1989, it was widely believed that
liberal democracy would steadily sweep across the globe, spreading peace everywhere. This perspective, of course, is the central theme in
Fukuyama’s “The End of History?” But time has not been kind to Fukuyama’s argument. Authoritarianism has become a viable
alternative, and there are few signs that liberal democracy will conquer the globe anytime soon. Freedom House maintains that the world’s share of democracies actually declined between 2006 and 2016, which naturally reduces the scope of the theory.10 Even if liberal democracy were on the march, however, it would not enhance
the prospects for peace, because the theory is seriously flawed. Consider its central finding. Some of its proponents argue that there
has never been a war between two democracies. But this is wrong: there are at least four cases in the modern era where
democracies waged war against each other. Contrary to what democratic peace theorists say, Germany was a liberal
democracy during World War I (1914–18), and it fought against four other liberal democracies: Britain, France, Italy, and the United
States.11 In the Boer War (1899–1902) Britain fought against the South African Republic and the Orange Free State, both of which were
democracies.12 The Spanish-American War (1898) and the 1999 Kargil War between India and Pakistan are also
cases of democracies fighting each other.13 Other cases come close to qualifying as wars between democracies.14 The American Civil War is
usually not counted because it is considered a civil war rather than an interstate war. One might argue, however, that the distinction is not
meaningful here. The Confederacy was established on February 4, 1861, but the war did not begin until April, by which time the Confederacy
was effectively a sovereign state. It is also worth noting that there
have been a host of militarized disputes between
democracies, including some cases where fighting broke out and people died, but that fell short of
actual war.15 There are also many cases of democracies, especially the United States, overthrowing democratically
elected leaders in other countries, a behavior that seems at odds with the claim that democracies
behave peacefully toward one another. But let us get back to my four cases of actual wars between democracies. One might
concede that I am right yet still argue that this tiny number of wars does not substantially challenge the theory. This conclusion would be
wrong, however, for reasons clearly laid out by the democratic peace theorist James L. Ray: “Since wars between states are so rare statistically . . . the existence of even a few wars between democratic states would wipe out entirely the statistical and
therefore arguably the substantive significance of the difference in the historical rates of warfare between
pairs of democratic states, on the one hand, and pairs of states in general, on the other.”16 Those four wars between democracies, in
other words, undermine the central claim of democratic peace theorists.

Democracy is resilient and allows for the development of opposition groups which
strengthen democratic norms
Drezner 2020. Daniel W. Drezner is a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts
University. “The resiliency of American democracy” WaPo. February 4, 2020.
<https://www.washingtonpost.com/outlook/2020/02/04/resiliency-american-democracy/> Accessed 6/29/20. ARJH

So is that it for American democracy? Nah. I do not want to minimize the illiberal nature of the Trump administration. When Secretary
of State Mike Pompeo pitches a fit because an NPR reporter acts like a reporter and then tries to claim his behavior is perfectly consistent with
press freedom, he is, you know, lying. That is not good for America. Neither is the reported effort by Trump to punish former national
security adviser John Bolton for writing a memoir confirming everything the House managers said about Trump. These petty acts of revenge,
while troubling, are not the end of the republic. And even if elected officials seem to be behaving in as self-interested a manner
as political scientists would predict, American politics is about more than just those officials. The American public
also matters, and they have not taken too kindly to this sort of swamplike behavior. Erica Chenoweth, Jeremy Pressman and others have
detailed the rise in organized political protests since the start of the Trump presidency. This kind of civil
society activism mattered in the 2018 midterms and will matter for the 2020 election and beyond. Lara
Putnam and Gabriel Perez Putnam made a similar point last summer in the Washington Monthly: “Something different has been happening in
Trump-era America. The local grassroots groups that arose after the 2016 election have developed a strikingly effective
infrastructural mix — including new local groups, reanimated Democratic Party structures, and passionate campaigns for previously
ignored local and legislative offices — to create a dense, overlapping system with multiple on-ramps to electoral
action.” Contrary to the fears of some, it would appear that Americans opposed to Trump’s brand of illiberalism are
doing something about it. And regardless of what one hears about divisions among Democrats, the latest Voter Study Group findings
by John Sides and Robert Griffin reveal that “Democratic primary voters are fairly unified in their favorable views of the leading candidates
[and] they are even more unified in their opposition to President Donald Trump.” Similarly, for
all the talk of democracy vanishing
in the United States, neither experts nor the public actually believes this. A group of political scientists founded
Bright Line Watch to examine whether both political scientists and the public perceives the erosion of American democracy. Their latest wave
of surveys, from the end of last year, found that
“perceptions of the overall performance of American democracy
remain stable among both experts and the public since we began surveying each group.”
***Off-Case***
Business Confidence/Economy Disadvantage

LAWS key to military readiness


MAJOR ANNEMARIE VAZQUEZ (Judge Advocate, United States Army. Associate Professor, Contract and Fiscal Law Department, U.S. Army Judge Advocate General's Legal Center and School. B.S., Minnesota State University, 2002; J.D., Mitchell Hamline University School of Law, 2008; Judge Advocate Officer Basic Course, 2010; LL.M., Military Law with National Security Law Concentration) “LAWs and Lawyers: Lethal Autonomous Weapons Bring LOAC Issues to the Design Table, Judge Advocates Need to Be There” June 28, 2022
Strategy and Communications at the Department of Defense's (DoD) Joint Artificial Intelligence Center (JAIC) reported, "Despite expressing concern on AI arms races, most of China's leadership sees increased military usage of AI [*90] as inevitable and is aggressively pursuing it. China already exports armed autonomous platforms and surveillance AI."3 That same year, Defense Secretary Mark Esper announced on November 5, 2019, that China had exported lethal autonomous drones to the Middle East: "Chinese manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal targeted strikes."4 In countering Russian and Chinese pursuit, possession, and export of lethal autonomy, the 2018 DoD Artificial Intelligence Strategy emphasized: Our adversaries and competitors are aggressively working to define the future of these powerful technologies according to their interests, values, and societal models. Their investments threaten to erode U.S. military advantage, destabilize the free and open international order, and challenge our values and traditions with respect to human rights and individual liberties.5 The "powerful technologies" referred to in DoD's AI Strategy and the comments made by Esper, Allen, and Putin refer to lethal autonomous weapons (LAWs),6 a subset of machines that employ AI. Although there is no internationally agreed-upon definition of LAWs,7 the DoD defines them as weapons that "can select and engage targets without further intervention by a human operator."8 These are the "killer robots" referred [*91] to in the media and by organizations dedicated to banning them.9 Though technology for some LAWs

Plan hurts the economy – LAWS are far cheaper than any other form of military deployment
Coley Felt (International Policy Institute Cybersecurity Fellow) “Autonomous Weaponry: Are Killer
Robots in Our Future?” February 14, 2020 https://jsis.washington.edu/news/autonomous-weaponry-
are-killer-robots-in-our-future/

Outlining the Debate. The main arguments in support of the development of autonomous weaponry are
the military advantages and cost reduction. Autonomous weapons systems would create military
advantage because fewer warfighters would be needed, the battlefield could be expanded into
previously inaccessible areas, and less casualties could occur by removing humans from dangerous
missions (Etzioni, 2017). Furthermore, robots do not have the mental or physiological constraints that
humans possess. Thus, their judgements would not be clouded by emotions and more sensory
information would be processed without the distorting of preconceived notions (Etzioni, 2017).
Numbers from the US Department of Defense show that it costs the Pentagon about $850,000 per year
for each soldier in Afghanistan. Contrarily, a small rover equipped with weapons costs roughly $230,000
(Etzioni, 2017).
Military Readiness

The US has invested heavily in AI – key to economy


Adam Satariano et al. “Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing.”
December 17, 2021 https://www.nytimes.com/2021/12/17/world/robot-drone-ban.html

Some, like Russia, insist that any decisions on limits must be unanimous — in effect giving opponents a
veto. The United States argues that existing international laws are sufficient and that banning
autonomous weapons technology would be premature. The chief U.S. delegate to the conference,
Joshua Dorosin, proposed a nonbinding “code of conduct” for use of killer robots — an idea that
disarmament advocates dismissed as a delaying tactic. The American military has invested heavily in
artificial intelligence, working with the biggest defense contractors, including Lockheed Martin, Boeing,
Raytheon and Northrop Grumman. The work has included projects to develop long-range missiles that
detect moving targets based on radio frequency, swarm drones that can identify and attack a target, and
automated missile-defense systems, according to research by opponents of the weapons systems.
Security Link

Fear of others' capabilities drives the development of LAWS. The alternative solves the
root cause of LAWS proliferation
Michael T. Klare (professor emeritus of peace and world security studies at Hampshire College and
senior visiting fellow at the Arms Control Association) “Autonomous Weapons Systems and the Laws of
War” March 2019 https://www.armscontrol.org/act/2019-03/features/autonomous-weapons-systems-
laws-war

In developing and deploying these weapons systems, the United States and other countries appear to be
motivated largely by the aspirations of their own military forces, which see various compelling reasons
for acquiring robotic weapons. For the U.S. Navy, it is evident that cost and vulnerability calculations are
leading the drive to acquire UUVs and unmanned surface vessels. Naval analysts believe that it might be
possible to acquire hundreds of robotic vessels for the price of just one modern destroyer, and large
capital ships are bound to be prime targets for enemy forces in any future military clash; while a swarm
of robot ships would be more difficult to target and losing even a dozen of them would have a lesser
effect on the outcome of combat. The Army appears to be thinking along similar lines, seeking to
substitute robots for dismounted soldiers and crewed vehicles in highly exposed front-line
engagements. These institutional considerations, however, are not the only drivers for developing
autonomous weapons systems. Military planners around the world are fully aware of the robotic
ambitions of their competitors and are determined to prevail in what might be called an “autonomy
race.” For example, the U.S. Army’s 2017 Robotic and Autonomous Systems Strategy states, “Because
enemies will attempt to avoid our strengths, disrupt advanced capabilities, emulate technological
advantages, and expand efforts beyond physical battlegrounds…the Army must continuously assess RAS
efforts and adapt.” Likewise, senior Russian officials, including President Vladimir Putin, have
emphasized the importance of achieving pre-eminence in AI and autonomous weapons systems.
Russia DA Links
Russia is using LAWs in Ukraine
Zachary Kallenborn 03-12-2022 [Research affiliate with the Unconventional Weapons and
Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism
(START); a policy fellow at the Schar School of Policy and Government; a US Army Training and
Doctrine Command "Mad Scientist," and national security consultant], "Russia may have used a killer
robot in Ukraine. Now what?," https://thebulletin.org/biography/zachary-kallenborn/ Cut By: m.jam
Using pictures out of Ukraine showing a crumpled metallic airframe, open-source analysts of the conflict
there say they have identified images of a new sort of Russian-made drone, one that
the manufacturer says can select and strike targets through inputted coordinates or autonomously. When
soldiers give the Kalashnikov ZALA Aero KUB-BLA loitering munition an uploaded image, the system
is capable of “real-time recognition and classification of detected objects” using artificial intelligence
(AI), according to the Netherlands-based organization Pax for Peace (citing Jane’s International Defence
Review). In other words, analysts appear to have spotted a killer robot on the battlefield.
The images of the weapon, apparently taken in the Podil neighborhood of Kyiv and uploaded
to Telegram on March 12, do not indicate whether the KUB-BLA, manufactured by Kalashnikov Group
of AK-47 fame, was used in its autonomous mode. The drone appears intact enough that digital forensics
might be possible, but the challenges of verifying autonomous weapons use mean we may never know
whether it was operating entirely autonomously. Likewise, whether this is Russia’s first use of AI-based
autonomous weapons in conflict is also unclear: Some published analyses suggest the remains of a
mystery drone found in Syria in 2019 were from a KUB-BLA (though, again, the drone may not have used
the autonomous function).
Nonetheless, assuming open-source analysts are right, the event illustrates well that autonomous weapons
using artificial intelligence are here. And what’s more, the technology is proliferating fast. The KUB-BLA is not the first AI-based
autonomous weapon to be used in combat. In 2020, during the conflict in Libya, a United Nations report said the Turkish Kargu-2 “hunted down
and remotely engaged” logistics convoys and retreating forces. The Turkish government denied the Kargu-2 was used autonomously (and, again,
it’s quite tough to know either way), but the Turkish Undersecretary for Defense and Industry acknowledged Turkey can field that capability.

Russia has used, is using, and will keep using autonomous weapons


Sebastien Roblin 06-30-2022 [Journalist focused on international security and military history with
more than 500 published articles at The National Interest, NBC and War is Boring; committed to
uncovering both human and technical dimensions of subjects; MA in Conflict Resolution from
Georgetown University, and served as university instructor for the Peace Corps in the beautiful deserts of
Gansu, China; organized and led large-scale training programs; worked in education and communications
in France, El Salvador, Northern Ireland and the U.S. capital]. “Russia’s war in Ukraine,”
https://insideunmannedsystems.com/russias-war-in-ukraine/ Cut by: m.jam
In some respects a slower-moving species of cruise missile, loitering munitions are endowed with greater
endurance usable for reconnaissance and target acquisition, ability to maneuver and even abort an attack,
and potential for recovery if no target is found. 
Ukraine and Russia have both developed indigenous short-range loitering munitions. The use of Russia’s
Zala KUB-BLA kamikaze is confirmed due to the half-dozen that have crashed without exploding or have
been shot down. One recording released by Russia shows a KUB missing in an attempted attack on a
U.S.-delivered M777 howitzer, its small warhead inflicting little if any damage to the high-end artillery
piece. Russia’s larger Lancet-3 munition has yet to be seen in Ukraine despite prior combat tests in Syria.
Both Lancet and KUB-BLA allegedly have controversial autonomous engagement capabilities. The
KUB-BLA can use an AI-driven image-matching system to classify targets to attack without human
oversight. “[Kub’s] limited visibility in this war so far does not allow for a definitive conclusion if it is or
is not using AI,” Bendett cautioned.

The plan makes Russia out to be a war criminal – and Putin opposes that
David E Sanger 03-17-2022 [a White House and national security correspondent, and a senior writer.
In a 38-year reporting career for The New York Times, he has been on three teams that have won Pulitzer
Prizes, most recently in 2017 for international reporting],“By Labeling Putin a ‘War Criminal,’ Biden
Personalizes the Conflict,” https://www.nytimes.com/2022/03/17/us/politics/biden-putin-war-
criminal.html Cut By: m.jam
Mr. Biden and Mr. Putin have not talked since Feb. 12, when the American president made one last attempt to warn the
Russian leader that an attack would lead to crushing sanctions, more arms to the Ukrainians and a major buildup of troops and arms on NATO’s
eastern front. That buildup was exactly the result Mr. Putin was trying to forestall.

It is almost unimaginable that the two men will be dealing with each other directly anytime soon. Hours
after Mr. Biden’s “war criminal” comments, the Kremlin spokesman, Dmitri S. Peskov, called the
characterization “unacceptable and unforgivable rhetoric.”
Mr. Putin has not said much about Mr. Biden, but he vowed this week to cleanse Russia of “scum and
traitors” who he said were being organized by the West as a “fifth column” to destroy the country.
The enmity between the two men has been barely disguised for years, since the day when Mr. Biden, as
vice president, by his account, told Mr. Putin that he had looked in his eyes and seen no soul. “We
understand one another,” Mr. Putin is said to have replied.

Putin is paranoid and worried about something – plan could be what sets him off –
he’s unpredictable
John Haltiwanger and Mattathias Schwartz 05-29-2022 [Haltiwanger is a senior politics
reporter at Business Insider. He reports on all things politics with a particular focus on national security
and foreign policy; has a BA in History from St. Mary's College of Maryland and an MSc in International
Relations from the University of Glasgow; Schwartz is a senior correspondent at Insider. He is a former
staff writer at the New Yorker and a current contributing writer at the New York Times Magazine.],
“Putin is increasingly angry in public but top US intel and military experts warn there's no 'credible'
evidence that he's ill,” https://news.yahoo.com/putin-increasingly-angry-public-top-103000113.html Cut
By: m.jam
Ex-spies and Ukrainian officials have fueled speculation that Russian leader Vladimir Putin is in poor
health.
But US intel and military experts say there's no clear or credible evidence that Putin is unwell.
The rumors may be a sign of wishful thinking that Putin's death would somehow end the brutal war, an
expert said.
Rumors have been swirling in recent weeks that Russian President Vladimir Putin is unwell and somehow
losing his grip.
The strongman and former KGB operative, who is pushing 70, has spent years trying to cultivate a macho
image — riding a horse without a shirt on, playing hockey, and showing off his judo moves. Throughout
his roughly two decades in power, Putin has been thought of as a calculated, emotionless leader. But the
bare-chested equestrian's disposition has seemingly shifted since he launched an unprovoked, full-
scale war in Ukraine back in late February. His cheeks are puffy, he's fidgety, and his speeches
bombastic and filled with vitriol.
The war in Ukraine has been so disastrous for the Russian military that many have begun to wonder what
led Putin to miscalculate so badly. Some have guessed that he's suffering from mental decline. Others have
rumored that Putin is seriously ill, drawing broad conclusions from minute aspects of his behavior. Even the wiggling of the Russian leader's foot
has been enough to prompt a wave of speculation over his health.

But three intelligence and military experts in the US who spoke to Insider say they're not putting much weight on the continuing speculation that
the Russian leader is in physical decline — a theory that has not been endorsed by doctors or medical experts.

Jeffrey Edmonds, former director for Russia on the National Security Council and an ex-CIA military
analyst, told Insider he's "not seeing anything truly credible" to back up the notion that Putin is unwell.
"What I and others have seen is a definite change in his demeanor," Edmonds said, adding that Putin is
"normally the voice of calm in Russia but publicly has become more emotional and angry." This suggests
that Putin is "not comfortable with something," Edmonds added.

Putin is unpredictable and we should be cautious


Tim Harford 03-02-2022 [writes the Undercover Economist column, and was previously an
economics leader writer for the FT; first joined the newspaper as Peter Martin Fellow in 2003; the author
of nine books, including the million-selling The Undercover Economist and most recently How To Make
The World Add Up; regular presenter for BBC radio; made an OBE in the 2019 new year honours list
“for services to improving economic understanding”], “Putin’s actions make no sense. That is his
strength,” https://www.ft.com/content/d7e38b1e-9a5a-422d-86f6-b74c729717fa Cut By: m.jam
Is Vladimir Putin mad? Russia’s president has launched a costly and unprovoked war, shocked his own
citizens, galvanised Nato, triggered damaging but predictable economic reprisals and threatened a nuclear
war that could end civilisation. One has to doubt his grasp on reason. Doubt is part of the point. In The
Strategy of Conflict, written in 1960, the economist Thomas Schelling noted: “It is not a universal
advantage in situations of conflict to be inalienably and manifestly rational in decisions and motivation.”
A madman — or a toddler — can get away with certain actions because he cannot be deterred by threats
or because his own threats seem more plausible. But Schelling’s point is more subtle than that: you don’t need to be mad to
secure these advantages. You just need to persuade your adversaries that you might be. The idea is vividly illustrated in The Maltese Falcon,
Dashiell Hammett’s 1930 novel and John Huston’s 1941 film. Our hero, Sam Spade, knows the whereabouts of the falcon, a priceless artefact.
When the villainous Kasper Gutman tries to intimidate him into revealing the secret, Spade is not intimidated. If Gutman kills him then the
precious falcon will be lost forever. “If I know you can’t afford to kill me, how are you gonna scare me into giving it to you?” Spade challenges
Gutman. “That’s an attitude, sir, that calls for the most delicate judgement on both sides,” Gutman says. “Because, as you know, sir, in the heat of
action men are likely to forget where their best interests lie and let their emotions carry them away.” Spade doesn’t seem too worried by this,
perhaps because Gutman appears calm. Gutman might have had more success if he seemed unhinged. Then again, Gutman’s henchmen are
pointing pistols at Spade and twitching with rage, so even if Gutman keeps his cool, the threat that someone might get carried away seems
plausible. Schelling was a wonderful writer and thinker, but it gives me little pleasure to be dusting off his books. When I first encountered his
ideas on nuclear deterrence, it was the mid-1990s. The cold war was over, the threat of a nuclear exchange seemed largely past and Schelling’s
ideas could be enjoyed in much the same way as Hammett’s: as witty, surprising and reassuringly unreal. When Schelling shared the Nobel
memorial prize in economics in 2005, it was with a sense that his clear-eyed ideas about nuclear deterrence had helped human civilisation dodge
a bullet.
That nuclear bullet is now back in the gun and Putin is waving it around unnervingly. He
wouldn’t . . . would he? I don’t know, which is just the way Putin likes it. There was always something
surreal about maintaining nuclear weapons as a deterrent. Surely such weapons could never be used,
because the consequences were too horrible? And if the weapons could never be used, what sort of
deterrent did they provide? Yet the deterrent is real enough because even a small risk of escalation is a
risk worth taking seriously. That risk can come from a number of sources. There’s malfunctioning equipment: in
September 1983, Soviet officer Stanislav Petrov’s early warning radar told him that the US had just launched ballistic missiles at the Soviet
Union. He realised that was unlikely and ignored the warning. Petrov’s heroic inaction was made all the more remarkable because it came at a
time of escalated tensions between the superpowers. Another risk is that a senior decision maker is insane, rather than merely feigning insanity.
Then there is the risk of things getting out of control somewhere down the chain of command. During the Cuban missile crisis in 1962, the US
decided to stop and search ships sailing to Cuba — a potential flashpoint if the result was the sinking of a Soviet ship. President Kennedy and
defence secretary Robert McNamara asked the US Navy to soften this “quarantine” in a couple of ways. In fact, as the classic book Thinking
Strategically explains, the US Navy told McNamara to mind his own business, and the blockade was riskier than Kennedy had intended.
Unthinkable threats become thinkable in such circumstances.
Putin holds a weak hand, except for the one card that no
rational person would ever choose to play. But the essence of brinkmanship is to introduce a risk that
nobody can entirely control. If the risk becomes intolerable, you may win concessions. I am 99 per cent
sure that Putin is bluffing, but a 1 per cent chance of the end of the world is and should be more than
enough to worry about. Faced with Gutman’s warning that someone may get carried away, Spade coolly responds, “then the trick from
my angle is to make my play strong enough to tie you up, but not make you mad enough to bump me off against your better judgement”. That is
the trick the western world is now attempting to perform. By Putin’s design, it is not going to be easy.
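Analytic note: Harford's "1 per cent" claim is an expected-value argument. A minimal sketch, with illustrative numbers not drawn from the card: if escalation occurs with probability $p$ and inflicts loss $L$, the expected loss is

$E[\text{loss}] = p \cdot L$

Even at $p = 0.01$, if $L$ is civilization-ending and thus effectively unbounded, the product $p \cdot L$ remains intolerably large. That is the formal version of the claim that "a 1 per cent chance of the end of the world is and should be more than enough to worry about."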
