Science and Public Policy, 2023, 50, 161–176

DOI: https://doi.org/10.1093/scipol/scac056
Advance Access Publication Date: 14 October 2022
Article

What governs attitudes toward artificial intelligence adoption and governance?

Matthew R. O’Shaughnessy 1,*, Daniel S. Schiff 2, Lav R. Varshney 3, Christopher J. Rozell 1,† and Mark A. Davenport 1,†

1 School of Electrical and Computer Engineering, Georgia Institute of Technology, 777 Atlantic Dr NW, Atlanta, GA 30332, USA, 2 Department of Political Science, Purdue University, 110 North University St, West Lafayette, IN 47907, USA and 3 Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 306 N Wright St MC 702, Urbana, IL 61801, USA



† These authors contributed equally.
*Corresponding author. E-mail: [email protected]

Abstract
Designing effective and inclusive governance and public communication strategies for artificial intelligence (AI) requires understanding how stakeholders reason about its use and governance. We examine underlying factors and mechanisms that drive attitudes toward the use and governance of AI across six policy-relevant applications using structural equation modeling and surveys of both US adults (N = 3,524) and technology workers enrolled in an online computer science master’s degree program (N = 425). We find that the cultural values of individualism, egalitarianism, general risk aversion, and techno-skepticism are important drivers of AI attitudes. Perceived benefit drives attitudes toward AI use but not its governance. Experts hold more nuanced views than the public and are more supportive of AI use but not its regulation. Drawing on these findings, we discuss challenges and opportunities for participatory AI governance, and we recommend that trustworthy AI governance be emphasized as strongly as trustworthy AI.
Key words: artificial intelligence policy; public opinion; public engagement.

1. Introduction

Artificial intelligence (AI) may fundamentally reshape our economy and society, but across a wide variety of application areas, its prospective benefits are accompanied by potential harms. For example, AI’s impact on economic growth may be felt unevenly across the labor market. The use of AI in new medical systems raises questions about trust, fairness, and privacy even as it enables new treatments. AI-based systems provide new tools for free expression while simultaneously powering authoritarian crackdowns and the spread of disinformation.

Realizing the benefits of emerging technologies like AI while mitigating their accompanying harms requires governance strategies that are respectful of the diverse values and beliefs held by the public (Stirling 2008; Macnaghten and Chilvers 2014; Ulnicane et al. 2020; Stix 2021). Inclusive and participatory governance is a central pillar of AI development frameworks released by academic, industry, government, and international groups (Organisation for Economic Cooperation and Development 2019; IEEE 2019; European Group on Ethics in Science and New Technologies 2018; United States Office of Management and Budget 2020). In representative suggestions, IEEE’s framework suggests that developers and regulators of AI should remain aware of the ‘diversity of cultural norms among users’ (IEEE 2019) while the AI Now Institute stresses the importance of expanding ‘cultural, disciplinary, and ethnic diversity’ in the development and governance of AI (Campolo et al. 2017).

However, the technical complexity of AI makes it difficult to design governance structures that the public can participate in effectively. As a result, discourse about AI governance can become opaque and expert-based, making the policy process ineffective at representing diverse viewpoints, vulnerable to capture by vested interests (Ulnicane et al. 2020), and liable to ‘ethics-washing’ (Stix 2021; Sloane et al. 2020). Moreover, while recent opinion surveys have found that the US public is generally supportive of AI (Morning Consult 2017; European Commission 2017; Smith and Anderson 2017; The Harris Poll 2017; Gallup, Inc 2018; Morning Consult 2018; Smith 2018a, b; Zhang and Dafoe 2019; United Kingdom Government 2019; Johnson and Tyson 2020), their awareness of it is limited (DeCario and Etzioni 2021): even as AI is pervasive in applications like resume screening and credit scoring, surveys have found little public support for AI in these ‘sensitive’ settings (Smith 2018a). These seemingly contradictory views suggest that public opinion may change rapidly as AI’s capabilities, limitations, and societal impacts become more apparent.

Ensuring that diverse public opinion is respected in AI governance processes thus requires that AI developers and policymakers better understand the underlying values and motivations that shape how public attitudes toward AI could evolve. This understanding is also critical for equipping the public to meaningfully engage with AI governance: science communication literature suggests that processes for public outreach and dialogue are most effective when they are tailored to the public’s values, beliefs, and motivations (Kahan et al. 2011; Lupia 2013). Although previous work has explored how attitudes of AI professionals (Zhang et al. 2021; Aiken et al. 2020) and the public (Morning Consult 2017; European Commission 2017; Smith and Anderson 2017; The Harris Poll 2017; Gallup, Inc 2018; Morning Consult 2018; Smith 2018a, b; Zhang and Dafoe 2019; United Kingdom Government 2019; Johnson and Tyson 2020) differ across sociodemographic groups, little existing work has explored the underlying values and mechanisms that drive attitudes toward AI.

In this paper, we take a step toward better understanding what shapes attitudes toward AI by looking at factors and mechanisms beyond sociodemographic characteristics. We explore the following questions, which are key to designing effective AI governance and science communication strategies:

(1) How do sociodemographic factors, cultural values, and perceived benefit influence attitudes toward AI?
(2) How do these attitudes—and the factors that inform them—differ between experts and the public?
(3) How do these attitudes—and the factors that inform them—differ across common contexts of AI use?

To explore these questions, we conducted two online surveys in April and May 2021. The first survey sampled N = 3,524 US adults recruited and compensated through the Lucid Theorem platform, which uses quota sampling to obtain participants representative of adult US residents on age, gender, race, and region. The second survey sampled N = 425 students who had recently completed a graduate AI course at Georgia Tech. Most (93.9 per cent) of these students had undergraduate degrees in technical subjects, and 93.5 per cent previously or concurrently worked in computer science or another Science, Technology, Engineering, and Math (STEM) field.1 In addition to standard sociodemographic variables, we consider the impact on attitudes of perceived self- and societal benefits and of the cultural values of individualism, egalitarianism, general risk aversion, and techno-skepticism—constructs found to inform the perception of many other technological risks (Kahan et al. 2007; Renn and Benighaus 2013; Tennant et al. 2019).

The main contribution of this work is to increase understanding of attitudes toward AI use and governance by (1) exploring a set of attitudinal drivers that is broader than the typically-considered sociodemographic variables, including both perceptions of benefit and cultural values inspired by the cultural theory literature; (2) directly comparing the attitudes of experts and the public; and (3) considering attitudes across a range of policy-relevant contexts of AI use. Our preregistered analysis strategy uses the structural equation model (SEM) shown in Fig. 1 (described in more detail in the next section), which allows us to naturally address the three key research questions defined above. Our results provide insights that can aid policymakers in crafting governance strategies that are respectful of diverse beliefs and assist AI developers in effectively communicating the broader implications of their work to the public. Drawing on these results, we offer recommendations for engaging the public in dialogue about AI governance and offer suggestions for future research.

Figure 1. SEM used for analysis. The full SEM, S, allows variables within each group (denoted by dashed boxes) as well as cultural values and sociodemographic variables to covary; we treat demographic variables as exogenous. Two nested models are used in our analysis: S\C, which constrains paths from cultural value constructs to outcome variables to zero, and S\B, which constrains paths from perceived benefit outcome variables to support outcome variables to zero.

2. Background and theory

2.1 Underlying factors governing attitudes toward technology

Prior work has found that race, gender, and political ideology (Wildavsky and Dake 1990) are highly predictive of attitudes toward issues such as nuclear power (Slovic et al. 1991), climate change (Flynn et al. 1994), genetically engineered food (Finucane et al. 2000), and radiation (Peters et al. 2004). Similar sociodemographic divides have been found in attitudes toward AI. Those reporting familiarity and comfort with AI are more likely to be young, be male, be educated, live in urban areas, and have higher incomes (Morning Consult 2017, 2018; Zhang and Dafoe 2019; United Kingdom Government 2019; Johnson and Tyson 2020; Morning Consult 2021). Sociodemographic divides also shape perceptions of AI’s impact on society. Those in urban areas, blue-collar workers, and political liberals are more likely to believe that AI will deepen inequality and reduce employment (Morning Consult 2017; Gallup, Inc 2018), while those with more education, white-collar jobs, and higher incomes are more likely to believe that AI will be beneficial to society and the economy (Morning Consult 2017; Smith and Anderson 2017; Gallup, Inc 2018; Morning Consult 2018; Zhang and Dafoe 2019).

The cultural theory of risk perception posits that ‘cultural’ worldviews can be more concise and informative predictors of attitudes toward technological risk than sociodemographic factors alone (Kahan et al. 2007; Weber and Hsee 2000; Johnson and Swedlow 2021). These cultural values have been hypothesized to define identity groups, imbue potential risks with affective qualities (Peters et al. 2004), and encourage biased information processing (Lord et al. 1979). Indeed, literature has found that successfully communicating scientific topics to the public benefits from careful attention to how messages may interact with the cultural values held by the public (Kahan et al. 2011; Lupia 2013; Druckman and McGrath 2019). For policymakers seeking to design inclusive governance and communication strategies, it is critical to understand how cultural values relate to views on AI and whether this relationship differs across specific AI use cases.

We use two cultural values that originate with the grid-group cultural theory of Douglas and Wildavsky (1982), were operationalized for survey research by Kahan et al.’s ‘cultural cognition theory’ (Kahan 2012), and have been identified as salient to technological risk perception (Kahan et al. 2007; Dake 1991). The first represents attitudes toward the role of individuals in society: individualists favor social orderings in which individuals are responsible for ‘securing their own well-being without assistance or interference from society’ and thus prefer to minimize the role of government when ensuring collective welfare comes into tension with individual preferences (Kahan et al. 2011). The second cultural value represents attitudes toward well-defined social hierarchies: egalitarians favor greater equality between groups defined by race, gender, wealth, and political power; they spurn stratified social orderings based on fixed characteristics. Literature on risk analysis and related disciplines has used cultural theory generally—and the conceptions of individualism and egalitarianism we borrow from cultural cognition theory in particular—to explain differences in opinion between environmentalists and the public (Ellis and Thompson 1997), disagreements on controversial issues such as gun control and global warming (Kahan et al. 2007), and divides in acceptance of scientific consensus (Kahan et al. 2011).

We also consider two cultural values that describe general attitudes toward risk and technology. First, many individuals tend to avoid small risks even at the cost of foregoing larger benefits; general risk aversion has been found to be a powerful predictor of attitudes toward technology (Renn and Benighaus 2013). Here we use the risk aversion construct of Sharma (2010), which assesses attitudes toward general lifestyle risks. Second, techno-skeptics are uncomfortable with the use of new technology, cynical about the intentions of groups developing new technological advancements, and opposed to the use of technology to solve social problems (Meadows et al. 1972; Krier and Gillette 1985). Techno-skepticism has been found to partially explain divides in opinion on topics such as nuclear waste (Barke et al. 1997), climate change adaptation (Gardezi and Arbuckle 2020), and autonomous vehicles (Tennant et al. 2019). In the context of AI, techno-optimism and techno-skepticism are well-reflected in popular narratives about utopian and dystopian scenarios driven by AI (Cave et al. 2018).

2.2 Perceived benefit and hypothesized model

In contrast to technologies whose benefits are perceived as broadly shared, popular narratives about AI often feature clear losers (Fast and Horvitz 2017): workers who lose their jobs to automation, for example, or minorities who suffer discrimination at the hands of automated decision systems. These narratives may make views about AI governance—perhaps more so than views about other technological risks—subject to perceptions of who stands to benefit and lose from the continually increasing adoption of AI. However, while there is some evidence that perceived self-interest informs support for AI-based technologies (Morning Consult 2018; Liu et al. 2019; Dixon et al. 2020), other literature has suggested that perceived benefit does not always eclipse affective and value-based concerns (Sears and Funk 1991; Chong et al. 2001). To evaluate how perceived benefit influences attitudes toward AI (and understand how it is influenced by sociodemographic variables and cultural values), we use an SEM (Kline 2016) analysis framework.

The SEM that forms the core of our analysis describes hypothesized relationships between demographic variables, cultural values, perceived individual and societal benefits from AI, and support for AI use and governance. The SEM also mathematically defines how each variable is measured. In SEM analysis, model parameters (e.g. path coefficients and (co)variances) are estimated by minimizing the difference between the observed covariance matrix and the model-implied covariance matrix according to a certain statistical criterion (Kline 2016).

Our model, shown in Fig. 1, assumes that demographic variables and cultural values drive both categories of outcome measures (perceived benefit, and support for AI adoption and governance), but that the reverse driving relationships do not exist. This reflects the assumption that cultural values are broad concepts likely to integrate beliefs and experiences from a wide variety of sources and that views about AI are unlikely to be sufficiently present in the public discourse to fundamentally alter cultural values.2 Each cultural value construct was measured by four survey items. While the cultural value constructs were allowed to covary in our SEM, each survey response item was modeled as independent (i.e. survey items are independent of each other when conditioned on their parent construct). Our SEM also assumes that perceived self- and societal benefits drive support for AI use and governance, but that the support outcomes do not drive perceived benefit.

The relationship between sociodemographic variables and cultural values is a more subtle question. For example, it seems likely that age and gender drive cultural values, and conversely, literature has suggested that cultural values drive political orientation (Wildavsky and Dake 1990). Our model includes sociodemographic variables as exogenous variables, allowing unmodeled covariance between them and between sociodemographic variables and cultural values. This represents the possibility that there exist causal relationships between these variables, or that unmodeled confounding is present. These covariances are denoted by the bidirectional dotted line in Fig. 1. Similarly, variables within each group may be causally related or be jointly affected by unmodeled variables. For example, techno-skepticism and risk aversion may be driven by individualism and egalitarianism, rather than existing as discrete constructs.3 We model this by allowing variables within each group (sociodemographic variables, cultural values, perception of benefit, and AI support) to covary.

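To make this hypothesized structure concrete, a heavily abridged version of the model can be written in lavaan’s model syntax as sketched below. This is an illustration rather than the preregistered specification: the item names (ind1–ind4, egal1–egal4, and so on), the outcome names, and the reduced set of sociodemographic predictors are placeholders for the actual survey columns, and the full model also includes the remaining demographic variables and covariances described above.

    # Sketch (not the preregistered specification) of the SEM in Fig. 1 in lavaan syntax.
    # Item and variable names are placeholders for the survey columns.
    library(lavaan)

    model_S <- '
      # measurement model: each cultural value construct is measured by four Likert items
      individualism     =~ ind1 + ind2 + ind3 + ind4
      egalitarianism    =~ egal1 + egal2 + egal3 + egal4
      techno_skepticism =~ tech1 + tech2 + tech3 + tech4
      risk_aversion     =~ risk1 + risk2 + risk3 + risk4

      # structural paths: sociodemographics and cultural values -> perceived benefit
      benefit_self ~ age + gender_male + education + income + political +
                     individualism + egalitarianism + techno_skepticism + risk_aversion
      benefit_soc  ~ age + gender_male + education + income + political +
                     individualism + egalitarianism + techno_skepticism + risk_aversion

      # structural paths: sociodemographics, cultural values, and perceived benefit -> support
      support_use  ~ benefit_self + benefit_soc + age + gender_male + education + income +
                     political + individualism + egalitarianism + techno_skepticism + risk_aversion
      support_mgmt ~ benefit_self + benefit_soc + age + gender_male + education + income +
                     political + individualism + egalitarianism + techno_skepticism + risk_aversion
      support_reg  ~ benefit_self + benefit_soc + age + gender_male + education + income +
                     political + individualism + egalitarianism + techno_skepticism + risk_aversion

      # variables within the benefit and support groups are allowed to covary
      benefit_self ~~ benefit_soc
      support_use  ~~ support_mgmt + support_reg
      support_mgmt ~~ support_reg
    '

Nested models such as S\C and S\B can then be obtained by deleting the corresponding paths (or fixing them to zero) before refitting.
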
Our SEM bears some similarities to popular models of technology acceptance and adoption used in psychology and marketing research literature. The theory of reasoned action (Fishbein and Ajzen 1975) focuses on the relationship between behavior and behavioral intention, which is modeled as being shaped by attitudes and subjective norms. The Multi-Attribute Attitude Model (Fishbein and Funke 1976) models an individual’s attitude toward a brand or product as a weighted linear combination of attributes. Unlike this model, in which each individual is modeled by a unique set of weights, our SEM models all respondents collectively with a single set of inferred parameters. The influential Technology Acceptance Model (Davis 1985) posits that attitudes toward technology use are governed by perceived usefulness and ease of use, which are in turn governed by a set of ‘external factors’. While extensions of this model use more extensive sets of external factors (including culturally-relevant variables such as gender (Venkatesh and Morris 2000)), the set of sociodemographic and cultural variables we use in our SEM is broader than typically considered in this literature.

2.3 Differences between experts and the public

It is particularly important to understand the ways in which public and expert attitudes diverge when discourse about policy is dominated by experts. Research on other emerging technologies has suggested that technical experience often negatively associates with risk perception, with experts tending to be particularly tolerant of risks stemming from technology aligned with their discipline (Barke and Jenkins-Smith 1993; Sjöberg and Drottz-Sjöberg 1993). Restricting policy discourse to those who are most knowledgeable therefore threatens to limit the influence of the very people who may perceive the most risk. Previous work has also found that scientists’ views on risk vary based on gender, institutional affiliation, and cultural and political values (Barke et al. 1997, 2006; Funk et al. 2015). AI experts differ from the public along each of these dimensions; failure to appreciate how these factors influence attitudes toward AI may hinder the creation of inclusive policy dialogue.

Indeed, prior surveys comparing the attitudes of AI experts and the public have found major differences in the trust placed in government, technology companies, the US military, and international organizations (Zhang et al. 2021; Aiken et al. 2020), suggesting a potentially wide gulf in attitudes toward who should be responsible for governing AI. AI professionals also differ from the public on many sociodemographic variables that typically predict regulatory preferences: compared to the public, AI practitioners tend to be better educated, be more racially diverse but overwhelmingly male, have higher income, and live in more urban areas (Zhang et al. 2021). Understanding expert attitudes is particularly relevant in the context of AI because technology workers have demonstrated substantial leverage in determining where and how AI is used and governed (Belfield 2020).

2.4 Differences across use contexts

Further complicating the design of inclusive governance and science communication strategies is the diversity of contexts in which AI can be used. This diversity makes it difficult to know how findings relevant to AI’s impact on labor automation, for example, generalize to AI used in medical research or automated weapons systems. To better understand these differences, in addition to examining attitudes toward AI in general, we explore attitudes toward AI used in six policy-relevant contexts: predictive policing, labor automation, medical diagnosis, automated vehicles, personalization, and weapon systems (see Section 3.3 and Supplement Section B for more details on these contexts).

The use of AI in each of these contexts raises different questions about risks, distribution of impacts, and ethical issues like fairness. Modeling each of these contexts allows us to understand how the factors we study—sociodemographic variables, cultural values, and perceived benefit—impact attitudes differently across application areas.

3. Methods

Our survey and analysis procedure were preregistered at https://osf.io/pcsvf/. Supplement Section E contains results from the complete analysis procedure specified in the preregistration; Supplement Section G describes minor deviations from the preregistration. The research was approved by the Georgia Tech institutional review board under protocol number H21112.

3.1 Data

Our first sample consisted of N = 3,524 US adult participants recruited and compensated online through the Lucid Theorem platform, which uses quota sampling to match the US census marginal distributions on age, gender, ethnicity, and region. Previous research has found that samples provided by Lucid provide results generally similar to US probability samples or samples provided by Amazon Mechanical Turk (Coppock and McClellan 2019). However, this sample may not generalize to US adults on dimensions such as comfort with technology. Recent studies have found decreased participant attention on Lucid and other online survey platforms coinciding with the Covid-19 pandemic (Aronow et al. 2020; Peyton et al. 2021); we expected that this would reduce effect sizes. As a robustness check, we replicated our results with inattentive respondents removed (see Section 3.4). The completion rate (defined as the proportion of participants entering the survey who completed it) for this sample was 86 per cent.

Our second sample consisted of N = 425 master’s students at the conclusion of a graduate-level AI class in Georgia Tech’s Online Master of Science (OMS) in Computer Science (OMSCS) or Analytics programs. OMS students have undergraduate degrees in technical subjects, and in 2020, most of them worked full-time in technical fields in the industry while completing the degree. In their current and post-graduation roles, most will be in a position to have an impact on how AI is used and governed. Recruitment materials for this sample are provided in Supplement Section C. Participants were provided course extra credit, and nonparticipants were offered an alternative method for obtaining the extra credit. The response rate for this sample was 61.7 per cent.

Differences between these two samples go beyond academic and professional AI-related experience. In 2020, 81 per cent of OMSCS students were male and over one-third were not US citizens or permanent residents. While the OMSCS program has enrolled students from 122 countries and 53 US states/territories, most work full-time in computing-related jobs and are therefore more likely to be geographically concentrated than our nationally representative US sample. They also tend to be younger and have higher incomes than the US public. Table 1 shows summary statistics comparing sociodemographic variables and cultural values in our two samples.

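As a minimal illustration of how each row of Table 1 can be produced, the comparison for a single variable might look as follows; the data frame and column names (lucid_df, oms_df, age_group) are placeholders, not names from our analysis code.

    # Welch's two-tailed t-test for one Table 1 row (e.g. age group); var.equal = FALSE
    # requests the unequal-variances (Welch) test.
    welch <- t.test(lucid_df$age_group, oms_df$age_group,
                    alternative = "two.sided", var.equal = FALSE)
    c(mean_lucid = mean(lucid_df$age_group), mean_oms = mean(oms_df$age_group))
    welch$conf.int   # 95 per cent confidence interval for the difference in means
    welch$p.value    # P-value reported in the final column of Table 1
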
Table 1. Means, standard deviations, 95 per cent confidence intervals for differences in means, and P-value (Welch’s two-tailed t-test) for each variable in the US public (Lucid) and expert (OMS) samples. Gender was coded as a binary variable (male, female or other gender), and age was coded using Pew’s classification of generational groups (18–25, 26–40, 41–56, 57–75, and 76+). Race was coded as White, Black, Asian, or other, as we anticipated that only these groups would be large enough in both samples to detect effects. We used four-level scales each for education, household income, and urban/rural residence. Political orientation was collected using a five-point Likert scale with end points ‘strong liberal’ and ‘strong conservative’.

    Variable                           x̄_Lucid (SD)    x̄_OMS (SD)     x̄_Lucid − x̄_OMS (95% CI)   P-value
    Age group (0–4)                    1.75 (1.12)     0.89 (0.56)    (0.79, 0.92)               <0.001
    Gender = Male                      0.49 (0.50)     0.81 (0.39)    (−0.37, −0.29)             <0.001
    Ethnicity = White                  0.75 (0.43)     0.41 (0.49)    (0.29, 0.39)               <0.001
    Ethnicity = Black                  0.13 (0.33)     0.03 (0.16)    (0.08, 0.12)               <0.001
    Ethnicity = Asian                  0.05 (0.22)     0.48 (0.50)    (−0.48, −0.38)             <0.001
    Education (0–3)                    1.36 (1.08)     2.24 (0.43)    (−0.94, −0.83)             <0.001
    Cognitive employment               0.25 (0.43)     0.97 (0.17)    (−0.75, −0.70)             <0.001
    Manual employment                  0.14 (0.35)     0.00 (0.05)    (0.12, 0.15)               <0.001
    Social employment                  0.22 (0.42)     0.01 (0.10)    (0.20, 0.23)               <0.001
    Household income (0–3)             1.23 (1.06)     2.19 (0.91)    (−1.05, −0.87)             <0.001
    Political orientation (−2 to +2)   −0.01 (1.23)    −0.52 (0.93)   (0.42, 0.61)               <0.001
    Urban (0–3)                        1.59 (1.05)     2.16 (0.79)    (−0.66, −0.49)             <0.001
    Individualism (standardized)       0.06 (0.92)     −0.47 (0.73)   (0.46, 0.61)               <0.001
    Egalitarianism (standardized)      −0.05 (0.90)    0.21 (0.78)    (−0.34, −0.18)             <0.001
    Techno-skepticism (standardized)   0.06 (0.92)     −0.45 (0.81)   (0.43, 0.59)               <0.001
    Risk aversion (standardized)       0.03 (0.89)     −0.24 (0.69)   (0.20, 0.34)               <0.001

Previous research has revealed differences in opinion between distinct groups of AI and computer science practitioners, such as between AI-skilled professionals at US technology companies (Aiken et al. 2020) and active researchers who publish at machine learning conferences (Zhang et al. 2021). Our graduate student expert sample adds an additional perspective to this literature; OMS students may differ from previously-surveyed expert samples in their propensity to work in industry versus academia, their level of experience with AI, and their sociodemographic and cultural factors. Respondents in our OMS sample completed undergraduate degrees largely in North America (66.1 per cent) or Asia (25.6 per cent), primarily in computer science (43.1 per cent) or other STEM fields (50.8 per cent). Most of them concurrently or recently worked in computer science or software engineering but not specifically in AI (63.8 per cent); 18.1 per cent reported working in another field of science or engineering; and 11.8 per cent reported working directly in AI (see Supplement Section A).

3.2 Survey design

Our survey consisted of two parts. The first portion assessed sociodemographic information, cultural values, opinion on risks posed by technologies other than AI, and self-reported familiarity with AI. We included standard sociodemographic factors that have been found to associate with opinion on questions related to AI in previous surveys (Morning Consult 2017; European Commission 2017; Smith and Anderson 2017; The Harris Poll 2017; Gallup, Inc 2018; Morning Consult 2018; Smith 2018a, 2018b; Zhang and Dafoe 2019; United Kingdom Government 2019; Johnson and Tyson 2020; Deeney 2019; Boyon 2019; ARM 2020; Selwyn et al. 2020): gender, age group, race/ethnicity, job type (cognitive/analytical, manual/physical, social/people-oriented, or other), education level, household income, urban/rural residence, and political orientation (see Table 1 for coding details). We also included questions assessing attitudes toward other technologies for which expert and public risk perception has been well-studied. Participants were asked, on a five-point Likert scale (‘risks significantly outweigh benefits’ to ‘benefits significantly outweigh risks’), about their perception of genetically modified foods, nuclear power, coal-burning power plants, vaccines, and synthetic biology.4

The cultural values of individualism and egalitarianism, described in Section 2.1, were adapted from Kahan et al.’s operationalization of grid-group cultural theory for survey research (Kahan et al. 2007).5 Two clarifications are needed to position our use of these constructs within the broader cultural theory literature. First, Kahan et al.’s cultural cognition theory differs from the broader cultural theory literature by constructing survey items directly from the ‘grid’ and ‘group’ axes of Douglasian cultural theory (Douglas and Wildavsky 1982). These survey items for individualism and egalitarianism improve on conceptual issues with other cultural theory measurement strategies (Kahan 2012), have demonstrated high predictive validity in studies of other technological risks, and are perhaps the most popular measurement approach in cultural theory (Johnson and Swedlow 2021). However, they have been shown to be facially and empirically limited, particularly because they do not incorporate the cultural values of hierarchy and fatalism (Swedlow et al. 2020).6 Second, we depart slightly from the ‘cultural cognition’ hypothesis of Kahan et al. (2007) by analyzing the effects of individualism and egalitarianism as individual constructs rather than analyzing their intersection.7

The techno-skepticism construct was created from items previously used in the literature and modified after testing in two small pilot surveys (see Supplement Section D); the final construct consisted of the following four items: ‘new technologies are more about making profits rather than making peoples’ lives better’, ‘I am worried about where all this technology is leading’, ‘technology has become dangerous and unmanageable’, and ‘I feel uncomfortable about new technologies’. The general risk aversion construct was adapted directly from Sharma (2010).

The second portion of the survey assessed opinion about AI. We first provided respondents with a brief definition of AI adapted from Zhang and Dafoe (2019): ‘Artificial intelligence (AI) refers to computer systems that perform tasks or make decisions that usually require human intelligence. AI can perform these tasks or make these decisions without explicit human instructions. Today, AI has been used in the following applications: identifying people from their photos, diagnosing diseases like skin cancer and common illnesses, blocking spam email, helping run factories and warehouses, and predicting what one is likely to buy online’. We then assessed five outcome measures separated into two groups. The first two outcomes assessed whether respondents believed that (a) they personally and (b) society more generally would benefit from AI. These outcome measures (self- and societal benefit) were intended to disambiguate respondents who were supportive or apprehensive about AI use because of its perceived effect on their own lives from respondents who were excited or concerned about its effects on society at large. The remaining three outcomes assessed, again on five-point Likert scales, support for whether AI should be (a) ‘use[d]’, (b) ‘carefully managed’, and (c) ‘regulated by the government’, language adapted from Zhang and Dafoe (2019). The differentiation of management and regulation was intended to better disambiguate opinion on whether some form of AI governance should occur from opinion on who is best suited to perform this governance. This distinction is particularly salient in light of impending regulatory efforts and ongoing debates on the comparative merits of self-regulation, soft law, and formal government regulation.

These five outcome measures, which assessed opinion of AI in general, were repeated for each of the six AI application contexts described below. Before answering survey items for each application, respondents were provided with two-sentence vignettes describing the potential benefits and harms of AI use in that context (see below). To reduce participant fatigue in the US public (Lucid) sample, each respondent was provided with only three of the six contexts, so that the sample size for each of the six specific AI contexts in the US public sample was N ≈ 3,524/2. The expert respondents, who we anticipated would suffer less fatigue, each provided data for all six contexts. The full survey instrument is contained in Supplement Section C.

3.3 AI application contexts

We assessed our five outcome variables (perceived self-benefit, perceived societal benefit, and support for use, ‘careful management’, and ‘regulat[ion] by the government’) for AI in general and in the context of six policy-relevant application contexts. Before being asked about AI in general, participants were provided a brief definition of AI adapted from Zhang and Dafoe (2019) (see above). Before being asked about each context, participants were provided a two-sentence vignette describing both potential benefits and concerns about the use of AI in that context. The points highlighted in each vignette were chosen in an attempt to reflect arguments present in typical discourse about AI, particularly those that may associate affective qualities with the application:

• Predictive policing: ‘Some police departments use AI to predict where crime is likely to occur, helping them decide where to deploy their resources. But civil rights groups and some researchers argue that these AI systems simply increase arrests in minority neighborhoods without actually reducing crime.’
• Economic/labor impact: ‘AI systems are likely to automate many tasks. Some think that these AI systems will make work less tedious and produce higher standards of living. Others believe that these AI systems will increase unemployment and inequality.’
• Medical systems: ‘AI-powered medical systems can detect diseases earlier and more accurately than human doctors. But some fear that these AI systems could occasionally produce incorrect results without doctors understanding why.’
• Autonomous vehicles: ‘AI-powered self-driving cars could save lives by reducing traffic accidents caused by human error. But some are concerned that the AI systems in self-driving cars are vulnerable to malfunctioning or being hacked.’
• Personalization: ‘AI systems can provide personalized news, social media content, and product recommendations using data collected from users. But some worry that this can undermine individual privacy and lead to misinformation and political polarization.’
• Autonomous weapons: ‘Lethal autonomous weapons controlled by AI systems could improve our national security while putting fewer service members in danger. But some worry that AI-powered weapons could be dangerous or lead to a reckless arms race.’

Supplement Section B contains a more detailed discussion of each application context along with tables summarizing the impact of sociodemographic and cultural factors on support for AI in each context.

3.4 Survey administration and attention model

The US public (Lucid Theorem) survey ran from 3 May 2021 to 30 May 2021, with most responses collected from May 3 to 5. Based on recent research on the Lucid platform (Aronow et al. 2020; Peyton et al. 2021), we anticipated that pandemic-induced structural changes in populations completing online surveys might result in reduced effect sizes. The expert (master’s student) survey ran from 28 April 2021 to 8 May 2021. Two pilot surveys (N = 50 and 150) were administered on 22 March 2021 and 1 April 2021 (see Supplement Section D).

Respondent attention is a concern when using online survey data. Following the recommendations of Berinsky et al. (2019), we assessed participant attention using four attention check questions: three simple grid-type attention checks and one stand-alone attention check. We modeled respondent attention using an item response theory model similar to that used by Berinsky et al. (2019). Specifically, we used the standard two-parameter Rasch model:

p(y_ij = 1) = exp(a_j (θ_i − b_j)) / (1 + exp(a_j (θ_i − b_j)))        (1)

where y_ij denotes whether the ith participant correctly answered the jth attention check question, a_j denotes the discriminability of the jth attention check question, θ_i denotes the ith participant’s attention, and b_j denotes the difficulty of the jth attention check question. Inattentive respondents were defined as those in the bottom quartile of attentiveness {θ_i} (computed across the combined US public/expert sample). The US public sample was less attentive overall (two-tailed t-test on mean attention θ_i: P < 0.001); 86.1 per cent of the expert sample was retained in the attentive subsample, compared to 73.7 per cent of the US public sample. We expected that including inattentive respondents in our analysis would reduce effect sizes but that excluding them would bias results: respondent attention has been found to associate with characteristics such as age, gender, and education (Berinsky et al. 2014) and may thus influence outcomes. All results reported in this paper are therefore based on analyses that retained the complete sample. As a robustness check, these results are reproduced in Supplement Section F with inattentive respondents removed. Overall, the differences between the full-sample and attentive-subsample results were minor.8

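A sketch of this attention model is given below. It assumes the pooled attention-check responses sit in a data frame attention_df with one 0/1 column per check; the ltm package is used here only as one convenient implementation of the two-parameter model in Eq. (1), and the object names are placeholders.

    # Sketch of the two-parameter IRT attention model in Eq. (1).
    # attention_df: rows = respondents (both samples pooled), columns = 0/1 attention checks.
    library(ltm)

    fit_2pl <- ltm(attention_df ~ z1)   # estimates discriminability a_j and difficulty b_j per item
    summary(fit_2pl)

    # Empirical Bayes estimates of each respondent's attentiveness theta_i
    theta <- factor.scores(fit_2pl, resp.patterns = attention_df)$score.dat$z1

    # Flag the bottom quartile of attentiveness as inattentive
    inattentive <- theta <= quantile(theta, probs = 0.25)
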
3.5 Structural equation model and estimation

We used R version 1.3.9 and lavaan version 0.6-9 (Rosseel 2012) with the default (NLMINB) optimizer to fit the SEMs defined in our analysis. Because outcome measures and cultural values were measured with Likert-scale (ordinal) items, we used the mean- and variance-adjusted weighted least-squares estimator with polychoric correlations (Li 2016) and robust standard errors. Polychoric correlations were also used to compute construct reliabilities. For identifiability, cultural construct variances were fixed to unity and each factor loading was allowed to vary. The only instances of missing data in our survey involved context-specific outcome measures (as only half of the US public sample was asked about each application). The metrics and thresholds we used to assess the quality of fit were preregistered and stemmed from typical recommendations (Kline 2016).

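In lavaan, these estimation choices correspond roughly to the call sketched below, assuming the survey responses are in a data frame survey_df and model_S is the syntax sketched at the end of Section 2.2; the vector of ordinal item names is again a placeholder.

    # Sketch of the estimation settings described above. Declaring the Likert items as ordered
    # makes lavaan use polychoric correlations; estimator "WLSMV" is the mean- and
    # variance-adjusted weighted least-squares estimator with robust standard errors; and
    # std.lv = TRUE fixes the latent (cultural construct) variances to one for identifiability.
    library(lavaan)

    ordinal_items <- c(paste0("ind", 1:4), paste0("egal", 1:4), paste0("tech", 1:4),
                       paste0("risk", 1:4), "benefit_self", "benefit_soc",
                       "support_use", "support_mgmt", "support_reg")

    fit_S <- sem(model_S, data = survey_df, ordered = ordinal_items,
                 estimator = "WLSMV", std.lv = TRUE)

    fitMeasures(fit_S, c("chisq.scaled", "df", "cfi.scaled", "rmsea.scaled", "srmr"))
    residuals(fit_S, type = "cor")   # correlation residuals used to assess local fit
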
Figure 2. Outcome measures after respondents are presented with a general (context-free) definition of AI.

4. Results

4.1 Public and expert attitudes differ in key areas

Compared to the US public, experts were more confident and positive in their attitudes toward AI (Fig. 2). Experts were much more likely to perceive self-benefit (1.04 points on a five-point Likert scale, Welch’s unequal variances t-test: P < 0.001) and societal benefit (0.82 points, P < 0.001). While a plurality of the US public also believed that AI would benefit both them personally (45.2 per cent) and society at large (44.8 per cent), few professed strong opinions. Similarly, our expert sample was much more likely to support the general use of AI than the more ambivalent US public (1.17 points, P < 0.001), with almost no experts expressing opposition to AI use. In both samples, support for AI use was strikingly similar to perceived benefit (Supplement Fig. 14), a pattern we explore in more detail below.

Recent surveys have found strong public support for the ‘careful management’ of AI (European Commission 2017; Smith 2018a; Zhang and Dafoe 2019; Selwyn et al. 2020) but differing opinions on whether this management should be performed by researchers, technology companies, nonprofit groups, or the government (Zhang and Dafoe 2019). To disentangle attitudes toward AI governance in general from attitudes toward government regulation, we asked respondents both whether AI should be ‘carefully managed’ and whether AI should be ‘regulated by the government’, phrasing adopted from Zhang and Dafoe (2019). We found that both experts and the US public were highly supportive of ‘careful management’ and generally supportive of government regulation (Fig. 2). Notably, we found similar support for government regulation between experts and the public (0.02 point difference, P = 0.715), despite experts being more likely to support management (0.28 points, P < 0.001). Past surveys have found that, unlike the public, AI experts place more trust in scientific and international organizations than their own government to ‘develop and manage’ AI (Zhang et al. 2021), suggesting that compared to the public, experts may be more inclined to support soft law approaches to governance (see, e.g. Marchant et al. (2020)).

The public’s support for the use and governance of AI, shown in Fig. 3, was largely similar across contexts—a notable finding that persisted when the analysis was restricted to only attentive respondents (Supplement Fig. 24; see Section 3.4 for the definition of attentive subsample). By contrast, experts’ views were more nuanced, varying much more significantly across contexts. While expert and public attitudes trended in the same direction in many contexts, they featured distinct splits in others. For example, both experts and the public were wary of AI use in autonomous weapons, recommendation systems, and predictive policing, but experts’ overwhelming support for AI use in autonomous vehicles, medical diagnosis, and automating labor stood in stark contrast to the much more divided public.

Our results suggest that greater public awareness about the unique impacts of AI in different applications may be necessary to fully empower the public to share its perspectives on AI use and governance. The cross-context divides we find also suggest that limited support for the regulation of AI in general (among both experts and the public) may belie support for tailored government intervention in specific application contexts such as autonomous weapons.

Figure 3. Comparison of support outcome measures between samples and among common AI application areas. Before responding, participants were provided two-sentence vignettes, listed in Section 3.3, describing arguments for and against the use of AI in the context.

4.2 Cultural factors are strongly informative of attitudes

What drives these expert–public divides that persist across outcome measures and application contexts? These gaps may be due to differences in technical knowledge or due to socialization during AI training. However, they may also be driven by differences in sociodemographics and cultural values. Our expert and US public samples differed significantly on all sociodemographic variables (Table 1) as well as across all four cultural values (P’s < 0.001): experts were less individualistic (0.53 points), less techno-skeptical (0.51 points), less risk averse (0.27 points), and more egalitarian (0.26 points) (see Supplement Fig. 7).

To better understand how these factors inform attitudes, we used the preregistered SEM shown in Fig. 1 to explore the relationship between sociodemographic variables, cultural values, perceived benefit, and support for AI use and governance. (The size of our expert sample limited this SEM analysis to the US public.) We first assessed the reliability and fit of the cultural value components of the model. The fit in each sample (evaluated using thresholds defined in our preregistration) was adequate to good, construct reliabilities were satisfactory, constructs loaded appropriately onto each item (with similar loadings in each sample), and model correlation residuals indicated adequate local fit (Supplement Tables 11, 12, and 15).9 To assess the impact of cultural values on our outcome variables, we compared the fit of S, the full SEM shown in Fig. 1, with S\C, the nested model that constrains to zero the paths from cultural values to outcome measures. We found consistent global (Table 2) and local (Supplement Tables 21 and 22) evidence that the inclusion of pathways from cultural values to our outcome variables produced better model fit, indicating that the four cultural values we considered were indeed informative factors in explaining attitudes toward AI.

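The comparison between S and S\C can be carried out with a scaled chi-square difference test, sketched below. Here model_S_noculture stands for a copy of the model syntax with the paths from the four cultural constructs to the outcome variables removed; it and the other object names continue the placeholders used in the earlier sketches.

    # Sketch of the nested-model comparison: fit the constrained model and compare global fit.
    library(lavaan)

    fit_S_noculture <- sem(model_S_noculture, data = survey_df, ordered = ordinal_items,
                           estimator = "WLSMV", std.lv = TRUE)

    lavTestLRT(fit_S, fit_S_noculture)   # scaled chi-square difference test (Delta chi-square)
    sapply(list(S = fit_S, S_noC = fit_S_noculture), fitMeasures,
           fit.measures = c("cfi.scaled", "rmsea.scaled", "srmr"))
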
We next fit the full SEM shown in Fig. 1 to data from the US public sample. Fit statistics are shown in Table 2 along with statistics for the two modified (nested) models used to evaluate the roles of cultural values and perceptions of benefit. The full model achieved the standard thresholds for adequate fit listed in our preregistration.10 Correlation residuals, shown in Supplement Table 22, generally indicated satisfactory local model fit.11 Finally, we observed relatively small covariances between support outcomes, consistent with a lack of highly influential unmodeled common causes of these variables. It is important to note that our SEM represents hypothesized relationships between variables and that ‘equivalent’ models with different hypothesized relationships can produce the same covariance structure (MacCallum et al. 1993). Thus, while the fit statistics in Table 2 provide circumstantial evidence in support of our SEM, the primary evidence for the model’s correctness is based on our theoretical arguments above.

Table 2. Fit statistics for the complete SEM S and two nested models used for analysis. χ2: model chi-square test, along with model degrees of freedom and P-value, CFI: comparative fit index, RMSEA: root mean squared error of approximation, SRMR: standardized root mean square residual, Δχ2: chi-square difference test (compared to full model S). R2 values show coefficients of determination for the five endogenous variables in the model. The complete model S achieved adequate-to-good global fit, with CFI and RMSEA indicating adequate fit and SRMR indicating good fit. Reduced models S\C (used to assess the evidence for paths from cultural values to support outcomes) and S\B (used to assess the evidence for paths from perceived benefit to support outcomes) achieved adequate fit on RMSEA and SRMR but poor global fit on CFI.

                 Model fit statistics                                                            R2 (benefit)      R2 (support)
                 χ2 (df, P)            CFI    RMSEA (90% CI)        SRMR   Δχ2 (Δdf, P)          Self    Soc.      Use    Mgt.    Reg.
    Model S      4650.2 (350, <0.001)  0.903  0.059 (0.058, 0.061)  0.034  –                     0.274   0.262     0.470  0.235   0.201
    Model S\C    8204.2 (370, <0.001)  0.822  0.078 (0.076, 0.079)  0.094  1764.8 (20, <0.001)   0.134   0.110     0.461  0.090   0.084
    Model S\B    5554.4 (356, <0.001)  0.882  0.064 (0.063, 0.066)  0.047  1173.4 (6, <0.001)    0.552   0.544     0.774  0.220   0.190

Figure 4. Inferred path coefficients (with 95 per cent confidence intervals) for full SEM S fit with the US public data. Gender, race/ethnicity, and work type were coded as binary; education, household income, and urban residence were coded as four-level variables; age group and political orientation were coded as five-level variables; and cultural constructs and perceived benefit variables were standardized. See Table 2 for fit statistics.

Inferred SEM path coefficients are shown in Fig. 4. Overall, the results indicated that the cultural values of individualism, egalitarianism, risk aversion, and techno-skepticism were strongly predictive of attitudes toward AI. The influence of sociodemographic variables also contained interesting patterns. Like past surveys (Morning Consult 2017, 2018; Zhang and Dafoe 2019), we found that those who were male, younger, better educated and had higher income both perceived more benefit from AI and were more supportive of its use. Yet we found that support for government regulation was—perhaps surprisingly—often divorced from perceived benefit and support for use and more directly informed by sociodemographic and cultural variables. For example, older and more conservative respondents were more hesitant about AI use. However, despite perceiving less benefit from AI and expressing less support for its use, they were also less supportive of the government regulating AI. Similarly, those who held cognitive/analytical jobs, lived in urban areas, and had higher incomes perceived greater self-benefit from AI and were more supportive of its use. However, these groups were also more likely to believe that AI should be carefully managed and regulated.

4.3 Cultural determinants of attitudes differed in some applications

Developing effective ‘culturally pluralized’ (Johnson and Swedlow 2021) strategies for science communication and governance requires an understanding of how cultural values affect attitudes toward specific technologies and their applications. While previous research has evaluated how cultural values inform support for other emerging technologies, it is not clear how—or whether—these results generalize to applications of AI.

Notably, our results found that some effects of cultural values (Fig. 4) had reversed directions from the patterns observed for other technologies. For example, both individualism and egalitarianism predicted increased perceptions of self-benefit from AI—a contrast with many other technologies, where egalitarianism has been found to associate with lower support.12 This reversed effect of egalitarianism suggests that AI may be perceived differently from many other technological risks, perhaps due to perceptions that automated systems can temper certain hierarchical social structures that egalitarians perceive as harmful. If this perception does indeed hold among the public, however, it stands in sharp contrast to the increasing realization among AI developers that bias and fairness are significant problems in automated decision-making systems (Mehrabi et al. 2021) and evidence that awareness of these problems negatively affects perceptions of their performance (Schiff et al. 2021).

Figure 5. Comparison of cultural values’ effects on support for AI contexts and other technologies. Markers show ordinary least-squares regression estimates and 95 per cent confidence intervals when controlling for sociodemographic variables. For support for AI contexts, respondents were asked whether they supported the use of AI in a particular application. For other technologies, respondents were asked whether the technology’s benefits outweighed its risks. Each outcome was measured on a five-point Likert scale; cultural value constructs were standardized and inferred from a confirmatory factor analysis model. This analysis was exploratory.

We used a linear regression model to compare the effects of cultural values on support for AI use between experts and the public. Our use of linear regression rather than SEM was due to the limited size of our expert sample; this portion of the analysis was exploratory (i.e. not preregistered). We found that the direction of cultural values’ effects on support for AI use was generally consistent across AI application areas (Fig. 5) and again found that experts’ opinions were more nuanced than the public’s. Supplement Figs. 15–16 provide additional evidence for this phenomenon, showing that experts’ attitudes toward both AI and other technologies varied more than the public’s, a pattern that persisted when the analysis was restricted to attentive respondents. This evidence suggests that the public’s attitudes toward AI may evolve considerably as they become more informed, underlining the importance of public education on the broader impacts of AI use in specific applications. These results also revealed patterns across the six contexts we explored. For example, attitudes toward the predictive policing and autonomous weapons application contexts were similar, particularly among experts (Supplement Fig. 15).

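One such exploratory regression is sketched below: cultural value scores are taken from a confirmatory factor analysis (as described in the Fig. 5 caption) and support for AI use in a single application is regressed on those scores plus sociodemographic controls. Names are placeholders, and the sketch assumes complete cases so that the factor scores align row-by-row with the survey data.

    # Exploratory OLS sketch behind Fig. 5 (placeholder names; one application context shown).
    library(lavaan)

    cfa_model <- '
      individualism     =~ ind1 + ind2 + ind3 + ind4
      egalitarianism    =~ egal1 + egal2 + egal3 + egal4
      techno_skepticism =~ tech1 + tech2 + tech3 + tech4
      risk_aversion     =~ risk1 + risk2 + risk3 + risk4
    '
    cfa_fit <- cfa(cfa_model, data = survey_df, std.lv = TRUE)
    scores  <- as.data.frame(lavPredict(cfa_fit))   # standardized factor scores

    reg_df <- cbind(survey_df, scores)
    ols <- lm(support_policing ~ individualism + egalitarianism + techno_skepticism +
                risk_aversion + age + gender_male + education + income + political + urban,
              data = reg_df)
    summary(ols)      # point estimates corresponding to the markers in Fig. 5
    confint(ols)      # 95 per cent confidence intervals
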
To examine whether the factors driving attitudes toward these applications were also similar, we fit a multigroup version of the SEM shown in Fig. 1 to data from the US public sample. This multigroup SEM facilitated between-context comparison by allowing path coefficients to differ for each context while constraining the model aspects that defined cultural values to be constant. Some notable patterns emerged from this model, for which inferred parameters are shown in Supplement Section E.4.2. We indeed found key sociodemographic and cultural variables whose impact on attitudes toward predictive policing and autonomous weapons differed from their impact on other contexts. For example, older and politically conservative respondents were less supportive of AI in general but were more supportive of AI use for predictive policing and autonomous weapons. The impact of egalitarianism on support for AI use in these two contexts similarly differed from its impact on most other contexts. More broadly, there were substantial between-context differences in the impact of age on support for AI. For example, older respondents were much less supportive of the use of AI in autonomous vehicles and recommendation systems than they were of the use of AI for medical diagnosis. See Supplement Section B for tables highlighting where these results matched expectations based on prior literature.

4.4 Perceived benefit substantially informs support for AI use—but not for management and regulation

Our SEM (Fig. 1) hypothesized that perceived self- and societal benefit drove support for AI use and governance. To assess the impact of perceived benefit on these support outcomes, we compared the full SEM S to a nested model S\B, in which paths from the perception of benefit outcomes to support outcomes were fixed to zero. Overall, global and local comparisons of S\B and S provided mild support for the existence of an impact of perceived benefit on our support outcomes (Table 2; Supplement Tables 22 and 28).13
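To make the logic of this kind of nested-model comparison concrete, the sketch below works through a chi-square difference (likelihood-ratio) test with purely hypothetical fit statistics rather than the values estimated in this study: the constrained model S\B has more degrees of freedom, and a significant difference favors retaining the less constrained model S.

```python
from scipy.stats import chi2

# Hypothetical (illustrative) fit statistics; not the values estimated in this study.
chisq_full, df_full = 5180.0, 1460        # full model S (benefit -> support paths free)
chisq_nested, df_nested = 5246.0, 1472    # nested model S\B (those paths fixed to zero)

delta_chisq = chisq_nested - chisq_full   # the constrained model cannot fit better
delta_df = df_nested - df_full
p_value = chi2.sf(delta_chisq, delta_df)  # small p favors retaining the freer model S

print(f"chi-square difference = {delta_chisq:.1f} on {delta_df} df, p = {p_value:.4g}")
```

When models are estimated with robust or ordinal estimators, a scaled version of this difference test is typically reported instead.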
As shown in Fig. 4, perceived benefit (to both the respondent individually and society at large) predicted substantially greater support for AI use but had much less impact on attitudes toward its governance. Indeed, the total effect of sociodemographic and cultural variables on support for AI use was split roughly evenly between direct and indirect effects (Supplement Fig. 10). By contrast, support for AI management and regulation was impacted much less by indirect effects. These findings were generally consistent across AI contexts (Supplement Tables 30–36).
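The direct/indirect split referred to here follows the standard path-analytic decomposition, in which an indirect effect is the product of the coefficients along a mediated path and the total effect is the sum of the direct effect and all indirect effects. A minimal numerical sketch, using made-up standardized coefficients rather than the coefficients estimated in this study:

```python
# Illustrative path-analytic effect decomposition with made-up standardized
# coefficients (not the coefficients estimated in this study).
a_self, a_soc = 0.20, 0.25   # predictor -> perceived self-/societal benefit
b_self, b_soc = 0.30, 0.35   # perceived self-/societal benefit -> support for AI use
c_direct = 0.15              # predictor -> support for AI use (direct path)

indirect = a_self * b_self + a_soc * b_soc   # sum of products along mediated paths
total = c_direct + indirect

print(f"direct = {c_direct:.3f}, indirect = {indirect:.3f}, total = {total:.3f}")
print(f"share of the total effect that is mediated: {indirect / total:.0%}")
```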
Experts' attitudes were again more nuanced than the public's: we found much larger gaps between perceived self-benefit and perceived societal benefit among experts than among the public. Indeed, in the US public sample, we did not find statistically significant differences between perceived self- and societal benefit in any application (P's > 0.123).

Prior literature has conjectured that AI developers may engage in a form of motivated reasoning that makes them more likely to believe that AI has a positive impact on society (Baum 2016) when it is professionally advantageous for them. We find mixed evidence for this theory. Consistent with this motivated reasoning conjecture, we found that experts were indeed more likely than the US public to believe that AI was beneficial to society (0.82 points on a five-point Likert scale; P < 0.001). Our expert sample was also much more likely than the public to believe that AI was beneficial to society in applications with significant commercial opportunities such as automating labor (0.76 points; P < 0.001) and self-driving cars (1.29 points; P < 0.001). However, AI experts differed from the public on almost every sociodemographic and cultural trait, typically in ways that our results suggest would predict higher support for AI use (Fig. 2(b)). Moreover, experts were somewhat less likely to report that AI-based recommendation systems—a context in which AI experts as a whole have a large commercial interest—were beneficial to society (0.16 points; P = 0.020). This counterexample suggests that AI experts' attitudes might be more substantially driven by underlying sociodemographic and cultural traits rather than by a motivated reasoning mechanism related to their professional orientation, although we would expect that these results may differ in samples of other types of AI experts.

5. Conclusion and discussion

5.1 Summary of key results

The complex and subtle sociotechnical concepts inherent to AI make it challenging to design effective governance and science communication strategies that are informed by and respectful of diverse public views and values. In light of these challenges, this work evaluated underlying factors, values, and mechanisms that influence attitudes toward AI. We explored the role of sociodemographic variables; the impact of the cultural values of egalitarianism, individualism, techno-skepticism, and risk aversion; the potentially moderating effects of perceived self- and societal benefit; differences between experts and the public; and differences across prominent policy-relevant applications of AI.

One consistent finding of our study is that the US public's attitudes toward AI were much less nuanced than experts'. Compared to experts, the public's views on the use, management, and regulation of AI were largely similar across application areas, and the public reported perceiving little distinction between how AI might affect them personally and how it might affect society more generally. We did, however, find greater support for government regulation in applications such as autonomous weapons and predicting crime, indicating that while recent suggestions for soft law approaches to AI governance (Marchant et al. 2020) may be more likely to find public and expert support in the USA, ambivalence toward broad AI regulation might belie support for 'hard' legally-binding regulatory actions narrowly targeted to certain contexts.

Second, we found that the four cultural values we studied were meaningful predictors of public attitudes toward AI. The relationships between cultural values and attitudes are similar both across application contexts and between experts and the US public (Fig. 5). For example, individualism tended to predict greater support for AI use while techno-skepticism tended to predict reduced support for AI use. These similarities—particularly between experts and the public—advance the hypothesis that cultural values are a useful tool for understanding attitudes toward AI and how these attitudes may evolve. Thus, research on a larger set of cultural values, performed in different regions and with different populations, may be a valuable tool for creating participatory and culturally sensitive AI applications and governance strategies.

A third key finding of our study is that although cultural values had significant impacts on support for AI adoption and governance, these cultural values did not impact attitudes in the same way that they impact attitudes toward many other technologies. For example, egalitarianism and risk aversion are traditionally associated with skepticism toward the use of emerging technologies (Kahan et al. 2007); by contrast, we find that these values predicted greater support for AI. This implies that AI's impact on society may be perceived differently from the impacts of other technologies. Governance and public dialogue strategies may be more successful if they take these novel aspects of AI into account. Indeed, previous work has found that science communication is most effective when it tailors its messages to the specific cultural values held by the public (Kahan et al. 2011; Lupia 2013). The relationships we find between specific cultural values and specific AI applications (shown in Fig. 5) suggest which potential dimensions and applications could be emphasized in outreach efforts to more effectively build credibility with the public and honor public values.

5.2 Theoretical implications and contrasts with prior literature

The satisfactory fit of our SEM serves as a proof of concept for the benefits of using a combination of sociodemographic and cultural variables in modeling attitudes toward AI and suggests that a similar approach may be fruitful for
studying public attitudes toward other culturally-polarized technologies. In addition, the presence of both strong direct effects and strong indirect effects in our fit model provides tentative (but not conclusive) support for the value of considering self- and societal benefit as mediating variables in understanding attitudes toward technology.

The SEM used in this study shares some features with popular frameworks in the broader technology acceptance literature, such as the Technology Acceptance Model (see Section 2.2). Our work also carries implications for this class of models, providing evidence that factors adopted from cultural theory might also be successfully incorporated as external factors in models of attitudes toward (and use of) other technologies.

Finally, our work provides evidence for cultural theory more broadly, although survey operationalization details (discussed in Section 2.1) suggest that some caution is warranted when interpreting these generalizations. First, the large and statistically significant effects of cultural values on public and expert attitudes toward AI we identified provide evidence in favor of the applicability of cultural theory to attitudes toward AI and toward technology more generally. Enumerating and categorizing values that shape attitudes is particularly valuable for understanding general-purpose technologies such as AI that have multiple overlapping impacts on society, and our work suggests that cultural theory may provide a useful framework for such an effort. Second, our results in Fig. 5—which depict associations of cultural values across multiple technologies and AI application contexts—offer a basis for comparing the impacts of the four cultural values we studied here on a variety of technologies and use cases.

Previous work has found that those who are more comfortable with AI are more likely to be young, male, and educated and to live in urban areas (Morning Consult 2017, 2018; Zhang and Dafoe 2019; United Kingdom Government 2019; Johnson and Tyson 2020; Morning Consult 2021). Our results reflect these divisions. Moreover, we found that across most contexts, these demographic traits had positive and statistically significant effects on support for AI not only directly but also indirectly through paths mediated by perceived self- and societal benefit. Our findings also largely align with prior evidence that individuals with more education, white-collar jobs, and higher incomes are more likely to perceive both self- and societal benefit from AI (Morning Consult 2017; Smith and Anderson 2017; Gallup, Inc 2018; Morning Consult 2018; Zhang and Dafoe 2019).14

Our results contrast most sharply with previous findings that blue-collar workers, those in urban areas, and political liberals are most likely to report believing that AI will exacerbate inequality and lower employment (Morning Consult 2017; Gallup, Inc 2018). In seeming contrast, we found that those living in urban areas and political liberals tended to report perceiving a benefit to themselves and to society from AI, both in general and in the economic context of labor automation.

5.3 Lessons for public engagement in AI governance

Our study was motivated by the near-universal calls for diverse, interdisciplinary, and public participation in AI governance from global industry, government, and civil society actors. Despite these calls, there are persistent concerns about opaque policy processes vulnerable to industry capture, culturally-insensitive uses of AI techniques, and shallow or ineffectual participatory mechanisms. How can those interested in inclusive governance bridge this gap? Our work both provides insights and suggests challenges that may face even well-intentioned efforts to develop participatory structures.

A first challenge is the sizable gap between the significant public support we find for 'careful management' of AI and the more limited support for 'government regulation' (Fig. 3), a finding that echoes prior research, particularly in the US context (Zhang and Dafoe 2019). However, a growing international expert consensus—including among corporate actors—has articulated a need for AI regulation, and regulatory efforts continue to develop. This reveals a fundamental tension in how public opinion should be respected in AI governance (Dragojlovic 2014). Should regulators take a technocratic approach and base regulatory strategies on the views of experts, even in the face of skepticism from some quarters of the public? Or should regulators, presented with equivocal public support for US government regulation of AI, limit the scope of their involvement even if they believe that public attitudes may evolve significantly as the impacts of AI become more apparent?

One response to this tension that has been embraced by a number of participatory design and governance strategies is to promote public education and genuine public–expert dialogue as part of outreach efforts. In these methods, trained facilitators, researchers, or policymakers may initiate public engagement experiences by providing information about the stakeholders, benefits and costs, policy implications, and trade-offs that can help the public make more informed judgments. The public–expert gaps identified in our study point to the value of these cooperative strategies.

Importantly, these dialogues are not unidirectional; discussion is structured and restructured by the public's situated experiences and values. Examples of relevant approaches can be found both in long-standing participatory design strategies (e.g. Multi-Criteria Decision Analysis (Triantaphyllou 2000) and the Delphi method (Landeta 2006)) and in strategies formulated or adapted specifically for science and technology (e.g. the Citizen Visions on Science, Technology and Innovation method (Gudowsky et al. 2012), the Reflect! platform (Hoffmann 2020), and Deliberative Mapping (Burgess et al. 2007)). These engagement methods can elicit qualitative and quantitative data to inform policy preferences, pointing not only to general values but also guiding specific choices (Mavrommati et al. 2021). Engage2020's Action Catalogue database of participatory strategies (http://action-catalogue.eu/search) provides one starting point.
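As one concrete illustration of how elicited preferences can guide specific choices, the toy sketch below applies the simplest multi-criteria approach, a weighted sum, to hypothetical governance options; the criteria, weights, and scores are invented for illustration and are not drawn from this study or from any of the methods cited above.

```python
# Toy weighted-sum multi-criteria scoring of hypothetical AI governance options.
# The criterion weights stand in for values elicited from participants; none of
# these numbers come from this study.
criteria_weights = {"equity": 0.35, "innovation": 0.25, "privacy": 0.25, "cost": 0.15}

options = {  # each option scored 0-1 on every criterion (hypothetical)
    "industry self-regulation": {"equity": 0.3, "innovation": 0.8, "privacy": 0.4, "cost": 0.9},
    "third-party audits":       {"equity": 0.6, "innovation": 0.6, "privacy": 0.7, "cost": 0.5},
    "binding regulation":       {"equity": 0.8, "innovation": 0.4, "privacy": 0.8, "cost": 0.3},
}

def weighted_score(scores: dict[str, float]) -> float:
    # Weighted sum across criteria: higher is preferred under the elicited weights.
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

for name, scores in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```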
Our results also point to specific contexts and value orientations in which further unpacking the complex factors driving attitudes toward AI governance may be particularly useful. We find, for instance, that in AI applications like predictive policing and autonomous weapons, experts are much more likely than the public to support government regulation of AI. Moreover, in these contexts, there are statistically significant differences between experts and the public in how cultural values affect attitudes. For example, our finding that egalitarianism predicts greater public support for AI-based weaponry but less expert support may suggest that efforts in the AI community to advocate against lethal autonomous weapons (e.g. Future of Life Institute 2018) may not have reached the public eye. Similarly, our study's finding that risk
aversion predicted greater public support for AI-based recommendation systems but less expert support may suggest that increased public awareness about the potential benefits and harms of these systems could be particularly impactful.

However, our findings also caution that in many application domains, increased public awareness of AI's impacts might not produce major changes in attitudes toward AI governance. We find that public support for AI governance is relatively independent of arguably more malleable factors like perceived self- and societal benefit from AI. Instead, our results suggest that public support for AI governance is more strongly related to factors reflective of broader regulatory preferences such as political orientation and individualism (Fig. 4).

The contrast our study finds between the US public's desire for AI governance and skepticism of government involvement suggests an opportunity for governance strategies. A major focus of AI policy discourse is 'trustworthy AI', an attempt to shape the ways AI is developed and applied in an effort to promote user trust. Our results reveal an additional need for trustworthy AI governance. Previous research has indicated that the US public places higher trust in, for instance, military and higher education institutions to manage AI than in the federal government at large (Zhang and Dafoe 2019; Morning Consult 2021). Identifying the aspects that have built trust in these institutions could help government and industry actors demonstrate their own trustworthiness in AI governance. Alternatively, governments could leverage these institutions to develop and implement governance strategies, drawing on trusted local authorities and civil society actors to develop, communicate, and administer aspects of AI governance.

In turn, researchers can help identify participatory strategies, messages, and governance approaches that promote (and deserve) public trust. Little is currently known about which strategies (e.g. third-party conformity assessments, labeling, industry standards, or human rights or well-being impact assessments) are most likely to foster trust. In short, there are many opportunities to promote inclusive AI governance for both AI developers and formal governance bodies. However, the time horizon for doing so is not unlimited. AI systems with major impacts are already commonplace, and a variety of national and international regulatory efforts are currently underway. Understanding effective strategies for trustworthy AI governance—and the role of public views in these efforts—will be a pressing need in the coming years.

5.4 Limitations and future work

Our research has several limitations. The four cultural values used in our model were selected because of their effects in governing public opinion on other technologies, but they may not, of course, be either root or comprehensive causes of differing attitudes toward AI; many other sources of cultural diversity are important to respect when designing AI governance strategies. The broader literature on technology acceptance (e.g. the many variants of the Technology Acceptance Model (Marangunić and Granić 2015)) describes many examples of factors that may also be influential in the formation of attitudes toward AI. Moreover, while previous work has posited that the interaction between cultural values may drive some differences in risk perception (Kahan et al. 2011), our SEM analysis strategy does not analyze interactions between variables.

A second limitation concerns our descriptions of the six AI application contexts used in our survey: while we attempted to faithfully reflect the way each application is framed in public discourse, it is likely that this discourse will evolve in ways that change their associations with particular cultural values. Respondents also differed in their familiarity with AI; knowledgeable participants (and the expert sample in particular) likely considered information from previous knowledge about the AI application contexts beyond what was provided in our vignettes, limiting fair comparisons between samples.

Third, while we believe our graduate student sample provides one informative view on the beliefs of AI experts, this group differs from other samples of AI experts, such as those studied by Aiken et al. (2020) or Zhang et al. (2021). Future work should explore how our findings generalize to groups involved in other aspects of AI development and governance. Our US public sample also suffers from the typical limitations of online surveys: although respondents were representative of US adults on age, gender, race, and region, online samples tend to differ from the general population in ways not captured by these variables.

This work represents the first step toward understanding underlying mechanisms governing expert and public attitudes toward AI. Future research should extend these findings by exploring how attitudes differ in non-US (and non-Western) contexts (IEEE 2019; Sambasivan et al. 2021). Despite their limitations, the four cultural values we used here provide a tool for quantitatively exploring cross-cultural differences in values and attitudes relevant to AI governance; results may help explain emerging transnational political differences in AI governance strategies. It would also be valuable to study other groups of AI experts and practitioners, more fine-grained conceptions of governance than management and regulation, and other narratives and frames for the application areas we considered.

Supplementary data

Supplementary data is available at Science and Public Policy Journal online.

Funding

We gratefully acknowledge funding from the National Defense Science & Engineering Graduate Fellowship (M.R.O.), and from the National Science Foundation via grants CCF-1717530 (L.R.V.), CCF-1350954 (M.R.O. and C.J.R.), and CCF-2107455 (C.J.R. and M.A.D.).

Conflict of interest statement. None declared.

Acknowledgements

We thank M.D. Hunter and R.P. Barke for helpful discussions and feedback on earlier versions of this work, Georgia Tech CS 6601 course staff for assistance in fielding the OMS student survey, and our survey participants. All errors are the responsibility of the authors.

Notes

1. Prior work has studied other samples of AI experts: Zhang et al. (2021) surveys AI researchers publishing in prestigious conferences,
while Aiken et al. (2020) surveys AI professionals in industry. See model achieves adequate fit despite a statistically significant result
Section 3.1 and Supplement Section A for more details on the from this test is consistent with standard SEM practice and our
characteristics and limitations of these samples. preregistration (Kline 2016).
2. Our survey assessed cultural values before AI was introduced 11. One notable exception was the residual variance of the support for
to avoid attitudes toward AI influencing cultural values through use variable (−0.16), whose relatively large magnitude suggested
priming effects. some caution when interpreting results such as the coefficient of
3. We consider this alternate model in Supplement Section E.1.4. determination for this variable.
4. Because we anticipated that synthetic biology was likely to be less 12. Recall from Section 3.2 that two divides in related literature limit
familiar to respondents, this technology featured a one-sentence direct comparison of our results to some other work on the impact
description. of cultural values on public attitudes toward technology. First, the
5. In two small pilot surveys using the Lucid Theorem platform constructs of individualism and egalitarianism that we adapt from
(N = 50 and N = 150; see Supplementary Materials for details), Kahan et al. (2009) do not model hierarchy and fatalism; cultural
we found that the condensed cultural cognition worldview scale elements argued to be important by the broader cultural theory
of Kahan et al. (2007), which used both positively- and negatively- literature (Johnson et al. 2020; Ripberger et al. 2012). Second, like
worded items for each construct, had poor reliability. Based on the some other literature but unlike Kahan et al. (2009), we model
results of other recent studies that found reliability issues with the individualism and egalitarianism as discrete constructs rather than
negatively-worded cultural cognition theory items (Hornsey et al. examining effects of their intersection.
2018; Johnson et al. 2020), we followed the strategy of Hornsey 13. The evidence in support of accepting S over S\B was more equiv-
et al. (2018), by restricting our preregistered final survey to four ocal than the evidence in support of accepting S over S\C . For
positively-worded items each for individualism and egalitarianism. example, while overall we found evidence in support of retaining
The resulting scales had satisfactory reliability in both the full sam- model S over S\B , one piece of evidence supported retaining S\B :
ples and attentive subsamples (Supplement Tables 11 and 39). Our the large residual variance on support for AI use in models S\C and
results, however, may not be directly comparable to other work S vanished in S\B .
that used the full scale of Kahan et al. (2007). 14. For example, we found that employment in a ‘cognitive’ role
6. Specifically, cultural theory inspired by the work of Douglas and had a particularly strong positive effect on perceived self-benefit,
Wildavsky (1982) posits that the intersection of two axes, ‘grid’ perceived societal benefit, and support for use for both AI in
and ‘group,’ define quadrants corresponding to four distinct cul- general and for AI used in labor automation, perhaps the most
tural biases: individualism, egalitarianism, hierarchy, and fatal- economically-oriented application we considered.
ism. Initial attempts to operationalize cultural theory for survey
research using these four scales found that many participants did
not uniquely belong to a single cultural bias. The cultural cog-
References
nition theory scales of Kahan et al. (2007) that we use directly Aiken, C., Kagan, R., and Page, M. (2020) ‘Cool Projects’ or ‘Expand-
measure the ‘grid’ and ‘group’ axes as hierarchy–egalitarianism ing the Efficiency of the Murderous American War Machine?’: AI
and individualism–communitarianism. This approach sidesteps the Professionals’ Views on Working with the Department of Defense,
issues of participants scoring highly on multiple cultural biases, and Washington, DC: Center for Security and Emerging Technology.
is argued to improve on the scale reliability and predictive validity ARM. ‘AI Today, AI Tomorrow: The ARM 2020 Global AI Survey’
of other approaches (Kahan 2012), but has been criticized for its (2020).
lack of inclusion of discrete hierarchy and fatalism factors (Johnson Aronow, P. M., Kalla, J., Orr, L., et al. (2020) ‘Evidence of Rising Rates
et al. 2020; Ripberger et al. 2012; Van der Linden and Conceptual of Inattentiveness on Lucid in 2020’ preprint SocArXiv Papers.
2016). See Johnson and Swedlow (2021) for a review of cultural <https://osf.io/8sbe4> accessed 28 Jul 2021.
theory’s development and its relationship to the cultural cognition Barke, R. P., Jenkins-Smith, H., and Slovic, P. (1997) ‘Risk Percep-
theory of Kahan et al. (2007). tions of Men and Women Scientists’, Social Science Quarterly, 78:
7. The ‘cultural cognition’ hypothesis of Kahan et al. (2007) posits 167–76.
that the intersection of individualism and egalitarianism define Barke, R. P. and Jenkins-Smith, H. C. (1993) ‘Politics and Scientific
identity groups that imbue attitudes toward risk with affective Expertise: Scientists, Risk Perception, and Nuclear Waste Policy’,
qualities and lead to directionally motivated reasoning. Other work Risk Analysis, 13: 425–39.
(e.g. (Johnson et al. 2020)) has also treated these factors as discrete. Barke, R. P., Silva, C. L., and Jenkins-Smith, H. C. (2006) From
8. One notable exception was the covariance between individualism Experts’ Beliefs to Safety Standards: Explaining Preferred Radiation
and egalitarianism constructs. In the full results, we found that Protection Standards in Polarized Technical Communities, Atlanta:
this negative covariance had much larger magnitude in the expert Working Paper 14 Georgia Institute of Technology School of Public
sample than the US public sample; when restricting the sample to Policy.
attentive respondents, we found the inferred covariance for the US Baum, S. D. (2016) ‘On the Promotion of Safe and Socially Beneficial
public sample was much closer to the inferred value in the expert Artificial Intelligence’, AI & Society, 32: 543–51.
sample. Belfield, H. (2020) ‘Activism by the AI Community: Analysing
9. Although cultural construct loadings were similar between sam- Recent Achievements and Future Prospects’ in Proceedings of the
ples, there were some notable between-sample differences in the AAAI/ACM Conference on AI, Ethics, and Society, pp. 15–21. New
cultural construct covariances between cultural constructs (Supple- York: ACM.
ment Tables 13 and 14). In the US public sample, techno-skepticism Berinsky, A. J., Margolis, M. F., and Sances, M. W. (2014) ‘Separating the
was more highly correlated with risk aversion and individualism, Shirkers from the Workers? Making Sure Respondents Pay Atten-
suggesting that experts separate their views of technology from tion on Self-Administered Surveys’, American Journal of Political
their overall risk preferences and individualism somewhat more Science, 58: 739–53.
than the general public does. There was also a much larger negative Berinsky, A. J., Margolis, M. F., Sances, M. W., et al. (2019) ‘Using
covariance between egalitarianism and individualism in the expert Screeners to Measure Respondent Attention on Self-Administered
sample. These differences, however, were much smaller when Surveys: Which Items and How Many?’, Political Science Research
analysis was restricted to the attentive subsample (Supplement and Methods, 9: 1–8.
Tables 41 and 42; see Section 3.4). Boyan, N, ‘Ipos global poll for the world economic forum’ (2019).
10. While the model 𝜒2 statistic indicated a statistically significant dif- Burgess, J., Stirling, A., Clark, J., et al. (2007) ‘Deliberative Mapping:
ference between the observed and model-implied covariance matrix A Novel Analytic-Deliberative Methodology to Support Contested
(a potential indication of inadequate fit), this test is known to Science-Policy Decisions’, Public Understanding of Science, 16:
be sensitive to large sample sizes such as ours; concluding that a 299–322.
Campolo, A., Sanfilippo, M., Whittaker, M., et al. (2017) AI Now 2017 Gardezi, M. and Arbuckle, J. G. (2020) ‘Techno-Optimism and Farmers’
Report. New York: AI Now Institute. Attitudes toward Climate Change Adaptation’, Environment and
Cave, S., Craig, C. and Dihal, K. (2018) Portrayals and Perceptions of Behavior, 52: 82–105.
AI and Why They Matter. London: The Royal Society. Gudowsky, N., Peissl, W., Sotoudeh, M., et al. (2012) ‘Forward-Looking
Chong, D., Citrin, J., and Conley, P. (2001) ‘When Self-Interest Matters’, Activities: Incorporating Citizens’ Visions: A Critical Analysis of the
Political Psychology, 22: 541–70. CIVISTI Method’, Poiesis & Praxis, 9: 101–23.
Coppock, A. and McClellan, O. A. (2019) ‘Validating the Demographic, The Harris Poll. (2017) ‘Monthly Harvard-Harris Poll’ <https://
Political, Psychological, and Experimental Results Obtained from harvardharrispoll.com/wp-content/uploads/2017/11/HCAPS-Octo
a New Source of Online Survey Respondents’, Research and ber_Topline-Memo_with-banners_Registered-Voters-Social-Media.
Politics, 6. pdf> accessed 13 Oct 2020.
Dake, K. (1991) ‘Orienting Dispositions in the Perception of Risk: An Hoffmann, M. H. G. (2020) ‘Reflective Consensus Building on Wicked
Analysis of Contemporary Worldviews and Cultural Biases’, Journal Problems with the Reflect! Platform’, Science and Engineering
of Cross-Cultural Psychology, 22: 61–82. Ethics, 26: 793–819.
Davis, F. D. (1985) ‘A Technology Acceptance Model for Empirically Hornsey, M. J., Harris, E. A., and Fielding, K. S. (2018) ‘Relationships
Testing New End-User Information Systems: Theory and Results’, among Conspiratorial Beliefs, Conservatism and Climate Scepticism
PhD thesis Massachusetts Institute of Technology. across Nations’, Nature Climate Change, 8: 614–20.
DeCario, N. and Etzioni, O. (2021) America Needs AI Liter- Institute of Electrical and Electronics Engineers (IEEE). (2019) Ethically
acy Now. pnw.ai. <https://pnw.ai/article/america-needsai-literacy- Aligned Design Version 2: A Vision for Prioritizing Human Well-
now/72515409> accessed 13 Dec 2021. Being with Autonomous and Intelligent Systems. Piscataway: IEEE.
Deeney, C. ‘Six in ten (61%) respondents across 26 countries oppose Johnson, B. B. and Swedlow, B. (2021) ‘Cultural Theory’s Contributions
the use of lethal autonomous weapons systems’ (21 Jan. 2019). to Risk Analysis: A Thematic Review with Directions and Resources
Dixon, G., Hart, P. S., Clarke, C., et al. (2020) ‘What Drives Support for for Further Research’, Risk Analysis, 41: 429–55.
Self-Driving Car Technology in the United States?’, Journal of Risk Johnson, B. B., Swedlow, B., and Mayorga, M. W. (2020) ‘Cultural The-
Research, 23: 275–87. ory and Cultural Cognition Theory Survey Measures: Confirmatory
Douglas, M. and Wildavsky, A. (1982) Risk and Culture: An Essay Factoring and Predictive Validity of Factor Scores for Judged Risk’,
on the Selection of Technological and Environmental Dangers. Journal of Risk Research, 23: 1467–90.
Berkeley: University of California Press. Johnson, C. and Tyson, A. ‘People globally offer mixed views of the
Dragojlovic, N. (2014) ‘How Meaningful Are Public Attitudes impact of artificial intelligence, job automation on society’ (Pew
Towards Stem Cell Research?’, OUPblog. <https://blog.oup.com/ Research Center, 15 Dec. 2020).
2014/01/how-meaningful-are-public-attitudes-towards-stem-cell- Kahan, D. M. (2012). ‘Cultural Cognition as a Conception of the Cul-
research/> accessed 9 Dec 2021. tural Theory of Risk’. In: S. Roeser, R. Hillerbrand, P. Sandin, et al.
Druckman, J. N. and McGrath, M. C. (2019) ‘The Evidence for Moti- (eds) Handbook of Risk Theory: Epistemology, Decision Theory,
vated Reasoning in Climate Change Preference Formation’, Nature Ethics, and Social Implications of Risk, pp. 725–59. Dordrecht:
Climate Change, 9: 111–9. Springer.
Ellis, R. J. and Thompson, F. (1997) ‘Culture and the Environment Kahan, D. M., Braman, D., Gastil, J., et al. (2007) ‘Culture and Identity-
in the Pacific Northwest’, American Political Science Review, 91: Protective Cognition: Explaining the White-Male Effect in Risk
885–97. Perception’, Journal of Empirical Legal Studies, 4: 465–505.
European Commission, ‘Attitudes towards the impact of digitisation Kahan, D. M., Braman, D., Slovic, P., et al. (2009) ‘Cultural Cognition
and automation on daily life’, Special Eurobarometer 460 (10 May of the Risks and Benefits of Nanotechnology’, Nature Nanotechnol-
2017). ogy, 4: 87–90.
European Group on Ethics in Science and New Technologies, ‘State- Kahan, D. M., Jenkins-Smith, H., and Braman, D. (2011) ‘Cultural
ment on artificial intelligence, robotics and “autonomous” systems’ Cognition of Scientific Consensus’, Journal of Risk Research, 14:
(2018). 147–74.
Fast, E. and Horvitz, E. (2017) ‘Long-Term Trends in the Public Per- Kline, R. B. (2016) Principles and Practice of Structural Equation
ception of Artificial Intelligence’. In: Proceedings of the Thirty-First Modeling, 4th edn. p. 534. New York: The Guilford Press.
AAAI Conference on Artificial Intelligence AAAI Conference on Krier, J. E. and Gillette, C. P. (1985) ‘The Un-Easy Case for Technological
Artificial Intelligence. San Francisco: Association for Computing Optimism’, Michigan Law Review, 84: 405–29.
Machinery. Landeta, J. (2006) ‘Current Validity of the Delphi Method in Social
Finucane, M. L., Slovic, P., Mertz, C., et al. (2000) ‘Gender, Race, and Sciences’, Technological Forecasting and Social Change, 73: 467–82.
Perceived Risk: The “White Male” Effect’, Health, Risk & Society, Li, C.-H. (2016) ‘Confirmatory Factor Analysis with Ordinal Data:
2: 159–72. Comparing Robust Maximum Likelihood and Diagonally Weighted
Fishbein, M. and Ajzen, I. (1975) Belief, Attitude, Intention, and Behav- Least Squares’, Behavior Research Methods, 48: 936–49.
ior: An Introduction to Theory and Research. Reading: Addison- Liu, P., Guo, Q., Ren, F., et al. (2019) ‘Willingness to Pay for Self-
Wesley. Driving Vehicles: Influences of Demographic and Psychological Fac-
Fishbein, M. and Funke, U. H. (1976) ‘A Behavior Theory Approach tors’, Transportation Research Part C: Emerging Technologies, 100:
to the Relations between Beliefs about an Object and the Atti- 306–17.
tude Toward the Object’. In: M. Beckmann and H. P. Kunzi (eds) Lord, C. G., Ross, L., and Lepper, M. R. (1979) ‘Biased Assimilation and
Mathematical Models in Marketing pp. 87–8. Heidelberg: Springer. Attitude Polarization: The Effects of Prior Theories on Subsequently
Flynn, J., Slovic, P., and Mertz, C. K. (1994) ‘Gender, Race, and Considered Evidence’, Journal of Personality and Social Psychology,
Perception of Environmental Health Risks’, Risk Analysis, 14: 37: 2098–109.
1101–8. Lupia, A. (2013) ‘Communicating Science in Politicized Environments’,
Funk, C., Rainie, L., Kennedy, B., et al. (2015) ‘Public and scientists’ Proceedings of the National Academy of Sciences, 110: 14048–54.
Views on science and society, (Pew Research Center, 29 Jan. 2015). MacCallum, R. C., Wegener, D. T., Uchino, B. N., et al. (1993)
Future of Life Institute. (2018) Lethal Autonomous Weapons Pledge. ‘The Problem of Equivalent Models in Applications of Covariance
Cambridge. Structure Analysis’, Psychological Bulletin, 114: 185–99.
Gallup, Inc. ‘Optimism and anxiety: views on the impact of artificial Macnaghten, P. and Chilvers, J. (2014) ‘The Future of Science Gover-
intelligence and higher education’s response NortheasternGallup nance: Publics, Policies, Practices’, Environment and Planning. C,
Poll’ (Jan. 2018). Government & Policy, 32: 530–48.
Marangunic, N. and Graníc, A. (2015) ‘Technology Acceptance Model: Selwyn, N., Cordoba, B. G., Andrejevic, M., et al. (2020) AI for
A Literature Review from 1986 to 2013’, Universal Access in the Social Good? Australian Public Attitudes toward AI and Society,
Information Society, 14: 81–95. Melbourne: Monash Data Futures Institute.
Marchant, G. E., Tournas, L., and Gutierrez, C. I. (2020) ‘Govern- Sharma, P. (2010) ‘Measuring Personal Cultural Orientations: Scale
ing Emerging Technologies through Soft Law: Lessons for Artificial Development and Validation’, Journal of the Academy of Marketing
Intelligence’, Jurimetrics, 61: 1–18. Science, 38: 787–806.
Mavrommati, G., et al. (2021) ‘A Methodological Framework for ̈
Sjoberg, ̈
L. and Drottz-Sjoberg, B.-M. (1993) Attitudes to Nuclear
Understanding Shared Social Values in Deliberative Valuation’, Eco- Waste, xii. Stockholm: Stockholm School of Economics, Center for
logical Economics, 190: 107185. Risk Research.
Meadows, D. H., Meadows, D. L., Randers, J., et al. (1972) The Lim- Sloane, M., Moss, E., Awomolo, O., et al. (2020) ‘Participation Is Not
its to Growth: A Report for the Club of Rome’s Project on the a Design Fix for Machine Learning’, arXiv: 2007.02423[cs].
Predicament of Mankind. New York: Universe Books. Slovic, P., Flynn, J. H., and Layman, M. (1991) ‘Perceived Risk, Trust,
Mehrabi, N., Morstatter, F., Saxena, N., et al. (2021) ‘A Survey on Bias and the Politics of Nuclear Waste’, Science, 254: 1603–7.
and Fairness in Machine Learning’, ACM Computing Surveys, 54: Smith, A., ‘Public attitudes toward computer algorithms’ (Pew Research
1–35. Center, 16 Nov. 2018a).
Morning Consult. (2017) ‘National Tracking Poll’ 170401. <https:// Smith, A., Public attitudes toward technology companies, (Pew research
morningconsult.com/wp-content/uploads/2017/04/170401_crossta Center, 28 Jun. 2018b).
bs_Brands_v3_AG.pdf> accessed 13 Nov 2020. Smith, A. and Anderson, M., ‘Automation in everyday life’ (Pew
——— (2018) ‘Morning Consult National Tracking Poll’ 180534. Research Center, 4 Oct. 2017).
<https://morningconsult.com/wp-content/uploads/2018/05/180534 Stirling, A. (2008) “‘Opening Up” and “Closing Down”: Power, Par-
_BRANDS_crosstabs_.pdf> accessed 13 Nov 2020. ticipation, and Pluralism in the Social Appraisal of Technology’,
——— (2021) ‘A Perspective of Americans’ Attitudes toward Arti- Science, Technology & Human Values, 33: 262–94.
ficial Intelligence’ <https://ml.globenewswire.com/Resource/Down- Stix, C. (2021) ‘Actionable Principles for Artificial Intelligence Policy:
load/a1e27f5b-7edb-49a6-a13e-835a36e7ad2a> accessed 21 Nov Three Pathways’, Science and Engineering Ethics, 27.
2020. Swedlow, B., et al. (2020) ‘Construct Validity of Cultural Theory Survey
Organisation for Economic Cooperation and Development. (2019) Measures’, Social Science Quarterly, 101: 2332–83.
Recommendation on the Council on Artificial Intelligence Tennant, C., Stares, S., and Howard, S. (2019) ‘Public Discom-
OECD/LEGAL/0449. fort at the Prospect of Autonomous Vehicles: Building on Previ-
Parker, K., Morin, R., and Horowitz, J. M, ‘Looking to the future, public ous Surveys to Measure Attitudes in 11 Countries’, Transporta-
sees an America in decline on many fronts’ (21 Mar. 2019). tion Research. Part F, Traffic Psychology and Behaviour, 64:
Peters, E. M., Burraston, B., and Mertz, C. K. (2004) ‘An Emotion- 98–118.
Based Model of Risk Perception and Stigma Susceptibility: Cogni- Triantaphyllou, E. (2000) ‘Multi-Criteria Decision Making Methods’.
tive Appraisals of Emotion, Affective Reactivity, Worldviews, and In: P. M. Pardalos and D. Hearn (eds) Multi-Criteria Decision
Risk Perceptions in the Generation of Technological Stigma’, Risk Making Methods: A Comparative Study, pp. 5–21. Boston: Springer.
Analysis, 24: 1349–67. Ulnicane, I., Knight, W., Leach, T., et al. (2020) ‘Framing Governance
Peyton, K., Huber, G. A., and Coppock, A. (2021) ‘The Gen- for a Contested Emerging Technology: Insights from AI Policy’,
eralizability of Online Experiments Conducted During the Policy and Society, 40: 1–20.
COVID-19 Pandemic’, Journal of Experimental Political Sci- United Kingdom Government. (2019) ‘Artificial Intelligence: Pub-
ence, 1–16. <https://www.cambridge.org/core/journals/journal-of- lic Awareness Survey’ <https://www.gov.uk/government/publica-
experimental-political-science/article/generalizability-of-online- tions/artificial-intelligence-public-awareness-survey> accessed 4 Dec
experiments-conducted-during-the-covid19-pandemic/977D0A 2020.
898CD4EA803ABE474A49B719E0>. United States Office of Management and Budget. (2020) ‘Memorandum
Renn, O. and Benighaus, C. (2013) ‘Perception of Technological Risk: on Guidance for Regulation of Artificial Intelligence Applications
Insights from Research and Lessons for Risk Communication and M-21-06’.
Management’, Journal of Risk Research, 16: 293–313. Van der Linden, S. (2016) ‘A Conceptual Critique of the Cultural
Ripberger, J. T., Song, G., Nowlin, M. C., et al. (2012) ‘Reconsidering Cognition Thesis’, Science Communication, 38: 128–38.
the Relationship between Cultural Theory, Political Ideology, and Venkatesh, V. and Morris, M. G. (2000) ‘Why Don’t Men Ever Stop
Political Knowledge’, Social Science Quarterly, 93: 713–31. to Ask for Directions? Gender, Social Influence, and Their Role in
Rosseel, Y. (2012) ‘Lavaan: An R Package for Structural Equation Technology Acceptance and Usage Behavior’, MIS Quarterly, 24:
Modeling’, Journal of Statistical Software, 48. 115. JSTOR: 3250981.
Sambasivan, N., Arnesen, E., Hutchinson, B., et al. (2021) ‘Re- Weber, E. and Hsee, C. (2000) ‘Culture and Individual Judgment and
Imagining Algorithmic Fairness in India and Beyond’ in Proceed- Decision Making’, Applied Psychology, 49: 32–61.
ings of the 2021 ACM Conference on Fairness, Accountability, Wildavsky, A. and Dake, K. (1990) ‘Theories of Risk Perception: Who
and Transparency FAccT’21: 2021 ACM Conference on Fairness, Fears What and Why?’, Daedalus, 119: 41–60.
Accountability, and Transparency, pp. 315–28. New York: ACM. Zhang, B., Anderljung, M., Kahn, L., et al. (2021) ‘Ethics and Gover-
Schiff, D. S., Schiff, K. J., and Pierson, P. (2021) ‘Assessing Public Value nance of Artificial Intelligence: Evidence from a Survey of Machine
Failure in Government Adoption of Artificial Intelligence’, Public Learning Researchers’, Journal of Artificial Intelligence Research,
Administration, 100: 1–21. 71.
Sears, D. O. and Funk, C. L. (1991) ‘The Role of Self-Interest in Zhang, B. and Dafoe, A. (2019) ‘Artificial Intelligence: American Atti-
Social and Political Attitudes’, Advances in Experimental Social tudes and Trends’, Center for the Governance of AI, Future of
Psychology, 24: 1–91. Humanity Institute, University of Oxford.