What Governs Attitudes Toward Artificial Intelligence Adoption and Governance
DOI: https://doi.org/10.1093/scipol/scac056
Advance Access Publication Date: 14 October 2022
Article
Abstract
Designing effective and inclusive governance and public communication strategies for artificial intelligence (AI) requires understanding how
stakeholders reason about its use and governance. We examine underlying factors and mechanisms that drive attitudes toward the use and
governance of AI across six policy-relevant applications using structural equation modeling and surveys of both US adults (N = 3,524) and tech-
nology workers enrolled in an online computer science master’s degree program (N = 425). We find that the cultural values of individualism,
egalitarianism, general risk aversion, and techno-skepticism are important drivers of AI attitudes. Perceived benefit drives attitudes toward AI
use but not its governance. Experts hold more nuanced views than the public and are more supportive of AI use but not its regulation. Drawing
on these findings, we discuss challenges and opportunities for participatory AI governance, and we recommend that trustworthy AI governance
be emphasized as strongly as trustworthy AI.
Key words: artificial intelligence policy; public opinion; public engagement.
© The Author(s) 2022. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: [email protected]
162 Science and Public Policy
tailored to the public's values, beliefs, and motivations (Kahan et al. 2011; Lupia 2013). Although previous work has explored how attitudes of AI professionals (Zhang et al. 2021; Aiken et al. 2020) and the public (Morning Consult 2017; European Commission 2017; Smith and Anderson 2017; The Harris Poll 2017; Gallup, Inc 2018; Morning Consult 2018; Smith 2018a, b; Zhang and Dafoe 2019; United Kingdom Government 2019; Johnson and Tyson 2020) differ across sociodemographic groups, little existing work has explored the underlying values and mechanisms that drive attitudes toward AI.

In this paper, we take a step toward better understanding what shapes attitudes toward AI by looking at factors and mechanisms beyond sociodemographic characteristics. We explore the following questions, which are key to designing effective AI governance and science communication strategies:

(1) How do sociodemographic factors, cultural values, and perceived benefit influence attitudes toward AI?
(2) How do these attitudes—and the factors that inform them—differ between experts and the public?
(3) How do these attitudes—and the factors that inform them—differ across common contexts of AI use?

To explore these questions, we conducted two online surveys in April and May 2021. The first survey sampled N = 3,524 US adults recruited and compensated through the Lucid Theorem platform, which uses quota sampling to obtain participants representative of adult US residents on age, gender, race, and region. The second survey sampled N = 425 students who had recently completed a graduate AI course at Georgia Tech. Most (93.9 per cent) of these students had undergraduate degrees in technical subjects, and 93.5 per cent previously or concurrently worked in computer science or another Science, Technology, Engineering, and Math (STEM) field.1 In addition to standard sociodemographic variables, we consider the impact on attitudes of perceived self- and societal benefits and of the cultural values of individualism, egalitarianism, general risk aversion, and techno-skepticism—constructs found to inform the perception of many other technological risks (Kahan et al. 2007; Renn and Benighaus 2013; Tennant et al. 2019).

The main contribution of this work is to increase understanding of attitudes toward AI use and governance by (1) exploring a set of attitudinal drivers that is broader than the typically considered sociodemographic variables, including both perceptions of benefit and cultural values inspired by the cultural theory literature; (2) directly comparing the attitudes of experts and the public; and (3) considering attitudes across a range of policy-relevant contexts of AI use. Our preregistered analysis strategy uses the structural equation model (SEM) shown in Fig. 1 (described in more detail in the next section), which allows us to naturally address the three key research questions defined above. Our results provide insights that can aid policymakers in crafting governance strategies that are respectful of diverse beliefs and assist AI developers in effectively communicating the broader implications of their work to the public. Drawing on these results, we offer recommendations for engaging the public in dialogue about AI governance and offer suggestions for future research.

2. Background and theory

2.1 Underlying factors governing attitudes toward technology

Prior work has found that race, gender, and political ideology (Wildavsky and Dake 1990) are highly predictive of attitudes toward issues such as nuclear power (Slovic et al. 1991), climate change (Flynn et al. 1994), genetically engineered food (Finucane et al. 2000), and radiation (Peters et al. 2004). Similar sociodemographic divides have been found in attitudes toward AI. Those reporting familiarity and comfort with AI are more likely to be young, be male, be educated, live in urban areas, and have higher incomes (Morning Consult 2017, 2018; Zhang and Dafoe 2019; United Kingdom Government 2019; Johnson and Tyson 2020; Morning Consult 2021). Sociodemographic divides also shape perceptions of AI's impact on society. Those in urban areas, blue-collar workers, and political liberals are more likely to believe that AI will deepen inequality and reduce employment (Morning Consult 2017; Gallup, Inc 2018), while those with more education, white-collar jobs, and higher incomes are more likely to believe that AI will be beneficial to society and the economy
(Morning Consult 2017; Smith and Anderson 2017; Gallup, Inc 2018; Morning Consult 2018; Zhang and Dafoe 2019).

The cultural theory of risk perception posits that 'cultural' worldviews can be more concise and informative predictors of attitudes toward technological risk than sociodemographic factors alone (Kahan et al. 2007; Weber and Hsee 2000; Johnson and Swedlow 2021). These cultural values have been hypothesized to define identity groups, imbue potential risks with affective qualities (Peters et al. 2004), and encourage biased information processing (Lord et al. 1979). Indeed, literature has found that successfully communicating scientific topics to the public benefits from careful attention to how messages may interact with the cultural values held by the public (Kahan et al. 2011; Lupia 2013; Druckman and McGrath 2019). For policymakers seeking to design inclu- […]

2.2 Perceived benefit and hypothesized model

In contrast to technologies whose benefits are perceived as broadly shared, popular narratives about AI often feature clear losers (Fast and Horvitz 2017): workers who lose their jobs to automation, for example, or minorities who suffer discrimination at the hands of automated decision systems. These narratives may make views about AI governance—perhaps more so than views about other technological risks—subject to perceptions of who stands to benefit and lose from the continually increasing adoption of AI. However, while there is some evidence that perceived self-interest informs support for AI-based technologies (Morning Consult 2018; Liu et al. 2019; Dixon et al. 2020), other literature has suggested that perceived benefit does not always eclipse affective and value- […]
[…] allowing variables within each group (sociodemographic variables, cultural values, perception of benefit, and AI support) to covary.

Our SEM bears some similarities to popular models of technology acceptance and adoption used in psychology and marketing research literature. The theory of reasoned action (Fishbein and Ajzen 1975) focuses on the relationship between behavior and behavioral intention, which is modeled as being shaped by attitudes and subjective norms. The Multi-Attribute Attitude Model (Fishbein and Funke 1976) models an individual's attitude toward a brand or product as a weighted linear combination of attributes. Unlike this model, in which each individual is modeled by a unique set of weights, our SEM models all respondents collectively with a single set of inferred parameters. The influential Technology Acceptance Model […]

[…] how findings relevant to AI's impact on labor automation, for example, generalize to AI used in medical research or automated weapons systems. To better understand these differences, in addition to examining attitudes toward AI in general, we explore attitudes toward AI used in six policy-relevant contexts: predictive policing, labor automation, medical diagnosis, automated vehicles, personalization, and weapon systems (see Section 3.3 and Supplement Section B for more details on these contexts).

The use of AI in each of these contexts raises different questions about risks, distribution of impacts, and ethical questions like fairness. Modeling each of these contexts allows us to understand how the factors we study—sociodemographic variables, cultural values, and perceived benefit—impact attitudes differently across application areas.
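The weighted-linear-combination idea behind the Multi-Attribute Attitude Model can be made concrete in a few lines. This is a sketch only; the attribute ratings and importance weights below are hypothetical illustrations, not data from this study.

```python
# Multi-Attribute Attitude Model: attitude toward an object is the weighted
# sum of belief ratings b_i about its attributes, with importance weights w_i
# (Fishbein and Funke 1976). All numbers below are hypothetical.

def multiattribute_attitude(beliefs, weights):
    """Return the weighted linear combination sum_i w_i * b_i."""
    if len(beliefs) != len(weights):
        raise ValueError("need one weight per belief rating")
    return sum(w * b for w, b in zip(weights, beliefs))

# One respondent's belief ratings (1-7) for three hypothetical AI attributes
# (usefulness, safety, fairness) and that respondent's importance weights.
beliefs = [6, 3, 4]
weights = [0.5, 0.3, 0.2]
attitude = multiattribute_attitude(beliefs, weights)
print(attitude)  # 0.5*6 + 0.3*3 + 0.2*4 = 4.7
```

The contrast with the SEM described above is that here each respondent carries a unique weight vector, whereas the SEM infers a single shared set of coefficients for all respondents.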
[…] in their propensity to work in industry versus academia, their level of experience with AI, and their sociodemographic and cultural factors. Respondents in our OMS sample completed undergraduate degrees largely in North America (66.1 per cent) or Asia (25.6 per cent), primarily in computer science (43.1 per cent) or other STEM fields (50.8 per cent). Most of them concurrently or recently worked in computer science or software engineering but not specifically in AI (63.8 per cent); 18.1 per cent reported working in another field of science or engineering; and 11.8 per cent reported working directly in AI (see Supplement Section A).

Table 1. Means, standard deviations, 95 per cent confidence intervals for differences in means, and P-value (Welch's two-tailed t-test) for each variable in the US public (Lucid) and expert (OMS) samples. Gender was coded as a binary variable (male, female or other gender), and age was coded using Pew's classification of generational groups (18–25, 26–40, 41–56, 57–75, and 76+). Race was coded as White, Black, Asian, or other, as we anticipated that only these groups would be large enough in both samples to detect effects. We used four-level scales each for education, household income, and urban/rural residence. Political orientation was collected using a five-point Likert scale with end points 'strong liberal' and 'strong conservative'.

Variable          x̄_Lucid (SD)   x̄_OMS (SD)   95% CI of x̄_Lucid − x̄_OMS   P-value
Age group (0–4)   1.75 (1.12)    0.89 (0.56)   (0.79, 0.92)                <0.001
Gender = […]      0.49           0.81          (−0.37, −0.29)              <0.001
[…]

3.2 Survey design
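The Welch comparisons reported in Table 1 can be reproduced from summary statistics alone. A minimal sketch follows, using a normal approximation (z = 1.96) for the 95 per cent interval, which is reasonable at these sample sizes; the inputs are the age-group row of Table 1.

```python
import math

def welch_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic, Welch-Satterthwaite df, and an approximate
    95% CI for the difference in means, from summary statistics only."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2     # squared standard errors
    se = math.sqrt(v1 + v2)
    t = (m1 - m2) / se
    # Welch-Satterthwaite degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    diff = m1 - m2
    ci = (diff - 1.96 * se, diff + 1.96 * se)  # normal approximation
    return t, df, ci

# Age-group row of Table 1: Lucid mean 1.75 (SD 1.12, N = 3,524);
# OMS mean 0.89 (SD 0.56, N = 425).
t, df, ci = welch_from_summary(1.75, 1.12, 3524, 0.89, 0.56, 425)
print(round(t, 1), round(df), tuple(round(x, 2) for x in ci))
```

The resulting interval closely matches the (0.79, 0.92) reported for the age variable, up to rounding, and the corresponding P-value is far below 0.001.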
[…] construct consisted of the following four items: 'new technologies are more about making profits rather than making peoples' lives better', 'I am worried about where all this technology is leading', 'technology has become dangerous and unmanageable', and 'I feel uncomfortable about new technologies'. The general risk aversion construct was adapted directly from Sharma (2010).

The second portion of the survey assessed opinion about AI. We first provided respondents with a brief definition of AI adapted from Zhang and Dafoe (2019): 'Artificial intelligence (AI) refers to computer systems that perform tasks or make decisions that usually require human intelligence. AI can perform these tasks or make these decisions without explicit human instructions. Today, AI has been used in the following applications: identifying people from their photos, diagnos- […]

[…] describing both potential benefits and concerns about the use of AI in that context. The points highlighted in each vignette were chosen in an attempt to reflect arguments present in typical discourse about AI, particularly those that may associate affective qualities with the application:

• Predictive policing: 'Some police departments use AI to predict where crime is likely to occur, helping them decide where to deploy their resources. But civil rights groups and some researchers argue that these AI systems simply increase arrests in minority neighborhoods without actually reducing crime.'
• Economic/labor impact: 'AI systems are likely to automate many tasks. Some think that these AI systems will […]'
Figure 2. Outcome measures after respondents are presented with a general (context-free) definition of AI.
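The construct reliabilities referred to in this article (e.g. for the four techno-skepticism items quoted above) are conventionally assessed with Cronbach's alpha. A minimal sketch follows; the item responses are made up for illustration, not survey data.

```python
from statistics import pvariance

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondents' item-score lists.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = len(rows[0])
    items = list(zip(*rows))                       # one tuple per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])  # variance of summed scale
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 1-5 Likert responses to a four-item construct
# (rows = respondents, columns = items).
item_scores = [
    [1, 2, 1, 2],
    [2, 2, 2, 3],
    [4, 5, 4, 4],
    [5, 4, 5, 5],
    [3, 3, 3, 3],
]
alpha = cronbach_alpha(item_scores)
print(round(alpha, 2))
```

Values above roughly 0.7 are often treated as satisfactory; the pilot-survey reliability problems mentioned in the notes refer to constructs falling well short of such thresholds.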
distinct splits in others. For example, both experts and the public were wary of AI use in autonomous weapons, recommendation systems, and predictive policing, but experts' overwhelming support for AI use in autonomous vehicles, medical diagnosis, and automating labor stood in stark contrast to the much more divided public.

Our results suggest that greater public awareness about the unique impacts of AI in different applications may be necessary to fully empower the public to share its perspectives on AI use and governance. The cross-context divides we find also suggest that limited support for the regulation of AI in general (among both experts and the public) may belie support for tailored government intervention in specific application contexts such as autonomous weapons.

4.2 Cultural factors are strongly informative of attitudes

What drives these expert–public divides that persist across outcome measures and application contexts? These gaps may be due to differences in technical knowledge or due to socialization during AI training. However, they may also be driven by differences in sociodemographics and cultural values. Our expert and US public samples differed significantly on all sociodemographic variables (Table 1) as well as across all four cultural values (all P < 0.001): experts were less individualistic (0.53 points), less techno-skeptical (0.51 points), less risk averse (0.27 points), and more egalitarian (0.26 points) (see Supplement Fig. 7).

To better understand how these factors inform attitudes, we used the preregistered SEM shown in Fig. 1 to explore the relationship between sociodemographic variables, cultural values, perceived benefit, and support for AI use and governance. (The size of our expert sample limited this SEM analysis to the US public.) We first assessed the reliability and fit of the cultural value components of the model. The fit in each sample (evaluated using thresholds defined in our preregistration) was adequate to good, construct reliabilities were satisfactory, constructs loaded appropriately onto each item (with similar loadings in each sample), and model correlation residuals indicated adequate local fit (Supplement Tables 11, 12, and 15).9 To assess the impact of cultural values on our outcome variables, we compared the fit of S, the full SEM shown in Fig. 1, with S\C, the nested model that constrains to zero the paths from cultural values to outcome measures. We found consistent global (Table 2) and local (Supplement Tables 21 and 22) evidence that the inclusion of pathways from cultural values to our outcome variables produced better model fit, indicating that the four cultural values we considered were indeed informative factors in explaining attitudes toward AI.

We next fit the full SEM shown in Fig. 1 to data from the US public sample. Fit statistics are shown in Table 2 along with statistics for the two modified (nested) models used to evaluate the roles of cultural values and perceptions of benefit. The full model achieved the standard thresholds for adequate fit listed in our preregistration.10 Correlation residuals, shown in Supplement Table 22, generally indicated satisfactory local model fit.11 Finally, we observed relatively small covariances between support outcomes, consistent with a lack of highly influential unmodeled common causes of these variables. It is important to note that our SEM represents hypothesized relationships between variables and that 'equivalent' models with different hypothesized relationships can produce the same covariance structure (MacCallum et al. 1993). Thus, while the fit statistics in Table 2 provide circumstantial evidence in support of our SEM, the primary evidence for the model's correctness is based on our theoretical arguments above.

Inferred SEM path coefficients are shown in Fig. 4. Overall, the results indicated that the cultural values of individualism, egalitarianism, risk aversion, and techno-skepticism were strongly predictive of attitudes toward AI. The influence of sociodemographic variables also contained interesting patterns. Like past surveys (Morning Consult 2017, 2018; Zhang and Dafoe 2019), we found that those who were male, younger, better educated, and had higher income both perceived more benefit from AI and were more supportive of its use. Yet we found that support for government regulation was—perhaps surprisingly—often divorced from perceived benefit and support for use and more directly informed by sociodemographic and cultural variables. For example, older and more conservative respondents were more hesitant about AI use. However, despite perceiving less benefit from AI and expressing less support for its use, they were also less
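The nested-model comparisons used here (S versus S\C, and later S versus S\B) rest on the chi-square difference test: the increase in model chi-square when paths are constrained to zero is referred to a chi-square distribution with degrees of freedom equal to the number of constrained paths. A sketch follows, using the Wilson–Hilferty normal approximation to the chi-square tail; the fit statistics are made up for illustration, not the study's values.

```python
import math
from statistics import NormalDist

def chi_square_sf(x, k):
    """Approximate P(X > x) for X ~ chi-square(k), via Wilson-Hilferty."""
    z = ((x / k) ** (1 / 3) - (1 - 2 / (9 * k))) / math.sqrt(2 / (9 * k))
    return 1 - NormalDist().cdf(z)

def chi_square_difference(chi2_full, df_full, chi2_nested, df_nested):
    """Delta-chi-square test comparing a nested (more constrained) model,
    which can only fit worse, against the full model."""
    d_chi2 = chi2_nested - chi2_full
    d_df = df_nested - df_full
    return d_chi2, d_df, chi_square_sf(d_chi2, d_df)

# Hypothetical fit statistics: constraining five paths to zero raises
# the model chi-square from 310 on 120 df to 360 on 125 df.
d_chi2, d_df, p = chi_square_difference(310.0, 120, 360.0, 125)
print(d_chi2, d_df, p < 0.001)  # a large increase over 5 df is significant
```

A small P-value here says the constrained paths carried real explanatory weight, which is the logic behind the conclusion that cultural values improved model fit.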
Table 2. Fit statistics for the complete SEM S and two nested models used for analysis. χ²: model chi-square test, along with model degrees of freedom and P-value; CFI: comparative fit index; RMSEA: root mean squared error of approximation; SRMR: standardized root mean square residual; Δχ²: chi-square difference test (compared to full model S). R² values show coefficients of determination for the five endogenous variables in the model. The complete model S achieved adequate-to-good global fit, with CFI and RMSEA indicating adequate fit and SRMR indicating good fit. Reduced models S\C (used to assess the evidence for paths from cultural values to support outcomes) and S\B (used to assess the evidence for paths from perceived benefit to support outcomes) achieved adequate fit on RMSEA and SRMR but poor global fit on CFI.
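The fit indices listed in Table 2 are standard functions of the model chi-square. RMSEA, for example, can be computed directly from χ², the model degrees of freedom, and the sample size; the numbers below are hypothetical, not those of Table 2.

```python
import math

def rmsea(chi2, df, n):
    """Root mean squared error of approximation:
    sqrt(max(0, chi2 - df) / (df * (n - 1))).
    Values below ~0.05 are often read as good fit, below ~0.08 as adequate."""
    return math.sqrt(max(0.0, chi2 - df) / (df * (n - 1)))

# Hypothetical: a model chi-square of 300 on 100 df, fit to the
# N = 3,524 US public sample.
value = rmsea(300.0, 100, 3524)
print(round(value, 3))
```

Because the excess of χ² over df is scaled by sample size, a statistically significant chi-square can still coexist with a small RMSEA in large samples, which is why the notes describe treating a significant chi-square alongside adequate fit as standard SEM practice.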
supportive of the government regulating AI. Similarly, those who held cognitive/analytical jobs, lived in urban areas, and had higher incomes perceived greater self-benefit from AI and were more supportive of its use. However, these groups were also more likely to believe that AI should be carefully managed and regulated.

4.3 Cultural determinants of attitudes differed in some applications

Developing effective 'culturally pluralized' (Johnson and Swedlow 2021) strategies for science communication and governance requires an understanding of how cultural values affect attitudes toward specific technologies and their applications. While previous research has evaluated how cultural values inform support for other emerging technologies, it is not clear how—or whether—these results generalize to applications of AI.

Notably, our results found that some effects of cultural values (Fig. 4) had reversed directions from the patterns observed for other technologies. For example, both individualism and egalitarianism predicted increased perceptions of self-benefit from AI—a contrast with many other technologies, where egalitarianism has been found to associate with lower support.12 This reversed effect of egalitarianism suggests that AI may be perceived differently from many other technological risks, perhaps due to perceptions that automated systems can temper certain hierarchical social structures that egalitarians perceive as harmful. If this perception does indeed hold among the public, however, it stands in sharp contrast to the increasing realization among AI developers that bias and fairness are significant problems in automated decision-making systems (Mehrabi et al. 2021) and evidence that awareness of these problems negatively affects perceptions of their performance (Schiff et al. 2021).
We used a linear regression model to compare the effects of cultural values on support for AI use between experts and the public. Our use of linear regression rather than SEM was due to the limited size of our expert sample; this portion of the analysis was exploratory (i.e. not preregistered). We found that the direction of cultural values' effects on support for AI use was generally consistent across AI application areas (Fig. 5) and again found that experts' opinions were more nuanced than the public's. Supplement Figs. 15–16 provide additional evidence for this phenomenon, showing that experts' attitudes toward both AI and other technologies varied more than the public's, a pattern that persisted when the analysis was restricted to attentive respondents. This evidence suggests that the public's attitudes toward AI may evolve considerably as they become more informed, underlining the importance of public education on the broader impacts of AI use in specific applications. These results also revealed patterns across the six contexts we explored. For example, attitudes toward the predictive policing and autonomous weapons application contexts were similar, particularly among experts (Supplement Fig. 15).

To examine whether the factors driving attitudes toward these applications were also similar, we fit a multigroup version of the SEM shown in Fig. 1 to data from the US public sample. This multigroup SEM facilitated between-context comparison by allowing path coefficients to differ for each context while constraining the model aspects that defined cultural values to be constant. Some notable patterns emerged from this model, for which inferred parameters are shown in Supplement Section E.4.2. We indeed found key sociodemographic and cultural variables whose impact on attitudes toward predictive policing and autonomous weapons differed from their impact on other contexts. For example, older and politically conservative respondents were less supportive of AI, in general, but were more supportive of AI use for predictive policing and autonomous weapons. The impact of egalitarianism on support for AI use in these two contexts similarly differed from its impact on most other contexts. More broadly, there were substantial between-context differences in the impact of age on support for AI. For example, older respondents were much less supportive of the use of AI in autonomous vehicles and recommendation systems than they were of the use of AI for medical diagnosis. See Supplement Section B for tables highlighting where these results matched expectations based on prior literature.

Unsurprisingly, AI's impact on labor and the economy was perceived to be more beneficial by respondents with cognitive/analytical jobs and higher education. However, we found that manual/physical employment also predicted greater perceived benefits from AI's impact on labor and the economy. This result is potentially surprising but consistent with the findings that many US workers believe automation is more likely to affect others' jobs than their own (Parker et al. 2019). Interestingly, we also found that perceived societal benefit had a stronger impact on support for labor-automating AI than on AI in general (Supplement Table 32).

Prior work has found that individualism generally predicts higher support for technology, and we found that individualism had a similarly positive impact on support for AI. Less consistent with work on other technologies, however, we found that egalitarianism also tended to predict greater support for AI. Perhaps unexpectedly, we found overall positive effects of the general risk perception construct of Sharma (2010) on support for AI across contexts, suggesting that the risk aversion and techno-skepticism constructs used in our survey measured relatively orthogonal aspects of technological risk perception.

That the US public perceived AI as more egalitarian than experts did (Fig. 3) suggests that the public viewed AI as shaping society to be more equitable than experts did. Particularly striking is the positive impact of egalitarianism on support for the use of AI-based weapon systems, suggesting that recent discourse and activism in the AI community opposing autonomous weapons (Belfield 2020) may have been effective in driving experts' opinions but not in breaking through to the general public, who may have been more swayed by our vignette's description of potential safety benefits to service members. It is also notable that egalitarianism drove greater support for labor-automating AI among the public than among experts.
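The expert-versus-public regression comparison described above amounts to asking whether a predictor's slope differs between two independently fitted groups. A minimal sketch follows (simple one-predictor OLS per group, then a large-sample z-test on the slope difference); all data below are made up for illustration, not the survey's.

```python
import math

def ols_slope(x, y):
    """Simple OLS slope and its standard error for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    intercept = my - slope * mx
    rss = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(rss / (n - 2) / sxx)
    return slope, se

def slope_difference_z(x1, y1, x2, y2):
    """z-statistic for the difference between two groups' fitted slopes."""
    b1, se1 = ols_slope(x1, y1)
    b2, se2 = ols_slope(x2, y2)
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Made-up example: a cultural-value score vs support for AI use, where the
# hypothetical public group shows a clear slope and the expert group none.
public_x = [1, 2, 3, 4, 5, 6]
public_y = [2.1, 2.4, 2.8, 3.4, 3.6, 4.1]   # slope around 0.4
expert_x = [1, 2, 3, 4, 5, 6]
expert_y = [3.0, 3.1, 3.0, 3.2, 3.1, 3.2]   # slope near zero
z = slope_difference_z(public_x, public_y, expert_x, expert_y)
print(round(z, 1))
```

With real survey data one would use an interaction term in a pooled regression (or the multigroup SEM described above); the per-group fit shown here is just the simplest way to see what "the effect differs between groups" means numerically.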
4.4 Perceived benefit substantially informs support for AI use—but not for management and regulation

Our SEM (Fig. 1) hypothesized that perceived self- and societal benefit drove support for AI use and governance. To assess the impact of perceived benefit on these support outcomes, we compared the full SEM S to a nested model S\B, in which paths from the perception of benefit outcomes to support outcomes were fixed to zero. Overall, global and local comparisons of S\B and S provided mild support for the existence of an impact of perceived benefit on our support outcomes (Table 2; Supplement Tables 22 and 28).13

As shown in Fig. 4, perceived benefit (to both the respondent individually and society at large) predicted substantially greater support for AI use but had much less impact on […]

[…] role of sociodemographic variables; the impact of the cultural values of egalitarianism, individualism, techno-skepticism, and risk aversion; the potentially moderating effects of perceived self- and societal benefit; differences between experts and the public; and differences across prominent policy-relevant applications of AI.

One consistent finding of our study is that the US public's attitudes toward AI were much less nuanced than experts'. Compared to experts, the public's views on the use, management, and regulation of AI were largely similar across application areas, and the public reported perceiving little distinction between how AI might affect them personally and how it might affect society more generally. We did, however, find greater support for government regulation in applications such as autonomous weapons and predicting […]
[…] studying public attitudes toward other culturally-polarized technologies. In addition, the presence of both strong direct effects and strong indirect effects in our fit model provides tentative (but not conclusive) support for the value of considering self- and societal benefit as mediating variables in understanding attitudes toward technology.

The SEM used in this study shares some features with popular frameworks in the broader technology acceptance literature, such as the Technology Acceptance Model (see Section 2.2). Our work also carries implications for this class of models, providing evidence that factors adopted from cultural theory might also be successfully incorporated as external factors in models of attitudes toward (and use of) other technologies.

Finally, our work provides evidence for cultural theory […]

[…] culturally-insensitive uses of AI techniques, and shallow or ineffectual participatory mechanisms. How can those interested in inclusive governance bridge this gap? Our work both provides insights and suggests challenges that may face even well-intentioned efforts to develop participatory structures.

A first challenge is the sizable gap between the significant public support we find for 'careful management' of AI and the more limited support for 'government regulation' (Fig. 3), a finding that echoes prior research, particularly in the US context (Zhang and Dafoe 2019). However, a growing international expert consensus—including among corporate actors—has articulated a need for AI regulation, and regulatory efforts continue to develop. This reveals a fundamental tension in how public opinion should be respected in AI […]

[…] aversion predicted greater public support for AI-based recommendation systems but less expert support may suggest that increased public awareness about the potential benefits and harms of these systems could be particularly impactful.

However, our findings also caution that in many application domains, increased public awareness of AI's impacts might not produce major changes in attitudes toward AI governance. We find that public support for AI governance is relatively independent of arguably more malleable factors like perceived self- and societal benefit from AI. Instead, our results suggest that public support for AI governance is more strongly related to factors reflective of broader regulatory preferences such as political orientation and individualism (Fig. 4).

The contrast our study finds between the US public's desire for AI governance and skepticism of government involvement […]

[…] A second limitation concerns our descriptions of the six AI application contexts used in our survey: while we attempted to faithfully reflect the way each application is framed in public discourse, it is likely that this discourse will evolve in ways that change their associations with particular cultural values. Respondents also differed in their familiarity with AI; knowledgeable participants (and the expert sample in particular) likely considered information from previous knowledge about the AI application contexts beyond what was provided in our vignettes, limiting fair comparisons between samples.

Third, while we believe our graduate student sample provides one informative view on the beliefs of AI experts, this group differs from other samples of AI experts, such as those studied by Aiken et al. (2020) or Zhang et al. (2021). Future work should explore how our findings generalize to groups […]
[…] while Aiken et al. (2020) surveys AI professionals in industry. See Section 3.1 and Supplement Section A for more details on the characteristics and limitations of these samples.
2. Our survey assessed cultural values before AI was introduced to avoid attitudes toward AI influencing cultural values through priming effects.
3. We consider this alternate model in Supplement Section E.1.4.
4. Because we anticipated that synthetic biology was likely to be less familiar to respondents, this technology featured a one-sentence description.
5. In two small pilot surveys using the Lucid Theorem platform (N = 50 and N = 150; see Supplementary Materials for details), we found that the condensed cultural cognition worldview scale of Kahan et al. (2007), which used both positively- and negatively-worded items for each construct, had poor reliability. Based on the results of other recent studies that found reliability issues with the […]

[…] model achieves adequate fit despite a statistically significant result from this test is consistent with standard SEM practice and our preregistration (Kline 2016).
11. One notable exception was the residual variance of the support for use variable (−0.16), whose relatively large magnitude suggested some caution when interpreting results such as the coefficient of determination for this variable.
12. Recall from Section 3.2 that two divides in related literature limit direct comparison of our results to some other work on the impact of cultural values on public attitudes toward technology. First, the constructs of individualism and egalitarianism that we adapt from Kahan et al. (2009) do not model hierarchy and fatalism, cultural elements argued to be important by the broader cultural theory literature (Johnson et al. 2020; Ripberger et al. 2012). Second, like some other literature but unlike Kahan et al. (2009), we model individualism and egalitarianism as discrete constructs rather than […]
Campolo, A., Sanfilippo, M., Whittaker, M., et al. (2017) AI Now 2017 Report. New York: AI Now Institute.
Cave, S., Craig, C., and Dihal, K. (2018) Portrayals and Perceptions of AI and Why They Matter. London: The Royal Society.
Chong, D., Citrin, J., and Conley, P. (2001) 'When Self-Interest Matters', Political Psychology, 22: 541–70.
Coppock, A. and McClellan, O. A. (2019) 'Validating the Demographic, Political, Psychological, and Experimental Results Obtained from a New Source of Online Survey Respondents', Research and Politics, 6.
Dake, K. (1991) 'Orienting Dispositions in the Perception of Risk: An Analysis of Contemporary Worldviews and Cultural Biases', Journal of Cross-Cultural Psychology, 22: 61–82.
Davis, F. D. (1985) 'A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results', PhD thesis, Massachusetts Institute of Technology.
Gardezi, M. and Arbuckle, J. G. (2020) 'Techno-Optimism and Farmers' Attitudes toward Climate Change Adaptation', Environment and Behavior, 52: 82–105.
Gudowsky, N., Peissl, W., Sotoudeh, M., et al. (2012) 'Forward-Looking Activities: Incorporating Citizens' Visions: A Critical Analysis of the CIVISTI Method', Poiesis & Praxis, 9: 101–23.
The Harris Poll (2017) 'Monthly Harvard-Harris Poll' <https://harvardharrispoll.com/wp-content/uploads/2017/11/HCAPS-October_Topline-Memo_with-banners_Registered-Voters-Social-Media.pdf> accessed 13 Oct 2020.
Hoffmann, M. H. G. (2020) 'Reflective Consensus Building on Wicked Problems with the Reflect! Platform', Science and Engineering Ethics, 26: 793–819.
Hornsey, M. J., Harris, E. A., and Fielding, K. S. (2018) 'Relationships among Conspiratorial Beliefs, Conservatism and Climate Scepticism across Nations', Nature Climate Change, 8: 614–20.
Marangunić, N. and Granić, A. (2015) 'Technology Acceptance Model: A Literature Review from 1986 to 2013', Universal Access in the Information Society, 14: 81–95.
Marchant, G. E., Tournas, L., and Gutierrez, C. I. (2020) 'Governing Emerging Technologies through Soft Law: Lessons for Artificial Intelligence', Jurimetrics, 61: 1–18.
Mavrommati, G., et al. (2021) 'A Methodological Framework for Understanding Shared Social Values in Deliberative Valuation', Ecological Economics, 190: 107185.
Meadows, D. H., Meadows, D. L., Randers, J., et al. (1972) The Limits to Growth: A Report for the Club of Rome's Project on the Predicament of Mankind. New York: Universe Books.
Mehrabi, N., Morstatter, F., Saxena, N., et al. (2021) 'A Survey on Bias and Fairness in Machine Learning', ACM Computing Surveys, 54: 1–35.
Morning Consult (2017) 'National Tracking Poll' 170401 <https:// […]
Selwyn, N., Cordoba, B. G., Andrejevic, M., et al. (2020) AI for Social Good? Australian Public Attitudes toward AI and Society. Melbourne: Monash Data Futures Institute.
Sharma, P. (2010) 'Measuring Personal Cultural Orientations: Scale Development and Validation', Journal of the Academy of Marketing Science, 38: 787–806.
Sjöberg, L. and Drottz-Sjöberg, B.-M. (1993) Attitudes to Nuclear Waste. Stockholm: Stockholm School of Economics, Center for Risk Research.
Sloane, M., Moss, E., Awomolo, O., et al. (2020) 'Participation Is Not a Design Fix for Machine Learning', arXiv: 2007.02423 [cs].
Slovic, P., Flynn, J. H., and Layman, M. (1991) 'Perceived Risk, Trust, and the Politics of Nuclear Waste', Science, 254: 1603–7.
Smith, A. (2018a) 'Public Attitudes toward Computer Algorithms'. Pew Research Center, 16 Nov.
Smith, A. (2018b) 'Public Attitudes toward Technology Companies'. Pew Research […]