
Computers in Human Behavior 126 (2022) 107007


The effects of cultural dimensions on algorithmic news: How do cultural value orientations affect how people perceive algorithms?

Donghee Shin a,*, Veerisa Chotiyaputta b, Bouziane Zaid c

a Zayed University, United Arab Emirates
b Panyapiwat Institute of Management, Bangkok, Thailand
c University of Sharjah, United Arab Emirates

ARTICLE INFO

Keywords:
Chatbots
Cross-cultural study
Chatbot news
Cultural dimension
Algorithmic culture

ABSTRACT

How do cultural values influence/are influenced by algorithms? A comparative study was conducted between the United States (US) and the United Arab Emirates (UAE) to investigate how users in the two cultures perceive the features of chatbot-driven news and how they view ethical issues concerning chatbot journalism. Different models of chatbot news perception reveal that the acceptance of chatbots involves a cultural dimension, as the algorithms reflect the values and interests of their constituencies. How users perceive chatbot news and how they consume and interact with the chatbots depend on the cultural and social contexts in which the interaction takes place. Our results suggest that algorithms reflect cultural values and are implicitly situated in social contexts, mediated by cultural artifacts and activities. The results resonate with ongoing debates on whether and how algorithms reinforce cultural and social values, implying the co-evolving nature of algorithms and humans.

* Corresponding author.
E-mail addresses: [email protected] (D. Shin), [email protected] (V. Chotiyaputta), [email protected] (B. Zaid).

https://doi.org/10.1016/j.chb.2021.107007
Received 15 July 2021; Received in revised form 24 August 2021; Accepted 31 August 2021
Available online 1 September 2021
0747-5632/© 2021 Elsevier Ltd. All rights reserved.

With the dramatic advancement of artificial intelligence (AI), chatbots are rising to great popularity in all sectors of society. By adopting algorithms and AI to understand user intentions and preferences, chatbots provide an interactive conversational platform (Park, 2021; Rapp, Curti, & Boldi, 2021). Chatbots are becoming a common trend in the service industry, education, and daily life. Recently, chatbots have been applied to news media platforms, and chatbot news services are becoming popular (Jung et al., 2017). Chatbot news service (CN) uses chatbots in which conversational agents interact with readers and help them to read the news by filtering or aggregating data and other information that they need (Thurman et al., 2019). CN has brought a series of ethical issues and disruptive effects upon journalistic content, narrative, and professional practices (Zheng et al., 2018). Journalistic chatbots provide personalized and relevant news, but the issues of fairness, accountability, and transparency (FAT) are inextricably linked to algorithmic trends (Shin, 2020). Questions remain about how to safeguard journalistic principles, the ethics of news curation, and the underlying processes of chatbots (Crain, 2018; Jia, 2020). FAT, and how best to address the related issues, will be key to chatbot news services (Kim & Moon, 2021; Lewis et al., 2019).

Despite the growing use of chatbots, it remains to be examined if or how algorithmic processes affect human discretion and thoughts, how algorithmic culture is gendered, and how algorithmic culture imposes machine learning logic on human perception, constrains its possibilities, and models its behaviors (Striphas, 2015). Chatbots are designed to be human-like in their interactions, but how users perceive those interactions remains unexamined, let alone known (Lee et al., 2019; Shin, 2020). It is unclear how algorithmic culture impacts media consumption or journalistic practice, how users experience the services provided, and how the experience plays out in behavior (Rapp et al., 2021). To fill that knowledge gap, our comparative inquiry examines similarities and differences in the chatbot news consumption of individuals in the US and the UAE. Both countries have been actively developing algorithms, but their application to journalism and the consumption of news appear to show different patterns. Culturally, the two countries are quite polarized and diverse per the Hofstede cultural index (2001). Comparative observations in the West and East are a valuable way to understand how value is structured and reinforced in different cultures and to examine which factors affect attitude and user acceptance. Juxtaposing the two cases can open empirical insights into how culture affects users' algorithmic attitudes and behaviors.

In comparing algorithmic phenomena, the heuristic-systematic framework of information processing is useful, as the framework investigates how people receive and process messages.

We conceptualize a cross-cultural CN model encompassing algorithmic quality and performance value as antecedents of trust and emotional valence. A cross-cultural perspective serves as a means of exploring the embedded cultural values as well as of highlighting the varied, distinct behaviors and practices of different subcultures in algorithms (Hall, 1989). Previous studies (e.g., Fortias et al., 2019; Shin & Park, 2019) confirmed that understanding FAT greatly affects the perceived value of and satisfaction with algorithmic media. The results imply that CN can be a cultural artifact because it is driven by the user's historical input and behavior, which are essential parts of cultural dimensions. The interaction via CN is a reflection of what people feed it and thus a reflexive product of the contextual culture where the interaction takes place (Dourish, 2016).

Our study contributes to the understanding of how culture influences users' information processing of chatbots in the context of cross-cultural algorithms.

1. Algorithms and cultural value

As algorithms increasingly permeate our social and cultural sphere, it is useful to consider algorithms with/in culture as well as algorithms as culture, part of broad patterns of metaphor and practice in which society and culture are involved. Advances in AI will have significant effects on individuals and on culture. Algorithms create new information, make certain information more accessible, and change the value of some types of information and ways of interaction (Park, 2021). Thus, the view of algorithms as culture can examine the dynamic processes through which algorithms frame reality, while structuring/reinforcing how people think about algorithms and society.

1.1. How do users interact with chatbot news?

The emergence of chatbots in recent years has yielded new ways of transforming journalistic practices by using interactive dialogue via conversational agents (Thurman et al., 2019). CN can be approached in two ways: 1) as a channel for distributing journalism content via chatbots; and 2) as an engagement tool for journalists to interact with the audience, where users can use chatbots to find information, interact with their favorite news, and solve their problems quickly (Veglis & Maniou, 2019). As an AI-driven conversational agent, a chatbot is a computer program designed to simulate human conversation (Rapp et al., 2021). It allows interactions between a human and an algorithm through messages or voice commands (Kim & Moon, 2021). Personalized online news has become increasingly popular because the internet offers limitless access to news items from online resources (Kim & Lee, 2019).

Advances in algorithm technology allow users to use chatbots to seek information about trends, news, and events, and chatbots reply with relevant content (Park, 2020). Chatbots can help reporters deliver their news or gather information from sources (Veglis & Maniou, 2019). These systems then generate profiles of user preferences based on their previous online behavior and default information (Stylos et al., 2021; Walker, 2018). CN attempts to build readership through a chat interface. The conversational features of the interaction demand that journalistic chatbots present social behaviors.

1.2. Cultural dimensions and chatbots: how algorithms evolve alongside humans

Culture is reflected in the social behavior and norms found in human societies, as well as in the beliefs, knowledge, laws, customs, practices, and values of the individuals in these groups. Generally, it is defined as the underlying values, beliefs, philosophy, and ways of interacting that contribute to a unique psychological and social environment. In a cross-cultural context, Shin (2021) describes culture as the shared models of behaviors and interactions, cognitive constructs, and perceptual understanding acquired through socialization.

Although algorithms are technologically driven, they can be understood as cultural artifacts (Dourish, 2016). Seaver (2017) argues that algorithms should be approached as a culture enacted through the practices and interactions of diverse stakeholders. An algorithm is embedded socially, culturally, and historically as it is designed, developed, and consumed within a society. The ways that users perceive, interpret, and act toward algorithms can be, to a great extent, influenced by their own cultural context. Hofstede's cultural dimensions principle is a suitable comparative lens for this study because it shows how culture affects the values of its people (Hofstede, 2001) and how those values interact with people's perspectives of FAT and behavior (Cheney-Lippold, 2017). The theory proposes various orientations and dimensions along which cultural values can be compared and analyzed: individualism–collectivism (IDV); uncertainty avoidance (UA); power distance (PD); and masculinity–femininity (MAS). We chose these four cultural dimensions over the full six (which add long-term versus short-term orientation and indulgence versus restraint) because the original four dimensions are better suited to CN and algorithms.

Hofstede's Cultural Dimensional Indices. The US and UAE Hofstede index values are given in Table 1 (Hofstede, 2001). The UAE's high Hofstede dimensions are PD at 80 and UA at 78, indicating the country's high level of power distance and low tolerance of uncertainty. The high UA characteristic indicates that the country does not readily allow variation or change and is very risk-averse. The UAE has a low IDV score of 38, which indicates that the country is more collectivist than individualist. The high IDV score for the US signals a country with an individualistic attitude and relatively flexible connections with others. The US populace is more self-reliant than that of the UAE and looks out primarily for themselves and their close family members. The second highest Hofstede score in the US is MAS, with a score of 62, compared with 52 in the UAE, indicating that in the US, women are assertive and competitive, taking on traditionally masculine roles. The next lowest ranking dimension for the US is PD at 40, compared to the UAE at 80, indicating greater equality between societal levels in communities and organizations. The lowest Hofstede dimension for the US is UA, with a score of 46, compared to the global average of 64. A low UA score indicates that the society has few rules and does not attempt to control all outcomes and results.

These cultural dimensions can be categorized into two higher-order dimensions: procedural and performative (Shin, 2021). Procedural dimensions are concerned with how algorithms work and function, whereas performative dimensions concern how well algorithms serve the interests and well-being of users. This categorization aligns nicely with Hall's (1989) high-context and low-context distinction. While low-context cultures value accurate, precise, and explicit recommendations, high-context cultures value messages with implicit procedure rather than explicit performance. Shin (2021) argues that PD and UA are related to procedural aspects of how algorithms work, whereas IDV and MAS are associated with the performance of algorithms.

Our review of the relevant literature shows the connection between cultural values and technology consumption. Previous research has shown that national cultures with low UA, high IDV, and low PD facilitate new technology adoption and adaptation (Eitle & Buxmann, 2020). National cultures with high PD, high UA, and low IDV are likely to resist technology adoption. Algorithms cannot shape the culture by

Table 1
Culture dimension index.

                              Hall's          Hofstede    UAE    US
                              categorization  dimension
Procedural dimensions of      High Context    PD          80     40
algorithms                                    UA          68     46
Performative dimensions of    Low Context     IDV         38     91
algorithms                                    MAS         52     62
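The index values and the procedural/performative grouping of Table 1 can be encoded directly; the following is a minimal illustrative sketch (the scores are the Hofstede values reported in Table 1, while `gap` is a hypothetical helper for comparing the two countries, not a measure used in the study):

```python
# Hofstede index scores for the two countries (values from Table 1).
HOFSTEDE = {
    "PD":  {"UAE": 80, "US": 40},
    "UA":  {"UAE": 68, "US": 46},
    "IDV": {"UAE": 38, "US": 91},
    "MAS": {"UAE": 52, "US": 62},
}

# Shin's (2021) grouping: PD/UA concern how algorithms work (procedural),
# IDV/MAS concern how well they serve users (performative).
GROUPS = {
    "procedural": ["PD", "UA"],      # Hall: high context
    "performative": ["IDV", "MAS"],  # Hall: low context
}

def gap(group):
    """Mean absolute UAE-US difference across a group's dimensions."""
    dims = GROUPS[group]
    return sum(abs(HOFSTEDE[d]["UAE"] - HOFSTEDE[d]["US"]) for d in dims) / len(dims)

for g in GROUPS:
    print(f"{g}: mean |UAE - US| = {gap(g)}")  # procedural 31.0, performative 31.5
```

Either grouping shows a sizable average gap between the two countries, which is consistent with the paper's choice of the US and the UAE as culturally polarized cases.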


themselves; it happens only when algorithms interact with humans. Algorithms that curate and sort what we see on Facebook have a strong influence on human behavior. As previous works have confirmed the co-influence of algorithms and culture, cultural values can be hypothesized to relate to chatbot news.

Hypothesis 9: The cultural dimensions influence the perception and adoption of chatbot news.

1.3. Algorithmic culture

Content-recommendation algorithms are a continuous engagement with the integration of procedure into human cognition and social experience. Algorithmic culture is the ongoing process through which computational processes are utilized to form human culture (Striphas, 2015). As cultures are systems of judgment and decision making, values, preferences, and interests are all systems of judging ideas, objects, practices, and the performance of algorithms. Algorithms are designed to simulate these forms of judgmental processes. In this light, algorithmic cultures are defined as the use of algorithmic processes to sort, categorize, and hierarchize humans, establish norms, and reinforce existing ideologies, as well as the practices of thought, behaviors, and meanings that arise in relation to those processes (Dourish, 2016). It is essential to understand the dynamic relationship between algorithms and humans, as algorithmic culture involves both algorithms learning to make decisions about culture and humans learning to address those algorithms. Chatbot platform designers program algorithms that can learn to simulate user judgments about news and content and to predict which users will need which news. When CN uses algorithms to determine whether an idea for news or content will be popular, producers begin to prepare the kinds of news and content they write in relation to how they might be assessed by algorithms. This co-evolving relation leads to a situation where culture conforms to users and reinforces existing values rather than confronting them. Personalized recommendations produce more user data, which in turn generate more sophisticated recommendations, resulting in a solid feedback loop in which algorithmic culture conforms to, instead of confronting, its users and societies.

1.4. Research questions

For the cultural comparison, we chose the US and the UAE because the two are not only geographically and culturally distinct when juxtaposed, but also similar in their chatbot development. Both countries are actively involved in algorithm development and have emerging algorithm and AI markets. The two countries have almost opposite cultural orientations per previous research (Hofstede, Hall, & Shin), and there has been scarce research on the two countries, particularly in the cross-cultural algorithmic context. Chatbots viewed from a cultural perspective resonate closely with the current debate around #CommunicationSoWhite, which has arisen in response to troubling Western-centric trends in digital media discussion (Chakravartty et al., 2018). The current discussion of chatbot media has been predominantly US-centric or Western-oriented. This might be because algorithms have been mainly developed by such countries and primarily diffused and consumed by such societies. Also, scholarly discussions surrounding chatbots are mostly driven by scholars from the Western hemisphere, and the views on chatbots have been skewed and are thus not representative of broader global perspectives. It is appropriate to test whether the relationships between the cultural dimensions and algorithm acceptance variables vary between the two countries. We examine the perception/experience of viewing the news through a CN by addressing the following research question:

RQ: What are the cross-cultural differences in users' heuristic and systematic evaluations of chatbot news? What are the roles of algorithmic cultures in creating and reinforcing sets of norms?

2. User perceptions of algorithmic recommendations in cultural contexts

The proposed model describes users' cognitive and emotional responses to chatbots in journalism (Table 2). The model includes cognate constructs that influence emotional valence. The factors of FAT are posited as heuristic antecedents of chatbot evaluation, and the model includes the additional key concept of trust as a mediator. FAT can work as a heuristic when making evaluations about a chatbot's normative value and trustworthiness.

2.1. Procedural norms of chatbots

The notion of FAT has been critical in media and journalism studies, including digital journalism (Diakopoulos & Koliska, 2016). Chatbots are designed to offer personalized recommendations using predictive information. Whether such results actually reflect user values and preferences, and how the processes are managed, remain controversial. In news services by chatbots, trust describes a reliable belief in the accuracy of news recommendations and is understood to increase user readiness to use the recommender system's capabilities (Graefe et al., 2018). Trust thus denotes how reliable and credible a system is. When a mechanism is transparent, users can revise their input data to improve recommendation outputs, which requires that algorithm users understand the logic of the recommendation system (Shin, 2021). Waddell (2019) argues that how users perceive the transparency of algorithm news can positively affect perceived credibility. Therefore, we propose the following relationships:

H1. User assessments of transparency positively influence user trust in the credibility of chatbots.

H2. User assessments of accuracy positively influence user trust in the credibility of chatbots.

H3. User assessments of fairness positively influence user trust in the credibility of chatbots.

2.2. Personalization and accuracy

Chatbots provide a set of personalized services that guide users through huge amounts of information. Personalized news should be accurate because users expect personalized content that matches their expectations (Zheng et al., 2014). Accuracy and personalization are interrelated notions that together influence how a user perceives the utility of the system. Accuracy represents whether a CN predicts items that people have already rated or viewed before; a CN with optimized precision will prioritize content tightly connected to previous results for its users. Personalized and accurate news recommendations filter information by discovering a user's interests and needs from their profile or search history. When users confirm that CNs are personalized to their interests, they consider the service useful and feel satisfied with the content (Shin, 2020). Users view an algorithm as useful when they perceive the recommended information as accurate. Previous studies have confirmed these

Table 2
Summary of hypotheses and cultural implications.

      Shin (2021)               Hall (1989)   Hofstede (2001)
H1    Procedural                High          PD/UA
H2    Procedural                High          PD/UA
H3    Procedural                High          PD/UA
H4    Performative              Low           IDV/MAS
H5    Performative              Low           IDV/MAS
H6    Performative/Procedural   Low           IDV/MAS
H7    Performative/Procedural   Low           IDV/MAS
H8    Trust Dimension           High          N/A
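The hypothesis-to-dimension mapping of Table 2 can be expressed as a small lookup structure; the sketch below is illustrative only (the tuples mirror the table's three columns, and the H7 entry is read as sharing H6's mixed category, as the table suggests):

```python
# (Shin 2021 category, Hall 1989 context, Hofstede 2001 dimensions),
# one entry per hypothesis, mirroring Table 2.
HYPOTHESES = {
    "H1": ("Procedural", "High", "PD/UA"),
    "H2": ("Procedural", "High", "PD/UA"),
    "H3": ("Procedural", "High", "PD/UA"),
    "H4": ("Performative", "Low", "IDV/MAS"),
    "H5": ("Performative", "Low", "IDV/MAS"),
    "H6": ("Performative/Procedural", "Low", "IDV/MAS"),
    "H7": ("Performative/Procedural", "Low", "IDV/MAS"),
    "H8": ("Trust Dimension", "High", "N/A"),
}

def hypotheses_for(category):
    """Return the hypothesis labels whose Shin category contains `category`."""
    return [h for h, (shin, _, _) in HYPOTHESES.items() if category in shin]

print(hypotheses_for("Procedural"))  # H1-H3 plus the mixed H6/H7
```

Grouping the hypotheses this way makes the later multigroup comparison easy to follow: the procedural set is expected to matter more in high-context (high PD/UA) cultures, the performative set in low-context ones.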


relationships in various algorithm applications (Haim et al., 2018) by finding that accuracy/personalization are determinants of trust and satisfaction (Zheng et al., 2018).

H4. Trust has a significant effect on the perceived accuracy of chatbots.

H5. Trust has a significant effect on the perceived personalization of chatbots.

Previous research has shown that users with intrinsic or existing trust in certain media tend to see other media as trustable (Shin et al., 2020). Users' existing level of trust can influence their trust assessment of other media. This logic makes sense in that users with favorable trust in media and journalism in general tend to show higher trust in other similar media outlets (Smith, 2018). As CN can be an extension of existing media, this trust influence can be applied to CN: users may see CN as more trustworthy when they have high trust in existing media. Thus, users' existing media trust can be hypothesized to influence their level of CN trust.

H8. Existing trust in media has a significant effect on trust in chatbots.

2.3. Perceived utility and emotional valence

In recent research on chatbots, efforts have been made to understand the dimension of emotional valence in AI, such as emotional chatbots or emotionally aware conversational agents (Lee, 2018). Emotion in a chatbot user plays a key role in the interaction with the algorithm, particularly in regard to the acceptance of highly complicated AI (Shin & Park, 2019). Emotions significantly influence cognitive processes in humans, including perception, intentions, and behaviors.

What people feel and what they want to feel provide important clues for chatbot development. As AI interprets and responds to human emotion, it is important to consider emotion as a factor influencing user behavior. Previous chatbot research has shown that user-perceived personalization, accuracy, and utility influence user attitudes toward the system (e.g., Jones & Jones, 2019). Users consider their acceptance of CN in terms of how convenient and beneficial it is to use. In user interactions with AI, emotions make up attitudes, and attitude also significantly affects human emotions. When users confirm the value of a system, their emotion toward the AI algorithm becomes positive.

H6. Perceived accuracy positively affects user emotion toward chatbots.

H7. Perceived personalization positively affects user emotion toward chatbots.

3. Study design

3.1. Sample

This study is based on cross-sectional, self-administered surveys performed with samples from the US and the UAE in 2020. Based on factors identified in preliminary studies, we composed our survey questionnaire using multiple rounds of input from researchers, experts, and industry specialists. Before data collection, a power analysis was performed to estimate the minimum sample size needed to discriminate significant differences at the p < .05 level. The expected effect size (Cohen's d) of this study was calculated to be d = 0.39, and the power was set at 85% with a two-tailed alpha level of 0.05 (Cohen, 1988). The required sample size was estimated to be 310 participants in each country to satisfy the power level for this study. Therefore, we recruited 340–375 individuals in each country (from August to March 2021) via Amazon Mechanical Turk in exchange for monetary compensation of between $2 and $5. The sample was limited to respondents who had prior experience with algorithm services (automatic recommendations, content suggestions, conversational agents, etc.). Considering the nature of comparative studies, efforts were made to collect equivalent data. A total of 370 questionnaires were acquired in the UAE, 350 of which were valid (94.5%). In the US, 375 questionnaires were gathered, of which 355 (94.6%) were valid. Finally, a total of 705 questionnaires were considered for further analysis.

3.2. Procedure and stimulus

Upon agreeing to a consent form, respondents completed a pre-screening questionnaire that measured their prior experiences with algorithms and chatbots. Afterward, they were randomly assigned to sort news recommended through chatbots and instructed to interact with the CN as much as they wanted. CNN and Quartz were chosen as stimulus materials because users in both countries widely view their news. Participants could choose either CNN or Quartz for the experiment. After downloading the chat application, participants were asked to use, interact with, and make inquiries of the chatbots, and they could seek specific information or interesting news articles that they would like. When participants did not know how to interact, specific instructions were given to find particular topics of news (such as political issues or societal topics). Participants were provided a list of possible news topics (combinations of international and local issues) that they might seek via chatbots. The stimulus materials and survey were in English in both countries. The experiments lasted about 30–60 min. Participants concluded the experiment by completing a self-reported questionnaire that measured their normative values and assigned performance values. To validate the reliability of the responses, a series of confirmation questions were added to the survey. With the initially collected data, data refinement was performed in terms of the consistency, validity, and reliability of the responses.

3.3. Data measurements

All the measures in this study were based on scales previously validated in the literature. The measured items were tested with Cronbach's alpha, and those scores varied between 0.72 and 0.91, indicating acceptable internal consistency (Table 3). The convergent validity of the constructs is also suitable, as evidenced by the average variance extracted (AVE) values. The standard deviations for all constructs are desirable, indicating that the measurement scores were around the average scores.

4. Results

A comparison of the data from the two countries points to the cultural influence on the adoption and consumption of news algorithms.

Table 3
Descriptive statistics.

                          UAE               United States
Age (Mean/SD/Median)      37.29/14.13/35.4  38.18/13.10/34.2
Gender (female rate)      50.98             51.25
College education (%)     30.10             28.22
Chatbot experience        1.5 years         2 years

Reliability and validity.

                                UAE             United States
               Factor           α       AVE     α       AVE
High Context   Transparency     .884    .671    .773    .664
               Fairness         .806    .626    .705    .548
               Accountability   .854    .7613   .858    .813
Low Context    Accuracy         .868    .8380   .723    .642
               Personalization  .835    .8106   .777    .756
Attitude       Trust            .842    .8520   .844    .675
               Media Trust      .899    .7833   .911    .783
               Emotion          .889    .8196   .897    .812
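Reliability figures like those in Table 3 are computed from raw item responses with Cronbach's alpha; the following is a minimal sketch using made-up five-point ratings for a hypothetical three-item trust scale (illustrative data, not the study's):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    items: list of k lists, each holding the n respondents' scores for one
    scale item. alpha = k/(k-1) * (1 - sum(item variances) / variance of
    the respondents' total scores).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Illustrative ratings: three trust items, six respondents (not study data).
trust_items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 4, 2, 4, 3, 5],
]
print(round(cronbach_alpha(trust_items), 3))  # → 0.871
```

A value of roughly 0.87 would fall inside the 0.72–0.91 range the study reports, i.e., acceptable internal consistency.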


The findings of the multi-group analysis are notable in terms of both the heuristic (normative belief) and systematic (functional feature) evaluations.

4.1. Model fit

We acquired results for the incremental fit index (IFI) = .91/.92, the root-mean-square residual = .01/.05, the NFI = .96/.97, and the CFI = .92/.90 for the two samples. Hoelter's values showed acceptable results for both cases. Together, these figures show evidence of a reasonably good fit. Internal consistencies for the nine measurements were also robust, supported by a coefficient alpha of 0.94.

4.2. National differences

Independent t-testing was conducted to verify possible statistical variations of measures between the samples from the two countries (Table 4). The independent t-testing showed higher levels of performance qualities (functional features) among UAE users, whereas US users were more concerned with normative values. UAE users were significantly more concerned with accuracy, usefulness, and personalization than US users. The UAE users were significantly more trusting than the US users (M_US = 3.74, M_UAE = 4.29, t = 0.00). The UAE users showed a higher level of media trust than US users (M_US = 3.23, M_UAE = 4.98, t = 0.00). Emotion was lower in the US sample than in the UAE one (M_US = 4.01, M_UAE = 4.48, t = 0.00). Overall, the table reveals significant differences. The items are not distinct in every case, but clear and significant differences in the factors occurred across countries.

Table 4
T-test for national differences.

                UAE (n = 350)     United States (n = 355)   t value (Sig.)
                Mean    SD        Mean    SD
Fairness        3.70    1.315     4.22    1.141             −5.472 (0.000)**
Transparency    4.02    1.236     4.25    1.131             −1.029 (0.042)*
Accountability  4.21    1.247     4.31    1.289             −1.274 (0.048)*
Emotion         4.46    1.178     4.12    1.313             0.434 (0.000)**
Accuracy        4.70    1.246     4.32    1.531             11.27 (0.000)**

4.3. Multigroup analysis: testing for multigroup invariance

The hypotheses that correspond to the heuristic and systematic metrics, respectively, were rejected in each case (see Fig. 1). The results indicated distinct differences in path formation and item composition, providing insights into dissimilar value structures (Table 5). In the US sample, the paths from personalization and accuracy to emotion were rejected, and in the UAE case, the paths from transparency and fairness to confirmation were rejected. Additionally, the paths from transparency, accountability, and fairness to confirmation were significant with high coefficient values in the US case (CR 0.399; 0.500; 0.785), whereas their counterpart values in the UAE were either low or rejected. In general, heuristic factors were important antecedents of confirmation for US users, whereas systematic factors were more important determinants of emotion for UAE users (CR: 0.640; 0.196; 0.117). Fig. 2 shows that the differences in the patterns of heuristic and systematic processing between the countries are clear. The results of squared multiple correlations also support the proposition of different value structures in the two groups.

4.4. Do algorithms reinforce cultures?

The findings are largely consonant with the proposition that algorithms reinforce existing cultural stereotypes and strengthen existing cultural practices, social perspectives, and even power structures. Hofstede's cultural dimensions reflect values regarding gender inequalities, power structure, societal hierarchy, and political values. The masculinity–femininity dimension shows the distribution of gender roles, and thus, the dimension is related to the fairness and transparency factors in the model. The power distance dimension shows how a culture sees power relationships between people, which can correspond to both transparency and accountability. The uncertainty avoidance dimension shows the degree to which the members of a society feel uncomfortable with uncertainty and ambiguity, which can be related to the accuracy and trust factors in the model. The individualism–collectivism dimension can be associated with the factors of personalization and emotion in the model.

Table 6 illustrates how algorithms reinforce gender, power, inequality, and politics. Perceived values in each cultural dimension are averaged. The results show that three cultural dimensions are largely aligned with the factors in the model. The higher the PD value of the UAE, the higher its transparency and accountability values are relative to those of the US. The same differences were found for UA and IDV, where the corresponding values show higher means than those of the US. Only MAS shows the opposite pattern: the corresponding values for the UAE show a higher mean than those for the US. That implies the UAE is more masculine than the US, even though the US MAS score (62) is higher than the UAE's (52). This indicates that Hofstede's initial survey had estimation errors, particularly in the masculinity–femininity dimension, which directly contradicts the common view that US society is generally considered more feminine than the UAE. While our bootstrapping analysis reveals a discrepancy between Hofstede's initial dimensions and contemporary people's perceptions, it further confirms that algorithms reinforce existing cultural values, social relations/practices, and norms.

5. Discussion: co-evolution of human and algorithm

Given the lack of research on the relationship between chatbot acceptance and culture, we cross-analyzed the CN information processing model to examine cross-cultural value structures in user interaction with algorithm-driven chatbot news. Findings show that certain similarities exist between the two cultures, but significant differences are also evident, possibly due to variations in cultural, journalistic, societal, and technological factors. Users' algorithmic information processing differs in terms of how people process the characteristics of CN and how users make sense of the chatbot news-making practice. These differences illustrate that interacting with CN involves both heuristic and systematic processing of system features, user heuristics, and performance value (Shin et al., 2020). The findings imply that people's understanding of algorithmic processes is contextual and culturally dependent (Lee, 2018). This implication echoes Shin's (2021) argument that similar algorithms can be viewed/adopted differently depending on the environments or contexts in which the algorithms are developed, adopted, and consumed. Users' perceptual assessments of CN features and trust serve as key heuristics in determining the effects of algorithmic systems on the three cognate aspects of user experience: algorithmic features, cultural factors, and performance value. Although users in both countries actively engage with and contribute to news curation, and the CN responds to all users' desires, people's sensemaking of chatbots differs. This implies the need to create culturally sensitive CN for sustainable development in the future.

5.1. Heuristic versus systematic processing

The findings show that heuristic and systematic qualities are valid
Personalization 4.83 1.192 3.74 1.355 6.318 (0.000) ** differentiators of CN use and emotion in both cultures. However, CN
Trust CN 4.49 1.493 3.73 1.335 3.124 (0.01)* users in the two cultures differed in how they sensed, perceived, and
Trust Media 4.98 1.401 3.23 1.914 2.404 (0.001)*
preferred to interact with the CN in terms of the procedural-
*p < .05; **p < .01; ***p < .001. performative dimensions of algorithms. The findings indicate that UAE


Table 5
Multiple group test results.

     UAE                                                      United States
     Unstandardized   SE     Critical   p                     Unstandardized   SE     Critical   p
     coefficient             Ratio                            coefficient             Ratio
H1   .222             .117   1.892      .059                  .238             .085   2.706      **
H2   .379             .248   1.529      .126                  .427             .064   6.719      ***
H3   .046             .175   .261       .794                  .334             .085   3.936      ***
H4   1.028            .060   17.023     ***                   .889             .091   9.771      ***
H5   .649             .059   10.986     ***                   .679             .087   7.820      ***
H6   .591             .080   7.424      ***                   .013             .123   0.110      .913
H7   .208             .049   4.219      ***                   .014             .049   0.282      .778
H8   .874             .063   13.812     ***                   .529             .049   10.751     ***

*p < .05; **p < .01; ***p < .001.
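A standard follow-up for a multigroup comparison such as Table 5 is a pairwise critical-ratio z-test on the unstandardized coefficients and their standard errors across groups. The paper does not report these pairwise tests; the sketch below (Python, standard library only) merely illustrates the computation, using the H6 values from Table 5 as input.

```python
import math

def coeff_diff_z(b1, se1, b2, se2):
    """z-test for the difference between two groups' unstandardized
    path coefficients, using the pooled standard error."""
    z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p-value under normality
    return z, p

# H6 (UAE: b = .591, SE = .080; US: b = .013, SE = .123), from Table 5
z, p = coeff_diff_z(0.591, 0.080, 0.013, 0.123)
# z is about 3.94, p < .001: the H6 path is significantly stronger in the UAE sample
```

This is consistent with the qualitative pattern reported above: the accuracy-related path holds in the UAE group but is rejected in the US group.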

SEM Multiple Group Analyses

Model                AIC      CFI    RMR     GFI     RMSEA   PCLOSE
Unconstrained        290.73   .830   0.083   0.967   0.01    .990
Structural weights   275.29   .843   0.071   0.968   0.05    .997

Model Comparison

Model                DF   CMIN     P      NFI    IFI    TLI
Unconstrained        18   17.923   .288   .023   .027   .008
Structural weights

R² Comparison

                  UAE    United States
Trust             .340   .664
Personalization   .495   .328
Accuracy          .850   .303
Emotion           .660   .370

[Fig. 1 diagram: a structural model in which perceived transparency, perceived accountability, and perceived fairness (expectation of normative value; high-context heuristic procedural processing) and accuracy and personalization (perceived performance; low-context systematic processing) are linked through hypotheses H1–H8 to algorithmic trust, media trust, and emotional valence.]

Fig. 1. Algorithmic information processing in chatbot interaction.

users are more involved by functional qualities that affect the performance of the CN, such as usefulness, accuracy, and personalization, than by procedural qualities, whereas users in the US revealed the opposite pattern. To US users, FAT can work as a heuristic when making evaluations about the trustworthiness of a CN, whereas UAE users judge technological quality in terms of functional features. This procedure–performance framework is consonant with the cultural dimensions found in this study. The procedural dimensions resonate with UA/PD, whereas performance values are aligned with MAS/IDV.

Different attitudes were found to be significant for US users compared to UAE users, and those attitudinal differences are related to the different levels of trust and emotional valence. The differences in the pattern are also related to perceived quality. UAE users who perceive the outcome values of a CN believe that accuracy, personalization, and utility are the crucial factors that influence their emotions. US users who perceive procedural values, on the other hand, consider FAT to be the vital factor that influences their confirmation.

The R-squared measures in the model support those propositions. Performance qualities explain 66% of the variance in emotion toward the CN for UAE users and only 37% for US users. Procedural qualities explain 66% of the variance in trust for US users and only 34% for UAE users. UAE users are more likely to be satisfied by the utility of the CN than US users, whereas US users are more likely to be confirmed through a heuristic evaluation of procedural qualities than UAE users. Other underlying factors might explain confirmation for the UAE users, and there might be other variables accounting for emotion among US users. Perhaps US users anticipate more than performance in terms of algorithm emotion, whereas UAE users expect more than FAT in terms of the confirmation of procedural qualities. Perceived procedural and performance qualities are perhaps meticulously interdependent, co-influencing CN users across cultures. Per the cultural dimensions, the higher IDV and MAS of US users explain the higher procedural value,


[Fig. 2 diagram: the structural paths of Fig. 1 annotated with the estimated coefficients for each hypothesis, e.g., perceived fairness → algorithmic trust: 0.046 (UAE) vs. 0.334*** (US); accuracy → emotional valence: 0.591*** (UAE) vs. 0.013 (US). The full set of coefficients appears in Table 5.]

Note. The UAE coefficients are shown first, and the US coefficients are shown in bold.

Fig. 2. Compared structural paths.

whereas the higher PD and UA of UAE users are consonant with the higher performance value in the model. For example, individuals with low UA will rely on chatbot algorithms to seek information and thereby reduce complexity and uncertainty. Individuals with high IDV are likely to use chatbot algorithms to find individualized and personalized content.

People's understanding of algorithmic processes can be culturally relevant and contextually nuanced. For example, the differing extent of UA results in different technological and regulatory approaches. The US tends to take preparatory measures for privacy and data, while the low index of the UAE is reflected in the absence of data protection regulations. In addition, people in the US might be more skeptical about algorithmic performance in the CN context than people in the UAE. This finding is in agreement with Fletcher, Kalogeropoulos, and Nielsen's (2019) finding that US users navigate to algorithmic news on social media with generalized skepticism, because most people in the US have low trust in how algorithm news is selected and tend to be doubtful about how algorithms select news (Zheng et al., 2018). Perhaps the UAE users who responded to our survey do not understand exactly how the news they receive is curated or by what algorithms; thus, they receive news without concerns about FAT. While US users might not understand the specific algorithmic process either, they have widespread concerns about FAT and skepticism about algorithmic selection; thus, individuals in the US tend to critically review algorithmic issues, including FAT (Smith, 2018). UAE users tend to accept algorithms without question, assuming that the procedural aspects are legitimate, and they thus tend to trust algorithms more than US users do. UAE users could consider CN to be more reliable and trustworthy than US users do. They probably care more about the results/performance of the news curated by the CN, in terms of whether it is predictable, accurate, useful, and convenient, than about how, why, and when their data are being used. Given the difference in procedure and performance, a legitimate question for US users is "how chatbots produce better results," whereas UAE users may ask "what can a chatbot do for users." The different questions of how and what are related to different cultural dimensions.

Table 6
Culture Dimension and Algorithmic Reinforcement through Bootstrapping.

Context              Dimension   UAE     US      Constructs                    UAE     US
High Context         PD          80      40      Transparency/Accountability   4.82    4.24
Low & High Context   UA          68      46      Accuracy/Trust                5.89    4.23
Low & High Context   IDV         38      91      Personalization/Emotion       3.21    4.99
High Context         MAS         52      62      Fairness/Transparency         4.89    3.18

5.2. Cultural dimensions

The findings indicate that algorithms can be accepted differently depending on the cultural contexts in which they are consumed and interacted with. The US has higher IDV and MAS and shows a higher tendency of chatbot adoption than the UAE. Similarly, the US has lower UA and PD and shows a higher tendency of adoption than the UAE. Given these differences, the impact of cultural dimensions on CN is evident. As to individualism-collectivism, the US has a higher individualism score (91) than the UAE (38), and US users show higher scores in normative beliefs about algorithms, whereas UAE users place greater emphasis on the results of CN. An algorithm by nature is designed to provide individualized and personalized news filtering and curation. When US users seek individualized news, they would like to ensure the procedural aspects of personalized curation rather than the results. Collectivists may rely on the CN algorithm, an alternative to interpersonal communication, to a greater extent than individualists.

UAE users, characterized by a higher UA (68) than US users (46), show clear preferences for performance over procedural values. It may be inferred that UAE users' low tolerance for ambiguity and vagueness leads them to assume that the process follows structured rules and that the results should be accurate, useful, and convenient. US users, with their low UA, may accept new algorithms and individualized news services, but they would also like to ensure the procedures of how the algorithms are legitimately constructed and how their data are rightly processed for that construction. The notion of uncertainty lies in whether users look at procedural or substantive grounds. US users may seek to avoid uncertainty in the process, whereas their UAE counterparts may look for certainty in the result/performance.

The UAE has a PD score of 80 (40 for the US) and is considered a hierarchical society in which rules and regulations are taken seriously. This means that UAE users might take the centralized process of the algorithm for granted. With the low PD of US culture (40), users may think they have legitimate rights to question firms/organizations about the normative procedures of algorithms. As to MAS, the US scored 62 on Hofstede's scale of masculinity, whereas the UAE scored 52 on this dimension. This categorization is rather problematic in application to this case, as per common sense, the US is considered more feminine than the UAE. A plausible explanation might be that the assertiveness traits of masculinity are related to normative beliefs about social media algorithms.
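The dimension-by-dimension contrasts in Table 6 rest on bootstrapped group comparisons. As a rough illustration of the resampling logic only (with made-up 7-point ratings, not the study's data), a percentile bootstrap for a cross-country mean difference can be sketched as:

```python
import random

random.seed(7)

def bootstrap_mean_diff(a, b, n_boot=5000, alpha=0.05):
    """Percentile-bootstrap confidence interval for the difference
    in means between two independent groups."""
    diffs = []
    for _ in range(n_boot):
        ra = random.choices(a, k=len(a))  # resample with replacement
        rb = random.choices(b, k=len(b))
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot)]
    return lo, hi

# Hypothetical ratings for one construct in each country (illustrative only)
uae = [5, 4, 5, 6, 4, 5, 5, 6, 4, 5] * 10
us = [4, 3, 4, 4, 5, 3, 4, 4, 3, 4] * 10
lo, hi = bootstrap_mean_diff(uae, us)
# If the interval excludes zero, the cross-country difference is credited as significant
```

When the resulting interval excludes zero, as with the clearly separated hypothetical samples here, the construct is treated as differing across the two countries.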


High MAS cultures are motivated by wanting to be the best, and this trend might affect US users' normative beliefs.

Together with the identified cultural dimensions, the differences can be related to the black-box nature of CN algorithms (Ananny & Crawford, 2018). Algorithmic news is a black box in that whatever happens inside an algorithm is known only to the CN firm that operates it; users do not know the inside process. Current CN algorithms cannot explain their results to the public and end users. US users have low PD with firms and authorities, and they may request knowledge of algorithms' internal workings. Being individualistic and having low uncertainty avoidance, US users tend to be pragmatic and tolerant of change, and thus they are willing to accept innovative algorithms; at the same time, they would also like to see their CN be fair, accountable, and understandable. On the other hand, UAE users (high uncertainty avoidance and collectivism) tend to seek distinct performance results (accurate, relevant, personalized, and timely news) through a seemingly well-structured, automated black-box algorithm. Being high power distance, UAE users seem to allow certain levels of inequality (between CN firms and users, in the way the firms exclusively collect user data and control the analytic processes) with low levels of procedural transparency, and thus are more willing to accept the black-box feature without question as long as algorithmic performance satisfies their information-seeking needs. UAE users readily accept their private data being used for efficiency and performance, but US users feel much less comfortable even when their data are used to increase accuracy and personalization. This finding is consonant with Smith's (2018) survey, in which about 38 percent of non-Americans believed algorithms portray an accurate description of society, compared with just 20 percent of Americans.

5.3. Culturally sensitive CN

The enormous amount of data that algorithmic media possess about their users is eventually what allows these algorithms to deliver contextually personalized news in an encoded manner. Our cross-cultural juxtaposition confirms that users' comfort level with this behavior is largely culture-dependent and must be analyzed and interpreted as such. The cultural dimensions significantly influence the perception of, and involvement in, adoption decisions about chatbot news. User attitudes and heuristics in different contexts are key explanatory variables when examining cross-cultural CN use models. The cultural dimensions fit nicely into chatbot algorithms because they are based on conversational functions. Dialogic conversations reflect human values and societal norms, and what chatbots recommend and how they interact with users are based on cultural dimensions that the users and chatbots share. The design, development, and consumption of algorithms are highly dependent on cultural dimensions and values (Shin, 2021; Striphas, 2015). Given the cultural dimensions in algorithm acceptance, cultural sensitivity can be considered in the future design and development of CN algorithms. Developing algorithmic media that are aware of cultural differences and diversities will be an important task. Culturally sensitive CN emphasizes the understanding of a user's background, culture, and belief system, addressing the interrelations of users and algorithms in the construction of meaning and social interaction. The idea of the culturally sensitive algorithm can be considered an essential means of observing the cultural evolution process in users and should be used to solve the complex provisions of new global media search algorithms. Given that algorithmic media work across cultural boundaries, understanding cultural dimensions is key in a globalized environment where global media firms plan to increase market share in different countries. For example, the CN algorithm can use the cultural dimensions to provide global users with a culturally informed menu, e.g., localized coverage or regional content, just as YouTube and Netflix curate localized cultural content. CN is a reflection of culture and perception because it is a way of communication and interaction (Seaver, 2017). As culture has a strong influence on the formation and user perceptions of CN, cultural sensitivity can be an underlying parameter for global CN services providing value-centered and culture-specific content.

5.4. Algorithmic culture and CN

We examined how culture influenced American and UAE users' information processing in using chatbots. Our results contribute to the human-algorithm interaction literature by providing empirical evidence on how culture influences people's perception of algorithms. Our findings provide further evidence that culture influences users' decision-making and suggest that researchers should consider cultural factors when conducting studies on algorithm adoption and when interpreting the results.

The contributions of this study are twofold. Theoretically, our results contribute to the understanding of how algorithmic innovations have transformed CN, how heuristic and systematic processes are culturally embedded in CN, how an international comparative perspective can contribute to our understanding of the topic, and how algorithms can be conceptualized as cultural artifacts. Practically, our findings also have design implications that will allow chatbot practitioners to support effective interactions, such as representing FAT in the CN interface.

The heuristics of user experience are complicated because they are closely intertwined with users' contextual individualities, and this is true regarding CN. Few attempts have been made to examine cultural characteristics closely related to user experience. Our goal was to disclose cultural variations related to the user experience of algorithms and to measure them cross-nationally in terms of CN. People in these two cultures who believed the CN services were useable and of high quality had positive trust, which afforded high emotion. Those who perceived algorithms to be procedural were more likely to display a positive attitude and approve of transparent algorithmic processes.

This study provides several contributions to the ongoing development of algorithm theory and concepts. First, we have proposed a cross-country matrix to examine why people perceive certain algorithmic attributes in certain ways. Our results show that the relationship between heuristic and systematic processes is more intricate when applied to cross-cultural milieus. This implies that algorithms are closely related to cultural values, and it further suggests that algorithms are a reflection of people's interests, societal relationships, and norms within a cultural milieu. It can be inferred that algorithms and humans co-evolve, with each informing the evolution of the other. In line with the current debate of #CommunicationSoWhite, it further shows that algorithms reinforce existing cultural and social views and attitudes toward algorithmic functions and enact new practices of cultural norms. The results illustrate the need for awareness of the ways in which algorithms strengthen or extend larger structures of culture, norms, and social practices. Algorithmic searches return images of White men when queried for professionals, whereas they return images of minority females when asked for hairstylists and massage specialists. Our results echo this trend: algorithms deliberately and/or reflexively underpin social stereotypes and cultural norms through continuous interactions with cultural entities such as users, people, and organizations.

Second, our models indicate that the expectations and values of algorithm users might also differ between the two cultures. Our framework confirmed five feature properties: FAT as latent factors that facilitate users' heuristic decision-making, and personalization/accuracy as determinants of emotion. At a conceptual level, we have shown that people's attitudes toward algorithms are contextually defined and culturally mediated in specific ways, and we call for further studies on the relationship between culture/individual features and contextuality.

From a cross-cultural lens, our results have strategic implications for CN development. The most noticeable differences between the two cultures are in their value structures and the importance of procedure versus performance. Our results imply that the industry should pursue localized and culturally customized algorithmic media services desired by target


nations and regional cultures. To increase the levels of CN adoption in diverse countries, a culturally informed, localized strategy should be used to accommodate the unique cultural features of each market. The contrasting structures of the two cultures suggest that CN users have varied experiences with, and prospects of, algorithm services. Globally, algorithm users expect their algorithms to be legitimate, in addition to useable and convenient. Besides the pragmatic benefits of usability that serve users' underlying values, procedural values comprise the designs and dimensions intended to satisfy users' rights to understand the internal operations of the AIs they use. If the global CN industry is able to identify and describe the cultural elements of value, this will allow the various algorithms to properly recognize, extract, and represent cultures. Global algorithm firms face a challenge: how can we ensure a cross-cultural user experience if we are not familiar with cultures other than our own? Thus, they should address the following culturally derived strategies: (1) How can UAE users be offered reliable and trustworthy procedural features? and (2) How can US users be offered highly useable, convenient performance?

5.5. Limitations: where do we go from here?

As an essential limitation of this study, cross-cultural comparisons are difficult and often erroneous because they involve nonmaterial traits, and there is no external mechanism by which cultures can be compared neutrally. One possible topic for future studies may focus on individual differences to see if they follow Hofstede's cultural dimensions. Given that the past 20 years have seen major technological advancement, more recent frameworks for cultural comparison, more pertinent to technology, would be desirable. For instance, the World Values Survey contains questions assessing the values of individuals on topics such as new media and technology from many countries. Related to this, the hypotheses on cultural difference were derived from culture-level comparisons, but the sample in each country was gathered collectively without consideration of these cultural dimensions. As the cultural values were not measured among the participants, the gap between the unit of analysis in the hypotheses and the data should be addressed in future studies. It would also be good to test the model across more than two cultures to verify culturally sensitive CN. Comparative research is necessary for generalizing findings, coupled with cultural and social as well as algorithmic developments.

This study draws mostly upon US-centric literature. The related studies, relevant works, and even the theories used heavily reflect the Western view of culture. Thus, this US-centric view may bias the collection of data, the analysis of findings, and the drawing of conclusions. Future studies should consider pursuing a balanced view of Western and Eastern literature. The UAE and other non-Western countries would be great counterparts to the US and Western states. Contrast and comparison of UAE and Western cultures will mitigate US-centric views, as the UAE has not been extensively researched and the cultural implications of the Middle East are largely unknown. Just like the US, the UAE represents more than 200 nationalities, and its cultural diversity is fairly high. There are different kinds of cultural diversity and social cohesion in each country. It will be interesting to compare how different cultural diversities drive algorithm development. Beyond the Western-oriented perspective, future studies may pursue queries such as: How do non-Western users assess the media characteristics or attributes of a chatbot? How do users recognize and make sense of chatbots in non-Western contexts?

Methodologically, the level of analysis in this study should be noted. The algorithm as a culture may be related to the macro and cultural level, whereas the model in this study primarily measures individual-level responses. Hofstede's cultural values are societal-level, while the study measures individual responses. In particular, a macro perspective of algorithms should be included, as existing infrastructure greatly influences the acceptance of algorithmic services. Broader views of algorithmic ecology can be considered when cultural aspects are compared.

Lastly, current developments of algorithms are characterized by constant human-algorithm interactions that call for a deeper understanding of their constituents and underpinnings. As the co-evolution of humans and algorithms is inevitable, how algorithms evolve alongside humans will be an important topic. Future studies can explore the co-evolution of algorithms and humans, focusing on the constant interaction, coupling, and mutual tuning of humans and their algorithmic counterparts.

Credit author statement

Donghee Shin (Corresponding author): Overall project leader, Conceptualization, Methodology, Software, Writing, Reviewing and Editing, Writing - original draft preparation. Veerisa Chotiyaputta (Second author): Ideation, Discussion, and Scale Development of the Cultural Dimension. Bouziane Zaid, Ph.D. (Third author): Reviewing and Editing, Discussion, and Conceptualization.

Acknowledgments

This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2017S1A3A2065831).

References

Chakravartty, P., Kuo, R., Grubbs, V., & McIlwain, C. (2018). #CommunicationSoWhite. Journal of Communication, 68(2), 254–266. https://doi.org/10.1093/joc/jqy003
Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York, USA: NYU Press.
Cohen, J. (1988). Statistical power analysis (2nd ed.). Hillsdale, NJ: Erlbaum.
Crain, M. (2018). The limits of transparency. New Media & Society, 20(1), 88–104. https://doi.org/10.1177/1461444816657096
Diakopoulos, N., & Koliska, M. (2016). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–828. https://doi.org/10.1080/21670811.2016.1208053
Dourish, P. (2016). Algorithms and their others: Algorithmic culture in context. Big Data & Society, 3(2), 1–11. https://doi.org/10.1177/2053951716665128
Eitle, V., & Buxmann, P. (2020). Cultural differences in machine learning adoption. Proceedings of the 28th European Conference on Information Systems. https://aisel.aisnet.org/ecis2020_rp/138
Fortias, A., Shin, D., & Yu, H. (2019). Prospectus and limitations of algorithmic governance. Digital Policy, Regulation and Governance, 24(4), 369–383. https://doi.org/10.1108/DPRG-03-2019-0017
Graefe, A., Haim, M., Haarmann, B., & Brosius, H. (2018). Readers' perception of computer-generated news. Journalism, 19(5), 595–610. https://doi.org/10.1177/1464884916641269
Haim, M., Graefe, A., & Brosius, H. (2018). Burst of the filter bubble? Digital Journalism, 6(3), 330–343. https://doi.org/10.1080/21670811.2017.1338145
Hall, E. (1989). Beyond culture. New York: Doubleday.
Hofstede, G. (2001). Culture's consequences (2nd ed.). Thousand Oaks: Sage Publications.
Jia, C. (2020). Chinese automated journalism. International Journal of Communication, 14, 2611–2632. www.ijoc.org
Jones, B., & Jones, R. (2019). Public service chatbots. Digital Journalism, 7(8), 1032–1053. https://doi.org/10.1080/21670811.2019.1609371
Jung, J., Song, H., Kim, Y., & Oh, S. (2017). Intrusion of software robots into journalism. Computers in Human Behavior, 71, 291–298. https://doi.org/10.1016/j.chb.2017.02.022
Kim, D., & Lee, J. (2019). Designing an algorithm-driven text generation system for personalized and interactive news reading. International Journal of Human-Computer Interaction, 35(2), 109–121. https://doi.org/10.1080/10447318.2018.1437864
Kim, K., & Moon, S. (2021). When algorithmic transparency failed. American Behavioral Scientist. https://doi.org/10.1177/0002764221989783
Lee, M. (2018). Understanding perception of algorithmic decisions. Big Data & Society, 5(1), 1–16. https://doi.org/10.1177/2053951718756684
Lee, F., Bier, J., Christensen, J., Engelmann, L., Helgesson, C., & Williams, R. (2019). Algorithms as folding. Big Data & Society. https://doi.org/10.1177/2053951719863819
Lewis, S., Guzman, A., & Schmidt, T. (2019). Automation, journalism, and human–machine communication. Digital Journalism, 7(4), 409–427. https://doi.org/10.1080/21670811.2019.1577147
Park, Y. J. (2020). Contextualizing privacy on health-related use of information technology. Computers in Human Behavior, 105, 106204.
Park, Y. J. (2021). The future of digital surveillance. Ann Arbor, MI: University of Michigan Press.
Rapp, A., Curti, L., & Boldi, A. (2021). The human side of human-chatbot interaction. International Journal of Human-Computer Studies, 151, 102630. https://doi.org/10.1016/j.ijhcs.2021.102630


Seaver, N. (2017). Algorithms as culture. Big Data & Society, 4(2), 1–12. https://doi.org/10.1177/2053951717738104
Shin, D. (2020). How do users interact with algorithm recommender systems? Computers in Human Behavior, 109, 1–10. https://doi.org/10.1016/j.chb.2020.106344
Shin, D. (2021). A cross-national study on the perception of algorithm news in the East and the West. Journal of Global Information Management, 29(2), 77–101. https://doi.org/10.4018/JGIM.2021030105
Shin, D., & Park, Y. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284. https://doi.org/10.1016/j.chb.2019.04.019
Shin, D., Zhong, B., & Biocca, F. (2020). Beyond user experience. International Journal of Information Management, 52, 1–11. https://doi.org/10.1016/j.ijinfomgt.2019.102061
Smith, A. (2018). Public attitudes toward computer algorithms. Pew Research Center Research Report, 1, 1–37. www.pewresearch.org/staff/aaron-smith
Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4), 395–412.
Stylos, N., Fortias, A., & Huan, T. (2021). Beyond smart systems adoption. International Journal of Hospitality Management, 98, 103042. https://doi.org/10.1016/j.ijhm.2021.103042
Thurman, N., Moeller, J., Helberger, N., & Trilling, D. (2019). My friends, editors, algorithms, and I. Digital Journalism, 7(4), 447–469. https://doi.org/10.1080/21670811.2018.1493936
Veglis, A., & Maniou, T. (2019). Chatbots on the rise. Studies in Media and Communication, 7(1), 1–10. https://doi.org/10.11114/smc.v7i1.3986
Waddell, F. (2019). Can an algorithm reduce the perceived bias of news? Journalism & Mass Communication Quarterly, 96(1), 82–100. https://doi.org/10.1177/1077699018815891
Zheng, L., Yang, F., & Li, T. (2014). Modeling and broadening temporal user interest in personalized news recommendation. Expert Systems with Applications, 47(7), 3168–3177. https://doi.org/10.1016/j.eswa.2013.11.020
Zheng, Y., Zhong, B., & Yang, F. (2018). When algorithms meet journalism. Computers in Human Behavior, 86, 266–275. https://doi.org/10.1016/j.chb.2018.04.046

