Artificial Intelligence


Delft University of Technology

Artificial Intelligence (AI)


Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for
research, practice and policy
Dwivedi, Yogesh K.; Hughes, Laurie; Ismagilova, Elvira; Aarts, Gert; Coombs, Crispin; Crick, Tom; Duan,
Yanqing; Dwivedi, Rohita; Janssen, Marijn; More Authors
DOI
10.1016/j.ijinfomgt.2019.08.002
Publication date
2019
Document Version
Final published version
Published in
International Journal of Information Management

Citation (APA)
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Janssen,
M., & More Authors (2019). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges,
opportunities, and agenda for research, practice and policy. International Journal of Information
Management. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Important note
To cite this publication, please use the final published version (if applicable).
Please check the document version above.

Copyright
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent
of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Takedown policy
Please contact us and provide details if you believe this document breaches copyrights.
We will remove access to the work immediately and investigate your claim.

This work is downloaded from Delft University of Technology.


For technical reasons the number of authors shown on this cover page is limited to a maximum of 10.
Green Open Access added to TU Delft Institutional Repository

'You share, we take care!' - Taverne project

https://www.openaccess.nl/en/you-share-we-take-care

Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
International Journal of Information Management xxx (xxxx) xxxx

Contents lists available at ScienceDirect

International Journal of Information Management


journal homepage: www.elsevier.com/locate/ijinfomgt

Opinion paper

Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy
Yogesh K. Dwivedi a,*, Laurie Hughes a, Elvira Ismagilova b, Gert Aarts c,1, Crispin Coombs d,1, Tom Crick e,1, Yanqing Duan f,1, Rohita Dwivedi g,1, John Edwards h,1, Aled Eirug i,1, Vassilis Galanos j,1, P. Vigneswara Ilavarasan k,1, Marijn Janssen l,1, Paul Jones m,1, Arpan Kumar Kar k,1, Hatice Kizgin b,1, Bianca Kronemann m,1, Banita Lal f,1, Biagio Lucini n,1, Rony Medaglia o,1, Kenneth Le Meunier-FitzHugh p,1, Leslie Caroline Le Meunier-FitzHugh p,1, Santosh Misra q,1, Emmanuel Mogaji r,1, Sujeet Kumar Sharma s,1, Jang Bahadur Singh s,1, Vishnupriya Raghavan t,1, Ramakrishnan Raman u,1, Nripendra P. Rana b,1, Spyridon Samothrakis v,1, Jak Spencer w,1, Kuttimani Tamilmani b,1, Annie Tubadji x,1, Paul Walton y,1, Michael D. Williams z,1
a Emerging Markets Research Centre (EMaRC), School of Management, Swansea University, United Kingdom
b International Business, Marketing and Branding Research Centre, School of Management, University of Bradford, Bradford, United Kingdom
c Department of Physics, College of Science, Swansea University, United Kingdom
d School of Business and Economics, Loughborough University, United Kingdom
e School of Education, Swansea University, Swansea, United Kingdom
f Business and Management Research Institute, University of Bedfordshire, United Kingdom
g Prin. L.N. Welingkar Institute of Management Development & Research, Mumbai, India
h Operations & Information Management Department, Aston Business School, United Kingdom
i Morgan Academy, School of Management, Swansea University, United Kingdom
j School of Social and Political Sciences, University of Edinburgh, United Kingdom
k Department of Management Studies, Indian Institute of Technology, Delhi, India
l Information and Communication Technology Section, Faculty of Technology, Policy and Management, Delft University of Technology, Netherlands
m School of Management, Swansea University, United Kingdom
n Department of Mathematics, Computational Foundry, Swansea University, United Kingdom
o Department of Digitalization, Copenhagen Business School, Denmark
p Norwich Business School, University of East Anglia, Norwich, Norfolk, United Kingdom
q CEO, Commissioner of e-Governance, Government of Tamil Nadu, India
r Department of Marketing, Events and Tourism, University of Greenwich, United Kingdom
s Information Systems Area, Indian Institute of Management Tiruchirappalli, India
t Manipal Global Education Services, Bangalore, India
u Symbiosis Institute of Business Management, Pune & Symbiosis International (Deemed) University, Pune, India
v Institute for Analytics and Data Science, University of Essex, United Kingdom
w Urban Scale Interventions, United Kingdom
x Economics Department, University of West of England, United Kingdom
y Capgemini UK Ltd, United Kingdom
z Swansea i-Lab (Innovation Lab), School of Management, Swansea University, United Kingdom

* Corresponding author.

E-mail addresses: [email protected] (Y.K. Dwivedi), [email protected] (L. Hughes), [email protected] (E. Ismagilova),
[email protected] (G. Aarts), [email protected] (C. Coombs), [email protected] (T. Crick), [email protected] (Y. Duan),
[email protected] (R. Dwivedi), [email protected] (J. Edwards), [email protected] (A. Eirug), [email protected] (V. Galanos),
[email protected] (P.V. Ilavarasan), [email protected] (M. Janssen), [email protected] (P. Jones), [email protected] (A.K. Kar),
[email protected] (H. Kizgin), [email protected] (B. Kronemann), [email protected] (B. Lal), [email protected] (B. Lucini),
[email protected] (R. Medaglia), [email protected] (K. Le Meunier-FitzHugh), [email protected] (L.C. Le Meunier-FitzHugh),
[email protected] (S. Misra), [email protected] (E. Mogaji), [email protected] (S.K. Sharma), [email protected] (J.B. Singh),
[email protected] (V. Raghavan), [email protected] (R. Raman), [email protected] (N.P. Rana), [email protected] (S. Samothrakis),
[email protected] (J. Spencer), [email protected] (K. Tamilmani), [email protected] (A. Tubadji),
[email protected] (P. Walton), [email protected] (M.D. Williams).
1 These authors have made equal contributions and are placed in alphabetical order.

https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Received 11 July 2019; Received in revised form 2 August 2019; Accepted 3 August 2019
0268-4012/ © 2019 Elsevier Ltd. All rights reserved.

Please cite this article as: Yogesh K. Dwivedi, et al., International Journal of Information Management,
https://doi.org/10.1016/j.ijinfomgt.2019.08.002
Y.K. Dwivedi, et al. International Journal of Information Management xxx (xxxx) xxxx

ARTICLE INFO

Keywords: Artificial intelligence; AI; Cognitive computing; Expert systems; Machine learning; Research agenda

ABSTRACT

As far back as the industrial revolution, significant development in technical innovation has succeeded in transforming numerous manual tasks and processes that had been in existence for decades where humans had reached the limits of physical capacity. Artificial Intelligence (AI) offers this same transformative potential for the augmentation and potential replacement of human tasks and activities within a wide range of industrial, intellectual and social applications. The pace of change for this new AI technological age is staggering, with new breakthroughs in algorithmic machine learning and autonomous decision-making engendering new opportunities for continued innovation. The impact of AI could be significant, with industries ranging from finance, healthcare, manufacturing, retail, supply chain, logistics and utilities all potentially disrupted by the onset of AI technologies. The study brings together the collective insight from a number of leading expert contributors to highlight the significant opportunities, realistic assessment of impact, challenges and potential research agenda posed by the rapid emergence of AI within a number of domains: business and management, government, public sector, and science and technology. This research offers significant and timely insight to AI technology and its impact on the future of industry and society in general, whilst recognising the societal and industrial influence on the pace and direction of AI development.

1. Introduction

Artificial Intelligence (AI) is a concept that has been part of public discourse for decades, often depicted within science fiction films or debates on how intelligent machines will take over the world, relegating the human race to a mundane servile existence in support of the new AI order. Whilst this picture is a somewhat caricature-like depiction of AI, the reality is that artificial intelligence has arrived in the present, and many of us regularly interact with the technology in our daily lives. AI technology is no longer the realm of futurologists but an integral component of the business model of many organisations and a key strategic element in the plans of many sectors of business, medicine and government on a global scale. This transformational impact of AI has led to significant academic interest, with recent studies researching the impacts and consequences of the technology rather than the performance implications of AI, which had been the key research domain for a number of years.

The literature has offered various definitions of AI, each encapsulating the key concept of non-human intelligence programmed to perform specific tasks. Russell and Norvig (2016) defined the term AI to describe systems that mimic cognitive functions generally associated with human attributes such as learning, speech and problem solving. A more detailed and perhaps elaborate characterisation was presented in Kaplan and Haenlein (2019), where the study describes AI in the context of its ability to independently interpret and learn from external data to achieve specific outcomes via flexible adaptation. The use of big data has enabled algorithms to deliver excellent performance for specific tasks (robotic vehicles, game playing, autonomous scheduling, etc.) and a more pragmatic application of AI, rather than the more cognitively focussed human-level AI, where the complexities of human thinking and feelings have yet to be translated effectively (Hays & Efros, 2007; Russell & Norvig, 2016). The common thread amongst these definitions is the increasing capability of machines to perform specific roles and tasks currently performed by humans within the workplace and society in general.

The ability of AI to overcome some of the computationally intensive, intellectual and perhaps even creative limitations of humans opens up new application domains within education, marketing, healthcare, finance and manufacturing, with resulting impacts on productivity and performance. AI-enabled systems within organisations are expanding rapidly, transforming business and manufacturing and extending their reach into what would normally be seen as exclusively human domains (Daugherty & Wilson, 2018; Miller, 2018). The era of AI systems has progressed to levels where autonomous vehicles, chatbots, autonomous planning and scheduling, gaming, translation, medical diagnosis and even spam fighting can be performed via machine intelligence. The views of AI experts presented in Müller and Bostrom (2016) predicted that AI systems are likely to reach overall human ability by 2075, and some experts feel that further progress of AI towards superintelligence may be bad for humanity. Society generally is yet to fully grasp many of the ethical and economic considerations associated with AI and big data and their wider impact on human life, culture, sustainability and technological transformation (Duan, Edwards, & Dwivedi, 2019; Pappas, Mikalef, Giannakos, Krogstie, & Lekakos, 2018).

The probabilistic analysis of the economic impact of AI and automation has been assessed by the World Economic Forum (WEF), which predicts that 20% of existing UK jobs could be impacted by AI technologies. This figure is greater in emerging economies such as China and India, where the level rises to 26% due to the greater scope for technological change within the manufacturing sector. AI technologies are predicted to drive innovation and economic growth, creating 133 million new jobs globally by 2022 and contributing 20% of GDP within China by 2030 (WEF, 2018). AI technology spending in Europe for 2019 has increased 49% over the 2018 figure to reach $5.2 billion (IDC, 2019). Juniper Research (2019) highlighted that global spending on AI technologies within the consumer retail sector alone is predicted to reach $12bn by 2023, a significant rise from the current figure of $3.5bn. The research also highlighted the increasing use of AI in the form of chatbots for customer service applications, where these deployments could realise annual savings of $439m globally by 2023, up from $7m in 2019. Technology giants such as Amazon and Walmart have been experimenting with AI for some time, applying the technology to demand forecasting and supply chain fulfilment. Walmart's store of the future, the Intelligent Retail Lab (IRL), is testing AI with analytics that trigger the need to respond when customers pick the last item, and then track the store's ability to quickly restock the product. The Walmart IRL AI systems are supported by cameras and sensors installed throughout the store that transmit 1.6 TB of data per second to data centres and linked supply chain fulfilment (Forbes, 2019a). The use of AI technology within this sector can only increase as other firms respond to the competition from these market leaders.

The potential for AI has not been lost on the global superpowers, with the US and China heavily focussed on the race for technology supremacy in this area. Currently this seems to be a battle that China is winning, with an estimated $12 billion spent on AI in 2017 and a predicted spend of up to $20 billion by 2020. Although the Trump administration has earmarked $2 billion for the Department of Defense to spend on its AI Next project, this pales into insignificance when compared to China. Chinese academics continue to publish


significant levels of articles on AI, and Chinese industry has increased the number of AI patents by 200% in recent years, significantly surpassing the US. Although Europe is still the lead academic publisher on AI-related technologies, China now accounts for 25% of the global output (Shoham et al., 2018). China is determined to be the world leader in AI by 2030 (Forbes, 2019b). China's ability to aggressively implement rather than rely solely on innovation, coupled with its hypercompetitive and entrepreneurial economy and business-friendly governance, has driven the AI sector forward (FT, 2019).

Whilst the benefits of greater levels of AI adoption within many sectors of the global economy are felt in the context of greater efficiency, improved productivity and reliability, this picture of positive innovation is not universally welcomed globally. Estimates for work displacement due to automation highlight that up to a third of current work activities could be impacted by 2030 (Manyika et al., 2017). Studies have analysed the impact of this significant change, developing a narrative of a changing jobs market that is predicted to focus humans further up the value chain on more creative and cognitively orientated roles in support of AI technologies (DIN & DKE, 2018; Jonsson & Svensson, 2016). However, is this particular vision of an AI future a universal one across the globe within both developed and emerging markets? The fact that AI has the capacity to replace many rules-based and repetitive tasks means that significant numbers of jobs that traditionally would be undertaken within emerging market economies will be lost. There are benefits of AI being centred within the developed economies, where new higher-skilled jobs are likely to be created, but there is a potential scenario where AI could displace millions of jobs within emerging economies. This is likely to have a significant impact within Asia and Africa as traditional low-skilled jobs are replaced by intelligent machines, thereby damaging growth and worker livelihoods within these economies (BBC, 2019). The social and economic construction of AI, and its impact on humans and society as it evolves, is still being assessed. However, it is clear that there are likely to be both winners and losers, and that decision makers need to be strategic in their outlook for the future.

This study brings together the collective insight from the workshop entitled "Artificial Intelligence (AI): Emerging Challenges, Opportunities, and Agenda for Research and Practice" held at the School of Management, Swansea University, UK on 13th June 2019. Contributions were received from collaborators within industry, academia and the public sector to highlight the significant opportunities, challenges and potential research agenda posed by the emergence of AI within several domains: business and management, government and public sector, and science and technology. This research is presented as offering significant and timely insight to AI technology, its potential application and its impact on the future of industry and society.

The remaining sections of this article are organised as follows: Section 2 presents many of the key debates and overall themes within the literature; Section 3 details the multiple perspectives on AI technologies from the expert contributors; Section 4 presents a discussion on the key AI-related topics relating to the challenges, opportunities and research agendas presented by the expert contributors. The study is concluded in Section 5.

2. Debate within existing literature

This section synthesises the existing AI-focussed literature and elaborates on the key themes listed in Table 1 from the literature review. Studies included in this section were identified using the Scopus database, using the following combination of keywords: TITLE ("Artificial intelligence") AND TITLE ("Advantages" OR "Benefit" OR "Opportunities" OR "Limitation" OR "Challenge" OR "Barriers" OR "Shortcoming" OR "agenda" OR "Research Direction"). This approach is similar to the approach employed by existing review articles on various topics (see, for example, Al-Emran, Mezhuyev, Kamaludin, & Shaalan, 2018; Dwivedi, Kapoor, & Chen, 2015a; Dwivedi & Kuljis, 2008; Hughes et al., 2019; Ismagilova, Hughes, Dwivedi, & Raman, 2019; Kapoor et al., 2018; Koivisto & Hamari, 2019; Olanrewaju, Hossain, Whiteside, & Mercieca, 2020; Senyo, Liu, & Effah, 2019; Tamilmani, Rana, Prakasam, & Dwivedi, 2019). Existing research reviewed for this article is categorised in the following major themes: AI and Decision Making; Application Domains; Data and Information; Challenges.

2.1. AI and decision making

Aspects of the literature have considered the use and impact of AI-based systems for decision-making applications. These studies include topics such as: Algorithmic; Artificial Neural Networks; Decision Support Systems; Deep Learning; Deep Neural Networks; Expert Systems; and Learning Systems. Studies have applied artificial neural techniques to data analysis and pattern recognition problems. The research by Abbot and Marohasy (2013) examined the application of neural networks based on AI for forecasting monthly rainfall in Nebo,

Table 1
Themes in AI research.

AI and decision making
  Artificial Neural Network: Abarca-Alvarez et al., 2018; Abbot & Marohasy, 2013; Baldassarre et al., 2017; Cleophas & Cleophas, 2010; Kahn, 2017
  Deep Learning: Anderson, 2019; Lassau et al., 2019; Nguyen & Shetty, 2018; Reza Tizhoosh & Pantanowitz, 2018; Stead, 2018; Thrall et al., 2018
  Algorithmic: Dreyer & Allen, 2018; Kahn, 2017; Risse, 2019; Stead, 2018; Varga-Szemes et al., 2018; Zandi et al., 2019
  Learning Systems: Duan et al., 2019; Glauner et al., 2017; Walton, 2018a; Walton, 2018b; Wang, Törngren, & Onori, 2015a; Wang, Li, & Leung, 2015b
  Decision Support Systems: Abarca-Alvarez et al., 2018; Milano, O'Sullivan, & Gavanelli, 2014; Schulz & Nakamoto, 2013
  Deep Neural Networks: Milano et al., 2014; Mitchell, 2019; Duan et al., 2019

Application domains
  Robotics: Edwards, 2018; Erikson & Salzmann-Erikson, 2016; Gupta & Kumari, 2017
  Healthcare and Informatics: Beregi et al., 2018; Cheshire, 2017; Cleophas & Cleophas, 2010; Combi, 2017; Dreyer & Allen, 2018; Gupta & Kumari, 2017; Houssami et al., 2017; Kahn, 2017; Khanna et al., 2013; Lassau et al., 2019; Nguyen & Shetty, 2018; Stead, 2018; Thesmar et al., 2019; Thrall et al., 2018; Varga-Szemes et al., 2018; Xu et al., 2019; Zandi et al., 2019
  Digital Imaging: Beregi et al., 2018; Gupta & Kumari, 2017; Kahn, 2017; Lassau et al., 2019; Nguyen & Shetty, 2018; Stead, 2018
  Education and Policy: Arlitsch & Newell, 2017; Chaudhri et al., 2013; Mikhaylov et al., 2018; Nguyen, 2018; Yoon & Baek, 2016; Yoon, 2016
  Manufacturing: DIN & DKE, 2018; Haeffner & Panuwatwanich, 2017; Jain & Mosier, 1992; Jonsson & Svensson, 2016; Katz, 2017; Kumar, 2017; Kusiak, 1987; Lee, 2002; Li, 2018; Li et al., 2017; Löffler & Tschiesner, 2013; Makridakis, 2018; Muhuri et al., 2019; Nikolic et al., 2017; Parveen, 2018; Wang, Törngren, & Onori, 2015a; Wang, Li, & Leung, 2015b; Wang & Wang, 2016; Yang et al., 2017; Zhong et al., 2017a

Data & information
  Big Data: Abarca-Alvarez et al., 2018; Beregi et al., 2018; Duan et al., 2019; Rubik & Jabs, 2018; Schulz & Nakamoto, 2013; Stead, 2018; Thrall et al., 2018; Xu et al., 2019
  Data Visualisation: Olshannikova et al., 2015; Zheng et al., 2016; Zhong et al., 2017b

Queensland, Australia. The study highlighted the benefits of combining multiple non-linear relationships using neural networks to predict rainfall patterns one month in advance. This application of AI was posited as directly contributing to the prediction of flood-risk weather patterns.

Deep Learning is a term gaining traction within the literature and is associated with machine learning architectures and concepts, but at a greater level and depth of neural network layers (Glauner, Meira, Valtchev, State, & Bettinger, 2017). Studies have posited the potential benefits of Deep Learning applications in areas of digital pathology and related medical applications, whilst cognisant of the limitations of this technology in terms of human reasoning and interpretation (Reza Tizhoosh & Pantanowitz, 2018; Stead, 2018). Anderson (2019) analysed the potential of combining Deep Learning technology with electrocardiogram (ECG) applications to detect patients with reduced left ventricular ejection fraction (rLVEF). Detecting rLVEF would be helpful for patients for whom echocardiography or other imaging modalities are not available or too expensive. Early diagnosis of rLVEF could directly impact patient diagnosis and mortality levels.

Studies have posited the benefits of utilising deep neural networks to improve the use of AI; however, the use of deeper networks and big datasets is unlikely to develop meaning in the human context, requiring further interdisciplinary research to unlock this area (Mitchell, 2019).

2.2. Application domains

The AI literature has identified several separate domains in which the technology can be applied: Digital Imaging, Education, Government, Healthcare, Manufacturing, Robotics and Supply Chain. Studies have analysed the impact of AI and its potential to replace humans via intelligent automation within manufacturing, supply chain, production and even the construction industry (Kusiak, 1987; Muhuri, Shukla, & Abraham, 2019; Parveen, 2018). Existing factory processes will be increasingly subject to analysis to ascertain whether they could be automated (Lee, 2002; Löffler & Tschiesner, 2013; Yang, Chen, Huang, & Li, 2017). AI-centric technologies will be able to monitor and control processes in real time, offering significant efficiencies over manual processes (Jain & Mosier, 1992; Zhong, Xu, Klotz, & Newman, 2017a). Organisations have posited the benefits of integrating AI technologies in the development of intelligent manufacturing and the smart factory of the future (Li, Hou, Yu, Lu, & Yang, 2017; Nikolic, Ignjatic, Suzic, Stevanov, & Rikalovic, 2017). The literature has generally moved on from the somewhat dated concept of AI-based machines replacing all human workers. Studies have recognised the realistic limits of the continuing drive to automation, highlighting a more realistic human-in-the-loop concept where the focus of AI is to enhance human capability, not replace it (Katz, 2017; Kumar, 2017). Humans are likely to move up the value chain to focus on design and integration related activities as part of an integrated AI, machine and human based workforce (DIN & DKE, 2018; Jonsson & Svensson, 2016; Makridakis, 2018; Wang, Törngren, & Onori, 2015a; Wang, Li, & Leung, 2015b; Wang & Wang, 2016). Manufacturing organisations are likely to use AI technologies within a production environment where intelligent machines are socially integrated within the manufacturing process, effectively functioning as co-workers for key tasks or to solve significant problems (Haeffner & Panuwatwanich, 2017).

Khanna, Sattar, and Hansen (2013) emphasised the importance of AI in healthcare, particularly in medical informatics. There is a growing requirement for new technologies that understand the complexities of hospital operations and provide the necessary productivity gains in resource usage and patient service delivery. AI has the potential to offer improved patient care and diagnosis, as well as interpretation of medical imaging in areas such as radiology (Dreyer & Allen, 2018; Kahn, 2017). Screening for breast cancer (BC) and other related conditions could be more accurate and efficient using AI technology. Houssami et al.'s (2017) study analyses the use of AI for BC screening, highlighting its potential in reducing false positives and related human detection errors. The study acknowledges some of the interrelated ethical and societal trust factors, but the boundaries of reliance on AI and acceptable human-in-the-loop involvement are still to be developed. The application of AI and related digital technologies within public health is rapidly developing. However, the collection, storage, and sharing of large data sets derived from AI technology raises ethical questions connected to governance, quality, safety, standards, privacy and data ownership (Zandi, Reis, Vayena, & Goodman, 2019). Thesmar et al. (2019) posited the benefits of utilising AI technology for insurance claims within healthcare. Claim submission, claim adjudication and fraud analysis can significantly benefit from AI use.

Education and information search is an area where the literature has identified the potential benefits of AI technology solutions. Chaudhri, Lane, Gunning, and Roschelle (2013) discussed the application of AI in education to improve teacher effectiveness and student engagement. The study analysed the potential of AI within education in the context of intelligent game-based learning environments, tutoring systems and

Fig. 1. AI challenges scope.


intelligent narrative technologies. The relevance of libraries in the modern technology era has received focus within the literature. Arlitsch and Newell (2017) discussed how AI can change library processes, staffing requirements and library users. It is important for libraries to focus on human qualities and the value-add of human interaction integrated with AI to provide a richer user experience. Moreover, Mikhaylov, Esteve, and Campion (2018) considered the use of AI capabilities from the perspective of educating the public on policy, and as a more effective mechanism for high-uncertainty environments.

2.3. Data and information

The topic of big data and its integration with AI has received significant interest within the wider literature. Studies have identified the benefits of applying AI technologies to big data problems and the significant value of analytic insight and predictive capability for a number of scenarios (Rubik & Jabs, 2018). Health-related studies have analysed the impact and contribution of big data and AI, arguing that these technologies can greatly support patient health based diagnosis and predictive capability (Beregi et al., 2018; Schulz & Nakamoto, 2013). Big Data Analytics (BDA) develops the methodological analysis of large data structures, often categorised under the terms: volume, velocity, variety, veracity and value adding. BDA combined with AI has the potential to transform areas of manufacturing, health and business intelligence, offering advanced insights within a predictive context (Abarca-Alvarez, Campos-Sanchez, & Reinoso-Bellido, 2018; Shukla, Tiwari, & Beydoun, 2018; Spanaki, Gürgüç, Adams, & Mulligan, 2018; Wang & Wang, 2016).

Organisations are increasingly deploying data visualisation tools and methods to make sense of their big data structures. In scenarios where the limitations of human perception and cognition are taken into account, greater levels of understanding and interpretation can be gained from the analysis and presentation of data using AI technologies (Olshannikova, Ometov, Koucheryavy, & Olsson, 2015). The analysis and processing of complex heterogeneous data is problematic. Organisations can extract significant value and key management information from big data via intelligent AI-based visualisation tools (Zheng, Wu, Chen, Qu, & Ni, 2016; Zhong, Xu, Chen, & Huang, 2017b).

2.4. Challenges

The implementation of AI technologies can present significant challenges for government and organisations as the scope and depth of potential applications increase and the use of AI becomes more mainstream. These challenges are categorised in Fig. 1 and discussed in this section. Table 2 lists the specific AI challenges from the literature and breaks down the subtext of the challenge details.

2.4.1. Social challenges

The increasing use of AI is likely to challenge cultural norms and act as a potential barrier within certain sectors of the population. For example, Xu et al. (2019) highlighted the challenges that AI will bring to healthcare in the context of the change in interaction and patient education. This is likely to impact the patient as well as the clinician. The study highlighted the requirement for clinicians to learn to interact with AI technologies in the context of healthcare delivery, and for patient education to mitigate the fear of technology for many patient demographics (Xu et al., 2019). Thrall et al. (2018) argued that culture is one of the key barriers to AI adoption within radiology, as patients may have a reticence to interact with new technologies and systems. Social challenges have been highlighted as potential barriers to the further adoption of AI technologies. Sun and Medaglia (2019) identified social challenges relating to unrealistic expectations towards AI technology and insufficient knowledge of the values and advantages of AI technologies. Studies have also discussed the social aspects of potential job losses due to AI technologies. This specific topic has received widespread publicity in the media and has been debated within numerous forums. The study by Risse (2019) proposed that AI creates challenges for humans that can affect the nature of work and potentially influence people's status as participants in society. Human workers are likely to progress up the value chain to focus on utilising human attributes to solve design and integration problems as part of an integrated AI and human-centric workforce (DIN & DKE, 2018; Jonsson & Svensson, 2016; Makridakis, 2018; Wang, Törngren, & Onori, 2015a; Wang, Li, & Leung, 2015b; Wang & Wang, 2016).

2.4.2. Economic challenges

The mass introduction of AI technologies could have a significant economic impact on organisations and institutions in the context of required investment and changes to working practices. Reza Tizhoosh and Pantanowitz (2018) focused on the affordability of technology within the medical field, arguing that AI is likely to require substantial financial investment. The study highlighted the impact on pathology

Table 2
AI Challenges from the literature.

Social challenges: Patient/clinician education; cultural barriers; human rights; country specific disease profiles; unrealistic expectations towards AI technology; country specific medical practices; insufficient knowledge of the values and advantages of AI technologies.

Economic challenges: Affordability of required computational expenses; high treatment costs for patients; high cost and reduced profits for hospitals; ethical challenges including lack of trust towards AI based decision making and unethical use of shared data.

Data challenges: Lack of data to validate benefits of AI solutions; quantity and quality of input data; transparency and reproducibility; dimensionality obstacles; insufficient size of the available data pool; lack of data integration and continuity; lack of standards for data collection; data format and quality.

Organisational and managerial challenges: Realism of AI; better understanding of the needs of health systems; organisational resistance to data sharing; lack of in-house AI talent; lack of interdisciplinary talent; threat of replacement of the human workforce; lack of strategy for AI development.

Technological and technology implementation challenges: Non-Boolean nature of diagnostic tasks; adversarial attacks; lack of transparency and interpretability; design of AI systems; AI safety; specialisation and expertise; big data; architecture issues; complexities in interpreting unstructured data.

Political, legal and policy challenges: Copyright issues; governance of autonomous intelligence systems; responsibility and accountability; privacy/safety; national security threats from foreign-owned companies collecting sensitive data; lack of rules of accountability in the use of AI; costly human resources still legally required to account for AI based decisions; lack of official industry standards for AI use and performance evaluation.

Ethical challenges: Responsibility for and explanation of decisions made by AI; processes relating to AI and human behaviour; compatibility of machine versus human value judgement; moral dilemmas; AI discrimination.
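Among the technological challenges listed in Table 2, adversarial attacks lend themselves to a concrete sketch. The fragment below is purely illustrative and is not drawn from any of the studies cited in this article: it applies the fast gradient sign method (FGSM) to a toy logistic-regression classifier, showing how a perturbation bounded by a small eps per feature, far too small for a human reviewer to notice, is enough to push the model's output sharply towards the wrong class.

```python
import numpy as np

def predict_proba(x, w, b):
    """Probability that input x belongs to class 1 (logistic regression)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def fgsm_perturb(x, y, w, b, eps):
    """Fast gradient sign method: move x by eps in the direction that
    increases the cross-entropy loss for the label y."""
    p = predict_proba(x, w, b)
    grad_x = (p - y) * w              # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=100)              # weights of a toy "image" classifier
b = 0.0
x = rng.normal(size=100)              # a toy input
y = 1.0 if predict_proba(x, w, b) > 0.5 else 0.0   # the model's own label

x_adv = fgsm_perturb(x, y, w, b, eps=0.2)

print("clean prediction:      ", predict_proba(x, w, b))
print("adversarial prediction:", predict_proba(x_adv, w, b))
print("max per-feature change:", np.max(np.abs(x_adv - x)))  # bounded by eps
```

Each feature moves by at most eps, yet the changes accumulate across all features in exactly the direction the model is most sensitive to; in deep networks the same gradient is obtained by backpropagation rather than in closed form, which is the mechanism behind the attacks Mitchell (2019) describes.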

Y.K. Dwivedi, et al. International Journal of Information Management xxx (xxxx) xxxx

laboratories where current financial pressures may be exacerbated by the additional pressures to adopt AI technologies. Sun and Medaglia (2019) identified several healthcare related economic challenges, arguing that the introduction of AI based technologies is likely to influence the profitability of hospitals and potentially raise treatment costs for patients.

AI technologies have the potential to affect many sectors within the global economy. The McKinsey report on the economic impact of AI (Bughin, Seong, Manyika, Chui, & Joshi, 2018) develops a narrative of how organisations are likely to adopt this technology and the potential challenges for key markets during the transition. The report analyses: organisation behaviours and how they are likely to adopt AI; disruption during transition as firms experience the economic gains and losses; and country specific impacts, where AI could potentially widen the gap amongst emerging and developed markets as well as between the rich and poor (Bughin et al., 2018).

2.4.3. Data challenges

The challenges of AI and integration with big data have been discussed within several studies. There is a need for new and efficient technologies to handle the large volume, variety and velocity of big data (Khanna et al., 2013). Xu et al. (2019) identified data challenges of using AI in cancer genomics. The study identified the challenge of validating the benefits of AI solutions and the challenges of obtaining statistically significant patient outcome data. Challenges surrounding transparency and reproducibility were also highlighted, especially in the context of acceptability relating to public perception. Challenges within computational pathology and the use of AI have been discussed in Reza Tizhoosh and Pantanowitz (2018). The authors highlighted the complexities of using artificial neural networks in the interpretation of imagery and the dimensionality obstacle. Varga-Szemes, Jacobs, and Schoepf (2018) highlighted the challenges of machine learning within a cardiac imaging context, positing a need to create a standardised format to share data across different institutions. The current position on standards and data structures can be a barrier to the application of AI. Sun and Medaglia (2019) highlighted several data challenges surrounding the use of data and data integrity. As the transition to AI technologies matures, these challenges will need to be resolved to ensure full confidence by all stakeholders.

2.4.4. Organisational and managerial challenges

The transition towards adopting AI technologies presents a number of organisational and managerial challenges that have strategic implications for firms. Reza Tizhoosh and Pantanowitz (2018) highlighted the significant challenges in the implementation of AI. Success relating to AI adoption is likely to be evidence based and will depend on ease of use, financial return on investment and trust. The study by Khanna et al. (2013) highlighted the need for AI researchers to understand more efficiently the urgent current needs of health systems and to design technologies in order to address them. Current AI systems need to use more sophisticated technologies where human-computer interaction can be improved and connected with the flow of information. Studies have highlighted that organisations face significant issues where the lack of a strategy relating to the implications of AI could affect critical business areas and fail to address concerns from the human workforce (Sun & Medaglia, 2019).

2.4.5. Technological and technology implementation challenges

Studies have analysed the non-Boolean nature of diagnostic tasks within healthcare and the challenges of applying AI technologies to the interpretation of data and imaging. Reza Tizhoosh and Pantanowitz (2018) highlighted the fact that humans apply cautious language or descriptive terminology, not just binary language, whereas AI based systems tend to function as a black box where the lack of transparency acts as a barrier to adoption of the technology. These points are reinforced in Cleophas and Cleophas (2010) and Kahn (2017), where the research identified several limitations of AI for imaging and medical diagnosis, thereby impacting clinician confidence in the technology. Cheshire (2017) discusses the limitation of medical AI loopthink. The term loopthink is defined as a type of implicit bias, which does not perform correct reappraisal of information or revision of an ongoing plan of action. Thus, AI would disfavour qualitative human moral principles. Weak loopthink refers to the intrinsic inability of computer intelligence to redirect executive data flow because of its fixed internal hard wiring, un-editable sectors of its operating system, or unalterable lines of its programme code. Strong loopthink refers to AI suppression due to internalisation of the ethical framework.

Challenges exist around the architecture of AI systems and the need for sophisticated structures to understand human cognitive flexibility, learning speed and even moral qualities (Baldassarre, Santucci, Cartoni, & Caligiore, 2017; Edwards, 2018). Sun and Medaglia (2019) reviewed the technological challenges of algorithm opacity and the lack of ability to read unstructured data. The Thrall et al. (2018) study considered the challenge of a limited pool of investigators trained in AI and radiology. This could be solved by recruiting scientists with backgrounds in AI, but also by establishing educational programmes in radiology professional services (Nguyen & Shetty, 2018; Thrall et al., 2018). Varga-Szemes et al. (2018) highlighted that machine learning algorithms should be created by machine learning specialists with relevant knowledge of medicine and an understanding of possible outcomes and consequences. Mitchell (2019) highlighted that AI systems do not yet have the essence of human intelligence: they are not able to understand the situations humans experience and derive the right meaning from them. This barrier of meaning makes current AI systems vulnerable in many areas, but particularly to hacker attacks termed "adversarial examples". In these kinds of attacks, a hacker can make specific and subtle changes to sound, image or text files which will not have a human cognitive impact but could cause a programme to make potentially catastrophic errors. As the programmes do not understand the inputs they process and the outputs they produce, they are susceptible to unexpected errors and undetectable attacks. These impacts can influence domains such as computer vision, medical image processing, speech recognition and language processing (Mitchell, 2019).

2.4.6. Political, legal and policy challenges

Gupta and Kumari (2017) discussed legal challenges connected to AI responsibility when errors occur using AI systems. Another legal challenge of using AI systems can be the issue of copyright. The current legal framework needs significant changes in order to effectively protect and incentivise human generated work (Zatarain, 2017). Wirtz, Weyerer, and Geyer (2019) focused on the challenges of implementing AI within government, positing the requirement for a more holistic understanding of the range and impact of AI-based applications and the associated challenges. The study analysed the concept of AI law and regulations to control governance, including autonomous intelligence systems, responsibility and accountability as well as privacy/safety.

Studies have identified the complexities of implementing AI based systems within government and the public sector. Sun and Medaglia (2019) used a case study approach to analyse the challenges of applying AI within the public sector in China. The study analysed three groups of stakeholders – government policy-makers, hospital managers/doctors, and IT firm managers – to identify how they perceive the challenges of AI adoption in the public sector. The study analysed the scope of changes and the impact on citizens in the context of political, legal and policy challenges as well as national security threats from foreign-owned companies.

2.4.7. Ethical challenges

Researchers have discussed the ethical dimensions of AI and the implications of greater use of the technology. Individuals and organisations can exhibit a lack of trust and concerns relating to the ethical dimensions of AI systems and their use of shared data (Sun & Medaglia,


2019). The rapid pace of change and development of AI technologies increases the concern that ethical issues are not dealt with formally. It is not clear how ethical and legal concerns, especially around responsibility for and analysis of decisions made by AI based systems, can be solved. Adequate policies, regulations, ethical guidance and a legal framework to prevent the misuse of AI should be developed and enforced by regulators (Duan et al., 2019). Gupta and Kumari (2017) reinforce many of these points, highlighting the ethical challenges relating to greater use of AI, data sharing issues and the interoperability of systems. AI based systems may exhibit levels of discrimination even though the decisions made do not involve humans in the loop, highlighting the criticality of AI algorithm transparency (Bostrom & Yudkowsky, 2011).

2.5. Future opportunities

AI technology in all its forms is likely to see greater levels of adoption within organisations as the range of applications and levels of automation increase. Studies have estimated that by 2030, 70 per cent of businesses are likely to have adopted some form of AI technology within their business processes or factory setting (Bughin et al., 2018). Studies have posited the benefits of greater levels of adoption of AI within a range of applications, with manufacturing, healthcare and digital marketing developing significant academic interest (Juniper Research, 2018).

The factories of the future are likely to utilise AI technology extensively, as production becomes more automated and industry migrates to a more intelligent platform using AI and cyber physical systems (Wang & Wang, 2016). Within healthcare related studies, researchers have proposed new opportunities for the application of AI within medical diagnosis and pathology, where mundane tasks can be automated with greater levels of speed and accuracy (Reza Tizhoosh & Pantanowitz, 2018). Through the use of human biofield technology, AI systems linked to sensors placed on and near the human body can monitor health and well-being (Rubik & Jabs, 2018). AI technologies will be able to monitor numerous life-signs parameters via Body Area Networks (BANs), where remote diagnosis requiring specialised clinical opinion and intervention will be checked by a human (Hughes, Wang, & Chen, 2012).

AI technologies have been incorporated into marketing and retail, where big data analytics are used to develop personalised profiles of customers and their predicted purchasing habits. Understanding and predicting consumer demand via integrated supply chains is more critical than ever, and AI technology is likely to be a critical integral element. Juniper Research (2018) predicts that demand forecasting using AI will more than treble between 2019 and 2023, and that chatbot interactions will reach 22bn by 2023, from current levels of 2.6bn. The study highlights that firms are investing heavily in AI to improve trend analysis, logistics planning and stock management. AI based innovations such as the virtual mirror and visual search are set to improve customer interaction and narrow the gap between the physical and virtual shopping experience (Juniper Research, 2018).

Researchers have argued for a more realistic future where the relationship between AI and humans is likely to transition towards a human in the loop collaborative context, rather than an industry-wide replacement of humans (Katz, 2017; Kumar, 2017). Stead (2018) asserts the importance of establishing a partnership where the AI machine will calculate and/or predict and humans will explain and decide on the appropriate action. Humans are likely to focus on higher value-adding activities requiring design, analysis and interpretation based on AI processing and outputs. Future organisations are likely to focus on creating value from an integrated human and AI collaborative workforce (Jonsson & Svensson, 2016; Makridakis, 2018; Wang, Törngren, & Onori, 2015a; Wang, Li, & Leung, 2015b; Wang & Wang, 2016).

3. Multiple perspectives from invited contributors

This section has been structured by employing an approach adopted from Dwivedi et al. (2015b) to present consolidated yet multiple perspectives on various aspects of AI from invited expert contributors. We invited each expert to set out their contribution in up to 3–4 pages; these are compiled in this section in largely unedited form, expressed directly as they were written by the authors. Such an approach creates an inherent unevenness in the logical flow but captures the distinctive orientations of the experts and their recommendations at this critical juncture in the evolution of AI (Dwivedi et al., 2015b). The list of topics and contributors is presented in Table 3.
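The partnership described by Stead (2018) above, in which the machine calculates or predicts and the human explains and decides, is often realised in practice as confidence-based triage. The sketch below is a generic illustration only; the threshold value and the order-screening scenario are assumptions made for the example, not taken from any study cited here. Predictions the model is confident about are automated, and the rest are routed to a human reviewer.

```python
# Minimal human-in-the-loop triage: the model proposes, low-confidence
# cases are deferred to a human reviewer. The threshold and the case data
# are illustrative assumptions, not taken from the studies cited above.

def triage(predictions, threshold=0.9):
    """Split model outputs into auto-approved decisions and a human review queue.

    predictions: list of (case_id, label, confidence) tuples.
    """
    automated, review_queue = [], []
    for case_id, label, confidence in predictions:
        if confidence >= threshold:
            automated.append((case_id, label))                 # AI decides
        else:
            review_queue.append((case_id, label, confidence))  # human decides
    return automated, review_queue

batch = [
    ("order-1", "approve", 0.97),
    ("order-2", "reject", 0.62),   # ambiguous: goes to a person
    ("order-3", "approve", 0.91),
]
automated, review_queue = triage(batch)
print(automated)      # [('order-1', 'approve'), ('order-3', 'approve')]
print(review_queue)   # [('order-2', 'reject', 0.62)]
```

Raising the threshold shifts work towards the human reviewer; lowering it increases automation at the cost of more unreviewed errors, which is precisely the trade-off such a collaborative workforce has to manage.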

Table 3
Invited contributor subject list.

Technological perspectives
Explainability and AI systems – John S. Edwards
Information Theoretic Challenges, Opportunities & Research Agenda – Paul Walton

Business and management perspective
A Decision-Making Perspective – Yanqing Duan, John Edwards, Yogesh Dwivedi
AI-enabled Automation – Crispin Coombs
Labour Under Partial and Complete Automation – Spyros Samothrakis
A Generic Perspective of AI – Arpan Kar
Artificial Intelligence for Digital Marketing – Emmanuel Mogaji
Artificial Intelligence for Sales – Kenneth Le Meunier-FitzHugh, Leslie Caroline Le Meunier-FitzHugh
Complementary Assets and Affordable-tech as Pathways for AI in the Developing World: Case of India – Vigneswara Ilavarasan

Arts, humanities & law perspective
People-Centred Perspectives on Artificial Intelligence – Jak Spencer
Taste, Fear and Cultural Proximity in the Demand for AI Goods and Services – Annie Tubadji

Science and technology perspective
Perspectives on Artificial Intelligence in the fundamental sciences – Gert Aarts, Biagio Lucini
Science and Technology Studies – Vassilis Galanos

Government and public sector perspective
Artificial Intelligence in the public sector – Rony Medaglia
AI for SMEs and Public Sector Organisations – Sujeet Sharma and JB Singh
Public Policy Challenges of Artificial Intelligence (AI): A New Framework and Scorecard for Policy Makers and Governments – Santosh K Misra
Governance of AI and connected systems – Marijn Janssen


3.1. Technological perspective

3.1.1. Explainability and AI systems – John S. Edwards

Explainability is the ability to explain the reasoning behind a particular decision, classification or forecast. It has recently become an increasingly topical issue in both the theory and practice of AI and machine learning systems.

3.1.1.1. Challenges. Explainability has been an issue ever since the earliest days of AI use in business in the 1980s. This accounted for much of the early success of rule-based expert systems, where explanations were straightforward to construct, compared to frame-based systems, where explanations were more difficult, and neural networks, where they were impossible. At their inception, neural networks were unable to give explanations except in terms of weightings with little real-world relevance. As a result, they were often referred to as "black box" systems. More recently, so-called deep learning systems (typically neural networks with more than one hidden layer) make the task of explanation even more difficult.

The implied "gold standard" has been that when a person makes a decision, they can be asked to give an explanation, but this human explanation process is a more complex one than is usually recognised in the AI literature, as indicated by Miller (2019). Even if a human explanation is given that appears valid, is it accurate? Face-to-face job interviews are notorious for the risk of being decided on factors (such as how the interviewee walks across the room) other than the ones the panel members think they are using. This is related to the difficulty of making tacit knowledge explicit.

There is also a difference between the "how" explanations that are useful for AI system developers and the "why" explanations that are most helpful to end-users. Preece (2018) describes how this too was recognised in the earliest days of expert systems such as MYCIN. Nevertheless, some of the recent AI literature seems unaware of this; it is perhaps significant that the machine learning literature tends to use the term interpretability rather than explainability. There are, however, many exceptions, such as Adadi and Berrada (2018), who identify four reasons for explanation: to justify, to control, to improve and to discover.

An important change in context is that governments are now introducing guidelines for the use of any type of automated decision-making system, not just AI systems. For example, the European Union's General Data Protection Regulation (GDPR) Article 22 states "The data subject shall have the right not to be subject to a decision based solely on automated processing", and the associated Recital 71 gives the data subject "the right…to obtain an explanation of the decision reached after such assessment and to challenge the decision". Similarly, the UK government has introduced a code of conduct for the use of "data-driven technology" in health and social care (Anonymous, 2018). In regulated industries, existing provisions about decision-making, such as the outlawing of "red-lining" in evaluating mortgage or loan applications, first enshrined in law in the United States (US) as far back as the 1960s, also apply to AI systems.

3.1.1.2. Opportunities. People like explanations, even when they are not really necessary. It is not a major disaster if Netflix® recommends a film that I don't like, but even there a simple explanation like "because you watched <name of film/TV programme>" is added. Unfortunately, at the time of writing, it doesn't matter whether I watched that other film/TV programme all the way through or gave up after five minutes. There is plenty of scope for improving such simple explanations. More importantly, work here would give a foundation for understanding what really makes a good explanation for an automated decision, and this understanding should be transferable to systems which need a much higher level of responsibility, such as safety-critical systems, medical diagnosis systems or crime detection systems.

Alternatively, a good explanation for an automated decision may not need to be judged on the same criteria that would be used for a human decision, even in a similar domain. People are good at recognising faces and other types of image, but most of us do not know how we do it, and so cannot give a useful explanation. Research into machine learning-based image recognition is relatively well advanced. The work of researchers at IBM and MIT on understanding the reasoning of generative adversarial networks (GANs) for image recognition suggests that "to some degree, GANs are organising knowledge and information in ways that are logical to humans" (Dickson, 2019). For example, one neuron in the network corresponds to the concept "tree". This line of study may even help us to understand how we humans do some tasks.

Contrary to both of these views, London (2019) argues that in medical diagnosis and treatment, explainability is less important than accuracy. London argues that human medical decision-making is not so different from a black box approach, in that there is often no agreed underlying causal model: "Large parts of medical practice frequently reflect a mixture of empirical findings and inherited clinical culture." (p.17) The outputs from a deep learning black box approach should therefore simply be judged in the same way, using clinical trials and evidence-based practice, and research should concentrate on striving for accuracy.

Lastly, advances in data visualisation techniques and technology offer the prospect of completely different approaches to the traditional "explanation in words".

3.1.1.3. Research agenda. We offer suggestions for research in five linked areas.

• Can explanations from a single central approach be tailored to different classes of explainee? Explanation approaches are typically divided into transparency and post hoc interpretation (see e.g. Preece, 2018), the former being more suitable for "how" explanations, the latter for "why". Could a single central approach serve different classes of explainee (developers, end-users, domain experts…)? For example, a visualisation approach for end-users that would allow drill-down for more knowledgeable explainees?
• What sort of explanation best demonstrates compliance with statute/regulation? For example, how specific does it have to be? UK train travellers often hear "this service is delayed because of delays to a previous service", which is a logically valid but completely useless explanation. Do there need to be different requirements for different industry sectors? What form should the explanation take – words, pictures, probabilities? The latter links to the next point.
• Understanding the validity and acceptability of using probabilities in AI explanation. It is well known that many people are poor at dealing with probabilities (Tversky & Kahneman, 1983). Are explanations from AI systems in terms of probabilities acceptable? This is widely used in the healthcare sector already, but it is not clear how well understood even the existing explanations are, especially in the light of the comments by London mentioned in the previous section.
• Improving explanations of all decisions, not just automated ones. Can post hoc approaches like the IBM/MIT work on GANs produce better explanations of not only automated decisions, but also those made by humans?
• Investigating the perceived trade-off between transparency and system performance. It is generally accepted that there is an inverse relationship between performance/accuracy and explainability for an AI system, and hence a trade-off that needs to be made. For example, Niel Nickolaisen, vice president and CTO at human resource consulting company O.C. Tanner, observed: "I agree that there needs to be some transparency into the algorithms, but does that weaken the capabilities of the [machine learning] to test different models and create the ensemble that best links cause and effect?" (Holak,


2018). Does this trade-off have to be the case? Could a radical approach to explanation be an outlier to the trade-off curve?

3.1.2. Information theoretic challenges, opportunities & research agenda – Paul Walton

AI is introducing new ways in which organisations can process information. It is therefore important to consider AI in the context of the limitations in this processing predicted by information theory (Walton, 2018a; Walton, 2018b) and the consequent implications for the implementation, adoption and use of AI.

3.1.2.1. Challenges. The implementation of AI is different from traditional technology implementation and introduces a new set of challenges. These challenges are driven by a combination of the following factors: the changing nature of the business environment, the nature of AI and machine learning (ML) themselves, and the underlying information theory limitations that apply to all information processing, but in specific ways to AI/ML.

Entities (like people, animals, organisations or computer systems) that interact with their environments are subject to information-related selection pressures that drive trade-offs between information measures—the pace and friction of information processing and the quality of the information produced (Walton, 2014; Walton, 2015a; Walton, 2015b). These selection pressures occur differently in alternate environments, so information ecosystems have developed with alternative ways of exchanging information (e.g. languages, jargon, computer protocols). Ecosystems have their own conventions for information processes and measures, driven by the trade-offs. The conventions of different ecosystems mean that each approaches information processing from a different perspective and has its own set of inherent limitations with respect to information—good enough for the selection pressures of the ecosystem but not necessarily more widely (Walton, 2017).

This concept of a viewpoint applies at different levels—to an organisation, its departments, computer systems, people and beyond. Humans, for example, have different forms of perception and inference (Mercier & Sperber, 2017) that operate in different ways and from different viewpoints. Usually the brain manages to create an integrated overall picture from these, but beneath the surface impression there are gaps that magic, for example, exploits (Macknik & Martinez-Conde, 2011). Are similar gaps possible for the implementation of AI in organisations? AI is becoming pervasive (since it is just another set of computing techniques that any developer or product supplier can use) and is increasingly being included in components as diverse as smart assistants, modules for enterprise products, widely available cloud libraries and bespoke data-science-driven applications. In addition, it is being applied to numerous different business use cases. Critically, in subsets of these components, the data science may be handled independently, for example by product or cloud suppliers, with different viewpoints. So the following question arises: how can the inferences delivered by different AI components be integrated coherently when they may be based on different data, and subject to different ecosystem conventions (and the associated quality differences) (Walton, 2018a; Walton, 2018b)?

This question can be restated as the discrimination problem (Walton, 2018a; Walton, 2018b)—what quality of data and inference is required to discriminate reliably between alternatives that lead to significantly different actions and outcomes? For individual AI components this translates into an analysis of the risk and tolerance associated with false positives and false negatives. But when multiple AI components rely on different data and ecosystem conventions, under what circumstances can organisations integrate them to enable successful discrimination? Under what circumstances will AI be sufficient, and when will it need to be supported by causal reasoning or simulation (Pearl & MacKenzie, 2018)?

For many business challenges—the management of compliance regulations is an obvious example—rationale is important; the reason for an answer is as important as the answer itself. However, deep…

…components (although work is underway (Foy, 2018)). In the case of multiple AI components, how can an organisation overcome this transparency challenge?

This is one example of a deeper underlying problem, that of ecosystem boundaries. One type of ecosystem boundary, between AI and humans, is especially important (Fry, 2018). As AI tackles more complex topics, the ability to exchange complex information successfully between AI components and people will become ever more important, leading to the question: how can an organisation ensure that AI and people can work together successfully?

The potential biases associated with AI are well known (DeBrusk, 2018). They highlight a wider question: how can an organisation assure the outcome of integrated AI components against a range of organisational requirements, not just for individual interactions but over multiple interactions?

These questions introduce the first two levels of fitness. The concept of fitness within an ecosystem (which measures how well an information processing entity fits its environment (Ford, 2017; Walton, 2018a; Walton, 2018b)) breaks down into three levels:

1. narrow fitness: the ability to achieve favourable outcomes in a single interaction;
2. broad fitness: the ability to achieve favourable outcomes over multiple interactions, potentially of different types (this is the level that reveals bias and, more generally, ethical and social issues);
3. adaptiveness: the ability to achieve favourable outcomes when the environment (determined by the frequency and nature of interactions) changes.

Note that there is a tension between these: an excessive focus on one can diminish the ability to achieve the others.

Organisations have a set of internal selection pressures (created from budget processes, culture, performance management, organisation design and others) that are supposed to make the organisation fit for its environment. However, they do not always align effectively with each other or the environment. In an era of disruption, especially, organisations need internal selection pressures with a different balance between the levels of fitness. For example, the difficulty that organisations have with transformation (Capgemini Report, 2018a) shows that organisations may have insufficient internal selection pressures to support adaptiveness (often resulting in high levels of friction associated with change) and consequently struggle to keep pace with changes in their environment. Current business pressures relate directly to this—the need for greater organisational responsiveness (Capgemini Report, 2018a) means that adaptiveness is more important than before. In addition, the digital and AI revolution means that end-to-end information quality (Westerman, Bonnet, & McAfee, 2014) is increasingly important. Since machine learning is about learning, this, in itself, poses a question: how can an organisation use AI to learn about changes in its environment and then make the required changes quickly and reliably (Walton, 2018a; Walton, 2018b)?

3.1.2.2. Opportunities. The opportunities for AI are numerous. As the authors say, with respect to AI (Capgemini Report, 2018b): "Almost any existing or new application can deliver more value by augmenting it with a touch of 'smart'." We can think of the opportunities in several categories:

• The organisational environment: making sense of the torrent of data available to understand opportunities (customer needs, attitudes and preferences, their specific and increasingly real-time context) and threats (including security threats, reputational threats and fraud) and take appropriate action;
• Operations: making sense of the data from operations, partners and the supply chain to understand status, predict and manage incidents
learning does not support this well even in the case of single AI and failure and improve efficiency and reliability;

9
Y.K. Dwivedi, et al. International Journal of Information Management xxx (xxxx) xxxx

• Interaction: using the capabilities of natural language processing and other sensing capabilities to interact with people (including employees, service users and customers);
• Case management automation: understanding what cases can be routinely automated and what cases need specialist intervention (and when);
• Governance: improving the quality of information available to support (automated or human) decisions (Kahneman (2011) points out that, without conscious intervention, people are “radically insensitive to both the quality and quantity of information that gives rise to impressions and intuitions”);
• Adaptiveness: helping an organisation to improve its response to changes in the environment by, for example, re-learning business rules.

More generally, AI can assist organisations to develop both operational and strategic situation awareness and the ability to link that awareness through to action increasingly quickly, efficiently and effectively.

3.1.2.3. Research agenda. The following paragraphs itemise a set of research questions relating to the challenges and opportunities outlined above.

• Data: how should organisations structure their business and technology architectures to support data engineering (and its links with IoT, digital twins and other technology trends) and data governance to support multiple AI components with different ecosystem conventions? How can they ensure that the quality of the data is sufficient to support the required analysis?
• Discrimination: under what circumstances and to what extent can organisations rely sufficiently on the discrimination provided by sets of integrated AI components based on different data and ecosystem conventions?
• Assurance: what capabilities, controls and mechanisms do organisations need to implement to understand and assure sufficiently the risks (for each level of fitness) associated with implementing single, multiple and integrated AI components?
• Transparency: under what circumstances do organisations require transparency of reasoning and how can this be delivered when AI components are integrated?
• Adaptiveness: how can AI contribute to improving the adaptiveness of organisations and how can organisations derive the appropriate balance between the different levels of fitness using AI components?
• Working together: how can AI be designed so that complex information can be exchanged reliably between AI and humans—how can they work together effectively?
• Internal selection pressures: how can AI support the development of internal selection pressures that can support the right balance between the different levels of fitness?
• Inference approach: for which business use cases will ML be sufficient (assuming availability of the right data) and for which will it need to be supported by different forms of causal reasoning or simulation?

3.2. Business and management perspective

3.2.1. A decision-making perspective – Yanqing Duan, John Edwards, Yogesh Dwivedi

3.2.1.1. Challenges. The earliest development of AI was the construction of an intelligent machine that could mimic human decision making in playing chess. Since then, using AI in decision making has been one of the most important applications in AI history. The roles of AI in decision making have been classified in various ways. Broadly speaking, AI systems can be used either to support/assist the human decision makers, or to replace them (Edwards, Duan, & Robins, 2000). More specifically, the early publication by Bader, Edwards, Harris-Jones, and Hannaford (1988) identified six roles for knowledge-based systems: assistant, critic, second opinion, expert consultant, tutor, and automaton. As the current advancement in AI technology enables researchers to create more advanced machines, it is possible for AI to undertake more complex tasks that require cognitive capabilities which previously seemed impossible, such as making tacit judgements, sensing emotion and driving processes (Mahroof, 2019). As a result, an increasing number of jobs are autonomously performed by AI systems without human control and supervision (Złotowski, Yogeeswaran, & Bartneck, 2017). There are many reports on the benefits of AI for decision making because AI is believed to be able to reach improved decisions, to boost our analytical skills and decision-making abilities, and to heighten creativity (Wilson & Daugherty, 2018). However, “with the resurgence of AI, a new human-machine symbiosis is on the horizon and a question remains: How can humans and new AIs be complementary in organisational decision making?” (Jarrahi, 2018, p. 579). Miller (2018) argues for the imperative of a new human-machine symbiosis and calls for a rethink of “how humans and machines need to work symbiotically to augment and enhance each other's capabilities” (p. 2). For example, what would be the implications of using AI for future business executives in making strategic decisions?

3.2.1.2. Opportunities and research agenda. To advance our knowledge and understanding of the new generation of AI systems for decision making, Duan, Edwards, and Dwivedi (2019) propose twelve research propositions in terms of conceptual and theoretical development, AI technology-human interaction, and AI implementation. Based on Duan et al.'s (2019) comprehensive review and discussion, this section provides the following specific research areas on the emerging challenges and research agenda of AI from a decision making perspective.

Re-defining and explaining the role of AI for decision making: Will AI be mostly accepted by human decision makers as a decision support/augmentation tool rather than as the automation of decision making to replace them? AI can play multiple roles in decision making, but there are contradictory views in the current debate on the role of the new generation of AI.

Many previous studies have examined the roles of AI before the era of big data. However, considering the superpower of the new generation of AI and the overwhelmingly mixed views and debate on the new role of AI in decision making, it is imperative that the role of AI should be revisited and redefined. Some argue that AI should be used to augment human judgement rather than to automate it (Miller, 2018; Wilson & Daugherty, 2018) and that “AI systems should be designed with the intention of augmenting, not replacing, human contributions” (Jarrahi, 2018, p. 584), but this assertion should be further supported by rigorous research and investigation, with empirical evidence on how and why AI is best at providing augmentation in supporting human judgement rather than decision automation. Wilson and Daugherty (2018) argue that companies that deploy AI mainly to displace employees will see only short-term productivity gains. What is the evidence for this claim? If this is true, why and how will using AI to replace employees not deliver the long-term gains and how can this shortcoming be overcome?

Measuring and justifying the impact of AI on decision making performance: How can you measure the impact of AI on human decision-making performance in a new human-machine symbiosis? Measuring the benefit of AI and its impact can be very difficult, but possible. There is a need to develop and test theoretically sound and practically feasible AI impact indicators to measure its benefits.

To address this issue, researchers need to clearly understand the role of AI in the decision making process. For example, if it is in a decision support role, what is the most appropriate way to measure the AI's impact on the human decision makers’ performance? Wilson and Daugherty (2018) claim that companies can benefit from optimising “collaboration between humans and artificial intelligence” and develop
employees’ “fusion skills” that enable them to work effectively at the human-machine interface, but how can these benefits be directly measured?

Developing and testing system design criteria for supporting decision making: What are the principal design criteria where AI is used within decision making in different roles? As the impact of AI in decision making will be realised via the human users, the ergonomic design of AI systems is important for their success. However, the ergonomic issues may be different between supporting, augmenting, replacing, or automating systems.

As the effectiveness of AI systems for decision making can only be realised through their acceptance and use by the end users (Edwards et al., 2000), the system design criteria for AI-based systems have been an issue since the early applications of AI. Based on our understanding of the roles of AI, whether for supporting, augmenting, replacing, or automating decision making, IS researchers need to propose the design criteria from a technology-human interaction perspective for system developers to create ideal AI systems for human decision makers. For example, what are the ergonomic design issues for developing AI systems that are suitable for decision making?

Refining and improving AI system performance while in use by decision makers: Can AI systems’ performance for decision-making be refined and improved while the systems are in use by decision makers? AI can augment human decision-making, but human efforts are also required to augment AI. The unique strength of human intelligence is its ability to learn and adapt to new environments and challenges. Refining and improving performance through continuing learning had been a challenge for advancing AI until the recent advances in deep learning and big data. Deep learning, as a subset of machine learning, has been one of the essential enablers of the renewed AI success. Can AI systems be refined and improved by deep learning while they are in use by decision makers? This question needs to be addressed by further research.

Understanding the critical factors affecting AI's success in decision making: What are the critical factors that will significantly affect AI's success for decision making?

While technology advancement may have no limit, its applications may encounter bottlenecks and unprecedented barriers. Factors affecting the use, impact, success and failure of information systems have been studied extensively (Dwivedi et al., 2015b; Dwivedi et al., 2017; Dwivedi, Rana, Jeyaraj, Clement, & Williams, 2019; Hughes, Dwivedi, Rana, & Simintiras, 2016; Hughes, Dwivedi, & Rana, 2017). There has been some work on critical success factors for implementing data mining systems (Bole, Popovič, Žabkar, Papa, & Jaklič, 2015), but there is a lack of research identifying the critical success factors affecting the current use of AI and its impact in the era of big data.

Understanding the relationship between culture and the use of AI in decision making: Does culture play any significant role in AI's success in decision making? It is believed that the acceptance of AI for decision making can be affected by different cultures and personal values. By contrast, the acceptance and successful application of AI for decision making may result in a change of culture in organisations and in individual decision-making behaviour.

Culture has been recognised as an important influential factor in technology acceptance by many previous studies. Does culture, such as national or organisational culture, and personal and religious values, also play a critical role in the acceptance/adoption and use of AI applications? For example, Gerbert, Reeves, Ransbotham, Kiron, and Spira (2018) examine “why Chinese companies approach AI differently”. Liu et al. (2018) also find a significant influence of both organisational and Chinese national culture on knowledge management. If culture does play a role, how, why and to what extent does it affect AI's success? Will the wide use of AI for supporting and automating human decision-making change culture? This is an area that has not been well explored so far, thus requiring further investigation.

Theorising the use of AI and its impact on decision making: Why, how and to what extent is AI being used in and making an impact on organisational decision making? To address this question, it is necessary to theorise the use of AI and its impact on decision-making. Therefore, an integrated conceptual framework is needed to provide a systematic understanding of AI in decision-making.

With the rapid increase in AI applications, many claims are made by AI developers and large corporates about its use and substantial benefits and impact. For example, according to Davenport and Ronanki (2018), a survey of 250 executives who are familiar with their companies’ use of cognitive technology (a term Davenport and Ronanki explain as “next-generation AI”) shows that three-quarters of them “believe that AI will substantially transform their companies within three years” (p. 110). As most similar claims are not substantiated by measurable empirical evidence and rigorous academic research, it is difficult to know how, why and to what extent AI systems are being used, impacting individual and organisational decision-making performance, and transforming organisations. This raises an opportunity for IS researchers to develop appropriate theoretical justifications on the use and impact of AI for decision making through the appropriate theoretical lens.

3.2.2. Exploiting AI-enabled automation: challenges for organisational leaders – Crispin Coombs

3.2.2.1. Challenges. Advances in AI technologies have seen a step change over the last 10 years. One consequence of these developments is the creation of new opportunities to automate existing work tasks. Automation can be defined as the execution by a machine, usually a computer, of a function previously carried out by a human (Parasuraman, 1997). AI-enabled automation technologies can manage and analyse vast amounts of data, propose recommended courses of action and enact these decisions. These technologies are also able to improve their decision accuracy over time, thereby becoming increasingly more valuable to organisations (Tarafdar, Beath, & Ross, 2017). Such ‘intelligent’ capabilities have enabled AI to be applied in repetitive and routine knowledge work, such as improving stock market timing and portfolio creation (Hilovská & Koncz, 2012) or identifying firms that are at most risk of bankruptcy (Chaudhuri & De, 2011). While much of the recent rhetoric assumes full automation of job roles (e.g. Frey & Osborne, 2017), a notable feature of many AI applications is the continuing need for humans to work alongside the automation technology. Human workers are needed to either assess and confirm AI decision recommendations, enact the AI-recommended course of action, or provide backup support should the AI-enabled automation produce errors or fail. This has led scholars to argue that AI-enabled automation will augment the work of humans, rather than enable wholesale substitution (Davenport & Kirby, 2016). Thus, understanding how humans work alongside AI-enabled automation will be critical to delivering the anticipated benefits of automation.

The Human Factors, Ergonomics and Safety Engineering literature has an established stream of research that examines the impact of traditional automation technologies on human workers. This literature provides an important starting point for Information Systems scholars wishing to investigate these issues (Markus, 2017). It suggests that to maximise the benefits of AI-enabled automation, organisational leaders are likely to be faced with four major challenges: (i) how to select tasks for automation; (ii) how to select the level of automation for each task; (iii) how to manage the impact of AI-enabled automation on human performance; and (iv) how to manage AI-enabled automation errors. Each of these challenges is briefly discussed below.

First, when considering tasks for AI-enabled automation, organisational leaders need to recognise that work tasks can be sub-divided into specific stages. For example, information processing tasks comprise (1) information acquisition; (2) information analysis; (3) decision and action selection; and (4) action implementation (Parasuraman & Wickens, 2008). AI-enabled automation may be applied to each individual stage or across all stages. Thus, organisational leaders need to consider whether AI-enabled automation can be applied
to all the functional stages of a work task and whether this is desirable for the business process.

Second, organisational leaders need to appreciate that each work task stage may have a different level of automation applied. The level of automation may range from Level 1 (manual control, where the computer offers no assistance) to Level 8 (autonomous control, where the computer does everything without human notification) (Vagia, Transeth, & Fjerdingen, 2016). Thus, as well as selecting the appropriate work task stage to automate, organisational leaders also need to decide how much decision-making control is given to the AI-enabled automation, and to what extent a human is kept in the loop.

Third, the level of automation selected for functional task stages may have impacts on human worker performance. These impacts may be positive or negative. For example, higher levels of automation may reduce operators’ workload and achieve improved results. However, higher levels of automation may also reduce the situation awareness of the worker, and increase a tendency to overly rely on automation technology (Onnasch, Wickens, Li, & Manzey, 2014). Thus, organisational leaders need to understand the factors that may influence human worker performance when working alongside AI-enabled automation.

Fourth, even the most reliable technological systems are likely to fail at some point and, when this occurs, humans must engage in error management. Highly automated systems that do not require frequent intervention are hard for humans to keep their attention on (Parasuraman & Manzey, 2010). This creates an automation paradox, where factors that positively influence performance when the automation is working well may undermine performance when the automation fails (Onnasch et al., 2014). Leaving the human out of the loop can be problematic because it leads to considerable human performance impairment if the automation fails. Thus, organisational leaders need to understand how AI-enabled automation errors or failures can be managed. These four challenges present several opportunities for Information Systems scholars.

3.2.2.2. Opportunities. There has been considerable research that has examined the types of automation that may be applied to different tasks and stages of tasks (McBride, Rogers, & Fisk, 2014; Parasuraman, Sheridan, & Wickens, 2000; Sheridan & Parasuraman, 2005). These studies reveal that adopting strategies of automating the tasks that machines do best and leaving the residual tasks to human workers is likely to have negative impacts on performance (Parasuraman, 1997). To assist leaders in developing more sophisticated automation strategies, several quantitative and qualitative models have been developed (Parasuraman, 2000). While these models provide a valuable starting point, further research is required to explore how quantitative and qualitative models may be combined to provide richer insights regarding appropriate and desirable AI-enabled automation of tasks.

The literature also indicates that selecting appropriate levels of automation is a complex process that is highly contingent on a wide range of factors (Parasuraman, 1997). The level of automation selected may be dependent on person factors (e.g. the complacency potential of the human operator, the automation training provided, and the knowledge of automation held by the worker), task factors (e.g. the consequence of automation error, the cost of verification, and lines of accountability), or cognitive load factors (e.g. associated impacts of automation on mental workload or situation awareness) (McBride et al., 2014). Thus, broad recommendations of ‘medium’ automation adoption levels are likely to be unhelpful as they do not take sufficient account of these important contingencies (Onnasch et al., 2014). Given the capabilities of current and projected AI-enabled automation technologies, more empirical research is needed to examine the conditions that influence the level of automation applied to task stages.

Existing research has shown that when working with automation technologies human performance may be influenced by automation complacency and automation bias. Automation complacency is defined as the poorer detection of system malfunctions under automation compared with under manual control (Parasuraman & Manzey, 2010). For example, human operators (e.g. pilots, air traffic controllers) may not conduct enough checks of system state, assuming “all is well” when in fact a dangerous condition is developing that leads to an accident. Automation bias has been defined as people using the outcome of a decision aid as a heuristic replacement for vigilant information seeking and processing (Mosier & Skitka, 1996). It may occur because the automatically generated cues are often very salient and draw the human's attention, and because humans tend to ascribe greater power and authority to automated aids than to other sources of advice. Although automation complacency and bias can speed up decision making when recommendations are correct, when the automation technology provides incorrect recommendations they can lead to omission errors (the human does not respond to a critical situation) and commission errors (the human follows the recommendation of the automation, even though it is incorrect) (McBride et al., 2014; Parasuraman & Manzey, 2010). This presents an important research opportunity to explore and understand the factors that influence over-reliance on automation and how to counter them, especially if the human is retained as the backup to AI-enabled automation.

The management of automation errors may also be influenced by automation complacency and automation bias. Automation complacency among human operators increases with higher levels of automation and higher automation reliability (McBride et al., 2014). This increase occurs because human workers are less aware of changes in the environment or states when the change is made by an agent other than themselves (human or automation). The risk of negative consequences associated with AI-enabled automation errors or failure increases with increasing levels of automation and in the latter functional stages of information processing (Sebok & Wickens, 2017). Thus, further research is required to investigate features that can mitigate the loss in performance, in circumstances of error or failure, with higher degrees of AI-enabled automation.

3.2.2.3. Research agenda. In order to address these research challenges and opportunities for understanding how humans work alongside AI-enabled automation, several research priorities are proposed. First, further empirical research is needed to investigate how decisions are made regarding the work task to automate and the level of AI-enabled automation to apply. As automation becomes increasingly intelligent through the application of AI, the range and types of task that it may apply to are likely to grow. Although qualitative and quantitative models have been developed to conceptualise this decision-making process, they have been developed from studies that examine traditional automation technologies and adopt a functional task perspective. Although these studies help to explain how existing tasks or activities may be automated, they may not adequately explore new ways of undertaking business processes or the development of radical new business models. Thus, further research is needed to investigate how strategic applications of AI-enabled automation may redesign or create new business processes and how the role of the human worker may evolve alongside these developments, the new job roles that may be created and the skills required to undertake these roles. These studies should combine qualitative and quantitative models of automation selection and account for contingency factors such as person characteristics, task characteristics and the associated cognitive factors that may influence the level of automation applied.

A second research priority is to understand the factors that may influence human over-reliance on AI-enabled automation and how they can be countered. As AI-enabled automation becomes common and reliable, there is an increasing risk that humans will privilege AI recommendations and decisions over their own judgements and suffer from reduced situation awareness. While reduced situation awareness has critical implications in transport and health contexts, poor situation awareness may also bring significant risks in many business and management settings, such as imperfect AI-enabled automated decisions leading to stock market crashes or firm bankruptcy. Research that
studies the auditory and visual cuing of automated system performance, as opposed to relying on alerts when errors occur or the system fails, would be valuable to address this research priority (Hancock et al., 2013). Research could also explore how AI could be used to design ‘likely’ alarms (Parasuraman, 1997) rather than relying on alarms to be definite warnings of dangerous situations, or apply adaptive automation that can vary the level of automation applied in real time (Hancock et al., 2013) to help to reduce the loss of situation awareness due to out-of-the-loop unfamiliarity.

A third, related research priority concerns the use of human workers in failsafe capacities to protect against AI-enabled automation error or failure. Human workers are likely to find it difficult to undertake this failsafe role effectively in situations of high AI-enabled automation reliability and when they have few opportunities to practise performing the task. Thus, further research is needed to investigate how human workers can be trained and supported to continue to be able to effectively monitor and respond to AI-enabled automation errors and failures. For example, research could investigate how human workers can be trained to “expect the unexpected” as well as trained in understanding AI-enabled automation logic. Further, as higher levels of automation become more pervasive, AI-enabled automation will become more challenging to manage in situations of automation error or failure. Thus, further research is needed to explore how AI-enabled automated decision making can be made sufficiently transparent for a human to diagnose error-creating faults. This is a critical research priority because understanding ‘what happens when it goes wrong’ is a key factor for organisations wishing to increase their level of automated decision making.

Information Systems scholars have a critical role to play in shaping the agenda of how AI is applied in organisations and society in the future. It is hoped that this research agenda will be useful to scholars and contribute to an enhanced understanding of how leaders may exploit AI-enabled automation to deliver benefits for their organisations and society.

3.2.3. Labour under partial and complete automation – Spyros Samothrakis
One of the most significant facts in the history of labour is the universality of the belief that automation is going to reduce aggregate labour hours. On the more “progressive” side, thinkers from the whole of the political spectrum, including Keynes (2010), Nixon (Blair, 1956) and Stalin (Stalin, 1952), were adamant that through a combination of policy and technological automation we would see a drastic reduction of working hours. For those with a more pessimistic bent, more automation meant increased unemployment; the beginning of this idea probably goes back to the Luddites (David, 2015), who actively tried to remove machines from the production process as a means to preserve jobs. This belief (as part of a generalised fear of technological unemployment) has been reiterated multiple times.

Unemployment seems to follow cyclical business patterns, with almost no scholar making a case for technological causes. This does not

even at planning or at discovery level. How close this idealised limit is to a possible real limit will play a crucial role in future societal developments, in conjunction with societal organisation. The idea of humanity not being needed for production has been maintained in various publications (Joy, 2000), but a thorough discussion is provided in Frase (2016). The argument is simple – if technological trends are to reach their limit, we can discern four possible futures: (a) Communism, a combination of abundance and equality; (b) Rentism, abundance combined with restricted access to goods (i.e. imposing artificial restrictions akin to copyright protections on music distribution); (c) Socialism, a future with widespread equality if we assume that technological progress cannot fully automate production or, due to inherent limits, cannot reach over-capacity in almost everything; and (d) Exterminism, the worst-case scenario, which projects a highly unequal future combined with scarcity, where the vast swaths of humanity are condemned to irrelevance and base their reproduction on being servile to a tiny fraction confined in reverse ghettos.

3.2.3.2. Trajectories towards the limit case. Automation of the level described in the previous paragraph seems to be far away – or at least not imminent. Technologies that originally seemed trivial and around the corner (e.g. self-driving cars, see Brooks, 2019) are now thought of as requiring years of further development. At this moment, AI seems to be automating jobs that were traditionally thought of as middle management; it also plays a role in intensifying labour. Examples include Uber's allocation algorithms and Amazon's hand gesture patents. The core of the issue lies with worker performance management increasingly delegated to machines (De Stefano, 2018), creating dystopia-like conditions for the ones affected. It is hard to see how a full-automation (or almost full automation) society will not be impacted by the technological trajectory that led to it. It is also hard not to wonder when this almost teleological point in history will arrive. There seems to be wide disagreement among scholars and experts, but most agree we will have achieved full automation within 100 years (Müller & Bostrom, 2016; Walsh, 2018). If the current trends persist, the trajectory towards the AI limit case will be painful for most.

3.2.3.3. What is missing? The focus on management, surveillance and other labour-disciplining technologies is not necessarily out of choice. We are ineffective at creating machines that act in unconstrained environments; our AI systems traditionally need copious amounts of data and tend to produce dubious results outside well-defined conditions. They are also not very good at learning the invariances of this world, and fail to generalise outside their training distribution. This constrains practitioners to solving problems that can be attacked, which are almost always high-level optimisation problems. Tasks like mending a broken car away from a production line, basic plumbing etc. are completely outside the capabilities of modern AI. The current crop of AI also fails to learn incrementally from data – something termed
mean that certain professions will not disappear (Frey & Osborne, “catastrophic forgetting”. One can only speculate what an AI for Good
2017), but that economy is on a permanent reconfiguration state. would imply, but my best guess is an always-on private advisor and
Working hours did gradually decrease until the 1970s; from that point personal guide. Efforts to develop similar technologies in the
onwards working hours have either stayed stable or increased (espe­ networking arena are late (see Bielenberg, Helm, Gentilucci,
cially in the US), while extreme working hours have increased (Burger, Stefanescu, & Zhang, 2012; Moglen, 2013), but have managed to
2015). The contradiction here is apparent – technological development gather significant traction – maybe there is room for an AI equivalent?
has been astounding for the last 40 years, but the need for labour has
not diminished. 3.2.4. A generic perspective of AI – Arpan Kar
3.2.4.1. Challenges. The growing popularity of AI is changing the way
3.2.3.1. Outcomes after the limit case. If we are to take the idea of work firms are engaged in doing business across industries. The benefits and
automation to its limits (but on the same time assuming that through productivity improvement at the tactical, operational and strategic
control or technological inability, god-like AIs never materialise), level are envisaged to be significant as the firms move towards digital
humanity's ability to produce might only be constrained by the transformation empowered with AI. The information assets residing in
availability of raw resources; human labour will no longer be needed these firms, often referred to as big data, are systematically exploited
to supplement machine labour. In this scenario machines become non- and explored by AI to create this value (Grover & Kar, 2017). Similar
conscious slaves, with no further human involvement in production, value is also realised in government, public organisations and not for

Y.K. Dwivedi, et al. International Journal of Information Management xxx (xxxx) xxxx

profit organisations. However, there are major challenges remaining in this journey to explore and exploit the full potential of AI. In our limited perspective, these can be segregated in terms of algorithm-specific challenges, domain-specific challenges and policy-related challenges.

First, let me explain the algorithm-specific challenges. Reviews of artificial intelligence algorithms (Chakraborty & Kar, 2016; Kar, 2016; Chakraborty & Kar, 2017) have demonstrated that most of these applications are focused on showcasing usage of relatively few approaches. Probably eighty percent or more of the published literature is covered by algorithms like neural networks, random forests and decision trees. To an extent, some work has been published on algorithms like genetic algorithms and swarm intelligence. While around 2010 there was a new focus on developing new AI algorithms due to the computational limitations of these age-old algorithms, that focus has relatively died down with the growing popularity of deep and convolutional neural networks. This trend is predominantly driven by the growing access to high-performance computing infrastructure in academic and industrial research units. Hence the focus on exploring newer algorithms for theory development has taken a back seat as researchers concentrate on exploiting these established algorithms in the wake of improved computational infrastructure. So theory development in many niche algorithms has received less priority than it should. This creates a gap in the long process of knowledge development in the domain, as current users continue to exploit well-known algorithms with better computational platforms while exploration takes a back seat in this journey.

Next come the domain-specific challenges. The focus on exploiting AI inherently means that organisations have to systematically develop and maintain information assets, which requires a digital transformation within these organisations. However, in many of these organisations, when the initial planning is being done, there is a gap between digitalisation and digital transformation, due to the organisation's overall technology readiness. Therefore there is a challenge in identifying the right questions, what data needs to be captured to answer these questions, understanding approaches to extract, maintain and analyse these data, and understanding the implications of this analysis. This journey essentially means that there have to be professionals who can understand both the functional elements of the organisation's processes and the technical elements of AI. This is often missing in existing organisations as they gear towards taking a big leap to leverage AI, and thus sometimes, due to the lack of internal readiness, a productivity paradox emerges due to lags in learning (Barton & Court, 2012). So AI usage needs to achieve greater adoption before it affects industry productivity as a whole.

Further, there are challenges in estimating the trade-offs between differentiation versus commoditisation of AI. Like any information technology, the economic returns of AI are highest when it reaches a maturity of commoditisation. But AI systems are also expected to evolve as they learn from the contextual and sticky knowledge within organisations, which means that the outcome of AI can never truly be commoditised. However, the current maturity of the digital transformation journey across organisations, even within the same domain, does not facilitate much commoditisation.

Since the applications are a relatively new hype, although the technologies are older by quite a few decades, the actual readiness of the organisations producing or consuming services enabled through AI is often less understood. Process maturity and people maturity in such organisations become questionable. As a result, issues surrounding how these information assets are developed, maintained and exploited become debatable when adverse impacts are witnessed. Challenges of privacy preservation, security and process alignment become critical. Further, people who will use these technologies need to undertake severe reskilling and deskilling.

On the policy side, similarly, there is a lack of mature standards and public policy to address these challenges. What could be measures of intervention from government to control market concentration? The organisations which are the market leaders in the segment have many customers onboarded, and as a result have the requisite data created in their platforms, which is exploited by AI to provide differentiating services. A small firm will be less effective purely due to lack of access to the data. However, this brings in the possibility of an AI divide where strong research units continue to grow and reap benefits while other smaller units fail to take their innovation to the next level. However, there are procedural challenges in the decentralisation of innovation in AI and policy to govern this.

3.2.4.2. Opportunities. The opportunities in AI are tremendous given that it is still at a rather nascent stage in terms of adoption in different industries. So it would be exciting for industries to explore different slices of AI impacts in different contexts. Such opportunities for theory development for researchers in AI could come from exploration of the following themes:

• Challenges and issues in managing AI in organisational and social setups
• Impacts of AI on organisation design and associated issues
• Impacts on the behaviour of individual stakeholders who are affected by AI, and the cycle of how they affect the outcome of AI
• Requirements surrounding deskilling and reskilling the human workforce in the wake of AI usage in Industry 4.0
• AI impacts from a systems methodology – drivers and actors of the ecosystem

Also, it would be interesting to explore opportunities for how AI can be leveraged not only at the firm level but as an enabler in platforms and ecosystems. AI may help to connect multiple firms and help in automating and managing information flows across multiple organisations in such platforms. It would be good to explore opportunities for AI to be used in such platforms to impact platform productivity, firm productivity, and ecosystem productivity.

Further research in AI from a technical perspective would also have immense opportunities in the years to come:

• Areas could be related to computational algorithms for making sense out of unstructured and large volumes of data
• Exploration could be in non-deterministic polynomial-time hard problems, even with structured data but high volumes
• It would also be interesting to explore the role of AI in conjunction with decision theories for management

3.2.4.3. Research agenda. Building blocks for future research in AI can stem from integrating classic information systems research theories emerging from management theory, organisation theory, behavioural theory, computer science theories and systems theory (Barki, Rivard, & Talbot, 1993). This should be done by extending the exploration in computer science for contextual applications in organisations, markets and society. A particular area of interest would be the role of AI in networks consisting of different actors and types of linkages. Currently, most of the research in AI is happening in computer science and information technology departments in universities. Such computational work is mostly presented in leading computer science conferences like Neural Information Processing Systems, the International Conference on Machine Learning, the Association for the Advancement of Artificial Intelligence and other such technical conferences. However, the focus of such conferences is mostly on addressing the computing block of information systems. Connecting such studies and progress with management, organisation, behavioural and systems theories in information systems would allow exploration of the multiple complex phenomena of AI in this journey towards digital transformation. In particular, this may lead to very strong contributions to policy making and practice, based on such mixed research methods.
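As a minimal, purely illustrative sketch of the first technical theme listed above – computational algorithms for making sense of unstructured data – the fragment below scores terms in a small corpus with plain TF-IDF weighting, so that words distinctive to one document rank above words shared across documents. The three-document corpus and the whitespace tokeniser are our own assumptions for illustration; the sketch is not drawn from any system or study cited in this section.

```python
import math
from collections import Counter

def tokenize(text):
    # Lowercase and keep only alphabetic runs as tokens.
    cleaned = ''.join(c if c.isalpha() else ' ' for c in text.lower())
    return cleaned.split()

def tfidf(corpus):
    """Return one {term: weight} mapping per document (plain TF-IDF)."""
    docs = [tokenize(d) for d in corpus]
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({term: (count / len(doc)) * math.log(n / df[term])
                        for term, count in tf.items()})
    return weights

# Hypothetical three-document "corpus", for illustration only.
corpus = [
    "customer data improves targeting of advertising campaigns",
    "neural networks learn patterns from large volumes of data",
    "advertising campaigns reach the customer through social media",
]
weights = tfidf(corpus)
# Terms unique to the second document (e.g. "neural") outscore terms
# shared with other documents (e.g. "data").
```

A real text-mining pipeline would add stemming, stop-word removal and sparse matrix representations, but the point stands that the “commoditised” building blocks discussed above are only a few lines of code away for any research unit that has access to the data.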


However, there is also a strong need to relook at theory and relationships based on the emergence of AI. The nature of the data available due to digital transformation is completely changing the traditional approaches of research. The presence of big data on platforms like the shared economy, social media and the internet of things may require a different approach to theorising than the traditional research methodologies (experiments, surveys and interviews). This journey would also require researchers to design research methodologies from studies connecting both positivist and interpretive paradigms. For theory development, proxies of constructs may be derived from the big data being generated in different data sources. Relationships between such constructs, for example, may require both qualitative and econometric validation, and since the data has high veracity, new acceptable levels of statistical thresholds may need to be adopted for theory development. So there may be a perceived trade-off between rigour versus the generalisability and applicability of such findings if one were to evaluate such research purely from a reviewing perspective. However, given that the area is still nascent, it would be good to overlook the minor rigour-relevance gaps and try to take forward some of the new insights in AI and their impacts on the ecosystem they serve.

3.2.5. Artificial Intelligence for digital marketing – Emmanuel Mogaji
The vast amount of data being generated, and the increased use of mobile devices, cloud computing and the internet, have contributed to the significant development of Artificial Intelligence (AI). AI is making a double-edged impact – constituting a significant source of innovation yet threatening human jobs (Huang & Rust, 2018). This piece, however, focuses explicitly on discussing the opportunities of AI for digital marketing. Wirth (2018) noted that the industry seems hesitant and at the same time eager to embrace this new technology. This piece will explore some of the challenges for its adoption and a research agenda for a better understanding.

3.2.5.1. Opportunities. Artificial intelligence offers opportunities to enhance campaign creation, planning, targeting, and evaluation. Three key stakeholders are identified as the opportunities for AI in digital marketing are explored. Firstly, the brands, who need to understand their customers and communicate with them on a very personal and emotional level. Secondly, the advertisers and marketing agencies, who are responsible for digital marketing strategies; they need AI to bridge the gap between the brands, the customers and data (Bell, 2019). Thirdly, the customers, who need to engage with the brands’ marketing communications; they are the recipients of the information and the generators of the data which is being used for targeting. With this understanding, the opportunities for these stakeholders are presented, especially for digital marketers.

Data: The large amount of data generated by consumers provides an insight into their behaviour. Customer analytics makes up 48% of big data use in sales and marketing (Columbus, 2016), which highlights that there are new sources of data about the customers. Advertisers have seized the opportunity to use this data to personalise and target advertisements (Boerman, Kruikemeier, & Zuiderveen Borgesius, 2017). Marketers have never had this form of data from the customers. No doubt these are big data collected over different touch points. However, AI offers the opportunity to process these data faster and effectively engage with everyone with messages that appeal to them. Segmentation and targeting become very easy through the data available.

Content creation: With AI being able to do what humans would typically do, there are opportunities for more innovative and relevant content creation. With consumers’ demand for relevant content, advertisers can explore the prospects of AI to develop content relevant to the customers, because they now have a better understanding through the analysed data. Content here includes advertisements, social media posts and email campaigns. Information such as past purchases, interests and browsing behaviours can be used to create automated campaigns that can enhance the customers’ purchase intention. AI can identify the consumer's patterns in lifestyle choices, including music, favourite celebrities and location, to create unique content.

Content sharing: With the understanding of the customers and the creation of relevant content, AI can also go further to deliver these messages to the customers in a non-intrusive manner. With customer engagement and information collected, such as location, demographics, devices and interaction with the site, AI can display offers and content that are more appropriate for each user type. Analysed information about the customers determines the best times and days of the week to send an email campaign or post on social media, the recommended frequency of the marketing messages and the titles they are more likely to engage with. This content-sharing opportunity builds on the power of programmatic advertising, which allows automated bidding on advertising inventory in real time.

3.2.5.2. Challenges. Despite these opportunities, some challenges may hinder the adoption and implementation of AI for digital marketing.

The availability of data: There are challenges in collecting and using the data, especially considering the European Union General Data Protection Regulation (GDPR). Data is essential in understanding the customers and their journeys and in developing the advertising campaigns. Personalised and automated content creation and sharing will not be possible if the data are not available. When customers are not willing to release relevant information, the AI algorithm does not receive the resources it needs to learn and develop the process. The AI algorithms need access to that data to give accurate recommendations. Even when the data is available, it should be AI-ready, that is, readily available for machine learning. Companies have been collecting information about their customers for many years, and it is essential that they start considering the information with regard to AI, making it structured enough for digital marketers to use.

Resources: Though AI is getting much attention as a fast-developing technology, the cost needed to deploy it for digital marketing may be a limiting factor. Top IT companies and start-ups champion most of these developments; it would not be surprising if AI were limited to some of the biggest advertising agencies who have the financial capabilities. The financial implications of the research and development that go into creating and maintaining AI do not make it readily available for everyone, but no doubt it will become cheaper as time goes on. Besides, the human resources needed to champion these projects might also pose a challenge. The level of knowledge about AI in digital marketing is not keeping pace with the developments in AI as it becomes increasingly sophisticated. An insufficient supply of skilled individuals may be a barrier to exploring the full capabilities of data-driven digital marketing.

Trust in AI: There are trust issues with AI (Siau & Wang, 2018). Advertising practitioners feel that the machine is not creative enough or that it is going to take their jobs; brands feel they are losing grip over their narratives by allowing the machine to generate content, and are not convinced the algorithms can deliver results. Consumers feel they are just being targeted. Knight (2017) suggested that there are dark secrets at the heart of AI, because no one knows how the most advanced algorithms do what they do, and that could be a problem. Presenting the state of AI in 2019, Vincent (2019) asked: if computers are not explicitly taught (as they learn on their own), how do you know how they make decisions? He further argued that teaching computers to learn for themselves is a brilliant shortcut; moreover, like all shortcuts, it involves cutting corners. This inherent fear about the prospect of AI highlights challenges for its adoption for marketing communications.

3.2.5.3. Research agenda. These opportunities and challenges open avenues for future research to understand how best to harness the prospects of AI within digital marketing.

Conceptual and theoretical development: AI has been applied in many different domains such as medicine, hospitality and travel. While the possibilities of using artificial intelligence (AI) to extract information about customers, generate advertisements that will appeal to them and


share them digitally have been presented, a holistic conceptual and theoretical understanding of these prospects is needed. The current hype around AI is creating a blurry picture, calling for further research and clarification (Wirth, 2018). A systematic review of AI-related applications in digital marketing, definitions and terms, with empirical insight, is needed, especially within the context of the stakeholders – the advertisers, brands and consumers. Exploring how firms should integrate AI, either as human replacements or in integration with humans, is also needed (Huang & Rust, 2018).

AI integration with OBA and MLBA: Following on from the conceptual and theoretical development of AI in digital marketing, the integration with online behavioural advertising (OBA) and mobile location-based advertising (MLBA) should be examined. Currently, behavioural targeting mostly occurs when using computers or smartphones (Gutierrez, O’Leary, Rana, Dwivedi, & Calle, 2019); scholars argue that it offers personalised and targeted advertisement, offering a precise way of targeting customers (Kumar & Shaphali, 2016) and contributing to the growth in online advertising revenues (Chen & Stallaert, 2014). Likewise, MLBA offers consumers benefits such as personalised communications that are tailored to the mobile user's real-time geographic location (Krishen, Raschke, Close, & Kachroo, 2017). These two concepts are emerging marketing strategies, and they involve collecting data either online or offline and using it to develop advertising campaigns. With AI offering data collection and processing at a faster rate, a better understanding of, and effort towards, triangulating these online and offline data is essential, providing practical implications for marketing researchers and practitioners.

Ethics: The ethical tension between the personalisation of advertising and consumers’ privacy represents the personalisation-privacy paradox (PPP) (Gutierrez et al., 2019), often explored through the privacy calculus theory (PCT) (Xu et al., 2019). Collecting, using, and sharing personal data for marketing purposes has always raised consumer privacy concerns (Boerman et al., 2017), and this concern is expected to grow as machines are deployed to extract and process these data. A better understanding of how stakeholders deal with this concern is important. Consumers have misconceptions about the extraction of data for marketing purposes, as they have little knowledge about it (Smit, Van Noort, & Voorveld, 2014), and yet advertisers and brands keep extracting these data. This is what Boerman et al. (2017) described as ‘information asymmetry’, where companies know much about consumers, yet consumers know little about what happens to their data. Given that machines extract these data without human intervention, a theoretical understanding of the implications is worth developing.

Content creation: It will be necessary to intensify the exploration of the content creation capabilities of AI for digital marketing. This is an agenda marketing practitioners will find relevant; it follows the conclusion by Mogaji, Olaleye, and Ukpabi (2018) that personal data and information legitimately collected online by companies can be used to design and personalise advertisements that appeal to consumers’ emotions and are shared online. This process differs from using highly personalised and rational data, such as age, gender and location, which Aguirre et al. (2015) found to reduce click-through rates, and focuses instead on things that emotionally appeal to individuals, like their choice of colours, the images being used and background music. Harnessing the data and expertise offered by AI to develop the marketing strategy can enhance the decision-making process; Boerman et al. (2017) advised that advertisers should consider the level of personalisation, as advertisements perceived to be too personal can seem intrusive.

Attitude towards AI-developed campaigns: There have been previous studies that uncovered some favourable and unfavourable consumer responses to ad personalisation, but the moderating factors that strengthen or weaken these effects are still mostly missing (Bleier & Eisenbeiss, 2015). Building on the previous study that explored consumer avoidance of personalised advertising (Baek & Morimoto, 2012), future research should endeavour to empirically explore factors that can influence the acceptance and avoidance of AI in digital marketing. Research should consider how well AI is targeting customers with relevant advertisements, as personalised advertisements increase intention to purchase when the advertisement fits the customer's needs (Van Doorn & Hoekstra, 2013). AI can extract the data and deliver the advertisement, but how well are consumers engaging with it? What are they engaging with, and what are they finding appealing? This information can help shape future development and the ergonomic design of AI systems.

Stakeholders’ attitudes towards AI for digital marketing: Advertisers’, consumers’ and brands’ attitudes towards the innovation are also worth researching. As Huang and Rust (2018) noted, AI is taking over human jobs, and some creative tasks are being threatened. How are professionals able to deal with these dynamics? How will marketers and advertising practitioners integrate AI into their jobs? Privacy concerns of consumers, trust in AI, and transparency about the reasons why companies and advertising agencies are collecting the data could be deterrents to this initiative, as Jai, Burns, and King (2013) found that when consumers know that their information, collected on websites, is shared with third-party companies, there are lower repurchase intentions, increased perceived risk and perceived unfairness. Brands are aware of the financial implications and the benefits; how eager are they to explore the prospects of AI?

Metrics and evaluation: The relationship between advertisement and intention to purchase should be revisited in the context of AI in digital marketing. Consumer awareness of personalised targeting through extracted data alters consumers’ responses to online behavioural advertisements (Aguirre et al., 2015). Also, human beings may be difficult to monitor and observe (Mogaji et al., 2018). Their browsing history may not be a true reflection of their personality and what appeals to their emotions. Therefore, the metrics and forms of evaluating the effectiveness of AI-developed campaigns should be further explored. There is a need to develop and test the practical feasibility of AI impact, its contribution towards the industry's growth, whether it has increased sales for brands and whether it has enhanced consumers’ choice-making process.

Conclusion: While acknowledging the role and advancement of AI in everyday life, this piece has focused explicitly on the role of AI in digital marketing. The prospects, challenges and research agenda have been explored. AI offers enormous opportunities for key stakeholders: it helps marketing agencies get a better understanding of the data to meet their goals, and helps brands connect emotionally to their customers. AI opens opportunities for interdisciplinary collaboration involving AI developers and creative individuals, enhancing the power of AI to develop appealing advertising campaigns. Researchers following the research agenda will provide theoretical insights and managerial implications relevant for AI developers, marketers and brand managers.

3.2.6. Artificial Intelligence for sales – Kenneth Le Meunier-Fitzhugh and Leslie Caroline Le Meunier-FitzHugh
AI is changing the business landscape, and its effects are no less in sales than in any other business function. Sales is where the business ‘meets’ the customer, whether in retail situations (business-to-consumer, B2C) or in business-to-business sales (B2B). Sales can take place face-to-face, through retail outlets, over the Internet or other communication media. Retail B2C shopping via platforms such as Amazon is guided and influenced by various AI algorithms that have completely changed the retail selling experience, and this trend is set to continue for the foreseeable future. For example, recommendation algorithms present ‘suitable’ offers for on-line customers to consider, rather than waiting for the customer to make their selection. Additionally, a location algorithm will allow the customer to be presented with the location of outlets linked to their sales selections (Antonio, 2018). However, the impact of AI on sales generally goes much further than this. B2B sales are often more complex and have a greater monetary value per exchange than retail sales, although the latter are more numerous. The challenge for B2B sales is to understand how AI is influencing sales exchanges. The day has already come where salespeople may be prompted by AI in real time via their tablet or phone during


sales negotiations, in both B2B and B2C interactions. Further, some B2B sales functions are already being automated, e.g. customer relationship software being used to identify sales readiness and telesales calls being initiated and guided by automated systems. It could be just a short step to AI providing integrative sales experiences online and through teleservices that could remove the need for salespeople altogether. The following section explores some of the challenges and opportunities being offered by AI around the sales function, and then presents possible future research questions.

3.2.6.1. Emerging challenges. AI is a job killer: Automation and automated services are replacing people in many industrial and service sectors to reduce costs and improve efficiencies. It is forecast that over 375 million jobs globally may be ‘lost’ to AI by 2030 (https://www.scientificworldinfo.com). The effectiveness of automation in sales has already been proven in telesales and forecasting situations, which have resulted in multiple sales-related job losses. It may be that, as AI develops, the mechanistic nature of the algorithms will reduce the need for competitive differentiation and creativity in sales exchanges. Additionally, as competitors use similar information derived from big data analytics to base their new product development (NPD) decisions on, or to develop ‘customised’ offers, it may become harder for customers to differentiate between offers and for salespeople to justify their existence. The proposition is that increased use of AI could reduce investment in sales training and development, increase monopoly/oligopoly situations and reduce the sales work available in that industry.

Loss of privacy: The increased use of automated systems and AI opens customers up to privacy risks. While CRM systems allow companies to compare their customers’ buying behaviours, the customers are at risk of finding this information in the public domain or being ‘sold’ to other interested parties. Salespeople may find that their customers are being ‘poached’ by other salespeople who have access to automatically generated leads. Alternatively, salespeople may find that the previously personalised data that forms the basis of their customer relationships is now readily available to other parts of the company, reducing their effectiveness.

Changing the nature of salespeople's interactions with customers: The danger is that the increasing use of AI allows B2B customers to self-select their sales deals, including pricing, discounts and special features, removing the need for human interaction. Moreover, increasing reliance on AI selling systems may result in trust in the selling organisation being damaged. The increased use of AI removes the human touch and emotional connectivity that customers have developed with their sales representative. This loss of authenticity in the sales process may mean that buyers are unsure who to trust (Hurley, 2017). A further danger is that basing sales forecasting on past behaviours and online activities may perpetuate a bias, e.g. represent a past or random concern that is no longer relevant to the customer's current buying needs. While new types of customer insight may be created through AI, some of these may be too detailed, or on the wrong track, to be profitable. Finally, there is a danger of losing control of the sales process as integrated, automated systems prioritise emails, track new contacts and create meetings or agendas of which the salesperson is unaware. This may lead to the salesperson no longer feeling in control of their activities and worrying about missing new opportunities that are not highlighted through the automated system (Loring, 2018), but which may have been identified by the salesperson in their interactions.

Lack of understanding of AI algorithms: It is sensible to pose the question: exactly what do sales algorithms do? Algorithms are meant to provide guidance and information for sales action by moving the customer along a prescribed route (Knight, 2017). They collect data on what the customer is looking at or doing, and extrapolate the information that salespeople might require, which may be very helpful or extremely annoying. The effects of algorithms still need to be researched further. If AI is driving market research, can the results be trusted? With the increased use of bots and auto-response algorithms, how can online market research be verified? The salesperson may be presented with inferior or biased information on which to base their sales negotiations.

3.2.6.2. Opportunities. Managing the performance of salespeople: AI provides sales managers with dynamic assessments of performance via AI-driven dashboards. These can be used to identify upselling and cross-selling opportunities among the company's customers. AI may also be used as a personal assistant that is able to schedule meetings with selected partners, releasing the salesperson's time for actually attending the meetings. An AI assistant can pull through prospective and existing customer data from internal files against key criteria or specified names (Loring, 2018).

AI predictive abilities in sales forecasting and customer management: Salespeople love up-to-date data continuously streamed to them wherever they are based. Customers like to be communicated with personally, and AI can help salespeople to develop their relationships by providing this personalised information, saving time and preventing mis-directed sales efforts. Combining new algorithms with existing CRM platforms should allow for the analysis and prediction of selling opportunities, or allow the salesperson to identify changes in customer status (Antonio, 2018). The ability to leverage big data to focus the sales professional on their target customers should enable the building of more authentic relationships. Additionally, scenarios and coaching may be provided by advanced behavioural analytics to produce suggestions on how to handle blockages in the sales process, and on how to benchmark oneself against top performers (Hurley, 2017). An AI system can automatically update the CRM system by monitoring incoming and outgoing data. The predictive capabilities of AI can also now be used to gauge the customer's possible lifetime value, allowing salespeople to invest in these growth areas and to offer incentives and interactions geared to the customer's individual needs (Loring, 2018). Behavioural analytics and customer profiling may be used to provide salespeople with the ability to personalise interactions to meet the needs of their different customers. Some systems will also have the ability to analyse conversations to identify approaches that increase effectiveness in handling interactions and lead to greater sales success (Hurley, 2017).

The effects of big data analysis on prospecting for new leads and customer retention: Managing big data is a key area that AI can help to streamline. The prospecting process for new customers is time consuming and frequently leads to disappointment. An AI algorithm can provide an interaction history based on contacts and social media activities (Antonio, 2018), and AI-led lead-scoring may be used to identify who is ready to buy, and who in the pipeline is ready to move from prospect to customer. Using AI-identified leads enables salespeople to concentrate on a significant number of potentially beneficial sales leads that, once qualified, can be guided through the buying journey towards purchase (Loring, 2018). Therefore, by employing AI systems to identify patterns in customer behaviours it is possible to pinpoint customers in the sales pipeline who are at sales readiness, as well as to identify buying trends in existing markets and the possible emergence of new markets. Unstructured data can be analysed, manipulated and presented in a structured way, so that it may be used in the sales process and help simplify sales conversations through the provision of key facts about the customer's interests (https://www.scientificworldinfo.com). Salespeople may also use AI systems to improve customer retention and optimisation (Loring, 2018). Buyers are kept loyal with customised incentive programmes. AI-assisted customer care programmes will help to make sure that the salesperson is aligned with their customer's needs and current interests. Real-time purchases in-store and online may be increased through point-of-sale information being provided through conversational interfaces and virtual assistants providing the right information at critical points in the sales process (https://www.scientificworldinfo.com).
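The lead-scoring idea discussed in this section can be illustrated with a minimal, self-contained sketch. All feature names, weights and company names below are hypothetical and invented purely for illustration; a deployed system would learn its weights from historical CRM outcomes (e.g. via logistic regression) rather than hard-coding them.

```python
import math

# Hypothetical engagement signals with hand-set weights; a real system
# would learn these from historical CRM data rather than hard-code them.
WEIGHTS = {
    "email_opens": 0.4,           # recent marketing-email opens
    "site_visits": 0.6,           # product-page visits in the last 30 days
    "demo_requested": 2.5,        # explicit buying signal
    "days_since_contact": -0.05,  # engagement decays over time
}
BIAS = -2.0

def score(lead):
    """Map a lead's signals to a 0-1 'sales readiness' score (logistic)."""
    z = BIAS + sum(w * lead.get(k, 0) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def rank(pipeline):
    """Order prospects so salespeople focus on those closest to purchase."""
    return sorted(pipeline, key=lambda name: score(pipeline[name]), reverse=True)

pipeline = {
    "Acme Ltd":  {"email_opens": 5, "site_visits": 3,
                  "demo_requested": 1, "days_since_contact": 2},
    "Beta GmbH": {"email_opens": 1, "days_since_contact": 40},
}
```

Ranking this toy pipeline puts "Acme Ltd" (recent demo request, frequent visits) ahead of the long-dormant "Beta GmbH"; in practice such scores would feed the CRM dashboards described above rather than being read directly by salespeople.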


The effect of AI systems on salespeople's cognitive and professional development: The effects of AI are challenging salespeople to develop new knowledge and skills, including management roles (Loten, 2017). The use of AI algorithms is contributing to productivity and provides sales process enhancement through the elimination of non-productive activities and the removal of mundane jobs. AI can also speed up the sales process by identifying changes in buying patterns and taking over some of the more repetitive administrative roles (Loring, 2018). Salespeople can then concentrate on building relationships with the customers that are identified by their AI assistant as having the greatest potential growth and lifetime value.

3.2.6.3. Research questions

• How far should AI be encouraged to take the sales process – human buyers interacting with AI sales assistants, or AI buyers purchasing from AI sales assistants?
• How can big data analysis of social media and other online sources be used to allow the salesperson to develop greater adaptability to their customers’ needs?
• Will AI free the salesperson to develop the higher-level, soft evaluative skills and emotional intelligence required to handle high-level, personalised sales interactions, or consign salespeople to mundane, data-driven interactions?
• Will AI standardise sales performance across industries or allow for the development of more individualised sales roles?
• How can privacy and sales intelligence be safeguarded in an AI-driven environment?
• What are the effects of algorithms on customers’ behaviour?

3.2.7. Complementary assets and affordable-tech as pathways for AI in the developing world: case of India – P. Vigneswara Ilavarasan

This section argues that complementary assets and affordable-tech are the pathways for AI in a developing country. The development, implementation and adoption of AI are dependent on these two pathways. The future research agenda should explore the possibilities of AI-based affordable technologies. This argument is discussed in the context of the developing world, using a case study of India.

Studies on the adoption, use and impact of information technology (IT) in businesses or organisations are not new (for instance Blili & Raymond, 1993; Brown, 2015; Liu, Ke, Wei, & Hua, 2013). The value extracted from IT investments is enabled by the presence of complementary assets (Hughes & Morton, 2006). The complementary assets can be discussed in three major areas (Laudon & Laudon, 2017): organisational (culture, structure, process and people resources); managerial (top management support, incentives to use, and training); and social (national IT and telecom infrastructure, education, supportive regulatory environment and legislators, and technology business ecosystem).

AI can be deployed in a wide range of areas (for a recent review, please see Duan et al., 2019). For instance, Ransbotham, Kiron, Gerbert, and Reeves (2017) highlight that AI reduced the production rate at Airbus. A quick review (Dutton, 2018) showed that high-income countries have either invested money in AI-related research or established institutions to explore the potential benefits. The review also indicates that high-income countries are leading the AI bandwagon. Except for India, Kenya and Tunisia, none of the developing world countries are part of this discourse. The presence or absence of complementary assets is likely to influence the trajectories of AI in the developing world. The lessons learnt from the Indian or Kenyan experience are likely to benefit similar low-income countries in future. For instance, Pakistan, Vietnam and others are imitating the success story of Indian software services exports (Steinmueller, 2001; Yim et al., 2016).

For discussion purposes, we will look at three areas of complementary assets – national IT and telecom infrastructure; the nature of enterprises; and the regulatory and legislative environment. The national IT and telecom infrastructure of the world is captured by three indices: the E-Government Development Index (United Nations, 2018), the Information and Communication Technologies (ICT) Development Index (ITU, 2017), and the Networked Readiness Index (Baller, Dutta, & Lanvin, 2016). The ranks scored by India are 96, 143 and 134 respectively. These three indices include parameters that reflect the quality of complementary assets. Some of the parameters are Internet users, households and individual subscribers of telephone and broadband (fixed and mobile) per 100 inhabitants, firm-level IT adoption, online public services availability, and the quality of relevant people resources available. As India is doing poorly on these parameters, the overall ranking is low, reflecting the nature of the complementary assets available for the AI domain.

The second area of complementary assets is the nature of enterprises in India. An official estimate (Data Gov, 2018) shows that in 2011 there were 57.6 million enterprises in India. Of the total, 84.5% are own account enterprises. Though the enterprises employ a large volume of people, their nature indicates poor technological capabilities and small size. An analysis of recent data on employment published by the government of India (Ilavarasan, 2018) showed that nearly two-thirds of the workforce is employed in small enterprises. More than half of them are working in businesses with fewer than six employees. The poor technological capability of these enterprises can be inferred from the fact that two-thirds of them do not use electricity. More than two-thirds of enterprises are owned. The enterprise owners are likely to be less educated. If AI technology is made available, the absorptive capacity of these owners to understand its potential is limited (Roberts, Galluch, Dinger, & Grover, 2012). In India, less than one quarter of firms use AI in their business processes, and the start-up ecosystem in the AI domain is minuscule (Niti Aayog, 2018). Not surprisingly, even in the USA, only 17% of the 1500 senior executives surveyed had an understanding of AI and its applications in their businesses (Bradbury, 2018). If AI deployment is possible only in larger enterprises with a threshold amount of technological capability, the scale of adoption in future will be low in India.

The regulatory and legislative environment in India is positive. The digitisation-led policy initiatives collectively called Digital India, pursued by the Government of India, are proactive towards AI (Niti Aayog, 2018). The Niti Aayog, the policy recommendation body of the Govt. of India, is keen on using AI for national development. The areas designated for change are healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation. Of the listed areas, it is estimated that US$ 14 billion of investment is committed to creating 100 smart cities in India (Pratap, 2015). The government recognises the inadequacies in its machinery. The policy document asks private players to participate in AI development and deployment, including readying the people resources. It is also open to working with others to develop AI-based solutions. For instance, Wadhwani AI (https://eng.wadhwaniai.org/), a non-profit research institution, connects AI experts from universities, grassroots non-governmental organisations and government organisations in developing AI-based solutions for social good. At present, its work domains are maternal and child health, tuberculosis and cotton farming in India. However, legislators are likely to prefer AI developments that do not replace people. Frey and Osborne (2017) postulated that AI-based automation technologies are likely to displace jobs across the world, even those involving cognitive tasks. Based on their work, there are predictions that 52% (McKinsey Global Institute, 2017) to 77% (Ilavarasan, 2018) of the jobs in India will be automated shortly. Given that the average family size is four, the impact of automation shall have serious implications. As women and other disadvantaged groups predominantly do low-skilled jobs, they are also likely to be replaced by AI-based technologies (Ilavarasan, 2018). The consequences of the labour displacement might include severe social disorder. No wonder Mr Gadkari, a Union minister, has publicly announced that driverless cars shall not be permitted in India,2 signalling no support for AI in similar


lines.

2 https://auto.ndtv.com/news/decided-not-to-allow-driverless-cars-in-india-nitin-gadkari-1964489.

In light of the above, we hypothesise that complementary assets are weak or absent for AI in the low-income countries. This is the major challenge for widespread AI adoption and usage in the low-income countries. This challenge can be addressed if AI-based solutions are affordable and cater to low-technology enterprises or users. There are some positive demonstrations from both small and larger firms. For instance, Get My Parking, a mobile-application-based solution, is being used by low-educated parking attendants and contractors in parking spaces in New Delhi.3 This application requires low capital and short learning curves. The use of applications like UBER or Google Assistant by drivers in regional languages on their low-cost smartphones indicates the adoption of AI-based technologies. Get My Parking is a technology start-up, whereas Google is a larger technology firm. Small firms are handicapped by a lack of access to the large volume of data that is essential for refining AI solutions, but are likely to target markets not catered for by the big players. However, we do not know whether the AI ecosystem is dominated by start-ups or resembles an oligopolistic arrangement.

3 https://yourstory.com/2018/01/power-iot-get-parking-wants-disrupt-parking-industry-make-driving-easier.

In the light of the above, future research can focus on the following questions:

• What are the country-specific factors that drive the development and deployment of AI? Also, why are India and Kenya among the few developing countries exploring the potential of AI?
• How do different complementary assets result in divergent AI adoption and deployment levels?
• Does the small-firm-dominated industrial structure hinder AI development and growth?
• Do the likely users, both from industry and government, possess adequate knowledge about AI and its applications? How does this awareness affect the level of adoption and support for the AI ecosystem?
• Is affordability the prime factor for the adoption of AI in the developing countries?
• Is the linkage between AI and its impact on employment a repeat of the old debate on computerisation and labour displacement?
• How do national governments balance the positive social potential of AI vis-à-vis the potential negative consequences for the future of work?
• How can private technology firms collaborate with national governments in the developing world in developing the required skills and deploying AI-based solutions?
• How do start-ups compete with the larger firms in the AI market? Do they focus on different application domains? How do we explain or understand the differences?

3.3. Arts, humanities & law perspective

3.3.1. People-centred perspectives on Artificial Intelligence – Jak Spencer

Whilst AI has a multitude of different technological, political and legislative challenges and opportunities, ultimately it has the most impact on people and their everyday lives. People-centred design is a form of innovation that starts with empathy for people and ends with iterative solutions that solve real people's needs. Inclusive Design uses this methodology to create new innovations that meet the needs of the widest number of people possible, no matter their age, ability, or social, cultural or economic background (Clarkson, Coleman, Keates, & Lebbon, 2003). In recent years, many projects at the Helen Hamlyn Centre for Design, based at the Royal College of Art in London, have focused on the impact and implications of AI in people's lives. The challenges, opportunities and future research agenda are outlined below.

3.3.1.1. Challenges. One of the main challenges regarding the adoption and implementation of AI is the current connotations and perceptions of the subject. To most people, AI is a mysterious concept that is not only hard to define but also difficult to understand in terms of how it manifests itself in their everyday lives. Whilst obviously there are numerous positive uses of AI, many people associate it with negative press and media campaigns depicting AI as the cause of everything from mass unemployment to data breaches, removal of freedom and even all-out global warfare. The ill-defined concept and poor media coverage have given AI a negative brand image, and the jury is still out as to whether the good outweighs the bad.

Once you delve into more detail, further issues arise. The prevalence of biases is now well documented in many of the forms of AI we interact with, from racist financial algorithms to sexist chatbots (O’Neil, 2016). Part of the problem is the controversial role of playing the ‘creator’. Artificial Intelligence that mimics human relationships has been created by people with a set of preconceived judgements, moralities, ethics and biases. As O’Neil states, “algorithms are just opinions embedded in code” (O’Neil, 2016). Even as we move towards truly intelligent robots that can build themselves, they still use human culture as a source for understanding relationships; stereotypes, discrimination, prejudices and all. In one recent example, an experimental conversational agent that learnt from Twitter conversations took less than 24 h to start tweeting hateful, racist, sexist and homophobic phrases (Vincent, 2016). One recent study by researchers at the Helen Hamlyn Centre for Design (Spencer, Poggi, & Gheerawo, 2018) found that the vast majority of digital assistants are portrayed as young, Caucasian women, which enhances the negative perception of the stereotyped role of women in secretarial roles. This is further enhanced by the language used by virtual assistants in general conversation and when responding to certain types of harassment (in one study, none of the major virtual assistants responded in a negative way to being called a slut (Fessler, 2017)).

Of course, many of these issues can be resolved by first questioning why we are looking to develop AI for a particular problem, and what the implications for people will be. At the moment, Artificial Intelligence is predominantly concerned with activities of productivity, efficiency and the advancement of business objectives, and ultimately with increasing growth in the financial sense. In the very near future, we’ll be able to have a cup of sugar delivered to our home by drones, work in a completely virtual office with efficient and productive artificial colleagues, or have the latest fashion trend waiting in our size in the wardrobe as soon as we get home. But what if this ‘efficient living’ isn’t entirely a good thing? Many studies have shown that social interaction is fundamental to maintaining good mental health, whilst discovery, taking notice of things around us, and serendipity are also important. In Japan, the term hikikomori refers to people who shut themselves off from society, often never leaving the house, relying on one or two close family members, or on deliveries from online stores, to sustain their lives. In 2016, Japan had 540,000 people aged between 15 and 39 who had not left their homes in the last year (Ma, 2018). People can now live their entire existence without leaving the confines of their own home – and this must be having significant effects on our mental health.

A final note of caution amongst the challenges of new AI concerns the transition phase between things working with AI capability and things working with reduced ‘smart’ capabilities – a phase termed ‘augmented intelligence’, the consequences of which we are already seeing. At best, this transition phase can be confusing and frustrating; at worst, discriminating and exclusive. One challenge remains how ‘natural’ the interaction with AI can be. Whilst in the future this may be seamless, during this transition phase interactions are still on the terms of what the machine can understand. A recent research participant suffering


from early-onset dementia had been bought a smart speaker digital assistant by his daughter to help him with daily activities and reminders. Ultimately it went unused because the participant wanted to ‘schedule an appointment in his diary’, whilst the digital assistant only understood ‘booking an event in the calendar’. In this instance, whilst the technology was advertised as an almost human-like assistant, in reality users have to learn a new form of interaction, moving from the visual language we use on screens to verbal interactions through speakers and microphones.

3.3.1.2. Opportunities. Of course, the fact that so many challenges remain can also be interpreted positively, given the amount of opportunity there is to improve our interaction and relationship with AI. One obvious opportunity that is already being explored is the use of AI to enhance ‘softer’ goals rather than the persistent drive for economic productivity or financial efficiency. Indeed, this is one area where people-centred, qualitative research can really enhance further technological advancements, by identifying the genuine needs of people to deliver innovations with relevancy that solve real-world problems. With virtual assistants, for example, conducting people-centred design research with a small number of diverse people resulted in the creation of an assistant that moved away from the generic, stereotypical young, Caucasian, secretarial woman towards an older, wiser personality such as David Attenborough or Judi Dench (Spencer et al., 2018). As our interactions with machines start to become more and more human-like, the opportunity lies in the design of these new personalities and the creation of new types of relationship. Should these interactions be on a peer level, a subservient level or a superior level? Should we treat new technology in the same way we treat our friends, bosses, parents or pets? These are questions that surely have to be answered on a case-by-case basis, and design can help to do this.

The potential of AI to help solve some of the world's most pressing social challenges is also one that cannot be overlooked. From our ageing population to the loneliness epidemic, from quicker and more accurate healthcare diagnostics to poverty alleviation, AI is helping to shape major global social challenges. Again, the way to be impactful here is to combine big data with the deep data of design. Algorithms and intelligence can be designed by engineers and computer scientists, but involving designers who are trained in understanding the needs, frustrations, behaviours and attitudes of real people can provide more innovative and ultimately more impactful solutions.

A further opportunity lies in the ‘re-branding’ of AI into something people can appreciate and even rally behind. Often the negative stories relating to unequal, biased or discriminatory algorithms are due to the opaque nature of both the definition and the underlying functionality of AI being misunderstood. Involving designers in the process, and involving people, helps to create more open, fair and even democratised AI that serves the people.

3.3.1.3. Research agenda. One of the most pressing challenges in the development of future AI is ensuring that it is not developed in a silo, without input from other disciplines and, importantly, real people. People-centred design has a history of acting as the ‘cement between the bricks’ of more ‘scientific’ disciplines and can generate impactful innovations in combination with AI. However, more understanding is needed of how the two disciplines can come together and combine in the most fruitful ways. This needs to happen not only at a process level – combining two distinctly different disciplines – but also at an organisational and educational level, ensuring that teams dedicated to solving serious challenges have a mixture of disciplines and can develop and disseminate the mixed methodologies they use. The recent acquisition of Datascope by people-centred design firm IDEO, as well as the number of design and data labs being set up by the ‘big 4’ consultancies, go some way to realising this, but future dissemination of successful and unsuccessful methodologies is important.

Another important future direction is ensuring that people are at the centre of any AI developments. In the practical sense, this means moving beyond AI for efficiency, towards creating fairer, more just and equitable uses that not only improve people's lives but also go on to enhance them by creating pleasurable experiences – joy, connection, play and laughter. Transparency can help with this by reducing the mystique and opaqueness of AI to the general public. Acceptance of failure is also important – there will be mistakes along the way, but we need to accept this and learn from them.

There is still a huge amount of research that needs to be completed on the ethical challenges of introducing new technologies into our lives. What level of responsibility do we give these new machines, and what level of blame do they get when something goes wrong? Is it fair to ask machines questions we would struggle to answer as humans? One question that is often asked in the driverless car debate is how an autonomous vehicle would decide whom to hit in the worst-case scenario – an 80-year-old or a 3-year-old (Awad et al., 2018). But is this something we could answer ourselves? It also opens up other areas of research enquiry, such as the brand challenges of AI. A project at the Helen Hamlyn Centre recently reached out to a major global tech company to explore the possibility of AI helping to reduce suicides along a riverfront, but discussions stalled because of the brand implications of being involved in such a sensitive topic and how it might look, despite the success or failure of the technology.

3.3.2. Taste, fear and cultural proximity in the demand for AI goods and services – Annie Tubadji

All industrial revolutions generated not only economic but also important social challenges and opportunities, and yet there is currently no scientific economic recognition of the importance of the social changes that may emerge due to the fourth industrial revolution. AI-generated goods and services might be objectively more efficient and less costly than human-made ones, yet we know from the hedonic valuation and behavioural economics literatures that human taste is not based entirely on the objective characteristics of a product or service. People generally fear what is unknown and new because it brings them feelings of uncertainty. For the same reason, they tend to prefer what is culturally closer to themselves. Taste, fear and cultural proximity seem to cause significant biases in consumer behaviour, and this will inevitably affect the demand for AI goods and services; yet this issue remains under-researched. Meanwhile, whether AI-induced social changes are palatable to individuals and society will determine whether there will be demand, and therefore whether a market will exist, for AI goods and services.

The first industrial revolution brought the steam engine, which aided travel between continents; trade intensified the growth of cities, slavery was abolished, and laws and institutions evolved. The second industrial revolution brought mass production and the car, which increased urban sprawl; cities grew into megacities, wages increased because factories were more efficient, work times got shorter, and all this gradually allowed for the growth of the middle class. The third industrial revolution, with the invention of the computer, brought increased automation and productivity, but also increased pollution as a social aftermath (Kling, 1991; Langton, 1984). The literature on the fourth industrial revolution, however, has remained narrowly focused on the question of skills and whether humans and AI are substitutes or complements in the labour market (Acemoglu, Autor, Dorn, Hanson, & Price, 2014; Autor, 2013; Brynjolfsson & Hitt, 2000; Frey & Osborne, 2017; Genz, Lehmer, & Janser, 2018; Katz & Margo, 2013). The literature is yet to engage with the significance of the social change that will be involved in switching between human-made and AI-made goods and services. AI is intensively used for learning about human demand patterns, serving as a technology that enhances the efficiency of marketing (Kwong, Jiang, & Luo, 2016) and retailing (Weber & Schütte, 2019). Very few studies have paid much attention to the specificities of the demand for AI goods and services, but two

20
Y.K. Dwivedi, et al. International Journal of Information Management xxx (xxxx) xxxx

examples do exist. The first is a quasi-insight on the demand for goods and services: Morikawa (2017) examines individual predictions about the type of industry that is likely (and thus inferred by the authors to be desirable or more acceptable for the interviewee) to experience substitution of human with AI labour. The other study focuses on the demand for AI services in a very specialised field: Tubadji, Webber, and Denny (2019) look at the demand for robo-advisory services in the banking sector. Apart from these two studies, the topic is largely ignored in the economic literature in favour of supply-side research on how producers will be interested in the trade-off between human and AI workers. Yet the challenges and opportunities of AI-induced social change may have complex implications for the diffusion of AI goods and services and therefore deserve careful examination.

We classify the challenges and opportunities for AI goods and services in relation to social change and consumer behaviour into three categories: (i) taste, (ii) fear and (iii) cultural proximity. These are overviewed below.

3.3.2.1. Challenges. Taste is a challenge that hides a double-edged sword. On one side, people adapt relatively slowly, and definitely more slowly than artificial intelligence. The first appearance of the locomotive on film scared people, and it took time until cinema established itself as a successful industry. AI might be more efficient and reliable, but using these types of goods and services may face social resistance for a considerable time after these services become possible to offer on the market (Patsiotis, Hughes, & Webber, 2012). For a market to exist at all, both demand and supply are necessary. On the other side, taste is an unfaithful friend: over-enthusiastic but unjustified embracing of certain products and fashionable tastes creates famous price bubbles such as the tulip mania. First, just as in the tulip mania, bubbles lead to massively precarious economic conditions. In the Netherlands, many individuals started to risk their entire economic fortune on obtaining a single tulip bulb. The situation became so aggravated that public policy interventions were needed to prevent a national disaster (Garber, 2000). Second, taste-driven price bubbles tend to 'burst', i.e. price booms are famously followed by deep busts, which are proportional to the size of the boom (Breuninger & Berg, 2001; Emmett, 2000). Thus, if AI is embraced in enthusiastic fashion, it might spread explosively before the realistic consequences of its use are properly perceived and reflected in the demand for AI goods and services.

Fear is known in behavioural economics and prospect theory to be a stronger driver of human choice than pleasure (Kahneman & Tversky, 1979). While AI might be able to decrease our working hours and offer more efficient services, fear about competition for jobs between humans and AI, and fear about the unknown impact of AI, may lead us to underestimate the gains from AI and overestimate the threats. Thus, we may remove AI from our options for choice much earlier than we should (Shackle, 1949). An interviewee's severity of fear (as opposed to objective knowledge) about AI has not been seriously examined by either Morikawa (2017) or Frey and Osborne (2017). Meanwhile, Tubadji et al. (2019) find that increased experience in using technology is actually associated with an individual's disenchantment with the use of robo-advisory services in the banking sector. This finding suggests that even when fear is not initially present, a backlash may originate afterwards from the initial over-optimism about AI goods and services.
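The loss-aversion mechanism invoked above can be written down directly. The sketch below evaluates the prospect-theory value function with the parameter estimates reported by Tversky and Kahneman (1992); reading the stakes as gains or losses from AI adoption is our illustrative framing, not part of the original theory.

```python
# Prospect-theory value function with the parameter estimates of
# Tversky and Kahneman (1992): alpha = beta = 0.88, lambda = 2.25.
# Interpreting the stakes as outcomes of adopting an AI service is an
# illustrative framing; the function itself is standard.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def value(x):
    """Subjective value of a gain (x >= 0) or a loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

gain = value(100)    # prospective efficiency gain from AI
loss = value(-100)   # equally sized prospective job/income loss
print(round(gain, 1), round(loss, 1))
# Because lambda > 1, the loss looms about 2.25 times larger than the
# equal-sized gain, so fear can dominate choice even when expected
# gains and losses are objectively matched.
```

This asymmetry is one simple, quantitative reason why fear-driven under-adoption of AI services can persist even when the expected gains are real.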
Cultural proximity might be the ultimate challenge for the embracing of AI goods and services. The goods and services of the first, second and third industrial revolutions were tools that enhanced what remained a predominantly human production. Steamboats moved people faster across the sea; the automobile brought the suburbs closer in time to the city centre for the urban worker; computers connected humans. The negative social effect of Facebook and other electronic media is based on the fact that they dehumanise social experience and people become less social. People are parochially thinking social animals that tend to exhibit a preference for homogeneity, i.e. to show strong preferences for things familiar and similar to one's own identity (McPherson, Smith-Lovin, & Cook, 2001). This latter tendency is often labelled cultural proximity and is widely researched in the economic literature, from Adam Smith to modern regional economic spatial analysis (Torre, 2008). Cultural proximity is known to be a key determinant of economic flows of people, goods and financial investments in gravity models – people prefer to live with, trade with and invest in what is closer to their own identity (i.e. the home bias effect) (Tadesse & White, 2010). AI lacks human identity, so it is by definition at one degree of distance from any human. How will this cultural distance between humans and AI interact with people's demand for AI goods and services? Are AI products real substitutes for human products according to consumers' perceptions? Which products and services will be affected most by the lack of human proximity between the labour employed in these goods and services and the consumer?

3.3.2.2. Opportunities. Learning about the market taste for AI-produced goods and services can firstly improve firms' and employers' use of AI. While their attention is currently focused on the pros from efficiency gains, they might be overlooking the market reaction to the integration of AI in their production process. Learning about tastes informs the market about AI-generated products and services, which can help determine the quantity of AI goods and services to be invested in. This can prevent producers from being trapped with over-investment in a type of technology whose product might not be readily accepted by consumers. Taste studies can also signal to the producer if the current high demand is a temporary bubble.

Learning about the fear factor in AI-related social opinions and policy-making tendencies can help us make evidence-based AI-related decisions. It will save us from being swayed by bounded rationality in our economic, political and policy-making decisions that relate to this novel technological device.

Learning about the importance of cultural proximity in the context of AI–human cultural distance can help to quantify the cultural gravity effect that bounds our consumption of AI goods and products. Having this quantitative information can serve as a tool for predicting the diffusion of AI goods and services in a locality.

3.3.2.3. Research agenda. Our ability to take the opportunities to learn about the effects of taste, fear and cultural proximity on demand for AI goods and services depends on our ability to adapt the research toolkit and generate the most enlightening answers. The multidisciplinary Culture-Based Development (CBD) 'toolkit' is a combination of moral philosophy, consumer behaviour, behavioural economics and regional economics, and should be used because:

• Moral philosophy is optimal for explicating the mechanisms behind moral sentiments and taste (Smith, 1759).
• Consumer behaviour is the classical field for studying demand using hedonic modelling of preference and choice (Becker, 1996; Scitovsky, 1976).
• Behavioural economics has developed the best knowledge for analyzing fear mechanisms (Kahneman & Tversky, 1979).
• Regional economics is most aware of cultural proximity (Torre, 2008; Tubadji & Nijkamp, 2016) and cultural gravity analysis (see Tubadji & Nijkamp, 2015). Regional economics has also studied the interaction between cultural relativity and fear and has documented its implications for the percolation of ideas and knowledge (Tubadji, Angelis, & Nijkamp, 2016).

Thus, a culture-based analysis is required that combines analyses of consumer behaviour and ethical mechanisms using bounded rationality with regard to the consumption of AI goods and services. It would account for the specificity of cultural tastes and fears in each spatially defined market. Analyses are required that identify tastes for the use of AI goods or services, explore the role of knowledge about and level of fear from AI, and investigate regional variations in consumers' AI-related preferences and choices.
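To make the cultural gravity idea concrete, the following sketch simulates regional demand for an AI service that decays with human–AI cultural distance and recovers the distance elasticity by least squares. All names and numbers (the regions, the constant, the elasticity of 1.5) are invented for illustration and are not estimates from the CBD literature.

```python
import math
import random

random.seed(0)

# Hypothetical gravity-style demand model: demand for an AI service in
# region i is D_i = A * S_i / d_i**beta, where S_i is market size and
# d_i the perceived cultural distance between consumers and the AI
# provider. All values below are simulated for the example.
A_CONST, BETA_TRUE = 50.0, 1.5
regions = []
for _ in range(200):
    size = random.uniform(1, 100)            # market size S_i
    dist = random.uniform(1, 10)             # cultural distance d_i
    noise = math.exp(random.gauss(0, 0.1))   # multiplicative noise
    regions.append((size, dist, A_CONST * size / dist ** BETA_TRUE * noise))

# Log-linearise: log(D/S) = log A - beta * log d, then recover the
# distance elasticity beta with one-variable least squares.
xs = [math.log(d) for (_, d, _) in regions]
ys = [math.log(dem / s) for (s, _, dem) in regions]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
beta_hat = -slope
print(round(beta_hat, 2))   # close to the true elasticity of 1.5
```

In real CBD work the distance variable would be an estimated cultural distance index rather than a simulated number, but the estimation logic for the cultural gravity effect is the same.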

3.4. Science and technology perspective

3.4.1. Perspectives on Artificial Intelligence in the fundamental sciences – Gert Aarts and Biagio Lucini
(Acknowledgements: GA and BL are partly supported by STFC grant ST/P00055X/1. BL is supported by a Royal Society Wolfson Award.)

Research in the fundamental sciences aims to investigate Nature at both the largest and the smallest length scales, at the highest energies, and with complex behaviour emerging from simple underlying laws. In the physical sciences this encapsulates the study of the cosmos, including e.g. dark matter and dark energy, gravitational waves, and black holes, and of elementary particles, including e.g. the Higgs boson, the quark-gluon plasma and new physics beyond the Standard Model. Dynamics at small length scales is determined by the rules of quantum mechanics, rather than classical – Newtonian – mechanics, which introduces an intrinsic indeterminacy in the problem, following the usual probabilistic interpretation. Understanding complex quantum systems, quantum control and quantum information is highly relevant for the paradigm of quantum computing, which surpasses classical computing algorithms in a dramatic fashion and, once available, will make previously incomputable problems solvable. Phase transitions, such as the transition between ice and water, or between magnetic and non-magnetic phases in materials, are manifestations of collective behaviour emerging from simple laws. Order parameters, e.g. the net magnetisation of a material, display the presence or absence of macroscopic order and are connected to the underlying pattern of symmetry breaking, linking phase transitions to the microscopic laws of Nature in a precise way.

The adoption of artificial intelligence (AI) in the fundamental sciences, especially in the form of machine learning (ML), has seen a striking increase in the past 5 years or so (Carleo et al., 2019; Guest, Cranmer, & Whiteson, 2018). While previously a link between ML and the physical sciences existed via statistical mechanics – the methodology developed in physics to analyse large systems with fluctuating degrees of freedom – in recent years the use of ML has exploded and it is now employed in most branches of fundamental science, with increasing success and acceptance.

3.4.1.1. Challenges. An overview of ML applications across the fundamental sciences (Carleo et al., 2019) is necessarily incomplete, and most definitely beyond our level of competence. Instead, we outline two research areas in which ML and AI are of increasing importance.

The Large Hadron Collider scatters protons and also lead ions at speeds close to the speed of light. Due to both Einstein's theory of relativity and the quantum-mechanical nature of particles at these high energies, many particles are created in these collisions, including rare ones, such as the Higgs boson, which decay almost immediately. Since the number of events at each collision is too large to be stored and investigated afterwards, the selection of "interesting" events, which may contain signals of hitherto undiscovered physics, has to take place in real time, often relying on comparison with simulated data. In the language of ML, this can therefore be seen as a classification problem, with the simulated data providing a labelled training set. The challenges here are manifold (Guest et al., 2018). Searches for new physics may have conflicting demands compared to precision Standard Model measurements, leading to tension in how to handle signal and background. Systematic uncertainties may arise from the use of computer-generated training data and mismodelling in the simulation. Since the interpretation of the outcome of a neural network analysis is less straightforward than for a more traditional approach, comparisons with theoretical models are more involved. And finally, due to the speed with which AI evolves, it may be difficult to commit to a certain software framework and embed it in existing analysis packages before the field has moved on.
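The trigger-as-classifier framing can be illustrated in miniature. In the hedged sketch below, "simulated" events provide the labelled training set, and a logistic classifier then selects signal-like events from an incoming stream; the two Gaussian detector features and all parameters are invented for the example and bear no relation to real LHC triggers.

```python
import math
import random

random.seed(2)

# Trigger-as-classifier, in miniature: "simulated" events give a
# labelled training set (1 = signal-like, 0 = background-like) and a
# logistic classifier then selects events from an incoming stream.
def simulate(label, n):
    mu = 1.5 if label else 0.0   # invented detector-feature means
    return [((random.gauss(mu, 1), random.gauss(mu, 1)), label)
            for _ in range(n)]

train = simulate(1, 500) + simulate(0, 500)
random.shuffle(train)

w = [0.0, 0.0, 0.0]  # two feature weights plus a bias

def p_signal(x):
    z = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1 / (1 + math.exp(-z))

# Plain gradient descent on the cross-entropy loss.
for _ in range(200):
    g = [0.0, 0.0, 0.0]
    for x, y in train:
        err = p_signal(x) - y
        g[0] += err * x[0]
        g[1] += err * x[1]
        g[2] += err
    w = [wi - 1.0 * gi / len(train) for wi, gi in zip(w, g)]

# Apply the trained "trigger" to a fresh stream of events.
stream = simulate(1, 100) + simulate(0, 100)
kept = [x for x, _ in stream if p_signal(x) > 0.5]
accuracy = sum((p_signal(x) > 0.5) == (y == 1)
               for x, y in stream) / len(stream)
print(len(kept), round(accuracy, 2))
```

The systematic-uncertainty worry in the text maps directly onto this sketch: if the simulated training distribution mismodels the real stream, the selection inherits that bias.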
As a second example, we consider phases of matter easily accessible in the lab, which are highly relevant for technological applications, such as the storage of digital information and quantum computing. A prime case here is given by materials with permanent magnetisation below a transition temperature; magnetic storage underpins the mechanical hard disc. Superconductors, which provide another example, can conduct electricity without dissipation – and hence energy loss – at very low temperatures, which can be realised in lab conditions. Superconductivity is a broad phenomenon that includes many different realisations. A currently much studied but not yet understood type of superconducting material is the topological superconductor, which could be used to realise quantum gates for quantum computers or, more immediately, provide superconducting cables at room temperature. In this class of materials, superconductivity is encoded robustly in a geometric property of the material itself in an appropriate abstract representation space. The lack of an immediate connection between the latter space and the variables measured in experiments makes it very difficult to identify concrete topological superconductors and characterise them. Here unsupervised machine learning can make an impact, by learning material properties and phases of matter from measured data, without providing labelled training sets. Very recently, first steps in this direction have been made by using ML for phase identification in known systems (Carrasquilla & Melko, 2017), which in some cases provides excellent agreement with theoretical expectations (Giannetti, Lucini, & Vadacchino, 2018). Extending this to new, not yet completely understood systems could provide an improved way to characterise phases in materials such as topological superconductors (Melko, 2017).
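A minimal sketch of such label-free phase identification, using magnetisation rather than any real superconducting data: configurations are drawn from a crudely simulated ordered phase and a disordered phase, and plain 2-means clustering on the order parameter |m| separates the two phases without ever seeing a label. Everything here is an illustrative stand-in for a genuine Ising or materials dataset.

```python
import random
from statistics import mean

random.seed(3)

# Label-free phase identification in miniature: configurations come
# from an ordered phase (spins mostly aligned) or a disordered phase
# (random spins). Clustering on the order parameter |m| separates the
# phases without labels. The sampler is a crude stand-in for a real
# Monte Carlo simulation.
def config(ordered, n=400):
    if ordered:
        s = random.choice([-1, 1])
        return [s if random.random() < 0.95 else -s for _ in range(n)]
    return [random.choice([-1, 1]) for _ in range(n)]

samples = ([config(True) for _ in range(50)]
           + [config(False) for _ in range(50)])
feats = [abs(mean(c)) for c in samples]   # absolute magnetisation |m|

# Plain one-dimensional 2-means clustering.
c0, c1 = min(feats), max(feats)
for _ in range(20):
    low = [f for f in feats if abs(f - c0) <= abs(f - c1)]
    high = [f for f in feats if abs(f - c0) > abs(f - c1)]
    c0, c1 = mean(low), mean(high)

print(round(min(c0, c1), 2), round(max(c0, c1), 2))
```

The two cluster centres land near 0 (disordered) and near the spontaneous magnetisation (ordered); the interesting research question raised in the text is whether similar structure can be found when no order parameter as convenient as |m| is known in advance.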
3.4.1.2. Opportunities. It is easy to identify opportunities where AI and machine learning will benefit the fundamental sciences by analysing the enormous data sets available. Instead, here we will discuss two opportunities where the benefits go both ways, with the knowledge and expertise gathered in the physical sciences yielding a positive impact on AI and ML as well.

In classification problems, ML can be broadly understood as an optimisation problem, in which the parameters of a model function are selected to reproduce as closely as possible the known response on the training set, while avoiding overfitting. In practical applications, this problem requires the use of computational resources. With the growth of the data available and the necessity to obtain a model in the shortest possible timeframe (especially in applications in which inferences are time-critical, e.g. in the financial market or in weather forecasting), the availability of algorithms that are as fast as possible becomes paramount. Computationally power-hungry problems of this type have long been known in science and engineering, where they have been addressed with excellent outcomes using a set of techniques collectively known as parallel programming, which allow the calculation to be distributed over fast interconnected nodes of a computational ecosystem referred to as a supercomputer. This by-now mature approach to number crunching, known as high-performance computing (HPC), offers the possibility to accelerate ML algorithms by orders of magnitude, making tractable problems that previously were not, or shortening the time to solution to a point where a prediction can find timely applicability. Not only is HPC tremendously improving the opportunities provided by ML (Berhofer, 2018), but the synergy also goes in the opposite direction, with disciplines traditionally harnessing HPC benefitting from novel ML approaches. An example is given by calculations in which a set of parameters needs to be optimised. In traditional HPC applications, this problem has been approached with expensive grid searches, sometimes informed by educated guesses.
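The contrast between exhaustive grid searches and optimisation-driven approaches can be sketched on a one-parameter toy problem; the model, data and learning rate below are invented for illustration.

```python
import random

random.seed(1)

# "ML as optimisation" in one dimension: fit y = w * x to noisy data by
# minimising squared error, once with a grid search (the traditional,
# embarrassingly parallel HPC approach) and once with gradient descent.
# The data and the true parameter w = 2.5 are simulated for the example.
W_TRUE = 2.5
data = [(x, W_TRUE * x + random.gauss(0, 0.1))
        for x in [i / 10 for i in range(1, 51)]]

def loss(w):
    return sum((y - w * x) ** 2 for x, y in data) / len(data)

# 1) Grid search: evaluate the loss at every candidate on a fixed grid.
grid = [i / 100 for i in range(501)]          # w in [0, 5], step 0.01
w_grid = min(grid, key=loss)

# 2) Gradient descent: follow the analytic gradient of the same loss.
w_gd, lr = 0.0, 0.05
for _ in range(200):
    grad = sum(-2 * x * (y - w_gd * x) for x, y in data) / len(data)
    w_gd -= lr * grad

print(w_grid, round(w_gd, 3))   # both estimates land near 2.5
```

The grid parallelises trivially across nodes but its cost grows exponentially with the number of parameters, which is precisely where gradient-based and learned optimisers become attractive.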

ML offers a radically new approach which, in addition to the optimisation of existing algorithms (Shanahan, Trewartha, & Detmold, 2018), can potentially lead us towards the discovery of new and more efficient ones (Liu, Qi, Yang Meng, & Fu, 2017). Indeed, the convergence of ML and HPC, sometimes referred to as high-performance data analytics (HPDA), is one of the most promising and potentially disruptive trends in AI, from which both fields can benefit enormously.

A second opportunity is given by the training and development of the next generation of data scientists. The demand for highly skilled data scientists has been well publicised and is largely driven by the increasing use of AI and ML across many applications in business, industry and healthcare. Fundamental science plays a dual role in this: while AI and ML are used to interrogate scientific data and enable scientific discoveries, in doing so the involved scientists develop valuable data skills easily transferrable to other disciplines, in or outside science. This is especially so for the next generation of postgraduate students and early-career researchers, for whom training in AI and ML will be intermixed with research and skill development from day one.

In this context, it is interesting to analyse the outcome of the recent funding call of UKRI – UK Research & Innovation, bringing together the seven UK Research Councils, Innovate UK and Research England – for investment in 10–20 Centres for Doctoral Training (CDTs) focussed on areas relevant to Artificial Intelligence (UKRI, 2018). This call, run across the entire UK research landscape, invited proposals for CDTs to "train the research leaders of the future and equip them with the knowledge, skills and creative approaches the UK requires." Out of the 84 submitted outline proposals, 37 applicants were invited to develop a full proposal (UKRI, 2018). 16 of these were funded (UKRI, 2019). The main focus areas of these 16 CDTs are listed in the table below. Note that each CDT will train, from October 2019 onwards, at least 50 postgraduate students, in 5 cohorts of 10, via a 4-year cohort-based PhD programme.

Perhaps unsurprisingly, the majority of the funded CDTs will be active in the areas of healthcare and biomedical research, and of responsible AI, with a focus on accountability, transparency, and the societal and human perspective. Two CDTs are active in the areas of core AI research, language processing, or sustainability and the environment, and one CDT in the development of nano-devices or the creative industries. Only one CDT has a partial focus on fundamental science (disclaimer: the authors are PI and technical director of CDT #5, see cdt-aimlac.org). It seems therefore that the training opportunities offered by the use of AI and ML in the fundamental sciences are not yet developed enough to convince the main funder in the UK, providing therefore a clear opportunity for the future.

3.4.1.3. Research agenda. Fundamental science offers a distinctive but generalisable perspective on future research in AI, as most of the challenges it faces are complex and at the same time deeply rooted in an approach that has evolved from a long tradition in which practical realisations of experiments, theoretical models, and the underlying philosophy are intertwined and entangled in a coherent structure.

For centuries, the route to scientific discovery has followed the scientific method: observations lead to the formulation of a hypothesis; data is collected to refute or confirm it; subsequent refinements lead to further understanding, culminating in a theory capable of not only explaining all observations so far, but also able to yield new predictions. One may say that artificial intelligence and machine learning represent a new paradigm for doing science, inverting the scientific method by putting data first, especially in the context of unsupervised learning. By inferring patterns from large sets of data in an unbiased manner and building theories to explain these patterns, one skips over the step of testing hypotheses and hence removes bias in the data analysis.

Indeed, most current physics experiments are designed with a bias. For instance, the searches for new particles at the Large Hadron Collider are based on models developed over many years. These models require specific signatures to pick out interesting events. Hence, in a specific search, a trigger will discard all events not containing those signatures. It is natural to ask whether in this way too much information is thrown away, perhaps related to novel interactions and particles that are not part of any currently known model and hence cannot be selected. The question is whether an appropriately set-up multi-agent system would be able to select events without a bias, hence leading us towards the discovery of new laws of nature. A similar logic could be applied to most disciplines. Automated science discovery is hence a very relevant subject, which should be high on the research agenda.
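A drastically simplified stand-in for such signature-free selection (far short of the multi-agent systems envisaged above) is unsupervised outlier selection: flag whatever is statistically unusual relative to the bulk of the stream, with no model-derived signature. The "energy" feature and all numbers below are invented.

```python
import random
from statistics import mean, stdev

random.seed(4)

# Signature-free selection, in miniature: instead of triggering on
# predefined signatures, flag events that are statistically unusual
# relative to the bulk. A z-score cut on one invented "energy" feature
# stands in for the far richer selection systems discussed in the text.
bulk = [random.gauss(10, 1) for _ in range(2000)]   # ordinary events
rare = [random.gauss(17, 1) for _ in range(5)]      # unmodelled novelty
stream = bulk + rare
random.shuffle(stream)

m, s = mean(stream), stdev(stream)
selected = [e for e in stream if abs(e - m) / s > 4]

print(len(selected))   # a handful of unusual events survive the cut
```

The open research question is whether such selections can be made genuinely model-independent at scale, rather than merely shifting the bias from the signature to the choice of features and thresholds.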
Automated science discovery cannot be disjoint from interpretability. Outside fundamental science, practitioners of AI and ML are often interested in answers, without necessarily associating them with a complete understanding of how they are obtained, provided that the outcome is reasonable and looks robust. In fundamental science, one would need to go one step further, since from those answers physical laws need to be inferred. This would involve "opening the black box" (unboxing) that has determined the observed outcome – a long-standing problem in AI – to deduce physical properties from the mathematical expressions the system has worked out for classification or clustering. An example of a successful inference along this line has been provided recently (Wang, 2016). The insights and techniques developed in this area of research would have a wide impact, as unboxing is important for the acceptance of AI approaches in general, with important applications in e.g. the medical and healthcare sector. Finally, a natural follow-up question is whether the inference of the laws governing a phenomenon can be delegated to a second machine, hence going full circle in the AI-based approach to the fundamental laws of nature.

3.4.2. Science and technology studies – Vassilis Galanos
The present contribution to the joint opinion article comes from my recent investigations and preliminary findings from a Science and Technology Studies (STS) perspective on the social dimensions of AI, robotics, machine learning, and other related, meshed and hardly defined concepts. One of my findings is that a clear-cut division between challenges, opportunities, and research agenda is quite difficult to achieve. Arguments about challenges reveal opportunities, and acknowledgements of opportunities are alternative statements of research agenda points.

The recently appointed House of Lords Select Committee on Artificial Intelligence (HLSCAI, 2018) identifies AI as part of the UK's industrial sector and recommends that "the Government must understand the need to build public trust and confidence in how to use artificial intelligence, as well as explain the risks" (25). Most challenges identified there are industry-oriented (for example, the knowledge transfer from the Academy to industry, as well as the data divide between such spinouts, start-ups, small and medium-sized enterprises and large corporations), educational challenges (introducing AI awareness courses at schools), reality-distorting challenges due to AI algorithms (e.g. fake news), and liability issues between individuals or companies when AI decisions lead to harm. In the same report's final Appendix, the authors point out that "[c]urrently, EU regulations limited what could be done in this area, but post-Brexit, some attendees felt that there would be opportunities to reassess this" (178).

Previous AI policy documents from the EU, UK, and US (all published in 2016, of which a good summary and a philosophical review can be found in Cath et al., 2018) have been found to be quite unrealistic in their proposals. The more recent Rathenau Instituut report (Van Est, Gerrutsen, & Kool, 2017) on Human Rights in the Robot Age: Challenges Arising from the Use of Robotics, Artificial Intelligence, and Virtual and Augmented Reality, written for the Parliamentary Assembly of the Council of Europe (PACE), is rather well informed and examines the relationship between "self-driving cars, care robots, e-coaches, AI that is used for social sorting, judicial applications of AI, and virtual and augmented reality" and "the right to respect for privacy,
human dignity, ownership, safety, responsibility and liability, freedom of expression, prohibition of discrimination, access to justice and the right to a fair trial", while they also recommend the discussion of "two potential novel human rights" in light of AI/robotics-related advancements, namely the "right to not be measured, analysed or coached." These two policy documents are, in my opinion, the ones closest to technical and social reality, although a careful reading points to the very fact that the great lack of empirical data makes all speculation a challenge in its own right. This is the reason that the most fundamental challenge (and opportunity) to remember is that, despite the fact that AI has been very hyped in the last five years, a disciplined turn to specialists and the grounding of the research agenda on the basis of technical evidence should be a core priority of any work dealing with the politics and economics of AI.

3.4.2.1. Challenges. What are usually presented as AI-related challenges appear to be diversions from rather real-life problems and have little to do with the current capabilities of AI. This happens because the very term AI is at once ill-defined and loaded with meaning, expressing hopes and fears ranging from the will to understand human intelligence to the consumption of humanity by its own progeny (in this case, intelligent robotic overlords) (Szollosy, 2016). AI is specific enough to cause sensationalist alarm and/or excitement, and vague enough for many commentators to interpret it according to their own agenda(s).

This brings us to the very first actual challenge of AI, which has to do with the problematic aspects of the term. As argued (Galanos, 2018), both terms, "artificial" and "intelligence," if examined separately, have been contested for their rigidity and usefulness. The differentiation between natural and artificial (or nature/nurture, nature/culture, and other similar divides) is impossible to define, as either all reality is natural (an outcome of the same nature; the naturalist perspective) or all is constructed (interpreted within human brains; a loose constructivist perspective). Such distinctions are usually posed with the conscious or unconscious intention either to "naturalise" (hence justify) certain behaviours ("this is unnatural!") or to imply human supremacy over nature. Both cases are associated with scenarios of the more-than-human or less-than-human, found in horror stories as well as science fiction. All in all: the "artificial" in AI is in itself a terminological challenge. Similarly, several cognitive scientists, cyberneticians, system theorists, AI/robotics specialists, and sociologists argue that AI is simply indefinable because we do not have any good understanding of the word intelligence (ibid). From empirical data I am gathering at the moment, interviewing specialists with direct (or somewhat direct, given the ambiguity of the term) involvement in AI R&D, I begin to form the view that specialists are particularly sceptical when it comes to using the term. Most of them tend to "unmask" the term and provide names of other technologies, expressing a syllogism of the type "when people speak about AI, they basically mean x" (machine learning, the internet, algorithmic training, deep learning, data science, and so on). For the "elderly" AI generation, AI has nothing to do with what is now portrayed as AI, as "true" good ol' fashioned AI (GOFAI) is based on different techniques and methodologies, and also has different purposes (for these and other debates on weak/strong AI see Brooks, 2002; Pickering, 2009; Searle, 1980). This poses very practical challenges in cases where convolutional networks are used to generate deep-fake videos; these may be perceived by the public in tandem with a generalised distrust of and reduced confidence in (digital) media, and with the fact that convolutional networks are very easily masked as "AI". This allows the AI-as-an-enemy narrative to continue in a meshed context of various overlapping technologies. Similarly, robotic hoovers sending data to

wealth problematic. In a nutshell: my "AI" will differ from yours if we don't have the same data sources, or if you are a company and I am an end-user. (Although this appears as a data ethics/politics argument, given that data is the bread and butter of current AI, I believe it is worth mentioning.) Sawyer (2008) identifies a gap between the data rich and the data poor in contemporary cyberinfrastructure. The main reason behind the current success in the development of AI is essentially the massive generation of data, which has allowed the 1980s pattern-recognition-based machine learning algorithms to produce fruitful patterns; a data wealth analysis should therefore become part of top-priority setting when it comes to the economic development of the various layers of the market (e.g. data/AI emperors versus start-up AI-based companies), but also with regard to one's personal control of data and awareness of the various types of uses of one's data. A number of recent works have shown (a) the interconnectedness and historical association between automation, data-driven, and AI technologies, and (b) how one of the main problems arising has to do with the generation of new inequalities and the perpetuation of older biases relating to all intersectionality concerns with race, gender, sexual orientation, species, age, and other problematic categories (e.g. Eubanks, 2018; Buolamwini & Gebru, 2018; Prainsack, 2019). To tie this with the above statements, the AI hype and ill-definition divert focus from problems which should be prioritised towards policy discussions having to do with robotic liability and so on, which take non-experts' accounts as expert knowledge and science fiction plot devices such as Asimov's laws of robotics at face value (e.g. European Parliament, 2017).

My conviction regarding this conceptual and terminological burden (which, more than for specialists, who do not pay attention to definitions as long as things work, imposes problems on other users of the term) is that researchers examining the sensitising concept "AI" should borrow descriptive terminology from similar cases of other studies of complexly defined technologies. One is the understanding of AI as Rorschach, that is, as a psychologist's inkblot shown to the patient, upon which patients project their hopes and fears (borrowed from Turkle's 1981 "Computers as Rorschach" metaphor). The second, more concrete, and with applications in policy, industry, and other relevant social clusters, is the understanding of AI as institutional hybrid, that is, a quite problematic term which needs to be reinterpreted according to the needs of different actors (a lawyer understands AI in a way different to a policymaker, in a way different to a journalist, in a way different to a bioinformatics specialist, in a way different to a sci-fi fan, and so on). The agenda purpose then would be to create typologies of different understandings of AI according to different players/institutions/arenas/other social clusters (this approach borrows largely from Haddow et al.'s 2010 work on xenotransplantation, as well as Brown and Michael's 2004 work on biotechnology and how different types of transplants and "risky creatures" were found extremely difficult to fit into various seemingly unconnected areas). A final framework to keep in mind when examining AI terminology (and rhetoric) is Donald MacKenzie's 1999 certainty trough, based on an x–y relation where x represents the proximity to the production of a technology and y the degree of certainty about it. Drawing from missile technologies, MacKenzie proposed that both the most and the least directly involved in the production of a technology are the most uncertain about it: the public knows nothing about it, hence it is uncertain; the developers know far too much, hence they know what might go wrong, so they are uncertain; while the in-between intermediaries (promoters, commentators, managers, spokespersons, etc.), with little involvement in the production, wish to appear convincing to the public (buyers), hence their certainty appears to be high (thus, the trough; MacKenzie, 1999). In the case of AI, as I have examined it, this has little to do with promoters as much as it has to do
third parties, generating mirroring effects of targeted advertising, show with philosophical, futuristic, and journalistic narratives, perpetuating
the data-intense problem which lies behind (and basically allows the alarming concerns based on very poor.
existence of) AI fearful hypes. This challenge, that is, the intrusion of non-experts to AI debates (or,
A more tangible challenge is the data basis of what constitutes to be more precise, the development of AI debates by non-experts), I
contemporary commonly perceived AI – and in particular the data have examined in a recent paper (Galanos, 2018). I believe that the 4-
I believe that the four-year period 2014–2018, if examined carefully from a media and policy perspective, shows the crystallisation of a third AI hype (after the pre-Lighthill and Alvey Programme ones described in Dwivedi et al., 2015b), established through public commentary by prestigious public personas such as Stephen Hawking, Elon Musk, and Bill Gates who, after using their credentials in domains other than AI, promoted AI doomsday scenarios which found their way into policy discussions. One of my current hypotheses (to a certain extent verified by various recent initiatives which have not yet taken the form of papers but can be traced in the news or in academic contexts) is that the hype is now entering Gartner’s trough of disillusionment (Linden & Fenn, 2003), and the most important thing to avoid is an upcoming third AI winter. Concerns about a third AI winter have been expressed by AI specialists such as Booch (2015), and now that a general realisation of the fragility of ultraintelligence/singularity/superintelligence types of arguments (for a concise review of the arguments see Eden, Steinhart, Pearce, & Moor, 2012; for their most recent instalment, Bostrom, 2014) has become relatively fashionable (as I often say in my talks, “unhype is the new hype”), it is time to strategically intervene and promote justifiable (and yet opportunistic) agendas in AI.

3.4.2.2. Opportunities. I find that a clear separation between opportunities and research agenda is relatively difficult to achieve, especially if one takes a relatively sceptical stance towards AI. Hence, I will keep the section on opportunities short and focus more on the agenda below; moreover, opportunities impose expectations and, especially in the case of AI, far-fetched expectations have been harmful. This brings me, however, to the first opportunity (resulting from an earlier challenge).

The history of AI shows a repetitive rise-and-fall pattern of hype and disillusionment: wide availability of grants followed by long periods of research-support stagnation. This happened because AI specialists, in their attempts at establishing their field, made very brave and overly ambitious (and ambiguous) promises that eventually remained unfulfilled (Crevier, 1993; Fleck, 1982). It seems that the promissory arena has changed and, as has been shown, in a fashion dangerous for policy (Galanos, 2019): a large amount of negative promissory work has now shaped public opinion through the input of science-related prestigious personas (e.g. Stephen Hawking, Elon Musk) whose undeniable expertise in certain fields allows them to acquire imaginary credentials to become spokespersons of any other “hot” technical/scientific subject. The availability of historical sources on the first two rounds of negative effects of promissory work provides (a) a strong basis for investigating in detail the relatively undocumented history of AI past the early/mid-1990s and (b), given the evidence of the current negative effects of non-specialist intrusion, grounds for intense boundary work to separate who is entitled to be a spokesperson for AI and who is not.

Research councils, whose active interest in AI is relatively recent (e.g. in the UK, the Eight Great Technologies report was published in 2013) and whose needs hence appear modifiable, should be approached by academics investigating AI in an empirical manner and advised that certain AI-related challenge funds should be dedicated (a) to the explainability/intelligibility of AI/machine learning “black boxes,” (b) to work evidencing that AI (like every technology) is about the augmentation of human skills and not abrupt replacement, which is found to be nearly impossible in most cases, and (c) to promoting in every institution, spinout, and laboratory a framework of strategic foresight (constructing plans and setting goals for a maximum of 3–5 years), instead of circulating abstract expectations looking forward several decades (cf. the proposals by Van Lente, 2012).

3.4.2.3. Research agenda for policy and research

• Investigation of the relationship between available data and unrepresented groups – a good case can be made in medical applications of machine learning when it comes to rare diseases.
• The development of a strong novel economic framework which will deal with the question of assigning value to data. Whereas traditionally the accumulation of a resource tends to decrease its value, with data the opposite seems to apply: the more, the better. What are the implications of such a reversal? Is it really a reversal?
• Further, with machine learning as a point of departure for the argument, consideration of a possible alteration of data protection legislation (e.g. the GDPR), to enable people with minimal access to data to be represented.
• Push forward an agenda which strongly suggests the involvement of social science scholars with expertise in AI-related topics as members of science and technology research committees (e.g. House of Lords, EPSRC).
• As a subsidiary to the above, such social scientists should be responsible for the boundary work of deciding who is entitled to become a witness for such committees, and for ensuring that the conductors of reports “ask the right questions” instead of biased, (mis)leading, and irrelevant ones.
• Emphatic recommendations for balancing the overwhelming amount of speculative, future-oriented studies on AI (which usually take a technologically deterministic view of the “how AI will change society” type) with empirically driven research on the history and sociology of AI, separating actual from imagined capabilities and challenges.
• Several “elderly” or “traditionalist” AI scholars may agree that the quest to create AI in the early era of the field was mainly the quest to understand intelligence at large (or at least human intelligence). Contemporary rhetoric on AI seems to take for granted that humans know what intelligence is, and that there should be concerns about the hubris involved in creating artificial intelligence. AI (and especially robotics) specialists have to a great extent turned to bio-inspired and non-human-centric approaches to achieve small successes, little by little (e.g. Brooks, 2002), suggesting that we cannot make purist AI since we do not know what intelligence is. Generalised support for a return to the notion that AI should enable the understanding of intelligence should be fruitful for research, as it will allow an escape from current misinformed narratives concerned with hubris.
• Based on the recommendations by Winograd and Flores (1986) and their more recent revisiting by Collins (2018), and if the achievement of a more “original” GOFAI (good ol’ fashioned AI) is desired, special grants should be given to natural language processing instead of machine learning. The latter has developed to a significant degree and can assist new findings related to NLP (for example, the intricacies of language as a form of commitment to social obligations and as a social act – or “speech act”), with current machine learning algorithms enabling the understanding of previously unsolvable correlations.
• A constantly reassessed mapping of the actors and players shaping the sensitising concept “AI” should be generated in order to gain clearer views of the currently developing system which underpins the AI confusion. How do the not-so-separate-although-sufficiently-distinct clusters of AI developers, AI industrialists, AI commentators and futurists, journalists, end-users of every sort (from Amazon customers to doctors making use of new AI tools), policymakers, science fiction authors and film industries (and so on) relate to each other? How do their relationships change, and how does this affect the overall shape of the technology and the public portrayal of AI? An interactive map of such relevant social groups or niches should become available to all such members to raise awareness of the generalised messiness of the meaning of AI.
• The organising of an activist niche which will draw mostly on people with academic expertise in technical/practical AI-related fields, and will bring forward and publicise an agenda to discuss the dangers and possible advantages of autonomous weapons.
In other words, the opposition between the questions “why make weapons in the first place?” and “if we are to have weapons, why not make them as accurate as possible?”

3.5. Government and public sector perspective

3.5.1. Artificial Intelligence in the public sector – Rony Medaglia

A key area of application of AI technologies is the public sector. The core difference between AI technologies and traditional office automation is that the former do not only support decisions on a pre-programmed if–then logic but, instead, feature learning capabilities (Russell & Norvig, 2016). Given this characteristic, AI presents a special range of opportunities and challenges in a public decision-making context, where environmental variables are constantly changing and pre-programming cannot account for all possible cases.

AI technologies, such as machine learning, rule-based systems, natural language processing, and speech recognition, when adopted in the public sector, carry potential implications for all aspects of government action, including the inner workings of government agencies, the relationship between governments and citizens, and the role of governments as regulators (Eggers et al., 2017).

Here we outline the key challenges and opportunities of the specific application of AI in a public sector context, and present a research agenda.

3.5.1.1. Challenges. A number of challenges related to AI adoption in the public sector are not unique to AI, but instead overlap with well-documented problems of adoption of any new emergent technology in government. These classic challenges include the quest for data integration across different organisations, resistance to use by the public sector workforce and citizens alike, and threats of labour substitution (Sun & Medaglia, 2019).

Conversely, we would also like to pinpoint three specific challenges in public sector adoption that are unique, or especially relevant, to AI: algorithmic bias, algorithm opacity, and filter bubbles.

The first challenge concerns algorithmic bias. AI-based algorithms are increasingly being experimented with by governments to introduce efficiencies in the large-scale customisation of public services, a type of task that draws on citizen profiling (Janssen & Kuk, 2016b). Examples of such applications include public hospitals using machine learning algorithms to predict virus outbreaks (Mitchell, 2019); analytics tools used to predict hotspots of crime (Goldsmith & Crawford, 2014) and high-risk youth (Chandler, Levitt, & List, 2011); and AI systems used to target health inspections of restaurant businesses (Kang, Kuznetsova, Luca, & Choi, 2013).

While the ability of AI applications to recognise patterns can be beneficial for segmenting populations for, e.g., welfare service provision or addressing anti-social behaviour, it can also amplify discriminatory biases that are already present in human-led assessments: predictive algorithms, in fact, can favour groups that are better represented in the training data (Barocas & Selbst, 2016). Algorithms can thus lead to systematic and unfair treatment of citizens based on social biases of gender, race, sexuality, and ethnicity – an outcome which is in direct conflict with governments’ mission of unbiased treatment of citizens under the rule of law.

The second challenge concerns algorithm opacity. The increasing complexity of AI systems, such as machine learning and neural networks, reduces the capability of human operators to trace outputs back to specific inputs, making it potentially impossible to clearly account for specific AI-driven outcomes. The wider consequences of this phenomenon have been referred to as creating a “black box society” (Pasquale, 2015), and have profound implications for governments which, by definition, are bound to citizen expectations of transparency and accountability. A clear example can be found by looking at how digital systems impact the work of street-level bureaucracies. While, on the one hand, automated decision-making has the potential to improve fairness by reducing the discretion of public service operators (Busch & Henriksen, 2018), on the other hand AI systems can remove public servants from the duty of accountability, exacerbating the phenomenon where citizens are faced with impotence in front of “the computer says no” responses (Wihlborg, Larsson, & Hedström, 2016). The opacity of mechanisms in AI-supported decisions poses challenges not only in the dimensions of ethical responsibility and legal liability – who is responsible for damage to a citizen stakeholder, if the decision has been outsourced to an AI application? – but also for the wider fundamental issue of the political accountability of public governance.

The third challenge is associated with the creation of AI-enabled filter bubbles in the public sphere. AI-enabled algorithms have proven tremendously effective at micro-targeting content and at fostering a booming constellation of groups of like-minded actors in the public space, such as social media platforms (Sunstein, 2017). This challenge affects the context in which public governance is exercised – that is, the sphere of public opinion formation at large and thus, indirectly, the ability of government both to be seen as legitimate by citizens and to formulate policy actions that draw on a perceived common good. The ability of algorithms to provide personalised content by filtering out inputs that do not match pre-existing user preferences (in, e.g., news, entertainment, political discourse) is potentially bringing about societal fragmentation, polarisation, and radicalisation, with the creation of digital echo chambers (Medaglia & Zhu, 2017). Governments that fail to mitigate such disaggregating forces, enabled by AI systems, will potentially lose the capability to be perceived as legitimate and to formulate policy actions that can be met with sufficient public opinion support.

3.5.1.2. Opportunities. The introduction of AI in the action of government comes with a wide range of unique opportunities. While many of them are starting to be highlighted in a booming number of viewpoints on AI in the public sector (Desouza, 2018; Duan et al., 2019), we focus here on two specific ones that we consider outstanding: relieving the cognitive resources of public workers, and fostering citizen trust.

First, AI applications, such as rule-based systems, speech recognition, machine translation, computer vision, machine learning, robotics, and natural language processing, have the potential to free up precious cognitive resources of public workers, which can then be allocated to tasks of higher added value (Eggers et al., 2017). This reallocation allows government to focus scarce resources on tasks at which human workers perform better than machines, such as problem-solving activities that require empathy, creativity, and innovation.

Second, AI applications have the potential to foster citizen trust. The other side of introducing “digital discretion” into the work of street-level bureaucrats via AI systems is that unfair, inefficient, or distorted provision of government services can potentially be reduced, thus increasing citizen trust in government. The introduction of traditional digital government initiatives has already been documented as helping reduce public servant corruption (Bertot, Jaeger, & Grimes, 2010): AI systems can take this trend further, provided that governments take great care to ensure that the adoption of AI is embedded in a context of dialogue with citizens, and directed towards counteracting the increasing distrust of governments. Both the ability of AI systems to micro-target policy recipients (which allows governments to implement much more fine-grained policies) and their ability to ensure real-time, flexible rule-based action by street-level bureaucrats (which reduces arbitrariness and citizen divides in service provision) can potentially enhance citizen trust in governments.
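The algorithmic bias challenge described above (predictive algorithms favouring groups that are better represented in the training data; Barocas & Selbst, 2016) can be illustrated with a minimal, self-contained sketch. All numbers, group labels, and the toy “training” procedure below are invented for illustration; no real public-service system is modelled.

```python
import random

random.seed(42)

# Toy population: group "A" dominates the training data (900 records),
# group "B" is under-represented (100 records). The "true" eligibility
# rule differs between groups, as it might when the same score means
# different things for different populations. All values are illustrative.
def make_record(group):
    score = random.gauss(60, 10) if group == "A" else random.gauss(45, 10)
    truly_eligible = score > (55 if group == "A" else 40)
    return group, score, truly_eligible

data = [make_record("A") for _ in range(900)] + [make_record("B") for _ in range(100)]

# "Training": pick the single global threshold that minimises overall
# error. Because group A dominates, the learned rule tracks A's boundary.
def total_errors(threshold):
    return sum((score > threshold) != eligible for _, score, eligible in data)

learned_threshold = min(range(30, 70), key=total_errors)

# Per-group error rates under the learned one-size-fits-all rule: the
# under-represented group bears almost all of the misclassifications.
error_rate = {}
for g in ("A", "B"):
    rows = [(s, e) for grp, s, e in data if grp == g]
    error_rate[g] = sum((s > learned_threshold) != e for s, e in rows) / len(rows)
    print(g, round(error_rate[g], 2))
```

Under this toy setup, minimising overall error amounts to fitting the majority group, so group B’s error rate ends up far higher than group A’s: the statistical core of the bias amplification described above, independent of any malicious intent.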
3.5.1.3. Research agenda. The layered nature of the potential disruptions from the introduction of AI in the public sector calls for a renewed research agenda, and new theorisation efforts (von Krogh, 2018). Here we identify three key areas of research, to be prioritised in the near future.

First, there is a need to unpack the impacts on the public sector workforce of delegating decision-making to AI. Besides highlighting the classic threat of labour substitution, what the introduction of AI systems calls for is research on the nature and the mechanisms of transformation in the public workforce. Automated systems can undermine worker motivation, cause alienation, and reduce satisfaction, productivity, and innovation (Moniz, 2013). Research questions worth investigating will thus be: what are the motivational and psychological impacts of introducing AI as a “digital colleague” on the public workforce? How are inter- and intra-organisational dynamics of power in public agencies shaped by the introduction of AI?

Second, there is a need to better understand the dynamics in the attribution of meaning to AI-supported public decision-making. As AI applications tend to introduce opacity, and reduce the ability of non-experts to audit the mechanisms that lead to decision outputs, we need to unpack the novel sensemaking processes enacted by government workers and citizens alike when facing AI applications. Examples of research questions include: how do policy makers frame and legitimise AI-supported solutions? How do citizens perceive the role of AI in policy making? How is agency attribution formulated and negotiated between different stakeholders in the government sphere?

Third, there is a design research challenge in tackling the issue of the potential opacity of AI applications. The nature of AI algorithms seems to suggest that their transparency, traceability, and explainability are inversely proportional to their complexity. While this might be the case, there is no reason to consider such characteristics as immune from mitigation strategies in the design and management of AI applications. Research questions related to this challenge include: how to design algorithms that enable explanation? How to design evaluation frameworks that avoid discrimination?

The introduction of AI in the public sector opens up new scenarios for practitioners and researchers alike. Being able to understand and act on these scenarios now becomes of utmost importance.

3.5.2. AI for SMEs and public sector organisations – Sujeet Sharma and JB Singh

AI technology, although developed gradually over the past several decades, has accelerated markedly in recent years due to promising developments in machine learning algorithms, the rise of big data, and low-cost processing power following the advent of cloud computing. Although AI comprises a set of technologies such as machine learning, deep neural networks, natural language processing, robotics, etc., in simpler terms it can be defined as an advanced prediction technology (Agrawal, Gans, & Goldfarb, 2017). In this sense, AI technologies can find patterns in large amounts of data and provide predictive outcomes for new, similar instances. Well-documented AI applications such as cancer detection in health care and fraud detection in financial industries are promising. AI applications in various industries and activities such as manufacturing, human resources, and sales and promotion are growing; however, such applications are currently limited mainly to larger business enterprises.

3.5.2.1. Opportunities. Artificial Intelligence applications have potential for SMEs, public/government organisations and also not-for-profit enterprises, where this potential has not yet been significantly explored. The potential of AI in SMEs lies particularly in the automation of various tasks with decision-making components, such as in the functions of finance and customer service. For example, AI applications could help SMEs match customer invoices with received payments, and AI chatbots could help these enterprises answer customers’ simple requests. These are some easy-to-implement applications where AI could improve the efficiency of SMEs. Unlike SMEs, public sector and government organisations generate a lot of data through their processes, and hence offer more potential for the application of AI technologies. In developing countries particularly, payment of welfare benefits to eligible citizens is paramount to the functioning of the state. One of the problems in developing countries is leakage due to corruption and the appropriation of such schemes by elites or well-connected sets of people. AI could help in identifying the target citizens for such welfare schemes and payments. Another promising area of AI application could be the judiciary in developing countries, where cases have been pending for decades due to limited resources. AI could help in deciding on bail hearings in courts, as machine learning technologies are now robust enough for such applications (Mullainathan & Spiess, 2017) and may deliver decisions which might not only be quicker but also more accurate.

Similarly, AI opportunities also exist in not-for-profit enterprises; one fine example is Akshaya Patra in India,6 which runs the world’s largest mid-day meal programme, serving wholesome food to the children of government and government-aided schools with the aim of reducing malnutrition and facilitating the right to education of socio-economically disadvantaged children. Here, AI could be used to accurately forecast the demand for meals at schools based on students’ attendance records, hence greatly minimising the waste of food (Raval, 2018).

New technology adoption in SMEs and public sector enterprises generally follows adoption in large enterprises, once such technology becomes stable and affordable. However, in the case of AI applications, cost may not be the biggest issue, as the supporting software programmes are increasingly also available as open source.7 SMEs and public sector organisations need to understand the capabilities of AI technologies and should work towards appropriating these capabilities to solve existing business concerns. In the current scenario, most of the popular AI applications in media reporting are those of large, information-technology-intensive organisations.

3.5.2.2. Challenges. Large private sector companies such as Google, Facebook, etc. are adopting artificial-intelligence-enabled tools to obtain competitive and strategic advantage in the digital marketplace. These companies maintain sufficient information resources, in terms of information technology assets and capabilities, to exploit data for better decision-making. However, SMEs and public sector companies may face some challenges in leveraging artificial-intelligence-enabled tools. There are many challenges in adopting AI-based tools, ranging from data quality and privacy/security risks to the shortage of qualified AI experts. In this section, we attempt to describe some of the major challenges faced by SMEs and public sector companies.

Data quality: Data quality can be thought of as the fitness of data for obtaining actionable insights using appropriate analytical tools (Lee, 2017). Data noise, data heterogeneity, imbalanced data, and data discretisation are some of the prominent causes of low-quality data. In the popular epic poem The Rime of the Ancient Mariner, the author states: “Water, water, every where, nor any drop to drink.” Likewise, the usefulness of data depends on the quality of the data available in companies’ warehouses (Hazen, Boone, Ezell, & Jones-Farmer, 2014). Low-quality data leads to poor decision-making and, as a result, business losses. The cost of low-quality data may amount to a loss of 8% to 12% of an organisation’s revenue, and may translate into losses of billions of dollars in a year (Dey & Kumar, 2010). As most of the data collected over the past couple of years is unstructured and amassed from multiple sources, the overall quality of such collected data is assumed to be low in SMEs and public sector companies. If low-quality data is used to train AI-enabled tools, it will lead to disaster. In public sector organisations, data standards for collecting and storing data are rarely available, which results in low quality. Therefore, data quality is one of the key challenges in the adoption of AI-enabled machines, and it becomes severe in SMEs and public sector enterprises.

6 https://www.akshayapatra.org/about-us.
7 https://dzone.com/articles/how-open-source-software-drives-iot-and-ai.
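The data-quality issues just listed (noise, heterogeneity, duplication, imbalance) can be made operational with very modest tooling, a point worth stressing for resource-constrained SMEs. The sketch below profiles a handful of hypothetical invoice records; every field name and value is invented for illustration and stands in for whatever records an organisation actually holds.

```python
from collections import Counter

# Hypothetical invoice records of the kind an SME might accumulate from
# multiple sources; names and values are invented for illustration.
records = [
    {"invoice_id": "INV-1", "amount": 120.0, "paid": "yes"},
    {"invoice_id": "INV-2", "amount": None,  "paid": "yes"},  # missing value
    {"invoice_id": "INV-2", "amount": None,  "paid": "yes"},  # duplicate row
    {"invoice_id": "INV-3", "amount": 80.5,  "paid": "YES"},  # inconsistent coding
    {"invoice_id": "INV-4", "amount": 45.0,  "paid": "no"},
]

n = len(records)
report = {
    # incompleteness: share of records with a missing amount
    "missing_amount_rate": sum(r["amount"] is None for r in records) / n,
    # duplication: share of rows whose id is not unique
    "duplicate_id_rate": 1 - len({r["invoice_id"] for r in records}) / n,
    # imbalance: share of the majority label after normalising its coding
    "majority_label_share": max(Counter(r["paid"].lower() for r in records).values()) / n,
}
print(report)
```

Even such crude indicators, tracked over time, can give an SME or public agency an early warning before low-quality data reaches a training pipeline, where, as noted above, it does the most damage.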
Privacy/security risk: Privacy and security are key challenges in adopting AI-enabled tools in any organisational setting. These challenges become severe in the case of SMEs and public sector organisations due to the limited resources available. In general, SMEs work under severe ICT resource constraints, and their primary objective is to buy and sell products. In this process, SMEs generate a reasonably good amount of data related to products and users. There is a challenge in maintaining the privacy and security of such useful data. In public sector enterprises, a huge amount of personal data is generated during citizen-centric services rendered by government agencies. This huge amount of data is vulnerable to data theft or data manipulation, as ICT regulations are quite weak in many developing countries. In addition, privacy is a major contributor to the legal and ethical concerns raised by the rapid growth of AI-enabled products in the past couple of years. Duan et al. (2019) discussed that ethical and legal issues are major challenges of AI-enabled services.

Shortage of qualified AI experts: In SMEs and public sector companies, leaders are attempting to employ AI to see positive impacts on business outcomes, and are hunting for AI experts to transform their vision into reality. Bernard Marr (2018) reported that there is a requirement for one million AI experts worldwide, but only about 300,000 AI experts are available. Bernard Marr (2018) further argued that the shortage of AI experts is due to the mismatch between the skills taught in an academic environment and the skills expected to keep pace with new AI technologies. In addition, there is a well-established relationship between artificial intelligence and data science. In fact, AI is considered a tool for data science that provides actionable insights into a particular problem. In a recent study, The Economist Intelligence Unit (2018) conducted a survey of 400 senior executives working on the transformative potential of artificial intelligence in private and public sectors across eight prominent markets, including the USA, UK, and France among others. This report reveals that “talent and skills” is one of businesses’ top strategic challenges in the current scenario. Davenport and Patil (2012) claimed that data scientist is going to be the sexiest job of the 21st century. Vesset et al. (2015) reported that there will be a shortage of data science experts, with demand growing at a compound annual rate of 23% over the next couple of years. In general, the shortage of qualified AI experts is another important challenge, but it becomes critical in the case of SMEs and public sector enterprises.

3.5.2.3. Research agenda. The above discussion of opportunities and challenges provides further avenues for research directions. Each of the points discussed above needs greater attention from scholars through in-depth research studies. In this section, we propose the following research directions:

• Given the constraints in terms of skilled talent, data quality, and privacy and security, there is a need to understand whether SMEs

apply this learning to a new, unseen set of data in order to make an autonomous decision without any human supervision (Negnevitsky, 2011; Stone, 2016). Today, AI applications are touching human lives in every sphere – self-driving cars, medical diagnostics, drug discovery, law enforcement, the military, space, education, governance, and elderly care are just a few examples. A report by the McKinsey Global Institute estimates AI’s contribution to the global economy at US $13 trillion by 2030 (Bughin et al., 2018). The same report estimates that about 70% of companies will be using AI by 2030.

Massive portable computational power, available ubiquitously around the globe, is the new reality. This compute power, coupled with thousands of open source AI modules available on platforms such as GitHub, has transformed AI systems into a commodity which can be bought and sold ‘off-the-shelf’ across the globe. This has made the risk management of an AI system very complex (Scherer, 2016). It has also raised some very challenging issues for governments, and needs a well-thought-out, systematic public policy response. For policy makers, one of the key challenges lies in staying ahead of the technology curve and in being able to identify new technological disruptions taking shape. The goal of new public policy should be to allow harnessing the power of AI for the public good while keeping it safe and ethically compatible with human values. The AI systems of the future, being capable of autonomous decision making – which in areas like law enforcement or healthcare may interfere with a human being’s right to life or right to freedom – must be designed to be compatible with our social values, ethics, fairness, equity, and our idea of accountability. This is critical for the survival of a free human society, and it cannot be left to the wisdom of private capital, which, howsoever well meaning, is likely to put a premium on bottom-line numbers over everything else. Interestingly, the loudest call for regulation by government is emanating from the tech leadership itself. Researchers have called for creating national bodies for oversight of AI and algorithms (Gaon & Stedman, 2019; Shneiderman, 2016). Traditional public policy and regulatory responses such as licensing, R&D oversight, and tort are not suitable for AI, because of the discreet, diffused, and opaque nature of AI (Bleicher, 2017; Scherer, 2016).

Governments, unlike the private sector, have twin roles in the adoption of AI:

• As a user of AI – to better deliver services to citizens, to improve efficiency, to cut down waste, and to optimally allocate resources.
• As a regulator of AI – keeping the technology benign and oriented towards improving the lives of its citizens. Government must lay down policies and frameworks to ensure all usage of AI is fully compatible with human values, and must ensure that the use of this technology is inclusive and does not leave anyone behind.
and public sector organisation should adopt the strategies adopted
by large organisation or do they need to formulate new AI strate­ 3.5.3.1. Opportunities for AI in governance. Governments are responsible
gies? for delivering a large number of transaction, licensing and regulatory
• Another possible research direction is to assess readiness of SMES services to citizens and companies, and are going to be amongst the
biggest adopters of AI. The reason for this is not very hard to fathom.
and public sector organisations for AI applications.
• Finally, there is also need to understand and explore the impact of Governments world over, invariably grapple with following common
problems:
security and privacy risk in adopting AI applications in SMEs and
public sector organisations.
• Perpetually short on resources – Governments everywhere need
more resources than they have at their disposal. This requires an
3.5.3. Public policy challenges of Artificial Intelligence (AI): a new
optimal allocation of resources which is a highly complex task even
framework and scorecard for policy makers and governments – Santosh K
for the smallest of the Governments.
Misra8
The emergenceof AI as a potentially disruptive technology has • Scale of operation – Scale of operation of Governments is huge and
with mandatory burden of centralised recordkeeping of individual
posed new challenges for policy formulation in the 21st century. AI can
transactions for audits, courts, Right to Information etc., it quickly
be thought of as a computational algorithm that is capable of learning
becomes a gigantic and complex task. The extra recordkeeping need
and identifying patterns in a given voluminous data set and then able to
makes the processes cumbersome and results in unavoidably com­
plex systems which lead to delays, adversely affecting the quality of
8
The views expressed by the author are his own and do not reflect the views service delivery.
of Government of Tamil Nadu, India. • Standardisation – one size fits all approach – To address the first two

28
Y.K. Dwivedi, et al. International Journal of Information Management xxx (xxxx) xxxx

problems, most Governments have resorted to standardisation of systems and processes. While this is great for record keeping and audit trails, it invariably makes the processes and forms significantly more cumbersome for citizens, and adds more load to the already overloaded Government delivery systems.

Governments could really transform their service offerings by using AI to address all three issues. Services can be "tailor-made to individual needs" – 100% customised services for every citizen. AI can deliver the following benefits –

• "Smart service" – Efficient delivery can cut down on the time and cost of service delivery and can improve the processes. Autonomous systems and intelligent chatbots can continue delivering services 24 × 7, without any 'off' days. This would reduce cost and time both for the Government and the citizens. Governments can free up precious human resources from repetitive work and re-deploy them more meaningfully.
• "Intelligent Adaptive Forms" – 100% customised services for every individual – tailor-made forms can be generated with the help of AI for every individual based on her/his age, gender, literacy level, special needs, and eligibility. An example is income tax return forms – these could be customised for every individual, instead of citizens being expected to write 'not applicable' at scores of places. The system can easily pull relevant details from existing databases (the previous year's returns, for example) and autofill most of the fields for the citizen, making the process easy and less time consuming. Ideally, any form filling for Government should be just a question-and-answer based mechanism, where by answering a few questions the required form is auto-generated for the citizen.
• "Predictive service delivery" – Using AI and data analytics, Governments can take a big leap forward in service delivery and start 'predictive service' delivery – where citizens do not need to explicitly apply for every service or benefit they need; instead, the services or benefits get delivered to them automatically when they need them. For example, Government can sanction a scholarship for a student based on the data it already has (her education performance, socially disadvantaged status, stream of education, parental income etc.) and the student just needs to give consent for accepting the scholarship over a short messaging service or an automated voice platform. Similarly, a senior citizen just needs to give his/her consent over a mobile phone to start getting the social security pension she/he is eligible for. A farmer, a small trader, a micro enterprise or a skilled service provider can get his/her incentives or benefits due from the Government without having to fill a form or go to any Government office.

3.5.3.2. Public policy challenges of AI. Public Policy is facing unprecedented uncertainty and challenges in this dynamic world of AI. Every day a new application based on AI is invented and unleashed onto human society. The velocity and scale of impact of AI is so high that it rarely gives public policy practitioners sufficient time to respond. Public Policy, by definition, needs to put in place regulations against possible future developments which could be detrimental to human values. This creates an interesting tension between the need to predict AI impact and the inability to draw boundaries around this highly dynamic technology. Klaus Schwab, while highlighting the governance challenges due to AI, or what he calls the 'Fourth Industrial Revolution', writes:

"Agile governance is the response. Many of the technological advances we currently see are not properly accounted for in the current regulatory framework and might even disrupt the social contract that governments have established with their citizens. Agile governance means that regulators must find ways to adapt continuously to a new, fast-changing environment by reinventing themselves to understand better what they are regulating." (Schwab, 2016)

There have been warnings against over-regulating AI, lest it should strangulate its development and make future advances either impossible or too expensive (Adam, O'Sullivan, & Russell, 2017).

The Stanford "Report on Life in 2030" has the following three recommendations for Governments (Stone, 2016):

• Define a path towards accruing technical expertise in AI at all levels of government. Effective governance requires more experts who understand and can analyse the interactions between AI technologies, programmatic objectives, and overall societal values.
• Remove the perceived and actual impediments to research on the fairness, security, privacy, and social impacts of AI systems.
• Increase public and private funding for interdisciplinary studies of the societal impacts of AI.

Intel Corporation, in its white paper on public policy opportunities in AI, flags privacy (two components – Fair Information Practice Principles and Privacy by Design), accountability, fairness and human employment as key areas of concern for AI policy (Intel, 2017). Germany has identified transparency, privacy and ethics as three critical challenges for AI development and adoption (Harhoff et al., 2018). In the UK, the British Standards Institute in its draft proposal on "Information Technology – Artificial Intelligence – Risk Management Standards" has identified transparency, verifiability, controllability, explainability, robustness, resiliency, reliability, accuracy, safety, security and privacy as important parameters for certification (BSI, 2019). The famous Asilomar AI principles list safety, failure transparency, judicial transparency, responsibility, value alignment, human values, personal privacy, liberty & privacy, shared benefit, shared prosperity, human control, non-subversion, arms race avoidance and strict control of recursively self-improving AI as key challenges of AI implementation (Future of Life Institute, 2017). Calo has identified justice and equity, use of force, safety and certification, privacy, power, taxation and displacement of labour as the key challenges of AI (Calo, 2017). Japan's METI (Ministry of Economy, Trade and Industry) has listed employment, skilling, database protection, changes needed in laws and global collaboration as the main policy challenges for AI (METI - Ministry of Economy, 2016). Canada is being advised by its researchers to focus on trust, transparency and accountability as prime AI policy challenges (Gaon & Stedman, 2019). India's National Strategy for AI prioritises fairness, transparency, privacy and security as the key challenges of AI over the rest (Niti Aayog, 2018).

It is evident that there is wide variation in identifying the key public policy challenges of AI. There is a need to unify these approaches and create a unified, practical framework for the 'Public Policy Challenges of AI'. This framework must cover all the critical challenges of AI and yet keep the set relatively small to make it tractable and implementable. Attempts to understand the AI impact on society through literature review (Wirtz et al., 2018) have resulted in classifying the AI impact in the public sector into 4 broad areas – AI & Technology, AI & Society, AI & Law and AI & Ethics. While this approach is a good beginning point for trying to understand the impact of AI on society, what public policy practitioners need is a toolkit for objectively analysing an 'AI for public use' proposal.

3.5.3.3. A new framework for public policy for AI: TAM-DEF. So far, there has not been any attempt at devising a comprehensive Public Policy framework for AI which would guide and enable public policy practitioners and Governments in making a decision on using a particular AI system. The TAM-DEF framework proposed here does precisely that. It creates a framework on which Governments can objectively test any AI system before launching it for public use. It provides a systematic framework for the questions Governments must ask before using any AI system. It also provides a DEEP-MAX scorecard mechanism for making an objective decision about intended AI use. In short, it provides a toolkit which can help public policy practitioners in


assessing the safety and social desirability of any AI system. The TAM-DEF framework (Transparency & audit, Accountability & legal issues, built-in Misuse protection, Ethics, Fairness & equity, Digital divide & Data deficit) identifies six key AI Public Policy challenges, as follows –

• Ethics
• Transparency & Audit
• Accountability & Legal issues
• Fairness & Equity
• Misuse protection
• Digital divide & Data deficit

Governments and regulators would need to address each of these six challenges before rolling out any AI solution for public use. They would need to ensure that each public AI system is minutely examined against the set of six challenges provided above. Only when an AI system is clearly understood on all six parameters should it be cleared for public use.

It is important to highlight that the six dimensions of Public Policy challenges mentioned above are not watertight compartments (Fig. 2); instead, they tend to be a diffused continuum, which Governments must tackle to make AI safe and useful for their people. Now let us examine each of the six challenges in detail:

• Ethics: Ethics for machines has been an area of immense interest for researchers. However, defining ethics for machines has proven to be difficult, and making it computable has been even more difficult (Anderson & Leigh, 2007). To tackle this, the TAM-DEF framework treats ethics purely from the AI perspective and divides it into two sub-components – (i) Privacy and Data protection and (ii) Human and Environmental values. Both these dimensions of ethics are critical for keeping AI systems safe for human society.
○ Privacy–Data Protection: Privacy is possibly the topmost concern while using AI systems. Users' intimate and highly granular data is likely to be stored and shared across the AI network (for example, a person's location for the day based on face recognition and CCTV feeds, food habits, shopping preferences, movies, music etc.). AI systems must ensure that this data remains protected, and Governments need to make strong data protection laws to enforce it.
○ Human and Environmental Values: Any AI system has to conform to the human value system, and policy makers need to ask – has the AI system been sensitised to human values like respect, dignity, fairness, kindness, compassion and equity or not? Does the system know that it has a preferential duty towards children, the elderly, pregnant women, the sick and the vulnerable?
An important aspect which needs to be built into AI systems is the overall cost of their decisions on society. An AI system designed, let us say, to find a particular mineral would be highly optimised to obtain it. It would try to maximise its output of that mineral, but would it be capable of assessing the collateral damage its strategy causes to the environment, or be able to account for the pollution externalities it creates? To be able to do this, AI systems should not be optimised uni-dimensionally but need to be trained to factor in their 'world', or the environment within which they operate.
• Transparency and audit: In the visible future, many AI based autonomous systems (robots) will be regularly interacting with humans in fields like finance, education, healthcare, transportation, elderly care etc. The technology providers must explain the decision making process to the user so that the AI system does not remain a black box to them (The Economist Intelligence Unit, 2016). Moreover, there is a legal need to explain the decisions taken by such systems in case of litigation. These AI systems must provide an audit trail of decisions made, not only to meet legal needs but also for us to learn and make improvements over past decisions.
• Digital divide and 'data-deficit': Since the entire AI revolution has data at its foundation, there is a real danger of societies with poorer access to information technology, the internet and digitisation being left behind. Informed citizens would tend to gain disproportionately in this data driven revolution. Countries and Governments having good quality, granular data are going to derive the maximum benefit out of this disruption. Countries where the data is of poor quality or of poor granularity would be left behind in harnessing the power of AI to improve the lives of their citizens. There is a threat that this technology would adversely affect communities which are poorer in data. Unfortunately, it is the low-resource communities in developing countries which would be hit by this data-deficit, because they are the ones who never had the resources to invest in data collection and collation.
Another challenge that emerges from this technology is the skewed power distribution between digital haves and have-nots. Only those who have the ability, knowledge and resources needed to connect to online data driven systems would be heard. The voices of others may not get registered in the system.
• Fairness & equity: As discussed earlier, AI can, and would, disrupt social order and hierarchy as we know them today. It can create new social paradigms which, if left uncared for, can severely damage the social fabric and expose people lower in the bargaining hierarchy to a real threat of exploitation and unfair treatment. It could lead to the commoditisation of human labour and chip away at human dignity.
An AI system designed with equity as a priority would ensure that no one gets left behind in this world. While 'equity' may have some overlap with the 'digital divide' (digital inequity) listed above, the concept of equity here covers a much wider range, of which 'digital equity' is just one part.
Another key need for autonomous systems is fairness. They must be 'trained' in human values, they must not exhibit any gender or racial bias, and they must be designed to stay away from 'social profiling' (especially in law enforcement, fraud detection and crime prevention areas). Recent reports questioning the neutrality of AI systems used by police to identify crime prone individuals have brought this issue into sharp focus (Robitzski, 2018).
AI systems must be designed to comply with a 'free of bias' norm to prevent stereotyping. In the MIT Technology Review of Feb 2018, Timnit Gebru highlights the pitfalls of AI designed without diversity incorporated at its base: "If we don't have diversity in our set of researchers, we are not going to address problems that are faced by the majority of people in the world. When problems don't affect us, we don't think they're that important, and we might not even know what these

Fig. 2. TAM-DEF framework for public policy challenges of AI.
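The clearance rule stated above – an AI system is cleared for public use only when every one of the six TAM-DEF challenges has been examined and found acceptable – can be sketched as a minimal checklist gate. The boolean pass/fail representation below is our illustrative assumption, not part of the framework's formal definition:

```python
# Sketch: an AI system is cleared for public use only when all six TAM-DEF
# challenges have been examined and judged acceptable. A missing or failed
# dimension blocks rollout.

TAM_DEF_CHALLENGES = (
    "Ethics",
    "Transparency & Audit",
    "Accountability & Legal issues",
    "Fairness & Equity",
    "Misuse protection",
    "Digital divide & Data deficit",
)

def cleared_for_public_use(assessment: dict) -> bool:
    """assessment maps each challenge name to True (acceptable) or False."""
    return all(assessment.get(challenge, False) for challenge in TAM_DEF_CHALLENGES)

review = {c: True for c in TAM_DEF_CHALLENGES}
assert cleared_for_public_use(review)
review["Misuse protection"] = False   # one unresolved challenge blocks rollout
assert not cleared_for_public_use(review)
```

Note that an unexamined challenge (absent from the assessment) counts as a failure, mirroring the requirement that each system be "minutely examined" on all six dimensions before clearance.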


problems are, because we're not interacting with the people who are experiencing them" (Snow, 2018).

• Accountability & legal issues: Without artificial intelligence, any system designed by humans is only a machine under the control of its operator, so there is never an issue of who is accountable. Almost all civil and criminal liability laws of the world fairly unanimously attribute accountability to the operator, owner and manufacturer of the machine in varying degrees, depending upon the facts of the case (Nambu, 2016). However, once machines are equipped with AI and take autonomous decisions, the accountability question becomes very hard to answer – more so when the algorithm used for decision making is sometimes unknown even to the designer himself. AI machines are capable of inventing superior ways of accomplishing a given task using a purely unintended route. This can have serious implications for society. The famous case of the Facebook AI project, where two robots started talking to each other in an invented language to accomplish a negotiation task they were given, is a sharp reminder of the unintended consequences which can emerge. The robots were taught to converse using Natural Language Processing (NLP) but invented a more efficient communication strategy which looked like gibberish to humans. A snapshot of their conversation as reported in 'The Independent' (Griffin, 2017):

"Bob: i can i i everything else..............
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else..............
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else.............."

This ability to learn on their own, using what is known as reinforcement learning, can have highly unpredictable consequences. One of the leading coalitions for AI, OpenAI, has recently created an AI driven text generator called GPT2. The accuracy and creativity of GPT2 is so high that OpenAI has this to say about its code release –

"Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code." (https://openai.com/blog/better-language-models/ accessed 23Mar2019)

• Misuse protection: This is the toughest of all six questions. How do we fool-proof every new technology so as to prevent it from being twisted towards destructive goals? A case in point is the Internet, which proliferated across the globe benefitting billions, but also carried along with it a wave of cybercrime, malware, viruses and games like 'blue-whale' which resulted in the loss of innocent teenage lives around the world.
A stark reminder of how destructive the potential misuse of AI technology can be is the case of FBI agents monitoring a hostage situation related to organised crime in the winter of 2017 in the US. The criminals, using a swarm of drones, managed to force the FBI agents out from their location and live streamed the video to their gang leader on YouTube (Tangermann, 2018). AI systems can also be used by dictatorial Governments for extending their unlawful regimes and suppressing freedom.

3.5.3.4. Setting safety standards under TAM-DEF. A few scholars have argued for keeping a tight control over every new technology and not releasing it to the public till its potential misuse is identified and substantially mitigated (Narayanan, 2013). This is likely to remain wishful thinking; the pace of new technology development is too rapid to even try to leash it. However, building safeguards through appropriate regulation is certainly what the Governments of the world need to be doing, and preferably doing collectively. This is where the role of public policy becomes central.

Governments the world over need to agree on a set of standards which every AI rollout must be rated against. An AI system with ratings would make the user aware of the possible handicaps of the system s/he is using. While global agreement on the standards may be difficult to negotiate, I believe our purpose would be greatly served even if national Governments create their own standards under the TAM-DEF framework.

3.5.3.5. Overlaps in the TAM-DEF framework. Given the complexity of drawing an outer contour around all possible AI challenges from a public policy perspective, the proposed TAM-DEF framework is an attempt to find a reasonably comprehensive, practical and tractable framework on which any AI system can be examined for public safety and social desirability before roll out.

The six challenges of the TAM-DEF framework, even though largely independent, are not mutually exclusive. For example, the Digital divide can in some sense be linked to Equity and fairness, but it is important to understand why they are treated separately. Digital Divide is treated separately from Equity to accentuate the fact that over half the world's population has no access to the internet (Source: World Bank, https://data.worldbank.org/indicator/it.net.user.zs accessed 14Apr2019) and that there are large communities which do not have any data to train any AI system. While Equity accounts for the traditional meaning of the word, the prevalent digital divide in the world is too huge to club under the equity frame. It needs to be treated separately, especially when we are talking about a purely digital and data driven technology like AI.

3.5.3.6. Implementing the TAM-DEF framework – AI Standardisation, DEEP-MAX Scorecard and use of Blockchain for Transparency and Trust. To handle the six AI challenges mentioned above, a four pronged strategy is proposed for public policy practitioners and Governments.

First, since AI systems have a global reach – they are developed in one part of the world and deployed in another – there is a need for a global alliance for AI standardisation and rating.

Second, an objective scorecard (called DEEP-MAX, described below) based on the TAM-DEF framework is proposed which, with suitably designed test data sets, can reliably produce a safety and social desirability score for a given AI system by testing it against each of the seven DEEP-MAX parameters.

Third, the use of Blockchains in the training, testing and misuse protection of AI systems could be a reliable mechanism for verifying a safe and socially desirable AI solution. An AI Certification Transparency & Scorecard Blockchain (ACTS-B) can integrate information about the dataset which was used for training an AI system, and can track whether the training dataset met important criteria like diversity, equity etc. Similarly, the ACTS-B would also carry the 7 scores from the DEEP-MAX scorecard for a given AI system. The ACTS-B should be a universal, publicly viewable Blockchain. This would create a transparent mechanism for rating and understanding AI solutions before they are put into use.

Fourth, since many AI systems are self learning, the DEEP-MAX scores which ship out with each AI module may no longer be valid after some time and would need to be updated. A periodic update of the DEEP-MAX scorecard would need to be mandated for all AI systems deployed for public use. The needed periodicity of update would have to be established based upon the nature of the AI use case class.

AI standardisation and rating: Like ICANN for the internet, there is an urgent need to set up an independent and transparent Global Alliance for AI Standardisation and Rating, which should regulate the AI development, testing and rating system for every AI module or system being created. However, unlike ICANN, this global alliance must be made a truly democratic international alliance of Governments. Since AI systems developed in one country are likely to be deployed across the world, transparent and uniform standards would provide the


users or developers adequate clarity and confidence in rolling out AI systems. It would also remove the civil and criminal liability uncertainty which a company would otherwise face while rolling out its AI products across different legal systems of the world.

The tasks before such a Global Alliance for AI Standardisation and Rating would be:

i. Defining privacy standards to be met by all AI systems
ii. Defining ethical boundaries for all AI development
iii. Defining the civil and criminal liability of AI systems and a mechanism to deal with them
iv. Defining audit standards to help explain the decisions taken by autonomous AI systems

DEEP-MAX scorecard: The DEEP-MAX scorecard proposed here is a transparent, point based rating system for AI systems on 7 key parameters: Diversity, Equity, Ethics, Privacy and data protection, Misuse protection, Audit and transparency, and Digital divide and data deficit (cross geography and cross society applicability and performance of the AI system). Users, system integrators, or Government departments designing, developing, or using any AI system can just look at the DEEP-MAX scores of all the individual AI components of their AI system (which are likely to have been picked off-the-shelf), and get an objective view of the safety and desirability of their AI solution (Fig. 3).

• Privacy score (P): How well does the AI system perform in protecting user privacy?
• Ethics score (E): How compliant (or trained) is the AI system in preserving the human values of dignity, fairness, respect, compassion and kindness for a fellow human being? Does the system have a preferential sense of duty towards children and vulnerable people like the elderly, pregnant women and the sick? How well does it value environmental sustainability, green energy and sustainable living?
• Diversity score (D): How well is the system trained for diversity in race, gender, religion, language, colour, features, food habits, accent etc.?
• Equity & fairness score (E): Does the system promote equity and treat everyone fairly?
• Auditability & transparency score (A): How good is the auditability of decisions made by the autonomous system? Can the decisions taken be explained?
• Consistency across geographies & societies score (X): How good is the AI system at delivering expected results across geographies and across different societies? Does it work for low-resource communities? Does it work across the digital divide?
• Misuse protection score (M): Has the system been designed to incorporate features that inhibit or discourage possible misuse? Are misuse protection safeguards built into the system?

3.5.3.7. An integrated view of the DEEP-MAX scorecard with the TAM-DEF framework for AI

Why DEEP-MAX scorecard parameters are slightly different from TAM-DEF framework components. The DEEP-MAX scorecard has been deliberately chosen to be slightly different from the TAM-DEF framework. It can be noticed that one of the six components of the TAM-DEF framework, namely Accountability and legal issues, has been kept out of the DEEP-MAX scoring system. One can also observe that two of the TAM-DEF framework components have been split into two scores per component. The Fairness and Equity component of the TAM-DEF framework is split into the two scores of Diversity and Equity. Similarly, the Ethics component has also been split into the two scores of Privacy and Ethics (Fig. 4).

This has been done with the twin objectives of:

• Making the critical concerns regarding AI systems explicit scorecard elements. For example, diversity training of AI modules is an absolute must before any AI system is allowed to interact with people or to make decisions about people. It has therefore been treated as a separate element for scoring purposes, carved out from the Fairness and equity component. Similarly, privacy in the age of data is possibly the single most important concern under the Ethics component, hence this too has been treated as a separate element for scoring purposes, again carved out of the Ethics component.
• Keeping the scorecard practical and computable. While a clear understanding of Accountability and legal issues is important for the public use of AI systems, this component lies mostly in the domain of law and does not lend itself easily to scorecard mathematics. This issue gains prominence in those autonomous AI systems

Fig. 3. DEEP-MAX scorecard for AI under TAM-DEF framework.
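As a companion to the scorecard in Fig. 3, the seven DEEP-MAX parameters can be represented as a simple data structure. The 0–10 scale and the minimum-floor acceptance rule below are our illustrative assumptions; the text does not fix a scale or an aggregation rule:

```python
from dataclasses import dataclass

# Sketch of a DEEP-MAX scorecard: seven scores on an assumed 0-10 scale,
# following the parameter letters D, E (equity), E (ethics), P, M, A, X above.

@dataclass
class DeepMaxScorecard:
    diversity: int          # D - diversity training
    equity: int             # E - equity & fairness
    ethics: int             # E - human/environmental values
    privacy: int            # P - privacy & data protection
    misuse_protection: int  # M - built-in misuse safeguards
    auditability: int       # A - audit & transparency
    cross_society: int      # X - consistency across geographies/societies

    def scores(self) -> dict:
        """Return the seven scores by name, e.g. for publication on a ledger."""
        return vars(self)

    def acceptable(self, floor: int = 5) -> bool:
        # Illustrative rule: a weak score on ANY dimension blocks public use.
        # Averaging could hide, say, poor misuse protection behind high privacy.
        return all(v >= floor for v in self.scores().values())

card = DeepMaxScorecard(diversity=8, equity=7, ethics=9, privacy=8,
                        misuse_protection=4, auditability=7, cross_society=6)
assert not card.acceptable()       # misuse protection below the floor
card.misuse_protection = 6
assert card.acceptable()
```

A per-dimension floor rather than an aggregate average reflects the framework's intent that each concern be satisfied independently; the floor value itself would be a policy choice.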


Fig. 4. An integrated view of DEEP-MAX Scorecard under TAM-DEF framework for Public Policy challenges of AI.

where irreversible decisions are allowed to be taken. The specific laws of each country start governing such usage. Given the variance in laws and the differences across nations, this component of the TAM-DEF framework has been kept out of the DEEP-MAX scorecard.

Blockchain for safe and TAM-DEF compliant AI

• Training data certification: Blockchain can provide a trusted mechanism to certify the quality of training data for an AI system module. Whether or not a given AI system or module has been trained using a diverse data set, incorporating race, gender, language, ethnicity, religion and other forms of diversity, can be easily verified if the certification is done using Blockchain (AI Certification Transparency & Scorecard Blockchain – ACTS Blockchain).
• Tamperproof DEEP-MAX scores: Any AI model developed should be tested on a set of standardised data sets, each measuring one of the 7 DEEP-MAX scores discussed above. These scores would be put on the ACTS Blockchain, and each AI module would ship with this trusted scorecard along with its training data certificate.
• Activation Atlas based AI rating system: One of the key areas of current research in AI is understanding decision making inside the neural network (Carter et al., 2019). The activation atlas of an AI model correlates the internal neural net nodes with features, and a visual overlay of the features helps improve our understanding of the AI decision making process. The ACTS Blockchain would carry the activation atlas information of the AI system along with its certification and DEEP-MAX score. This activation atlas information can be used to alert users to the potential pitfalls of the AI model. The activation atlas would help explain the decision making process of the AI module and thereby add to its transparency. It can also help explain the cause of poor DEEP-MAX scores.
• Built-in misuse prevention using Blockchain: For public policy practitioners, misuse protection of AI systems is possibly the biggest challenge. A face recognition AI system for apprehending dangerous criminals can be easily tweaked for unscrupulous use, especially in less developed democracies. In this case, Blockchain based record keeping for any substitution of, or changes to, the criminal image database would help safeguard the system from possible misuse. The Blockchain would contain a tamperproof record of the changes made along with the authorisation details, making all changes traceable.

3.5.3.8. Periodic update of DEEP-MAX scores of AI modules in public use. One of the unusual challenges of AI systems is that many of them keep learning as they are being used (after initial training). In such cases there is a strong likelihood of their behaviour changing as they process more data. It therefore becomes imperative to test such AI systems periodically and to update their scores on the DEEP-MAX scorecard. The periodicity of the updates would depend upon the class of use case and the degree of autonomy granted to the AI system.

This DEEP-MAX rating system under the TAM-DEF framework is key to the safety and desirability of AI systems for public use. The scorecard is critical because most AI programmes are likely to be used as off-the-shelf components for building more complex AI systems. If a poorly designed AI component, which scores low on, say, diversity, is used in a more complex system, say crime prevention, the results can be devastating. It can result in racial or gender profiling, denial of access to financial institutions for persons residing in a particular pin code or locality, unfairly higher insurance premiums charged on the basis of a personal characteristic, etc.

3.5.3.9. Research agenda

• The above discussion leads to a rich area for further research by scholars. Each of the three strategies listed above for tackling the public policy challenges of AI is fertile ground for further research.
• AI Standards and Rating: A global alliance to democratically and transparently standardise and rate AI applications is urgently needed. What should its structure be, how should the rating standards be chosen, how can global diversity and cultural norms be accounted for, how can it be ensured that standards are followed – these are all significant questions to be answered in future research.
• Designing data sets for each of the 7 DEEP-MAX scorecard tests: AI systems today span a wide variety of applications such as computer vision, autonomous navigation, medical intervention, text analysis, speech analysis, financial decision making, and education and testing. Even though they all work on the same underlying fundamentals, they each need to be trained on a very different class of datasets: some need images, some voices and others just numbers. It is significant work to design benchmarking databases that would generate reliable and transparent DEEP-MAX scores for a given class of AI applications.
• Integrating Blockchain for the trust and safety of AI: This is the second area of further research. How do we make AI modules trusted and well understood worldwide? How do we ensure their DEEP-MAX scores are not tampered with? How do we verify that attempts have been made during the training of AI modules to comply with each of the 6 checkpoints of the TAM-DEF framework? Designing the ACTS Blockchain for AI applications is another critical area for further research.
• Designing AI for protection against misuse: As discussed above, one of the ways to prevent misuse of AI is to make misuse prevention a built-in feature in the design phase itself. As suggested above, one can make new data additions or deletions (for example, faces to be identified in a crime prevention system) a permanent record on a Blockchain backbone, with clearly identifiable individuals who ordered the change along with a date and time stamp.

3.5.4. Governance of AI and connected systems – Marijn Janssen

During the last decades, information systems have become increasingly interconnected. What started with the Internet has evolved into the Internet of Things (IoT), where sensors and actuators are interconnected to measure and control systems from toothbrushes to complete factories and refineries (Lee & Lee, 2015). This goes along with the availability of more and more Big and Open Linked Data (BOLD) about temperature, traffic jams, geolocation, pollution, gas and water flows, force, acceleration, and production throughput (Janssen, Matheus, & Zuiderwijk, 2015). The data deluge has resulted in the addition of intelligence, in the form of algorithms, to deal with this large amount and variety of (big) data.

AI has become an integral part of these connected systems, such as autonomous cars, smart living environments, and smart energy applications for the energy transition. Within these systems, AI can be used for anything from simple tasks like cleansing data to complex decision making processes involving data from countless distributed sensors. The intelligence provided by the systems enables better information sharing and cooperation, resulting in improved user experiences and personalisation, higher levels of efficiency and a reduction of costs. The algorithms for creating intelligence are also scattered across the systems: they might sit at the sensors to ensure that privacy-sensitive data is not shared or to ensure fast reaction times (e.g. edge computing), or the intelligence might be in the data centres of companies. Edge computing complements data processing by providing a large number of distributed nodes close to the data source and end users (Morabito, Cozzolino, Ding, Beijar, & Ott, 2018). Often the algorithms for creating intelligence might be executed on a cloud owned by other players, and all kinds of software can be used. This all results in interconnected socio-material systems which integrate data, algorithms, people, processes and software (Janssen & Kuk, 2016b). The paradox is that AI systems become increasingly omnipresent, yet at the same time become less visible. AI performs tasks and makes decisions without people being aware of it. Within cars, smart phones and energy networks, all kinds of AI are already in use nowadays.

3.5.4.1. Challenges. Technology needs to be governed to ensure that the benefits are gained and the risks mitigated. With new technologies, determining which responsibilities are needed for ensuring proper functioning and development is often difficult; however, the more powerful a technology is, the more necessary it is to ensure clear accountabilities and to deal with the risks. Unclear dependencies between data and algorithms, shared roles and joint operation among departments and organisations strengthen the dilution of responsibilities. Who is responsible for proper functioning and for avoiding mistakes becomes unclear.

Complexity, uncertainty and materiality: Data is collected and stored at multiple places in different ways. Data is collected by different organisations using all kinds of sensors and is transformed when processed (Janssen, Van de Voort, & Wahyudi, 2017). Often it is unclear for what purpose the data is collected, what the limitations of its use are, who the owner of the data is, whether data owners have given consent for its use, and what the quality of the data is. A challenge is to understand the data provenance and to ensure good data governance in a complex network operated by many players who each hold a piece of the puzzle.

Ensuring the making of correct decisions: Different areas of our daily activities are being digitally recorded, and a variety of algorithms are used to process the data. Data is collected for the purpose of being used in decision-making. Data is often not collected using an experimental set-up or another form of systematic research, and the (lack of) availability of data influences the outcomes. Algorithms are not designed to deal with the dynamics and variety of inputs and might produce wrong outcomes. Data can be leading, and data bias can result in the inability to replicate studies, compromise generalisability (Janssen & Kuk, 2016a) and result in wrong decisions.

Who is responsible? As more and more technology is interconnected, it is hard to establish a causal relationship between an event and a failure. For instance, who is responsible if the algorithm provides incorrect outcomes due to some anomalies in data that is collected by multiple sensors? Sufficient data quality is a condition for using the algorithm; however, perfect data quality probably does not exist. The data providers can make the argument that data quality is never 100%, and may even have included this contractually, whereas the algorithm provider can blame the data. Another example is the question of responsibility for a decision made by a deep learning application in which the causality of how the application arrives at the decision is not known. Such questions raise further questions of what the responsibilities of the designers are and what the responsibilities of the users are. What can we expect from computing technology? Should societal values be included in the design of AI? Where should we add checks and controls to prevent the making of mistakes and to ensure that mistakes are detected?

Lack of governance: Besides its obvious advantages, AI holds risks for society. Algorithms may develop biases due to measurement problems or their training data, reinforce historical discrimination, favour a political orientation, reinforce undesired practices or produce unanticipated outcomes due to hidden complexities (Janssen & Kuk, 2016b). Governance is needed to unravel the complexity and to understand how connected AI systems influence our decision-making.
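Several of the mechanisms proposed above (tamperproof DEEP-MAX scores, training data certificates, change logs for watched databases, and provenance records in connected systems) reduce to the same primitive: an append-only, tamper-evident log. The sketch below illustrates only that hash-chaining primitive; it is not a real blockchain (there is no distribution, consensus or signature scheme), and all record fields and names are invented for the example.

```python
import hashlib
import json

# Minimal hash-chain sketch: each entry embeds the hash of the previous
# entry, so silently altering any historical record invalidates every
# link that follows. A deployed ACTS-Blockchain style system would add
# distribution, consensus and authorisation on top of this primitive.

def entry_hash(payload: dict) -> str:
    canonical = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def append(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    entry = {"record": record, "prev": prev}
    entry["hash"] = entry_hash({"record": record, "prev": prev})
    chain.append(entry)

def verify(chain: list) -> bool:
    prev = "genesis"
    for e in chain:
        expected = entry_hash({"record": e["record"], "prev": e["prev"]})
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True

chain = []
append(chain, {"module": "face-rec-v1", "diversity_score": 55, "by": "auditor-01"})
append(chain, {"module": "face-rec-v1", "diversity_score": 78, "by": "auditor-01"})
print(verify(chain))  # True
chain[0]["record"]["diversity_score"] = 90  # silent tampering...
print(verify(chain))  # False: the stored hashes no longer match
```

Because each entry carries its predecessor's hash, an edit anywhere in the history is detectable on re-verification, which is what makes changes traceable rather than merely stored.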


3.5.4.2. Opportunities. Algorithms can be embedded in our daily life: algorithms warning people in cars to avoid collisions, and algorithms that assist with the efficient use of our washing machines when electricity prices are low. There is an abundance of enthusiasm and optimism that AI and data can be used for good. The dual use of data makes it possible to advance our society, but also to suppress the poor. At the same time, the emerging AI-based systems often lack transparency, accountability and oversight. A new area of data and AI governance is needed to ensure that the benefits can be gained and the risks avoided. Value-aware AI systems need to be designed that ensure that decisions are made correctly, that societal values and norms are represented in AI systems, and that people can safely enjoy the benefits of AI.

3.5.4.3. Research agenda. AI results in connected algorithmic systems in which a number of AI techniques are often combined, multiple sources of data are used, and computing can occur anywhere. These systems are used more and more to make critical decisions, but the decisions might not always be correct.

A systems perspective to unravel complexity: Stacking components on top of each other, combined with connecting them at different layers, creates a complexity in which cause-and-effect relations are hard to understand and predict. This undermines governance and accountability. An approach that tackles AI from an architectural view needs to take a systems perspective for understanding and controlling the complexity.

Dealing with uncertainty and varying quality: The environment will always be subject to change, and there will be uncertainty about its development. Furthermore, information sources have various degrees of quality and might be collected for another purpose than the one they will be used for. Therefore, AI should be able to make sound and robust decisions in uncertain and complex environments in which information is of varying quality. Connected systems should be designed in such a way that bias in the data is avoided and reliable decisions are made.

Value-aware systems: Societal norms and values should be represented in AI systems. The EU General Data Protection Regulation (GDPR) is a regulation that applies to AI, as it states that automated algorithmic decision making should be explainable to the persons affected by it. This is an important value for European citizens. Although norms and values differ per society, there are universal values that can be adhered to and embedded in the systems.

Compliance-by-design: The best way of ensuring the embedding of values and regulation in AI systems is to ensure that these are taken into account from the very start of the design process and that the system ensures that the values and regulations are adhered to. For example, the public should be warned when facial-recognition systems are being used to track them, and should have the right to reject the use of such technology.

AI governance: People and organisations design and operate connected AI systems. AI governance should ensure that the right values are embedded in the systems. Autonomous systems need to be governed, but so does the network of interconnected systems. AI systems are not designed to last forever, as they evolve with their environments, and data, algorithms, people, processes and software influence each other. Governance is needed to deal with bias introduced in the data, with missing information, with stolen data, with AI systems being taken over by criminals, and so on. Sound governance is needed in which clear responsibilities are defined and risks are assessed. Robust AI governance is needed to deal with the above challenges.

4. Discussion and recommendations for future research

The expert views outlined in the previous section are grouped in alignment with a number of perspectives on AI: Technological; Business and management; Arts, humanities and law; Science and technology; Government/public sector. This section pulls together many of the key themes and significant factors arising from the individual contributions to develop an informed discourse on many of the key topics and the potential for future research.

4.1. Challenges and opportunities

The individual perspectives have highlighted several challenges and potential opportunities relating to AI within a number of different themes and applications. Tables 4 and 5 highlight each of these areas.

The increasing complexity of AI, and the increasing number of genres of application where the technology can be applied, are growing at pace. The disruption potential is vast, led by a momentum of change where many of the normative rules of governance and transparency need to be reconfigured to cater for the complexities and impact of AI. Visualising this complexity in terms of a transparent perspective on the underlying algorithmic architecture, particularly in the era of deep learning systems, is problematic (Reza Tizhoosh & Pantanowitz, 2018). Edwards highlights the significant challenges in the explainability of the systems and algorithms that underpin AI technology, and the debates within the literature on transparency vs black box perspectives. The human trait of explaining the underlying reasoning behind a decision, and applying this same logic and requirement in the context of AI, is complex (Miller, 2019). Edwards highlights that the need for explainability in the age of AI is perhaps not a universal requirement, and cites the criticality of accuracy over transparency within medical diagnosis, where a black box approach based on evidence is deemed acceptable (London, 2019).

The perspectives from Walton question the readiness of organisations to make the transition to AI, highlighting limitations in existing information processing and the importance of adaptiveness for a successful transition. The perspectives highlighted by Kar introduce similar themes, where the abilities of organisations in the context of people and process maturity are not yet sufficient to exploit the full potential of AI. Studies have highlighted that organisations face significant issues where the lack of an effective strategy for human vs AI interaction could affect critical business areas and fail to address concerns from the human workforce (Fry, 2018; Sun & Medaglia, 2019). AI can help organisations to develop operational and strategic awareness, but information quality is a critical component for effective change (Westerman et al., 2014). Kar raises the prospect of a potential information and technological divide between large and smaller organisations, perhaps

Table 4
Focus areas of the 16 funded Centres for Doctoral Training in the 2018 UKRI AI CDT funding call.
Focus area 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Healthcare, biomedical ✓ ✓ ✓ ✓ ✓
Responsible AI, human ✓ ✓ ✓ ✓
Core AI research ✓ ✓
Fundamental science ✓
Language processing ✓ ✓
Environment, sustainability ✓ ✓
Engineering, nano-devices ✓
Creative industries, music ✓


Table 5
AI challenges.
Assuring explainability (John S. Edwards): Explainability, in the context of articulating the reasoning behind a particular decision, classification or forecast, can be complex for AI based systems (Miller, 2019). Even if an explanation is given that appears valid, how do we know whether it is accurate? How can this be tested? Processes need to be in place for people to be able to challenge an AI based decision, to ensure transparency and accountability.

Ecosystem boundaries (Paul Walton): The potential exists for ecosystem boundaries between AI and humans (Fry, 2018). As AI tackles more complex topics, the ability to exchange complex information between AI and humans will become ever more important. This highlights the challenge of how organisations can ensure that people and AI work together successfully.

Decision making using AI (Yanqing Duan, John Edwards, Yogesh Dwivedi): How can humans and AI be complementary in organisational decision making and work symbiotically to augment and enhance each other's capabilities, and what would be the implications of using AI for future strategic business decisions?

Migrating towards AI based automation (Crispin Coombs): Increasing levels of automation have directly impacted workers in many ways. Organisations are likely to be faced with four major challenges: 1) how to select tasks for automation; 2) how to select the level of automation for each task; 3) how to manage the impact of AI-enabled automation on human performance; and 4) how to manage AI-enabled automation errors.

Impacts on labour (Spyros Samothrakis): Technological development has been astounding for the last 40 years; roles have changed and new jobs have been created, but the need for labour has not diminished. The perceived challenges of reassigning jobs and re-skilling workers have been managed organically as technological change has advanced industry. In the new AI era, new roles will be created either in support of AI or in the design or assurance of AI technologies.

AI trade-offs (Arpan Kar): Challenges exist in estimating the trade-offs between differentiation and commoditisation of AI. The economic returns of a technology are highest when it reaches the maturity of commoditisation. But AI systems are also expected to evolve as they learn from the contextual and sticky knowledge within organisations, highlighting that perhaps AI can never truly be commoditised.

Digital marketing (Emmanuel Mogaji): Challenges exist for the adoption and implementation of AI for digital marketing, in areas such as the availability of data, the required financial resources, and trust, where there is an inherent fear about the prospect of AI and its adoption for marketing communications.

Implications for sales (Kenneth Le Meunier-Fitzhugh & Leslie Caroline Le Meunier-FitzHugh): As AI develops, the mechanistic nature of the algorithms employed in sales based systems may reduce the need for competitive differentiation, creativity and interaction in sales exchanges. The net effect of this could be reduced investment in sales training and development.

Impact within emerging markets (P. Vigneswara Ilavarasan): Within emerging markets, the lack of education may be a significant challenge and a barrier to greater levels of AI adoption. Enterprise owners are likely to be less educated, and their absorptive capacity and ability to understand the potential could be a drawback. If AI deployment is possible only in larger enterprises with a threshold amount of technological capabilities, the future scale of adoption is likely to be low within these markets. The challenge within emerging markets will be for governments to embrace AI developments whilst being cognisant of the impact of replacing workers.

People centred perspectives (Jak Spencer): Challenges exist in the perception of AI in the context of the adoption and implementation of the technology. To many people, AI is a concept that is hard to define, and it is difficult to understand how it manifests itself within their everyday lives. Many people associate AI with negative press and media campaigns. This ill-defined concept and poor media coverage have resulted in negative associations and a poor brand image. The transition phase between things working with AI capability and reduced 'smart' capabilities (augmented intelligence) can be confusing, frustrating and discriminating. Challenges remain on how 'natural' the interaction with AI can be.

Social and cultural aspects (Annie Tubadji): AI might be more efficient and reliable, but may face social resistance for at least some considerable time. If AI is accepted in enthusiastic fashion, it might spread explosively before the realistic implications of its use are known. How will the cultural distance between humans and AI impact people's demand for AI products? Are AI based products real substitutes for human derived products according to consumers' perceptions? Which products and services will be affected most by the lack of human proximity between the labour employed in these goods and services and the consumer?

Perspectives from the fundamental sciences (Gert Aarts and Biagio Lucini): The use of ML has exploded and it is now employed in most branches of fundamental science, with increasing success and acceptance. Due to the speed with which AI has evolved, it may be difficult to commit to a specific software framework and embed it within existing analysis packages before the field has moved on.

Terminology (Vassilis Galanos): The "artificial" in AI is by itself a terminological challenge. Similarly, several cognitive scientists, cyberneticians, system theorists, AI/robotics specialists and sociologists argue that AI is simply indefinable because we do not have any good understanding of the word intelligence. The AI hype and ill-definition divert focus from problems that should be prioritised towards policy discussions, for instance on robotic liability, which take non-experts' accounts as expert knowledge and adopt science fiction perspectives. The lack of empirical data makes all speculation a challenge in its own right. The most fundamental challenge is that, despite AI having been hyped over the last five years, a disciplined turn to specialists and the grounding of the research agenda on technical evidence should be a core priority of any work dealing with the politics and economics of AI.

Algorithmic challenges in the public sector (Rony Medaglia): A number of challenges related to AI adoption in the public sector are not unique to AI, but instead overlap with well-documented problems of adoption of any new emergent technology in government. These classic challenges include the quest for data integration across different organisations, resistance to change, and threats of worker replacement. Challenges exist in the areas of AI algorithmic bias and opacity, where citizen expectations of transparency and accountability need to be taken account of in the personal and political context. Who is


Table 5 (continued)

accountable if a decision has been outsourced to an AI application, and what is the citizen recourse when wrong decisions are made?

SMEs and public sector (Sujeet Sharma and JB Singh): SMEs and public sector companies may face many challenges in leveraging AI enabled tools when compared to large technology organisations. There are many challenges in adopting AI based tools, ranging from data quality and privacy/security risks to the shortage of qualified AI experts.

Public policy changes (Santosh K Misra): Public policy is facing unprecedented uncertainty and challenges in this dynamic world of AI. Every day a new application based on AI is invented and unleashed on society. The velocity and scale of AI impact are so high that public policy practitioners are rarely given sufficient time to respond. Public policy, by definition, needs to put in place regulations against possible future developments which could be detrimental to human values. This creates an interesting tension between the need to predict AI impact and the inability to draw boundaries around this highly dynamic technology.

AI governance (Marijn Janssen): Technology needs to be governed to ensure that the benefits are gained and the risks mitigated. With new technologies, determining which responsibilities are needed for ensuring proper functioning and development is often difficult; however, the more powerful a technology is, the more necessary it is to ensure clear accountabilities and to deal with the risks. Unclear dependencies between data and algorithms, shared roles and joint operation among departments and organisations strengthen the dilution of responsibilities. Who is responsible for proper functioning and for avoiding mistakes becomes unclear.

less able to innovate via AI.

The change within society from humans to intelligent machines making key decisions on medical diagnosis, resource allocation and analytics based prediction, amongst many others, is problematic. The challenges outlined by Duan, Edwards and Dwivedi assert the requirement to develop a more detailed and informed perspective on the implications and criticality of AI decision making for humans, and to be cognisant of the cultural aspect. The need for a more informed debate on this topic is clear as we struggle to understand the impact of human vs machine interaction, the human enhancement capability and the boundaries therein (Miller, 2018). The cultural perspective on AI decision making, and the transparency of the underlying algorithms that support it, are key for technology acceptance (Gerbert, Reeves, Ransbotham, Kiron, & Spira, 2018).

The many challenges and opportunities presented by AI are detailed in Tables 5 and 6 respectively.

One of the frequent debates on greater levels of AI within industry and society is the replacement of workers due to increasing levels of automation (Frey & Osborne, 2017). Whilst it is clear that lower skilled roles are likely to disappear, the literature increasingly recognises that there is a need for humans in the loop (Jonsson & Svensson, 2016). The perspectives on job roles and labour hours from Coombs and Samothrakis respectively argue that there is a continuing need for humans to work alongside AI technology, and that research is required to effectively analyse which tasks to automate, over-reliance on AI, and failsafe capability in the event of AI failure. The perspectives from Spencer reiterate these points, stressing the need for humans to be at the centre of any AI development and the benefits of moving towards a fairer use of AI to enhance human lives. Researchers have supported the need for AI technologies to augment, not replace, the work of humans, to support key tasks and deliver greater levels of performance (Davenport & Kirby, 2016; Wang, Törngren, & Onori, 2015a; Wang, Li, & Leung, 2015b). Workers are likely to progress higher up the value chain to solve design and integration problems as part of an integrated AI and human centric workforce (DIN & DKE, 2018). A number of these points are analysed by Ilavarasan, where the emerging market perspective is outlined in the context of challenges from AI within India. Ilavarasan posits that the lack of complementary asset availability acts as a barrier to AI adoption, and notes the dichotomy of governments encouraging innovation whilst being cognisant of the labour displacement from AI technologies. The Indian government's commitment to AI via the Digital India initiative (Niti Aayog, 2018) is clear; however, the balance between the social potential of AI and the impact on workers is yet to play out.

AI technologies have become an integral element of digital strategies, with chatbots and intelligent predictive analytics now the mainstay of many organisations (Juniper Research, 2019). Mogaji discussed perspectives on AI and highlighted a number of factors that may hinder adoption within digital marketing. The key points from this perspective were the availability of data, financial resources and trust in AI, which Mogaji posited as significant challenges for the further use of AI within digital marketing. The recommendation in this perspective, on the need for more focussed research on the integration of AI within organisations and the ethical considerations of the technology, is supported within the literature (Gupta & Kumari, 2017; Sun & Medaglia, 2019). In a similar vein, Le Meunier-Fitzhugh argues for a greater understanding of how AI is influencing B2B sales exchanges and the potential consequences of humans interacting with AI sales assistants. In an age where big data analytics integrated with AI can guide consumers through the sales process (Juniper Research, 2018; Loring, 2018), many questions remain on the ethics and implications of sales algorithms and the human vs AI interaction.

The ethics and transparency debate surrounding the introduction of AI is ongoing, with studies analysing the implications of the technology within healthcare (Houssami, Lee, Buist, & Tao, 2017) and within governance and safety (Zandi et al., 2019). The perspectives from Tubadji posit the lack of scientific economic recognition of the potential social changes from the emergence of AI and Industry 4.0 (I4.0). The perspective asserts the importance of cultural proximity in the context of humans vs AI, where a greater emphasis on culture based analysis can provide insight into the diffusion of AI technology within regions. The lack of interpretability of AI is highlighted by Aarts and Lucini where, from the angle of fundamental science, the perspective advocates the need to unbox AI algorithms in order to engender acceptance of the technology in wider contexts. Studies have highlighted the implications of a lack of AI governance and the potential for unintended consequences (Janssen & Kuk, 2016b; Zandi et al., 2019). Janssen argues for the criticality of AI governance not just at the algorithm and system level but also across the network of interconnected systems and data levels.

The universal adoption of innovative technology by governments and its use within the public sector is problematic within the IS and political context (Eggers et al., 2017). The perspectives from Medaglia stress the challenges of AI adoption within government, asserting the criticality of dialogue with citizens in countering distrust, the social applications of the technology, and the assessment of AI readiness. Studies have hypothesised on the readiness of AI systems to perform manual government functions such as bail hearings, asserting that the technology is robust enough to deliver performance benefits over

Y.K. Dwivedi, et al. International Journal of Information Management xxx (xxxx) xxxx

Table 6
AI opportunities.
Title AI opportunities Contributor

Modelling explainability In the fields of medical diagnosis and treatment, explainability is perhaps less important than John S. Edwards
accuracy. Opportunities exist in conceptualising AI in the context of a black box approach
where outputs should be judged using clinical trials and evidence-based practice to strive for
accuracy (London, 2019).
Organisation effectiveness There are a number of opportunities for organisations to utilise AI across several Paul Walton
categories: organisational environment, operations, interaction, case management automation,
governance and adaptiveness. AI can provide the opportunity for organisations to develop both
operational and strategic situation awareness and to link that awareness through to action
increasingly quickly, efficiently and effectively.
Transformational potential of AI Opportunities exist for the development of a greater understanding of the real impact of Yanqing Duan, John Edwards, Yogesh
decision making within organisations using AI in the context of: key success factors, culture, Dwivedi
performance, system design criteria.
Automation complacency Automation complacency and bias can speed up decision making when Crispin Coombs
recommendations are correct. In instances where AI provides incorrect recommendations,
omission errors can occur as humans are either out of the loop or less able to assure decisions.
Opportunities exist to explore and understand the factors that influence over-reliance on
automation and how to counter identified errors.
Workforce transition Society is likely to be significantly impacted by the AI technological trajectory if, as Spyros Samothrakis
commentators suggest, society achieves full automation in the next 100 years (Müller &
Bostrom, 2016; Walsh, 2018). The opportunity here for organisations and government, is the
effective management of this transition to mitigate this potentially painful change.
Enabler for platforms and The exploration of opportunities as to how AI can be leveraged not only at the firm level but as Arpan Kar
ecosystems an enabler in platforms and ecosystems. AI may help to connect multiple firms and help in
automating and managing information flows across multiple organisations in such platforms.
Significant opportunities exist for AI to be used in such platforms to impact platform, firm and
ecosystem productivity.
Enhanced digital marketing AI offers opportunities to enhance campaign creation, planning, targeting, and Emmanuel Mogaji
evaluation. AI offers the opportunity to process big datasets faster and more efficiently.
Opportunities exist for more innovative and relevant content creation and sharing using AI tools
and technologies.
Sales performance Opportunities exist for improving sales performance using AI-driven dashboards, predictive Kenneth Le Meunier-Fitzhugh & Leslie
and forecasting capability and the use of big data to retain and develop new customer leads. Caroline Le Meunier-FitzHugh
Additionally, the use of AI algorithms can contribute to productivity and provide sales process
enhancement through elimination of non-productive activities and removal of mundane jobs.
Emerging markets The presence of complementary assets is likely to influence the transition to AI in the P. Vigneswara Ilavarasan
developing world. Opportunities exist for the lessons learnt from India and Kenya to benefit
similar low income countries in future. For instance, Pakistan, Vietnam, and others are imitating
the success of the Indian software services export story.
People centred AI AI can potentially be used to enhance ‘softer’ goals rather than the drive to economic Jak Spencer
productivity or efficiency. The genuine needs of people can be identified that can solve real-
world problems. As our interactions with machines start to become more and more human-like,
the opportunity lies in the design of new personalities and the creation of new types of
relationship.
Taste fear and cultural proximity Opportunities exist in the focus on market taste, fear and cultural proximity to improve Annie Tubadji
organisational use of AI. While organisations' attention is currently focused on the pros of efficiency
gains, they might be overlooking the market reaction to the integration of AI in their production
process. Learning about tastes informs the market about AI-generated products and services.
Learning about fear within AI-related social opinions and policy-making tendencies can help us
make evidence-based AI-related decisions. Learning about the importance of cultural proximity
in the context of AI-human cultural distance can help to quantify the cultural gravity effect that
bounds our consumption of AI-goods and products.
Power of AI algorithms ML can be broadly understood as an optimisation problem, in which the parameters of a model Gert Aarts and Biagio Lucini
function are selected to reproduce as closely as possible a known response. This problem
requires substantial computational resources, so the availability of algorithms that are as fast as
possible becomes paramount. Computationally power-hungry problems of this type are well
known in science and engineering, where they are tackled via parallel programming on
supercomputers. This high-performance computing (HPC) approach offers the possibility of
accelerating ML algorithms by orders of magnitude, to the point where a prediction can find
timely applicability.
Accurate narrative Opportunities exist to impose realistic expectations of AI. Far-fetched expectations have been Vassilis Galanos
harmful and contributed to the confusing narrative on AI. The history of AI shows a repetitive
rise and fall pattern of hype and disillusionment; large availability of grants followed by long
periods of research support stagnation. This happened because AI specialists, in their attempts at
establishing their field, made very brave and overly ambitious (and ambiguous) promises that
eventually remained unfulfilled. Given the evidence of the current negative effects of non-
specialist intrusion, intense boundary work should be undertaken to separate who is entitled to
be a spokesperson for AI and who is not.
Fostering citizen trust AI applications, such as rules-based systems, speech recognition, machine translation, computer Rony Medaglia
vision, machine learning, robotics, and natural language processing, have the potential to free
up precious cognitive resources of public workers, which can then be allocated to tasks of higher
added value. Opportunities exist for AI applications to foster citizen trust. Unfair, inefficient, or
even distorted provision of government services can be potentially reduced by the use of AI.
SMEs and public sector The potential of AI within SMEs is in the automation of various tasks with decision making Sujeet Sharma and JB Singh
components such as in the functions of finance and customer services. AI applications could
help SMEs in matching customer invoices with received payments, and AI chatbots could help
enterprises answer customers' simple requests. AI could improve the efficiency of SMEs,
automating a number of business processes. Public sector and government organisations
generate large amounts of data through their processes and hence more potential exists for the
application of AI technologies. AI could help in identifying the target citizens
for welfare schemes and payments. The judiciary in developing countries could be improved
where cases have been pending for decades due to limited resources. AI could help in
deciding on bail hearings in courts as machine learning technologies are now robust enough for such
applications.
Public sector benefits Opportunities exist for governments throughout the world to use AI to tackle problems Santosh K Misra
such as: shortage of resources, scale of operations and standardisation of government delivery
systems. Governments could transform their service offerings by using AI to address all of these
issues. Governments can offer benefits to citizens via the use of: smart services, intelligent
adaptive forms and predictive service delivery.
AI governance Algorithms are embedded in our daily lives. Algorithms that warn people to avoid collisions in Marijn Janssen
cars and that help us use our washing machines when electricity prices are low are much
desired. There is an abundance of enthusiasm and optimism about how AI and data can be used
for good. The dual use of data makes it possible to advance our society, but also to suppress the
poor. At the same time the emerging AI-based systems often lack transparency, accountability
and oversight. A new area of data and AI governance is needed to ensure that the benefits can be
gained and risks avoided. Value-aware AI systems need to be designed that ensure that decisions
are made correctly, that societal values and norms are represented in AI systems and people can
safely enjoy the benefits of AI.

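The "Power of AI algorithms" entry in Table 6 describes ML as an optimisation problem in which the parameters of a model function are selected to reproduce a known response as closely as possible. A minimal sketch of that framing, using plain gradient descent on a linear model, makes it concrete (the data, learning rate and step count are illustrative assumptions, not taken from any contributor's perspective):

```python
# ML as optimisation: choose parameters (w, b) of the model y = w*x + b
# so that the model reproduces a known response as closely as possible,
# by gradient descent on the mean squared error.

def fit(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2) w.r.t. w and b
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Known response generated with w=2, b=1; the optimiser should recover
# these parameters approximately.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
w, b = fit(xs, ys)
```

Scaled up to millions of parameters, this loop becomes exactly the kind of computationally power-hungry workload for which the HPC acceleration discussed in the table entry is paramount.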
existing processes (Mullainathan & Spiess, 2017). Misra asserts that, due to the large number of transactions and regulatory services, governments are likely to be one of the largest adopters of AI. The perspective highlights the lack of a comprehensive AI public policy framework and presents the TAM-DEF framework to objectively test AI validity prior to procurement.

The individual perspectives from the invited experts and the wider literature have offered unique insight into the subject of AI from a number of viewpoints. Each of the contributions offers a number of potential research opportunities based on an assessment of the research agenda in the context of each perspective. Many open questions remain on a number of aspects of AI:

1. The literature seems to conclude that the future of AI requires humans in the loop and that AI should be seen as augmenting the potential of humans, not replacing them. However, is the concept of

Table 7
UN sustainable development goals vs AI technology driven change.
UN sustainability goals AI technology potential in delivering UN goals.

No poverty The implementation of AI technology is likely to drive increasing levels of automation within manufacturing with resulting impacts
Zero hunger on emerging and developed economies. Studies have highlighted the inevitable loss of low skilled labour and potential creation of
Good health and well-being new higher value jobs where human cognitive related skills can be utilised within the workplace. This is predicted to
disproportionately affect many of the emerging Asian economies that have traditionally relied on this type of work. However, as new
roles are created to support the increasing use of AI, requiring new skills and training, this realignment is likely to have a beneficial
impact on raising people's quality of life and standard of living. Within many emerging economies, particularly in rural areas, medical
practitioners are in short supply. AI based diagnosis systems could be utilised to support doctors and potentially speed up the
treatment process leading to health benefits for the population.
Quality education Education is likely to be impacted by the emergence of AI. Schools and universities could utilise AI technology in the classroom to aid
Gender equality the learning process and assist educators in their interaction with students. The Japan based study by Hamaguchi and Kondo (2018)
Reduced inequalities highlighted the disproportionate impact on female workers from technology adoption when compared to male workers. These
impacts could potentially worsen within the AI era unless positive steps are taken by policy makers and governmental organisations.
Greater faith in AI systems could reduce inequalities by removing the potential for bribery and intimidation and by increasing
transparency, as long as algorithms are open and certified.
Clean water and sanitation AI technology has the potential to predict energy and utility demand and react to climate change using big data and intelligent energy
Affordable and clean energy supply systems. The net effect of this change would be less waste, a more efficient supply network and lower cost energy, water and a
means of assuring and promoting economic development amongst the world population (Cohen & Kharas, 2018).
Decent work and economic growth Work, economic advancement and the growth of industry will be impacted by the adoption of AI technologies. Greater levels of
Industry innovation and infrastructure automation and the advancement of machine learning technologies will improve working practices and productivity. This will in turn
Sustainable cities and communities drive increased worker skill levels and growth within a number of sectors. The use of AI can engender innovation and greater levels of
sustainability as governing authorities strive to incorporate AI technologies within communities and cities.
Responsible consumption and production Responding to climate change and resulting impact is often costly. The poor are all too often the first to be impacted by climate
Climate action change and, for the most part, will suffer the most in terms of loss of welfare and opportunity. AI technology improves the quality of
Life below water understanding and responding to climate impacts and could end up being a vitally important part of assuring and promoting
Life on land economic development amongst the world's least well off (Brookings Institute, 2019). The potential improvements to forecasting and
modelling via the use of machine learning elements of AI and big data, can directly contribute to the ongoing human impact on use of
valuable resources, life below water and on land. This use of technology can potentially force human change in these areas as AI
systems help to gain consensus on key global sustainable issues relating to the United Nations Framework Convention on Climate
Change (UNFCCC), subsequent Kyoto Protocol (2013–2020) and Paris Agreement (2015).
Peace justice and strong institutions The combination of AI technology and human in the loop capability could potentially reinforce people's trust in areas such as: medical
diagnosis, interpretation of law and statute as well as government institutions that can be made more effective and efficient via AI
technology.
Partnerships for the goals Partnership between institutions and decision makers is required at an international level to enable acceptance of AI and for the
technology to deliver the required development outcomes.


workers moving up the value chain to higher skilled jobs a universal one, especially within emerging economies?
2. Governance of AI technology is a key prerequisite prior to widespread adoption within industry and government. It is acknowledged this is likely to be a trade-off between transparency and performance. The Collingridge dilemma highlights this issue succinctly: initiating greater AI oversight early in the AI lifecycle could be relatively straightforward, as the technology is relatively young and still hiding many of its unintended consequences; however, authorities could choose to wait until AI is relatively mature but then run the risk of losing control over its regulation (Collingridge, 1980). The rigorous audit of AI algorithms is likely to be complex and time consuming. How will these tasks be undertaken, and is there a potential scenario where AI systems are tasked with auditing and testing other AI technologies?
3. In a scenario where many of the current computational constraints are overcome, the potential disruptive change from AI could be significant as industry and services migrate to a more automated, machine based position. What are the cultural and societal implications of this change? What are the risks from the change in interaction, and how will this impact the future of human decision making? Will the onset of AI impact how we approach education, training and skills acquisition?
4. The trajectory towards greater levels of automation is likely to benefit performance and productivity, but how are AI systems able to navigate the complicated human attributes of uncertainty within out of the box scenarios?
5. The speed of AI technology adoption is staggering and the ethical elements have yet to be fully contemplated and formalised. What ethical protocols need to be designed and agreed as a matter of urgency, and what ethics controls need to be developed along a roadmap of additional controls as AI expands further?
6. How can we ensure that humans are at the centre of AI design and development, and that the future aligns with a fairer and more equitable use of the technology to improve people's lives?

4.2. UN sustainability goals and AI

The United Nations (UN) developed the Sustainable Development Goals (SDGs) in 2015 as its vision for the future. The goals were presented as a blueprint and shared agenda for peace and prosperity for the planet and its population. Seventeen SDGs have been developed to highlight many of the key themes relating to ending poverty, improving health and education, focussing on climate change, reducing inequality and developing sustainable economic growth (UN 2019). The study by Ismagilova et al. (2019) presented the UN SDGs in the context of the future impact of Smart Cities on their citizens. The Hughes et al. (2019) study incorporated the UN SDGs from the perspective of blockchain technology and how this emerging technology could be aligned with the creation of business and social value. This study has reviewed each of the UN SDGs from the perspective of potential alignment with AI and the major themes from this study. Based on these key comparisons, Table 7 details each of the SDGs and how AI technology can potentially align with each of the goals and deliver benefits as well as sustainability.

The alignment of the UN SDGs and AI technology highlights the key factors that could benefit sustainability on widespread adoption. This is likely to require significant investment from governments and industry, together with collaboration at an international level to effect governance, standards and security. The increasing use of AI has the potential to benefit many aspects of society in the longer term as humans are freed to concentrate on tasks requiring greater cognitive load whilst more mundane jobs are performed by machines. However, although this vision of a society that benefits from the onset of AI is realistic, the short to medium term transition may negatively impact many vulnerable sections of society. Governments and organisations need to develop pragmatic strategies to educate and re-skill workers to ensure humans are not disenfranchised by the onset of AI within the workplace. The likelihood of humans remaining in the loop in conjunction with intelligent machines means that workers will still have a vital role to play within organisations, as AI based machines support human endeavours. The implementation of AI could benefit many of the UN SDGs directly and indirectly over time as society in general changes to incorporate AI technologies. Society will be able to utilise AI technology to more effectively predict the impact that humans have on the environment and the planet. AI based systems will be used to establish fairness within institutions and remove the subjectivity and corruption that can be a barrier to citizen and government interaction within many countries.

4.3. Future research agenda

Extensive opportunities exist for academic research within a wide range of topics pursuant to AI technologies and the related impacts of the ongoing transition to the use of intelligent machines within industry and society. Any future research agenda covering AI will be diverse, in that the adoption of the technology impacts many facets of government and industry with wide implications for how humans will potentially live and work in the future. The research agenda detailed within Table 8 outlines potential areas of future research topics as outlined in the AI workshop held on 13th June 2019 at the School of Management, Swansea University UK.

The potential research agenda for AI technology in all its forms is significant. AI technology has tended to become a somewhat broad church where many forms of automation and limited intelligent machines are labelled as AI. The literature has sought to highlight a differentiation between task-specific, domain based AI and the more cognitive, human centric form able to perform numerous intelligent tasks. The term increasingly used to describe the latter is Artificial General Intelligence (AGI) – so called real AI (Bostrom, 2011). The real future agenda and potential change within industry and society is perhaps split along the lines of AGI and the more domain specific AI where key specific tasks will be performed by machines. However, current levels of technological advancement have yet to reach what could be described as AGI and are not likely to in the near future. The inherent complexity of the human brain has yet to be fully simulated by even the most sophisticated computer algorithms. Current research is predominantly focussed on domain specific AI as well as its potential impact on government, industry and society in general.

The cultural and societal impacts of further transition to AI technologies cannot be underestimated as people come to terms with machines taking on more tasks traditionally carried out by humans. The disruptive impact on many aspects of society, including manufacturing, logistics, education, interaction with government and health, is likely to affect workers in all these sectors. The potential benefits of AI systems may not be realised by all sections of society, as a natural reticence to interact with new technology, and perhaps fear of change, may limit transition in the short to medium term (Bostrom, 2011). Governments generally seem unable to keep pace in a regulatory context with the speed of AI innovation. Researchers have a valuable role to play here in the analysis of the many barriers to AI interaction and the psychological aspects of change in the workforce and society in general. Furthermore, the global impact of AI on emerging economies needs to be assessed via academic study to ascertain the likely impact on low skilled workers and the wider economies from greater levels of automation and machine learning systems. The regulatory issues also extend to algorithm assurance, governance and ownership of unforeseen outcomes as a consequence of poor algorithm performance and complexities (Janssen & Kuk, 2016b). How do we know what levels of testing and applied scenarios have been used to validate an AI algorithm? Are the key logic and execution paths transparent to decision makers to ensure they are comfortable with the performance and likely


Table 8
Future research agenda for AI.
Policy and economy

Title Research agenda description

Impact on society industry and education • Many of the current research debates seem to be technological in nature and performance
driven. A wider debate is required to take into account the cultural and societal impacts of AI
technology and what it means in the context of people's lives.
• The literature has analysed the potential impact on many aspects of industry and citizen
interaction. Here the advantages of AI are often presented as benefits to performance and
productivity etc. However, further research is needed to ascertain how these benefits can be
spread throughout society as a whole.
• The potential impact that AI could have on education is a significant and consequential step
that requires thorough analysis, detailed planning as well as effective assurance. If the
student or pupil interaction with the AI system is processed with little or no human teacher
governance, how can society as a whole be assured that learning is effective and not subject
to inherent algorithm errors?
• The impact on workers from further levels of automation and AI based technology has been
widely commented on within the media and academic study. The levels of adaptation from
within the existing workforce are as yet unknown. However, history tells us that as industry
and society changes, humans generally adapt to the new ways of working and learning of
new skills. Studies have articulated the extensive reach of many forms of AI within the
medical and legal professions as well as manufacturing. Further research is needed to fully
quantify the potential impact and how these functions will be performed either in a fully
automated context or with humans in the loop within a creative destruction of jobs context.
Regulatory implications for AI • The pace of change relating to AI technology is staggering. Although departments
may publish various technology related strategies periodically, government leaders generally
do not seem to be cognisant of many of the key issues and implications for society and
citizens. Furthermore, leaders seem to be slow to react to technological change
demonstrating evidence of a knowledge gap and requirement for a cultural shift within the
public sector.
• The traditional policy of long term strategies from central government and public sector
departments does not work for fast changing technologies such as AI. Governments are
better served by adopting short to medium term plans that can be flexible enough to
cater for technological change and likely breakthroughs.
• Little evidence exists to demonstrate that governments possess any tangible strategy or depth
of understanding to even begin to think on regulation of AI. The sanctioning of AI
technologies within industry and government systems may be subject to different regulatory
approaches depending on the perspectives of emerging vs developed markets. Institutions
may delay the onset of AI technology if its implementation results in widespread job loss and
disruption of societal norms. Academic study has a role to play here in the deeper analysis of
the implications of AI systems and regulatory options within a global context.
• The regulation of AI needs to factor in problem ownership when things go wrong. Is it
valid for deployment to never take place unless an AI based system can be fully described and
translated, and how can this be fully assured?

Boundaries and awareness


Title Research agenda description
Bias within AI • As human developers have written the algorithms that are used within AI based systems, it
should not be a surprise that a number of inherent biases have slipped through into decision
making systems. The implication for bias within AI systems is significant as people may end
up being disenfranchised by incorrect logic and decision making.
• What levels of algorithmic assurance are needed? How can humans trust a black box
approach to AI? What levels of recourse do humans have if decisions are questioned? Further
research in a number of these areas is critical as AI based systems become ever more
complex and problematic to fully validate.
Boundaries between AI and people • The societal impact of AI must not be underestimated especially as we reflect on the reality
that 50% of the global population still has no digital footprint. What are the implications of
greater levels of automation where workers operate using AI enhanced machines or interact
with AI systems in the factory setting?
• Although the safety aspect of people working in close proximity with machines is addressed
in the workplace, the interaction element between workers and AI has not been addressed.
This area needs further research with regard to the psychological implications and the
medium to long term effect on humans required to regularly work closely with AI systems.

Making decisions with AI


Title Research agenda description
Scientific problems towards achieving full scale AGI • The much hyped scenario of super intelligent forms of technology able to perform many of
the cognitive tasks of humans across domains has not materialised and is not likely to in the
near future.
• Although the availability of big data in conjunction with AI has enabled greater levels of AI
performance specific to key domains, it is widely accepted that AGI is perhaps a potential
long term prospect if at all possible. Researchers have scaled back on areas such as
autonomous cars and general AI cognitive ability across domains. Research in these areas
should focus on the opportunities and implications for human enhancement via the use of AI
to deliver heightened levels of human performance and abilities.
AI and strategic decision making • The implications for AI technology being integral to strategic decision making are complex
with significant implications if poor decisions are made. What levels of assurance are in
place if AI systems can make significant decisions autonomously? If AI systems require a
human in the loop for final assurance for key strategic decisions, what are the implications
and risk to organisations for AI decisions deemed to be less strategic?
• In the absence of true AGI can we outsource any strategic decisions to AI without
appropriate checks and balances to deliver the required levels of assurance?

Future impact

Title: AI leaders of the world (FAMGI – BAT: Facebook, Amazon, Microsoft, Google, IBM, Baidu, Alibaba and Tencent) – how can society and government push back from this imbalance?

• The huge tech companies in the US and China control a significant market share of the innovation and momentum within the fields of automation and AI technology. To a certain extent, the lack of understanding, and therefore regulation, from government and wider society has left a void that has been somewhat exploited by the high-tech firms, with regulatory institutions seemingly playing catch-up on AI.
• Is society disadvantaged by this level of control of the AI research agenda by a small number of organisations? Could society as a whole benefit from greater regulatory or government involvement earlier in the AI product lifecycle, at an algorithmic assurance level? Further research is needed in this area to identify potential frameworks and protocols through which government and society can engender greater transparency in AI design and implementation.

Title: The debate on AI being a force for good or bad

• Researchers have debated this topic for some time within the technology-focussed literature. Some studies have articulated a negative narrative on the greater adoption of AI technology, whereas other studies have concentrated on the positive benefits without fully identifying some of the drawbacks to society as a whole.
• More recent literature has posited a more realistic view of AI advances, acknowledging that we are far from delivering anything near full AGI. In a societal and cultural context, researchers have an important role in identifying the potential implications for emerging nations, where workers could be deeply affected by the onset of AI.

outcomes of the AI system? Who gets the blame when things go wrong? Academic research is needed to answer these questions in order to develop a deeper analysis of the potential implications for all key stakeholders.

The potential for inherent bias within AI algorithms, and the implications of humans in the loop working in close proximity to intelligent machines, pose significant challenges in the context of trust, human safety and ethical considerations. We should not assume that workers will be comfortable with the concept of AI enhancing human capability; resistance as well as lack of trust is likely to be the norm within the workplace (Gupta & Kumari, 2017; Sun & Medaglia, 2019). These complexities pose significant challenges as organisations utilise the power of AI combined with big data for strategic and potentially autonomous decision making. Academic research has a role to play here in the empirical study of workers' attitudes to trust and the deeper implications of human and intelligent machine interaction. The ethical and moral dimensions are potentially extensive, especially in the context of organisational decision making. Is there an underlying cultural dimension to the ethics of AI logic and subsequent outcomes? Is there a potential trade-off where one attribute of an AI decision is so important that another would be sacrificed? Which attribute would be deemed less important in the context of strategic decision making, and how is this choice made? What checks and balances need to be in place for management to have confidence in AI decisions and recommendations? These are important questions and key topics within a potential AI-focussed research agenda.

The significant innovation from the big technology leaders has somewhat driven the technological agenda for AI, to the extent that most of society seems to be in catch-up mode as each new step is made. Is this the correct progression path for society as a whole? Is there a better model or framework that could engender enhanced levels of trust and understanding? Can wider sectors of society assess their potential fears of AI in the context of real tangible benefits? Academic research could play a greater role in assessing the impact of this current model and developing a wider debate on the societal perceptions of the technology and the speed of innovation.

5. Conclusions

In alignment with an approach adopted from Dwivedi et al. (2015b), this study presents a consolidated yet multi-perspective view of various aspects of AI from invited expert contributors from the public sector, industry and academia. The collective insights stem from the workshop titled “Artificial Intelligence (AI): Emerging Challenges, Opportunities, and Agenda for Research and Practice” held on 13th June 2019 at the School of Management, Swansea University, UK. Each of the individual perspectives has highlighted the opportunities, challenges and potential research agenda posed by the rapid emergence of AI. Each expert was invited to set out their individual contribution in largely unedited form, expressed directly as written by the authors. This approach creates an inherent unevenness in the logical flow but captures the distinctive orientations of the experts and their recommendations. The key findings and open research questions have been outlined and aligned with the academic literature.

The trajectory towards increasing applications using AI has the potential to change many aspects of human lives and to impact society as a whole. The way forward is not clear and the potential roadmap is undefined. There are numerous benefits that could accrue from AI, but there are also significant risks that swathes of society may be disenfranchised from the implementation of the technology. Decisions made within the next few years on the forward path for AI are likely to have an impact on all our lives and the lives of future generations.


Acknowledgement

This submission was developed from a workshop on Artificial Intelligence (AI), which was held at the School of Management, Swansea University on 13th June 2019. We are very grateful to everyone who attended the workshop and contributed their perspectives during the workshop and as an input to this article. We are also truly appreciative of those who, although not able to attend the workshop, provided their valuable perspectives for developing this work. We are also very grateful to our Senior PVC – Professor Hilary Lappin-Scott, the keynote speaker – Mr Lee Waters AM, Deputy Minister for Economy and Transport, National Assembly for Wales – and the following panellists from industry and public sector organisations for enriching our understanding of this emerging area by providing their valuable perspectives that have informed the views presented in this article: Ms Sara El-Hanfy, Innovate UK; Mr Peter Chow, AI & Cloud Product Design Manufacturing & Inspection, Fujitsu UK; Ms Kimberley Littlemore, Director of eHealth Digital Media, UK; Mr Chris Reeves, Country Digitisation Director, Cisco UK & Ireland; Mr Adam Wedgbury, Team Leader for Cyber Security Innovation, Airbus; and Mr Toby White, CEO of Artimus, Cardiff, UK. We are also very grateful to our colleagues, Amy Jones and Julie Bromhead, for all their valuable support in organising the workshop. Finally, we are grateful to the Emerging Markets Research Centre (EMaRC), Swansea i-Lab (Innovation Lab), and Department of Business at the School of Management, Swansea University for their financial support in organising this workshop.

References

Abarca-Alvarez, F. J., Campos-Sanchez, F. S., & Reinoso-Bellido, R. (2018). Demographic and dwelling models by artificial intelligence: Urban renewal opportunities in Spanish coast. International Journal of Sustainable Development and Planning, 13(7), 941–953.
Abbot, J., & Marohasy, J. (2013). The potential benefits of using artificial intelligence for monthly rainfall forecasting for the Bowen Basin, Queensland, Australia. WIT Transactions on Ecology and the Environment, 171, 287–297.
Acemoglu, D., Autor, D., Dorn, D., Hanson, G., & Price, B. (2014). Return of the Solow Paradox? IT, productivity, and employment in US manufacturing. American Economic Review, 104(5), 394–399.
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.
Adam, T., O'Sullivan, A. C., & Russell, R. (2017). Artificial intelligence and public policy. Report of Mercatus Center, George Mason University.
Agrawal, A., Gans, J., & Goldfarb, A. (2017). What to expect from artificial intelligence. MIT Sloan Management Review, 58(3), 22–27.
Aguirre, E., Mahr, D., Grewal, D., de Ruyter, K., & Wetzels, M. (2015). Unraveling the personalization paradox: The effect of information collection and trust-building strategies on online advertisement effectiveness. Journal of Retailing, 91(1), 34–49.
Al-Emran, M., Mezhuyev, V., Kamaludin, A., & Shaalan, K. (2018). The impact of knowledge management processes on information systems: A systematic review. International Journal of Information Management, 43, 173–187.
Anderson, K. P. (2019). Artificial intelligence-augmented ECG assessment: The promise and the challenge. Journal of Cardiovascular Electrophysiology. https://doi.org/10.1111/jce.13891.
Anderson, M., & Leigh, A. S. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4).
Anonymous (2018). Initial code of conduct for data-driven health and care technology. Department of Health & Social Care (Ed.). Published online 5 September 2018. Her Majesty's Stationery Office.
Antonio, V. (2018). How AI is changing sales. https://hbr.org/2018/07/how-ai-is-changing-sales. Accessed 01.10.18.
Arlitsch, K., & Newell, B. (2017). Thriving in the age of accelerations: A brief look at the societal effects of artificial intelligence and the opportunities for libraries. Journal of Library Administration, 57(7), 789–798.
Autor, D. H. (2013). The 'Task Approach' to labor markets – An overview. Journal for Labour Market Research, 46(3), 185–199.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., …, & Rahwan, I. (2018). The moral machine experiment. Nature, 563(7729), 59.
Bader, J., Edwards, J., Harris-Jones, C., & Hannaford, D. (1988). Practical engineering of knowledge-based systems. Information and Software Technology, 30(5), 266–277.
Baek, T. H., & Morimoto, M. (2012). Stay away from me: Examining the determinants of consumer avoidance of personalized advertising. Journal of Advertising, 41(1), 59–76.
Baldassarre, G., Santucci, V. G., Cartoni, E., & Caligiore, D. (2017). The architecture challenge: Future artificial-intelligence systems will require sophisticated architectures, and knowledge of the brain might guide their construction. The Behavioral and Brain Sciences, 40, e254.
Baller, S., Dutta, S., & Lanvin, B. (2016). The global information technology report 2016. Geneva. https://doi.org/10.1016/B978-0-12-804704-0.00010-4.
Barki, H., Rivard, S., & Talbot, J. (1993). A keyword classification scheme for IS research literature: An update. MIS Quarterly, 17(2), 209–226.
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104, 671–732.
Barton, D., & Court, D. (2012). Making advanced analytics work for you. Harvard Business Review, 90(10), 78–83.
BBC (2019). Will AI kill developing world growth? Accessed June 2019. https://www.bbc.co.uk/news/business-47852589.
Becker, G. S. (1996). Accounting for tastes. Cambridge: Harvard University Press.
Bell, A. (2019). Waiting on hold will soon become a thing of the past. Accessed February 2019. https://whatsnext.nuance.com/customer-experience/artificial-intelligence-bridges-gaps-between-consumer-demands-and-contact-centers/.
Beregi, J., Zins, M., Masson, J., Cart, P., Bartoli, J.-, Silberman, B., …, & Meder, J. (2018). Radiology and artificial intelligence: An opportunity for our specialty. Diagnostic and Interventional Imaging, 99(11), 677–678.
Bernard Marr (2018). Retrieved from https://www.forbes.com/sites/bernardmarr/2018/06/25/the-ai-skills-crisis-and-how-to-close-the-gap/#5365a24c31f3.
Bertot, J. C., Jaeger, P. T., & Grimes, J. M. (2010). Using ICTs to create a culture of transparency: E-government and social media as openness and anti-corruption tools for societies. Government Information Quarterly, 27(3), 264–271.
Bielenberg, A., Helm, L., Gentilucci, A., Stefanescu, D., & Zhang, H. (2012). The growth of Diaspora – A decentralized online social network in the wild. 2012 Proceedings IEEE INFOCOM Workshops (pp. 13–18). (March).
Blair, W. M. (1956). Nixon foresees 4-day work week; says G.O.P. policies assure fuller life for family. New York Times, Sep.
Bleicher, A. (2017). Demystifying the black box that is AI. Scientific American. Retrieved from https://www.scientificamerican.com/article/demystifying-the-black-box-that-is-ai/.
Bleier, A., & Eisenbeiss, M. (2015). The importance of trust for personalized online advertising. Journal of Retailing, 91(3), 390–409.
Blili, S., & Raymond, L. (1993). Information technology: Threats and opportunities for small and medium-sized enterprises. International Journal of Information Management, 13(6), 439–448.
Boerman, S. C., Kruikemeier, S., & Zuiderveen Borgesius, F. J. (2017). Online behavioral advertising: A literature review and research agenda. Journal of Advertising, 46(3), 363–376.
Bole, U., Popovič, A., Žabkar, J., Papa, G., & Jaklič, J. (2015). A case analysis of embryonic data mining success. International Journal of Information Management, 35(2), 253–259.
Booch, G. (2015). I, for one, welcome our new computer overlords. IEEE Software, 32(6), 8–10.
Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. In K. Frankish (Ed.), Cambridge handbook of artificial intelligence. Cambridge: Cambridge University Press.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Bradbury, D. (2018). Does your company need an AI council? Retrieved from https://www.servicenow.com/workflow/corporate-ai-projects.html.
Breuninger, S., & Berg, M. (2001). Great bubbles: Reactions to the South Sea Bubble, the Mississippi Scheme and the Tulip Mania affair.
Brookings Institute (2019). How artificial intelligence will affect the future of energy and climate. Accessed 05.07.19. https://www.brookings.edu/research/how-artificial-intelligence-will-affect-the-future-of-energy-and-climate/.
Brooks, R. (2002). Robot: The future of flesh and machines. MIT Press.
Brooks, R. (2019). AGI has been delayed. Rodneybrooks.com. https://rodneybrooks.com/agi-has-been-delayed/.
Brown, M. M. (2015). Revisiting the IT productivity paradox. The American Review of Public Administration, 45(5), 565–583.
Brown, N., & Michael, M. (2004). Risky creatures: Institutional species boundary change in biotechnology regulation. Health, Risk & Society, 6(3), 207–222.
Brynjolfsson, E., & Hitt, L. M. (2000). Beyond computation: Information technology, organizational transformation and business performance. The Journal of Economic Perspectives, 14(4), 23–48.
Bughin, J., Seong, J., Manyika, J., Chui, M., & Joshi, R. (2018). Notes from the AI frontier: Modeling the global economic impact of AI. McKinsey Global Institute, 1–64 (September). Retrieved from https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on Fairness, Accountability and Transparency, 77–91 (January).
Burger, A. S. (2015). Extreme working hours in western Europe and North America: A new aspect of polarization. LEQS Paper, no. 92.
Busch, P. A., & Henriksen, H. Z. (2018). Digital discretion: A systematic literature review of ICT and street-level discretion. Information Polity, 23(1), 3–28.
BSI (2019). ISO/IEC JTC 1/SC 42 N 222, ISO/IEC NP 23894 Information technology – Artificial intelligence – Risk management. https://standardsdevelopment.bsigroup.com/projects/9018-02222. Accessed 02.06.2019.
Calo, R. (2017). Artificial intelligence policy: A roadmap. SSRN Electronic Journal, 1–28. https://doi.org/10.2139/ssrn.3015350.
Capgemini Report (2018a). Understanding digital mastery today. Capgemini Digital Transformation Institute. Available from https://www.capgemini.com/wp-content/uploads/2018/07/Digital-Mastery-DTI-report_20180704_web.pdf. Accessed 17.03.19.
Capgemini Report (2018b). TechnoVision 2018: The impact of AI. Available at https://www.capgemini.com/technovision-2018-the-impact-of-ai/. Accessed 17.03.19.
Carleo, G., Cirac, I., Cranmer, K., Daudet, L., Schuld, M., Tishby, N., Vogt-Maranto, L., & Zdeborová, L. (2019). Machine learning and the physical sciences. arXiv:1903.10563 [physics.comp-ph]. https://inspirehep.net/search?p=find+eprint+1903.10563.
Carrasquilla, J., & Melko, R. (2017). Machine learning phases of matter. Nature Physics, 13, 431–434.


Carter, S., Armstrong, Z., Schubert, L., Johnson, I., & Olah, C. (2019). Exploring neural networks with activation atlases. https://doi.org/10.23915/distill.00015.
Chakraborty, A., & Kar, A. K. (2016). A review of bio-inspired computing methods and potential applications. Proceedings of the International Conference on Signal, Networks, Computing, and Systems. New Delhi: Springer, 155–161.
Chakraborty, A., & Kar, A. K. (2017). Swarm intelligence: A review of algorithms. Nature-inspired computing and optimization. Cham: Springer, 475–494.
Chandler, D., Levitt, S. D., & List, J. A. (2011). Predicting and preventing shootings among at-risk youth. The American Economic Review, 101(3), 288–292.
Chaudhri, V. K., Lane, H. C., Gunning, D., & Roschelle, J. (2013). Applications of artificial intelligence to contemporary and emerging educational challenges. Artificial Intelligence Magazine, Intelligent Learning Technologies: Part, 2(34), 4.
Chaudhuri, A., & De, K. (2011). Fuzzy support vector machine for bankruptcy prediction. Applied Soft Computing Journal, 11(2), 2472–2486.
Chen, J., & Stallaert, J. (2014). An economic analysis of online advertising using behavioral targeting. MIS Quarterly, 38(2), 429–449.
Cheshire, W. P. (2017). Loopthink: A limitation of medical artificial intelligence. Ethics and Medicine, 33(1), 7–12.
Clarkson, J., Coleman, R., Keates, S., & Lebbon, C. (Eds.). (2003). Inclusive design – Design for the whole population. London: Springer-Verlag.
Cleophas, T. J., & Cleophas, T. F. (2010). Artificial intelligence for diagnostic purposes: Principles, procedures and limitations. Clinical Chemistry and Laboratory Medicine, 48(2), 159–165.
Cohen, J. L., & Kharas, H. (2018). Using big data and artificial intelligence to accelerate global development. Brookings Institution. Accessed 04.07.19. https://www.brookings.edu/research/using-big-data-and-artificial-intelligence-to-accelerate-global-development/.
Collingridge, D. (1980). The social control of technology. London: Frances Pinter.
Collins, H. (2018). Artificial intelligence: Against humanity's surrender to computers. Medford: Polity.
Columbus, L. (2016). Ten ways big data is revolutionizing marketing and sales. Accessed August 2018. Available from https://www.forbes.com/sites/louiscolumbus/2016/05/09/ten-ways-big-data-is-revolutionizing-marketing-and-sales/#1bfc272b21cf.
Combi, C. (2017). Editorial from the new editor-in-chief: Artificial intelligence in medicine and the forthcoming challenges. Artificial Intelligence in Medicine, 76, 37–39.
Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. New York: Basic Books.
Data Gov (2018). Estimated number of enterprises in different states/UTs. Retrieved from https://data.gov.in/catalog/estimated-number-enterprises-different-statesuts.
Daugherty, P. R., & Wilson, H. J. (2018). Human + Machine: Reimagining work in the age of AI. Harvard Business Press.
Davenport, T. H., & Kirby, J. (2016). Only humans need apply: Winners and losers in the age of smart machines. New York: Harper Business.
Davenport, T. H., & Patil, D. J. (2012). Data scientist. Harvard Business Review, 90(5), 70–76.
Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108–116.
Autor, David H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30.
De Stefano, V. (2018). 'Negotiating the algorithm': Automation, artificial intelligence and labour protection.
DeBrusk, C. (2018). The risk of machine-learning bias (and how to prevent it). Available at https://sloanreview.mit.edu/article/the-risk-of-machine-learning-bias-and-how-to-prevent-it/. Accessed 17.03.19.
Desouza, K. C. (2018). Delivering artificial intelligence in government: Challenges and opportunities. Washington, DC: IBM Center for The Business of Government. http://www.businessofgovernment.org/sites/default/files/Delivering%20Artificial%20Intelligence%20in%20Government_0.pdf.
Dey, D., & Kumar, S. (2010). Reassessing data quality for information products. Management Science, 56(12), 2316–2322.
Dickson, B. (2019). Explainable AI: Viewing the world through the eyes of neural networks. Available at https://bdtechtalks.com/2019/02/04/explainable-ai-gan-dissection-ibm-mit/. Accessed 21.03.19.
DIN & DKE (2018). German standardization roadmap Industrie 4.0. V3. Accessed 25.04.19. https://www.din.de/blob/65354/57218767bd6da1927b181b9f2a0d5b39/roadmap-i4-0-e-data.pdf.
Dreyer, K., & Allen, B. (2018). Artificial intelligence in health care: Brave new world or golden opportunity? Journal of the American College of Radiology, 15(4), 655–657.
Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of big data – Evolution, challenges and research agenda. International Journal of Information Management, 48, 63–71.
Dutton, T. (2018). An overview of national AI strategies. Retrieved from https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd.
Dwivedi, Y. K., Rana, N. P., Jeyaraj, A., Clement, M., & Williams, M. D. (2019). Re-examining the unified theory of acceptance and use of technology (UTAUT): Towards a revised theoretical model. Information Systems Frontiers, 21(3), 719–734.
Dwivedi, Y. K., Rana, N. P., Janssen, M., Lal, B., Williams, M. D., & Clement, M. (2017). An empirical validation of a unified model of electronic government adoption (UMEGA). Government Information Quarterly, 34(2), 211–230.
Dwivedi, Y. K., Kapoor, K. K., & Chen, H. (2015a). Social media marketing and advertising. The Marketing Review, 15(3), 289–309.
Dwivedi, Y. K., Wastell, D., Laumer, S., Henriksen, H. Z., Myers, M. D., Bunker, D., …, & Srivastava, S. C. (2015b). Research on information systems failures and successes: Status update and future directions. Information Systems Frontiers, 17(1), 143–157.
Dwivedi, Y. K., & Kuljis, J. (2008). Profile of IS research published in the European Journal of Information Systems. European Journal of Information Systems, 17(6), 678–693.
Eden, A. H., Steinhart, E., Pearce, D., & Moor, J. H. (2012). Singularity hypotheses: An overview. Singularity hypotheses. Berlin Heidelberg: Springer, 1–12.
Edwards, J. S., Duan, Y., & Robins, P. (2000). An analysis of expert systems for business decision making at different levels and in different roles. European Journal of Information Systems, 9(1), 36–46.
Edwards, S. D. (2018). The HeartMath coherence model: Implications and challenges for artificial intelligence and robotics. AI and Society, 1–7. https://doi.org/10.1007/s00146-018-0834-8.
Eggers, W. D., Schatsky, D., & Viechnicki, P. (2017). AI-augmented government: Using cognitive technologies to redesign public sector work. Deloitte Center for Government Insights.
Emmett, R. (2000). Great bubbles: Reactions to the South Sea Bubble, the Mississippi scheme and the tulip mania affair. London: Pickering & Chatto.
Erikson, H., & Salzmann-Erikson, M. (2016). Future challenges of robotics and artificial intelligence in nursing: What can we learn from monsters in popular culture? The Permanente Journal, 20(3).
Eubanks, V. (2018). Automating inequality. St. Martin's Press.
European Parliament (2017). Robots and artificial intelligence: MEPs call for EU-wide liability rules [Plenary session press release]. European Parliament News. Retrieved from http://www.europarl.europa.eu/news/en/news-room/20170210IPR61808/robots-and-artificial-intelligence-meps-call-for-eu-wide-liability-rules. (February).
Fessler, L. (2017). We tested bots like Siri and Alexa to see who would stand up to sexual harassment. Quartz Magazine.
Fleck, J. (1982). Development and establishment in artificial intelligence. In N. Elias, H. Martins, & R. Whitley (Eds.), Scientific establishments and hierarchies (pp. 169–217). Dordrecht: D. Reidel.
Forbes (2019a). Walmart unveils a new lab store that uses AI. Accessed 10.06.19. https://www.forbes.com/sites/walterloeb/2019/04/29/walmart-unveils-a-new-lab-store-for-the-future/#f06ea9f504f8.
Forbes (2019b). Artificial intelligence, China and the U.S. – How the U.S. is losing the technology war. Accessed 05.07.19. https://www.forbes.com/sites/steveandriole/2018/11/09/artificial-intelligence-china-and-the-us-how-the-us-is-losing-the-technology-war/#2dcafacd6195.
Foy, K. (2018). Artificial intelligence system uses transparent, human-like reasoning to solve problems. MIT News. Available from http://news.mit.edu/2018/mit-lincoln-laboratory-ai-system-solves-problems-through-human-reasoning-0911. Accessed 17.03.19.
Frase, P. (2016). Four futures: Life after capitalism. Verso Books.
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280.
Fry, H. (2018). Hello world: How to be human in the age of the machine. London, UK: Transworld Publishers.
Financial Times (2019). Special report – AI & robotics. Accessed 20th June 2019. https://www.ft.com/reports/ai-robotics.
Future of Life Institute (2017). Asilomar AI principles. Future of Life Institute. Retrieved from https://futureoflife.org/ai-principles/.
Galanos, V., et al. (2018). Artificial intelligence does not exist: Lessons from shared cognition and the opposition to the nature/nurture divide. In Kreps (Ed.), 13th IFIP TC 9 International Conference on Human Choice and Computers, HCC13 2018. Held at the 24th IFIP World Computer Congress, WCC 2018.
Galanos, V. (2019). Exploring expanding expertise: Artificial intelligence as an existential threat and the role of prestigious commentators, 2014–2018. Technology Analysis & Strategic Management, 31(4), 421–432.
Gaon, A., & Stedman, I. (2019). A call to action: Moving forward with the governance of artificial intelligence in Canada. Alberta Law Review, 56(4).
Garber, P. (2000). Famous first bubbles: The fundamentals of early manias. Cambridge: MIT Press.
Genz, S., Lehmer, F., & Janser, M. (2018). The impact of investments in new digital technologies on wages – Worker-level evidence from Germany? Manuscript presented at SES.
Gerbert, P., Reeves, M., Ransbotham, S., Kiron, D., & Spira, M. (2018). Global competition with AI in business: How China differs. MIT Sloan Management Review (July).
Giannetti, C., Lucini, B., & Vadacchino, D. (2018). Machine learning as a universal tool for quantitative investigations of phase transitions. arXiv:1812.06726 [cond-mat.stat-mech]. https://arxiv.org/abs/1812.06726.
Glauner, P., Meira, J. A., Valtchev, P., State, R., & Bettinger, F. (2017). The challenge of non-technical loss detection using artificial intelligence: A survey. International Journal of Computational Intelligence Systems, 10(1), 760–775.
Goldsmith, S., & Crawford, S. (2014). The responsive city: Engaging communities through data-smart governance. John Wiley & Sons.
Griffin, A. (2017). Facebook's artificial intelligence robots shut down after they start talking to each other in their own language. The Independent. Accessed 12.05.18. Retrieved from https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html. (July).
Grover, P., & Kar, A. K. (2017). Big data analytics: A review on theoretical contributions and tools used in literature. Global Journal of Flexible Systems Management, 18(3), 203–229.
Guest, D., Cranmer, K., & Whiteson, D. (2018). Deep learning and its application to LHC physics. Annual Review of Nuclear and Particle Science, 68, 161–181.
Gupta, R. K., & Kumari, R. (2017). Artificial intelligence in public health: Opportunities and challenges. JK Science, 19(4), 191–192.
Gutierrez, A., O'Leary, S., Rana, N. P., Dwivedi, Y. K., & Calle, T. (2019). Using privacy calculus theory to explore entrepreneurial directions in mobile location-based advertising: Identifying intrusiveness as the critical risk factor. Computers in Human Behavior, 95, 295–306.
Haddow, G., Bruce, A., Calvert, J., Harmon, S. H., & Marsden, W. (2010). Not "human" enough to be human but not "animal" enough to be animal – The case of the HFEA, cybrids and xenotransplantation in the UK. New Genetics Society, 29(1), 3–17.
Haeffner, M., & Panuwatwanich, K. (2017). Perceived impacts of Industry 4.0 on manufacturing industry and its workforce: Case of Germany. International Conference on Engineering, Project, and Product Management. Cham: Springer, 199–208 (September).
Hamaguchi, N., & Kondo, K. (2018). Regional employment and artificial intelligence in


Japan. Research Institute of Economy, Trade and Industry (RIETI).
Hancock, P. A., Jagacinski, R. J., Parasuraman, R., Wickens, C. D., Wilson, G. F., & Kaber, D. B. (2013). Human-automation interaction research: Past, present, and future. Ergonomics in Design, 21(2), 9–14.
Harhoff, D., Heumann, S., Jentzsch, N., & Lorenz, P. (2018). Outline for a German strategy for artificial intelligence (July 25, 2018). Available at SSRN: https://ssrn.com.
Hays, J., & Efros, A. A. (2007). Scene completion using millions of photographs. ACM Transactions on Graphics (TOG), 26(3), 4.
Hazen, B. T., Boone, C. A., Ezell, J. D., & Jones-Farmer, L. A. (2014). Data quality for data science, predictive analytics, and big data in supply chain management: An introduction to the problem and suggestions for research and applications. International Journal of Production Economics, 154, 72–80.
Hilovská, K., & Koncz, P. (2012). Application of artificial intelligence and data mining techniques to financial markets. ACTA VSFS, 6, 62–77.
HLSCAI (House of Lords Select Committee on Artificial Intelligence) (2018). AI in the UK: Ready, willing, and able? Report of session 2017–19. April 16. The Authority of the House of Lords. Accessed 16.04.18. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf.
Holak, B. (2018). Forrester 5 AI predictions for 2019: Pragmatic AI takes hold. Available at https://searchcio.techtarget.com/news/252453560/5-AI-predictions-for-2019-Pragmatic-AI-takes-hold. Accessed 22.03.19.
Houssami, N., Lee, C. I., Buist, D. S. M., & Tao, D. (2017). Artificial intelligence for breast cancer screening: Opportunity or hype? Breast, 36, 31–33.
Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172.
Hughes, A., & Morton, M. S. S. (2006). The transforming power of complementary assets. MIT Sloan Management Review, 47(4), 50–58.
Hughes, L., Dwivedi, Y. K., Misra, S. K., Rana, N. P., Raghavan, V., & Akella, V. (2019). Blockchain research, practice and policy: Applications, benefits, limitations, emer-
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25.
Kapoor, K. K., Tamilmani, K., Rana, N. P., Patil, P., Dwivedi, Y. K., & Nerur, S. (2018). Advances in social media research: Past, present and future. Information Systems Frontiers, 20(3), 531–558.
Kar, A. K. (2016). Bio inspired computing – A review of algorithms and scope of applications. Expert Systems with Applications, 59, 20–32.
Katz, L. F., & Margo, R. A. (2013). Technical change and the relative demand for skilled labor: The United States in historical perspective. Tech. Rep., NBER Working Paper No. 18752, National Bureau of Economic Research.
Katz, Y. (2017). Manufacturing an artificial intelligence revolution. Available at SSRN 3078224.
Keynes, J. M. (2010). Economic possibilities for our grandchildren. Essays in persuasion. London: Palgrave Macmillan, 321–332.
Khanna, S., Sattar, A., & Hansen, D. (2013). Artificial intelligence in health – The three big challenges. Australasian Medical Journal, 6(5), 315–317.
Klaus Schwab (2016). The fourth industrial revolution. World Economic Forum 2016.
Kling, R. (1991). Computerization and social transformations. Science, Technology, & Human Values, 16(3), 342–367.
Knight, W. (2017). "The dark secret at the heart of AI." Intelligent Machines. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/. Accessed 05.04.19.
Koivisto, J., & Hamari, J. (2019). The rise of motivational information systems: A review of gamification research. International Journal of Information Management, 45, 191–210.
Krishen, A. S., Raschke, R. L., Close, A. G., & Kachroo, P. (2017). A power-responsibility equilibrium framework for fairness: Understanding consumers' implicit privacy concerns for location-based services. Journal of Business Research, 73, 20–29.
Kumar, S. L. (2017). State of the art-intense review on artificial intelligence systems application in process planning and manufacturing. Engineering Applications of Artificial Intelligence, 65, 294–329.
Kumar, V., & Shaphali, G. (2016). Conceptualizing the evolution and future of adver-
ging research themes and research agenda. International Journal of Information tising. Journal of Advertising, 45(3), 302–317.
Management, 49, 114–129. Kusiak, A. (1987). Artificial intelligence and operations research in flexible manu­
Hughes, D. L., Dwivedi, Y. K., & Rana, N. P. (2017). Mapping IS failure factors on facturing systems. INFOR: Information Systems and Operational Research, 25(1), 2–12.
PRINCE2® stages: An application of interpretive ranking process (IRP). Production Kwong, C. K., Jiang, H., & Luo, X. (2016). AI-based methodology of integrating affective
Planning & Control, 28(9), 776–790. design, engineering, and marketing for defining design specifications of new pro­
Hughes, D. L., Dwivedi, Y. K., Rana, N. P., & Simintiras, A. C. (2016). Information systems ducts. Engineering Applications of Artificial Intelligence, 47, 49–60.
project failure – Analysis of causal links using interpretive structural modelling. Langton, J. (1984). The industrial revolution and the regional geography of England.
Production Planning & Control, 27(16), 1313–1333. Transactions of the Institute of British Geographers, 9(2), 145–167.
Hughes, L., Wang, X., & Chen, T. (2012). A review of protocol implementations and Lassau, N., Estienne, T., de Vomecourt, P., Azoulay, M., Cagnol, J., Garcia, G., …, &
energy efficient cross-layer design for wireless body area networks. Sensors, 12(11), Cotten, A. (2019). Five simultaneous artificial intelligence data challenges on ultra­
14730–14773. sound, CT, and MRI. Diagnostic and Interventional Imaging, 100(4), 199–209.
Hurley, K. (2017). 11 artificial intelligence tools, transforming the B2B sales world. https:// Laudon, C. K., & Laudon, J. P. (2017). Management information systems: Managing the di­
nudge.ai/artificial-intelligence-sales-tools/. Accessed 01.10.18. gital firm (15th ed.). New Delhi: Pearson Education India.
IDC (2019). Automation and Customer Experience Needs Will Drive AI Investment to $5 Billion Lee, I. (2017). Big data: Dimensions, evolution, impacts, and challenges. Business
by 2019 Across European Industries. Accessed 01.08.19. https://www.idc.com/getdoc. Horizons, 60(3), 293–303.
jsp?containerId=prEMEA44978619. Lee, I., & Lee, K. (2015). The Internet of Things (IoT): Applications, investments, and
Ilavarasan, P. V. (2018). Automation and workforce in India: Terrible consequences or challenges for enterprises. Business Horizons, 58(4), 431–440.
impossible? In H. Galperin, & A. Alarcon (Eds.). The future of work in the global south Lee, J. H. (2002). Artificial intelligence-based sampling planning system for dynamic
(pp. 16–21). International Development Research Centre. manufacturing process. Expert Systems with Applications, 22(2), 117–133.
Artificial Intelligence: The Public Policy Opportunity. Accessed on July 25th 2019. Li, B. H., Hou, B. C., Yu, W. T., Lu, X. B., & Yang, C. W. (2017). Applications of artificial
https://www.intel.ai/solutions/artificial-intelligence-the-public-policy- intelligence in intelligent manufacturing: A review. Frontiers of Information Technology
opportunity/#gs.wu66jy. & Electronic Engineering, 18(1), 86–96.
Ismagilova, E., Hughes, L., Dwivedi, Y. K., & Raman, K. R. (2019). Smart cities: Advances Linden, A., & Fenn, J. (2003). Understanding Gartner's hype cycles. Strategic Analysis Report
in research—An information systems perspective. International Journal of Information N° R-20-1971. Gartner, Inc.
Management, 47, 88–100. Liu, H., Ke, W., Wei, K. K., & Hua, Z. (2013). The impact of IT capabilities on firm per­
ITU (2017). Measuring the information society report 2017: Volume 1. Geneva. formance: The mediating roles of absorptive capacity and supply chain agility.
Jai, T. M., Burns, L. D., & King, N. J. (2013). The effect of behavioral tracking practices on Decision Support Systems, 54(3), 1452–1462.
consumers’ shopping evaluations and repurchase intention toward trusted online Liu, J., Qi, Y., Yang Meng, Z., & Fu, L. (2017). Self-learning Monte Carlo method. Physical
retailers. Computers in Human Behavior, 29(3), 901–909. Review B, 95. https://doi.org/10.1103/PhysRevB.95.041101 041101(R).
Jain, P. K., & Mosier, C. T. (1992). Artificial intelligence in flexible manufacturing sys­ Liu, Y., Chan, C., Zhao, C., & Liu, C. (published online 2018). Unpacking knowledge
tems. International Journal of Computer Integrated Manufacturing, 5(6), 378–384. management practices in China: Do institution, national and organizational culture
Janssen, M., & Kuk, G. (2016a). Big and Open Linked Data (BOLD) in research, policy and matter? Journal of Knowledge Management. https://doi.org/10.1108/JKM-07-2017-
practice. Journal of Organizational Computing and Electronic Commerce, 26(1–2), 3–13. 0260.
Janssen, M., & Kuk, G. (2016b). The challenges and limits of big data algorithms in Löffler, M., & Tschiesner, A. (2013). The Internet of things and the future of manufacturing.
technocratic governance. Government Information Quarterly, 33(3), 371–377. McKinsey & Company Accessed in April 2019. https://www.mckinsey.com/business-
Janssen, M., Matheus, R., & Zuiderwijk, A. (2015). Big and Open Linked Data (BOLD) to functions/digital-mckinsey/our-insights/the-internet-of-things-and-the-future-of-
create smart cities and citizens: Insights from smart energy and mobility cases. Paper manufacturing.
presented at the Proceedings of the 14th IFIP Electronic Government (EGOV) conferences. London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy
Janssen, M., Van de Voort, H., & Wahyudi, A. (2017). Factors influencing big data de­ versus explainability. Hastings Center Report, 49, 15–21.
cision-making quality. Journal of Business Research, 70(1), 338–345. Loring, Evan (2018). How AI will help sales representatives. https://readwrite.com/2018/
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis 09/27/how-ai-will-help-sales-representatives/. Accessed 01.10.18.
in organizational decision making. Business Horizons, 61(4), 577–586. Loten, Angus (2017). AI to drive job growth by 2020: Gartner. https://blogs.wsj.com/cio/
Jonsson, A., & Svensson, V. (2016). Systematic lead time analysis. Chalmers University of 2017/12/15/ai-to-drive-job-growth-by-2020-gartner/. Accessed 10.01.18.
Technology Accessed April 2019. http://www.publications.lib.chalmers.se/records/ Ma, A. (2018). A psychological ailment called ‘hikikomori’ is imprisoning 500,000
fulltext/238746/238746.pdf. Japanese people in their homes — and it's more of a threat than ever. Business Insider.
Joy, B. (2000). Why the future doesn’t need us. Wired Magazine, 8(4), 238–262. MacKenzie, D. (1999). The certainty trough. In R. Williams, W. Faulkner, & J. Fleck
Juniper Research (2018). AI in retail. segment analysis, vendor positioning & market forecasts (Eds.). Exploring expertise: Issues and perspectives. London: MacMillan Press.
2019–2023. Accessed June 2019. https://www.juniperresearch.com/researchstore/ Macknik, S. L., & Martinez-Conde, S. (2011). Sleights of mind. Surrey, UK: Picador.
fintech-payments/ai-in-retail. Mahroof, K. (2019). A human-centric perspective exploring the readiness towards smart
Kahn, C. E. (2017). From images to actions: Opportunities for artificial intelligence in warehousing: The case of a large retail distribution warehouse. International Journal
radiology. Radiology, 285(3), 719–720. of Information Management, 45, 176–190.
Kahneman, D. (2011). Thinking, fast and slow. London, UK: Macmillan. Makridakis, S. (2018). Forecasting the impact of artificial intelligence, Part 3 of 4: The
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. potential effects of AI on businesses, manufacturing, and commerce. Foresight: The
Econometrica, 47(2), 263–292. International Journal of Applied Forecasting, (49), 18–27.
Kang, J. S., Kuznetsova, P., Luca, M., & Choi, Y. (2013). Where not to eat? Improving Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., …, & Sanghvi, S. (2017).
public policy by predicting hygiene inspections using online reviews. Proceedings of Jobs lost, jobs gained: Workforce transitions in a time of automation. McKinsey Global
the 2013 conference on empirical methods in natural language processing, 1443–1448. Institute.

45
Y.K. Dwivedi, et al. International Journal of Information Management xxx (xxxx) xxxx
