
Collaboration in the Machine Age: Trustworthy Human-AI Collaboration

By Liana Razmerita1, Armelle Brun2, Thierry Nabeth3

1 [email protected], Copenhagen Business School, Denmark


2 [email protected], Université de Lorraine, CNRS, France
3 [email protected], P-Val Conseil, France

Abstract
Collaboration in the machine age will increasingly involve collaboration with Artificial
Intelligence (AI) technologies. This chapter aims to provide insights into the state of the art of
AI developments in relation to human-AI collaboration. It presents a brief historical overview
of developments in AI and three different forms of human-AI collaboration (e.g. via
conversational agents), and introduces the main areas of research in relation to human-AI
collaboration and its potential pitfalls. The chapter discusses the emergent multifaceted role
of AI for collaboration in organizations and introduces the concept of trustworthy human-AI
collaboration.

1. Introduction
Artificial Intelligence (AI) has been a field of study for more than six decades. It is concerned
with embedding intelligent behavior in artifacts and with how intelligent behavior is generated
and learned. One of AI's long-term objectives is the development of machines that do the
same things as humans, possibly even better (Nilsson, 1998).

Building such AI systems has been a daunting, complex and controversial task, as its main goal
has been to understand intelligent behavior, emotions and cognition in order to instill them in
machines. This ambitious scientific goal was emphasized by James Albus, as paraphrased by
Nilsson (1998) in his introductory chapter:

“understanding intelligence involves understanding how knowledge is acquired, represented,
and stored; how intelligent behavior is generated and learned; how motives, and emotions,
and priorities are developed and used; how sensory signals are transformed into symbols; how
symbols are manipulated to perform logic, to reason about the past, and plan for the future;
and how the mechanisms of intelligence produce the phenomena of illusion, belief, hope, fear,
and dreams and yes even kindness and love”.

AI was traditionally a science and engineering domain “concerned with the theory and
practice of developing systems that exhibit the characteristics associated with intelligence in
human behavior, such as perception, natural language processing, problem solving and
planning, learning and adaptation, and acting on the environment”. AI systems were
developed as components of other more complex applications by adding intelligence in
various ways (e.g. reasoning, learning, adapting) (Tecuci, 2012).

However, in recent decades, new AI developments towards a more numerical, data-
driven approach have been made possible by:
 The availability of a large amount of data (big data) that can be used to discover patterns
(e.g. digital traces of user activities available from social media platforms, Google or
other digital platforms).
 The availability of huge and affordable computing power (e.g. graphical processors) and
storage capabilities (e.g. cloud).
 New advances in machine learning algorithms, associated tools and data science that
can be used to collect and analyze data (various tools and open-source libraries, e.g.
IBM Watson, Power BI, TensorFlow, Weka and Matlab, support these processes and
facilitate developing AI systems of varying complexity).
 Progress in the creation of agents or chatbots.

Such recent developments of AI open up new opportunities to integrate AI systems and data
technologies in various types of applications, for example, natural language processing,
human-machine interaction, information retrieval, graphics and image processing or robotics
and provide new opportunities for businesses to innovate and derive value in new ways. As a
result, in the past decade we have witnessed an explosion of the use of AI in a wide range of
sectors such as healthcare, services, education, mobility, or commerce, which promises to
revolutionize these sectors in the future by automating existing processes and enabling or
inventing totally new applications.

Furthermore, leveraging AI systems can make data smart by developing new ways to process
data, going beyond the use of analytics in organizations. Conversely, AI systems integrate AI
technologies (e.g. machine learning algorithms) that learn from data and inform
decisions. The use of AI within a business context has led to the development of a new set of
terminologies such as business intelligence, cognitive computing or computational
intelligence (Davenport & Ronanki, 2019). AI technologies offer new possibilities for the
relationship between humans and machines with respect to performing work tasks on digital
platforms, and for the effective design and governance of platforms (Rai, Constantinides, &
Sarker, 2019).

AI plays an increasing role in knowledge collaboration, through facilitation of human-AI
interaction: conversational agents in the form of assistants and chatbots, and personalization
using algorithms or recommender systems. The emergence of big data and data science will
yield unprecedented data-driven decisions and business insights driven by artificial
intelligence, algorithms and machine learning techniques.

According to Davenport and Ronanki (2019), AI will support three important business needs:
1. automating business processes,
2. gaining insights through data analysis and
3. engaging with customers and employees.

The use of AI in a business context is also associated with the term business intelligence, in
particular when AI systems are used to gain insights and support decisions based on data
analysis. This data-driven intelligence can be used to build informed decisions and business
strategies. We provide some examples of how artificial intelligence collaborates with humans
in the different cases below:
 In the service sector, AI is used to assist call centers by proposing personalized
conversational agents that can answer the more basic questions 24/7, thus
alleviating the workload of call center agents. They are considered a next-
generation paradigm for customer service: arguably, this technology allows employees to
focus their time and energy on more complex customer problems and helps eliminate rote
work that might be repetitive and tedious.
 In the mobility sector, artificial intelligence is used to provide automatic driving assistance,
and in the future will be used to drive autonomous cars or drones, thus enabling
goods to be delivered more easily.
 In the healthcare sector, AI is already used on a large scale for analyzing radiological
images and diagnosing cancer in collaboration with expert doctors. In the future, AI will
be used to monitor health and provide a continuous, personalized and just in time
healthcare assistance to every citizen, preventing the development of diseases.
 In the e-commerce sector, AI intervenes in the analysis of customer behavior, anticipates
their needs and provides recommendations in order to manage their attention toward
items of interest and eventually persuade them to buy them. Using data from different
sources, AI systems may also support the process of managing inventories.
 In e-education, AI is used to capture and understand students’ learning strategies, level
of knowledge, etc., and can be used to guide them toward personalized learning goals. AI can
also be used to encourage learners to interact with peers, through group collaboration,
to improve learning outcomes and increase motivation, attendance, etc.

Recent pop culture has also developed new science-fiction scenarios of AI use. Movies like
“Her” have popularized the idea that an AI operating system can appear human and even
develop relationships with humans. TV series like “Black Mirror” create a
rather dystopian view of how algorithms and AI technology may impact human relationships
and our civilization. The documentary The Social Dilemma showcases how AI and user profiling
based on social media data (Facebook) have been used to influence elections. The
documentary also highlights the danger of misusing personal data to influence users and
manipulate them through targeted interventions. At this level of AI development, it
becomes particularly important to address trustworthy collaboration with AI and the ethics
of AI.

Within this chapter, we focus on applications to areas where AI can support collaboration
with humans (e.g. by engaging with customers and employees) in different forms of
personalized interaction, attention management and persuasion.

Collaboration in this context refers to the process of humans and AI systems working together
to pursue a common objective. Indeed, in the machine era, AI goes well beyond mere ICT as
passive tools (e.g. word processors) that are controlled by the user and merely help the user
in the realization of a task. More specifically, AI systems have certain cognitive capabilities
(perception, interpretation, plan making, execution and learning) as well as a level of
autonomy, and are driven by goals that can be pursued without direct human supervision.
An agent may play the role of an assistant at the service of the user, but it can also be
autonomous and take initiative, and even serve the goals of other actors (such as the
company offering this agent as a service).
The objective of this chapter is to look at the use of AI to support collaboration, and at its
future development. This is increasingly important as recent developments in AI are opening up
new avenues for developing new capabilities that will impact behaviors, organizing and work in
general (Faraj, Pachidi, & Sayegh, 2018; Leonardi, 2021). As organizations integrate AI
systems, collaboration with AI is emerging in different scenarios of knowledge work.
Trustworthy AI looks at the different elements that intervene in this collaboration and the
associated challenges.

We distinguish between three types of collaboration with AI:

1. Human-computer collaboration where AI is embedded
2. Human-AI collaboration (or conversational AI)
3. Human-human collaboration where AI can intervene

These three types of collaboration will be discussed in detail in section 3. This chapter
consists of the following sections: the second part presents a brief introduction to and
historical overview of artificial intelligence. The third part presents how AI can be applied to
support collaboration at three levels: enhancing the collaborative process and making it more
fluid; providing more advanced and proactive mechanisms supporting the collaborative and
social processes (trust, motivation, stimulation); and informing the design of more
collaborative organizations. The fourth part briefly overviews challenges and ethical issues of
the use of AI (e.g. privacy, impact on society), followed by a conclusion.

2. Artificial Intelligence: an overview

2.1 The role of AI: definitions and a short historical overview

The term AI is difficult to define in a unified way that reaches consensus among different fields
and application domains. As mentioned in the introduction, AI encompasses a
cluster of computing technologies including intelligent agents, machine learning, natural
language processing and decision-making supported by algorithms (Tredinnick, 2017).

The term "Artificial Intelligence" was first coined by John McCarthy together with other AI
influential scholars, including (e.g. Allen Newell, Marvin Minsky, Herbert Simon) in a 1956
workshop held at Dartmouth.

Although the discipline of Artificial Intelligence (AI) was created more than 60 years ago, its
exact definition has been the subject of numerous debates and has embraced a number of
concepts and objectives (Haenlein & Kaplan, 2019).

Early AI systems were designed in symbolic programming languages (e.g. Lisp, Prolog,
Scheme) or using agent-based architectures. In the early stages, AI was divided into different
isolated subfields such as: natural language processing, knowledge representation, problem
solving and planning, machine learning, robotics and computer vision. Given the expectation
that AI can help create machines that think and learn and even surpass human intelligence, it
was not surprising that many of the early AI systems failed to scale up and solve complex real-
world problems or to exhibit real intelligence (Tecuci, 2012).
According to Haenlein & Kaplan (2019), AI is defined as “a system’s ability to interpret external
data correctly, to learn from such data, and to use those learnings to achieve specific goals
and tasks through flexible adaptation”. This definition seems to relate to the recent Machine
Learning perspective, since “learning” is not a necessary characteristic of all artificial
intelligence systems.

In this chapter, we define artificial intelligence informally as systems (e.g. algorithms, robots)
with a high level of autonomy, aiming at assisting, guiding or automating human tasks. We
argue that the term AI is overused and that not all simple systems can be
classified as AI. Hence, not all algorithms are AI, but all AI systems are based on algorithms.

Historically AI has been associated with the design of an “artificial general intelligence” aiming
at replicating human intelligence, and its ability to solve a broad range of problems. Modern
AI dates back to the Turing test, which was originally developed in 1950 by Alan Turing. It was
one of the first attempts to embed human intelligence in a system. The challenge was to
create a system that “could think”, that answered questions similar to the way a human
would, and ideally could not be differentiated from a human. This was dubbed the “imitation
game”.

Turing’s test is fundamental but also controversial, as it reduces intelligence to a conversation.
The test is considered passed by the machine if the machine’s answers cannot be
distinguished from answers given by humans. Later, in 1963, Allen Newell and Herbert A.
Simon developed the idea that the mind can be viewed as a system that manipulates bits of
information according to formal rules. They proposed the idea that "symbol manipulation"
was the essence of both human and machine intelligence. In its earlier phases, AI consisted of
programs based on symbolic reasoning or on rules (if ... then).

A further step in AI development, beyond “Can machines think?”, is the problem of
consciousness or intentionality. A mind usually has thoughts, ideas, plans or goals. Several
questions have been addressed by AI researchers and philosophers: “Can machines have a
mind?” or “Can machines have consciousness?” If we assume that AI will become similar
to humans and thus imitate human characteristics, other questions follow,
e.g., “Can machines have emotions?”, “Can machines be self-aware?”, “Can machines be
creative?”, “Can machines have a soul?” or “Can machines be hostile?”

It has become important to address these questions as the idea that AI can become self-
sufficient, autonomous, and make its own decisions has become popular in recent years. The
concept of “singularity” was introduced to present a vision of technological developments
and an “intelligence explosion”, or superintelligence embodied in machines, which could
become dangerous for humanity. It has been promoted both by science-fiction writers such
as Vinge (1993) and by famous scientists and celebrities (Stephen Hawking and Elon
Musk).

2.2. AI and agents


An agent is defined as a knowledge-based system that perceives its environment (which may
be the physical world or other agents, or a complex environment); reasons to interpret
perceptions, draw inferences, solve problems, and determine actions; and acts upon that
environment to realize a set of goals or tasks for which it has been designed. Agents may
continuously improve their knowledge and performance through learning based on the
interaction with other agents or users, or based on other types of data (Tecuci, 2012).

In the past, various types of agents have been designed. ELIZA was one of the first computer-
enabled technologies designed to build some sort of human-technology interaction
(Weizenbaum, 1966).

The development of AI is linked to the development of different types of intelligent agents
that perform different functions in a Society of Mind (Minsky, 1986). The society of mind
theory views the human mind and any other naturally evolved cognitive system as relying
on individually simple processes known as agents. This theory developed as a collection
of essays and ideas that Minsky started writing in the early 1970s. These agents cooperate
much as people do in a society.

Historically, agents have been part of the AI endeavor, designed to converse (chatbots),
provide information, entertain, support humans in various tasks (e.g. learning), or guide or
persuade users. Rosalind Picard introduced the concept of affective computing as an
additional endeavor for achieving genuine intelligence, by considering the role of emotions.
Affective computing recognizes the role of emotions and tries to give computers the ability to
recognize, express and understand emotions (Picard, 1997).

2.3 Beyond Modern AI

AI currently relies primarily on a data-driven approach (e.g. machine learning); however, in
the past many systems were developed based on a symbolic approach (e.g. expert
systems or rule-based systems). In some cases, AI consisted of the automation by a
machine of the reasoning of human experts in specific domains (cf. expert systems), the
realization of complex planning tasks (cf. constraint programming), or the modelling
of knowledge and associated cognitive processes. It has integrated sophisticated mechanisms
aiming at solving complex problems in novel ways (e.g. genetic algorithms inspired by
evolutionary principles, or fuzzy logic) based on mechanisms such as emergence and
adaptation.

More recently, artificial intelligence has been associated with machine learning. However,
machine learning is just a subset of AI. Machine learning enables AI to learn and to adapt by
processing sizeable amounts of data and to automatically identify patterns that the system
will be able to use to solve problems in similar situations. Machine learning algorithms have
a rather narrow scope and limited capabilities. Nowadays, deep learning allows the use of both
numerical and symbolic data, such as in recommender systems (Zhang, Yao, Sun, & Tay,
2019).

From a usage perspective, artificial intelligence has been considered both as a means (1) to
augment human cognitive capabilities (e.g. helping humans sense, interpret the context,
make plans, and implement decisions), and (2) to automate the human process completely
(replacing intellectual operations conducted by humans with machines). In the first case, AI
maintains the role of humans at the center of the decision loops, whereas in the second case
it makes the human superfluous.

3. The role of AI for Collaboration

Collaboration is an integral part of organizational working and learning practices. In recent
years, new forms of collaboration have emerged due to new collaboration
technologies, a continuing trend of globalization and the global-scale adoption of hybrid or
remote work. Digital collaboration can be defined as the articulation of personal knowledge into
collective wisdom made possible via a diversity of digital platforms, including enterprise social
media (e.g. blogs, micro-blogs and wikis) (Razmerita et al., 2014), collaborative platforms (e.g.
GoogleDocs, Dropbox) or, more recently, even AI technologies (e.g. enterprise AI
platforms such as Grace1).

Collaboration can be established at different levels. According to Wikipedia, collaboration is
the process of two or more people, entities or organizations working together to complete a
task or achieve a goal.

Collaboration can consist of two individuals interacting (sharing information,
contributing to the production of something) in order to produce an output more effectively.
Collaboration can also extend to the interaction of a group of people aiming at the production
of a common good belonging to this group. Collaboration may also be considered at a more
global (societal) level, at which the members of a society may contribute to the realization
of a common good. An example of this form of collaborative innovation for the common good
is the production of a new protocol of care or the development of the Covid-19 vaccines; in
fact, AI has been used in the development of the Covid-19 vaccines.

Different factors can foster or hamper collaboration. AI can be used to build predictive models
that assess the likelihood that a user pursues certain actions (e.g. dropping out of a course) and
the user’s intentions, such as the intention to engage or not in collaboration. Predictive
models can be built taking into account various factors (e.g. independent variables that
influence a dependent variable, individual and communal factors). Prior research has outlined
important factors that contribute to the intention and decision to engage in collaboration in
a digital environment. Engagement in collaboration may be influenced by the goals that are set
and expectations of collaboration arising from previous experiences, but also by the
perceived ability to work in groups and the peers’ attitudes towards collaboration (Razmerita
et al., 2020).
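To make this concrete, the sketch below shows how such a predictive model might be built. It
is a minimal illustration only: the three predictor variables mirror the factors named above
(previous experience, perceived ability to work in groups, peer attitudes), and the data are
synthetically generated rather than drawn from any of the cited studies.

```python
# Illustrative sketch: a predictive model of engagement in collaboration.
# Feature names and data are hypothetical, standing in for the individual
# and communal factors discussed above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 500

# Hypothetical independent variables per user (1-5 scale): prior
# collaboration experience, perceived ability to work in groups,
# and perceived peer attitude towards collaboration.
X = rng.uniform(1, 5, size=(n, 3))

# Hypothetical dependent variable: did the user engage in collaboration?
# Here engagement probability grows with all three factors.
logits = 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.9 * X[:, 2] - 7.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The fitted coefficients indicate how strongly each factor is associated
# with the intention to engage in collaboration.
print("coefficients:", model.coef_)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```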

Trust represents one of the most important factors, among those acknowledged in the
literature, for enabling collaboration. Trust is the expectation individuals have of the behaviors
of others, and in particular of the cooperative behaviors of others. Virtual agents can play a key
role in a value creation process and the establishment of engagement and online trust
(Castellano, Khelladi, Charlemagne, & Susini, 2018). AI can help to construct trust through
reputation mechanisms or recommendations (Kunkel, Donkers, Michael, Barbu, & Ziegler, 2019).

1
https://2021.ai/offerings/grace-enterprise-ai-platform/

3.1 Human-computer collaboration where AI is embedded

Collaboration with AI can be seamless or “automated” when content is customized or
personalized through algorithms (e.g. recommender systems). This area has been the subject
of research for many years by scholars in different fields, including artificial intelligence and
computer science, focusing on user modelling, recommender systems and, more recently,
data science and digital marketing. Personalization aims to give users, customers, and employees
a web experience with the highest relevancy possible, but also to achieve socially intelligent
behavior. Personalization is achieved through programs or algorithms that take into account
individual users’ preferences and behaviors as well as context.
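As an illustration of how such algorithmic personalization can work, the following sketch
implements a very small user-based collaborative filter. The rating matrix, the neighbourhood
size and the similarity measure (cosine) are all illustrative choices, not a description of any
particular production recommender.

```python
# Minimal sketch of algorithmic personalization via user-based
# collaborative filtering; the rating matrix is hypothetical.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows = users, columns = items; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def recommend(user: int, k: int = 2) -> int:
    """Return the unrated item with the highest similarity-weighted
    score from the k most similar users."""
    sim = cosine_similarity(ratings)[user]
    sim[user] = 0.0                       # exclude the user themself
    neighbours = np.argsort(sim)[-k:]     # k most similar users
    scores = sim[neighbours] @ ratings[neighbours]
    scores[ratings[user] > 0] = -np.inf   # only recommend unseen items
    return int(np.argmax(scores))

print("recommend item", recommend(user=0))  # leans on users with similar taste
```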

The objective of personalization is to improve the efficiency of the interaction with users for
the sake of simplification and to make complex systems more usable. Personalization is
particularly important in e-business. On the one hand, consumers expect personalized
interaction with online retailers, but on the other hand, personalization is perceived to
conflict with the desire for privacy. Personalization relies on the collection and the
exploitation of personal data which may raise serious privacy concerns and lead to a
“personalization-privacy paradox” (Kobsa, Cho, & Knijnenburg, 2016).

Figure 1 presents a taxonomy of personalization techniques. This taxonomy summarizes
different forms of personalized human-computer interaction (what can be personalized?) and
the elements that contribute to supporting it (how to achieve this). Personalization relies on
different elements such as the user’s preferences or other characteristics of the user (e.g.
age, gender, culture, demographic profile, personality traits that could be captured from
available data or through user profiling) and/or context (e.g. location). Users’ profile data may
also include biometric information (e.g. fingerprints, iris scans) or medical data that can easily
be captured and stored using different apps and devices (e.g. cell phones, smart watches).
Such big data sets may be stored and mined for different purposes (e.g. personalization,
predictions, or even unexpressed desires). They may be used to create a digital identity (e.g.
to create digitized human clones) and offer a variety of personalized services for users, but
access to these data sets raises privacy concerns for users.

Personalization can be implemented in different forms that include personalization of
structure, content, modality, presentation, attention support, and persuasion. User profiling
is a form of user modeling based on users’ data or digital traces of interaction with the
system, using a multitude of methods as presented in (Brun, Boyer, & Razmerita, 2010). It is a
way to bypass the lack of information provided by users and allows one to personalize
the interaction with applications that adapt to user needs and accommodate their
preferences (Razmerita et al., 2012). Context awareness allows AI systems (e.g. agents or
other intelligent systems) to adapt to the environment and to the users’ characteristics and
needs. Furthermore, it is an important element for integrating intelligence or intelligent
behavior.
Figure 1. A taxonomy of personalization techniques (Razmerita et al., 2012)

Forms of personalization can be implemented as a form of social information access. Social
information access is defined as a stream of research that explores “methods of organizing
the past interactions of a users’ community in order to provide better access to information for
future users” (Brusilovsky & He, 2018).

Personalization of structure refers to the way in which the hypermedia space is structured
and presented to different groups of users. Personalized interaction can be agent-based
or automatic (system-initiated, using algorithms). Agents can be conversational or embodied
agents (with or without anthropomorphic features). The physical aspect of embodied agents
can also be selected (for instance, it can be close to a human agent, to provide a certain
“human touch”). Personalized interaction within computer-based systems (e.g.
learning environments) is often designed around agents or even multi-agent systems that
can interact autonomously.

Agent architectures are implemented to collaborate with users or carry out tasks on their
behalf, based on knowledge about users, their goals or desires. Agents can intervene in
supporting different forms of human-computer collaboration through personalized
interventions. For example, pedagogical agents have been designed to support learning
processes taking into account students’ characteristics, perceived needs in relation to learning
objectives, and emotions (Brna, Cooper, & Razmerita, 2001). Some early prototypes of such
agents were implemented in the KInCA system (Knowledge Intelligent Conversational
Agents) described in (Angehrn, Nabeth, Razmerita, & Roda, 2001; Razmerita, Nabeth,
Angehrn, & Roda, 2004). KInCA is designed as an agent-based architecture to support the
adoption of knowledge-sharing practices within organizations. Several expert agents are
assigned different roles, such as interacting with the user to diagnose the user’s state or
implementing persuasion strategies such as storytelling. KInCA relies on the idea of offering
personalized user support. The system observes the user's actions and, whenever
appropriate, makes suggestions, introduces concepts, proposes activities that support the
adoption of the desired behaviors. Conversational agents aim at providing personalized
guidance through the whole adoption process; from the introduction of the behaviors to the
user (e.g. explaining what the desired behaviors are and why they should be adopted) to their
practice within the community. Such conversational agents are designed to fulfill the role of
change agents as they motivate people to learn and adopt new behaviors (C. Roda, Angehrn,
Nabeth, & Razmerita, 2003) using different strategies at different stages based on users’
activity. Change agents implement different forms of persuasion. Persuasion strategies are
associated with interventions that may include tracking and monitoring users’ activity,
sharing, social support, but also gamification strategies (e.g. competition, comparison or
rankings). Persuasion technologies are currently used in many different domains including
education, commerce, healthcare and well-being (F. A. Orji, Oyibo, Greer, & Vassileva, 2019;
R. Orji & Moffatt, 2018).

Agents have also been designed to better manage users’ attention in a social context
through various interventions. Based on observation of users, agents’ interventions include:
guidance on the use of the platform, reminders about the completion of the user’s profile,
notification of approaching deadlines, tracking inattention, identifying bursts of attention by
the community as a whole, suggesting the adoption of practices (e.g. opening up to others) or
encouraging pro-social behaviors (Claudia Roda & Nabeth, 2008).

Different forms of personalization, including attention support and persuasion, are
increasingly used in data-driven digital marketing. Personalization of content is effective in
capturing the attention of customers and tailoring communication to specific customers. It
can be combined with predictive models to find the best time to send communication to specific
customers (termed “mechanical AI”). Personalized communication can leverage the effect of
persuasion and increase the chance of cross-selling opportunities. For example, companies
delivering flowers can remind users of birthdates and special events that can be accompanied
by a nice bouquet of flowers. Furthermore, other associated items (e.g. chocolates or
champagne) may be recommended to them. The analysis of the card texts that accompany
the flowers is used to detect emotions and make specific recommendations for gifts that fit
the specific occasion. However, these algorithms require the handling of large amounts of data.
AI use in e-commerce is associated with new flavors of AI, “feeling AI” and “thinking AI”, and
the implementation of emotional commerce. AI can thus contribute not only to competitive
advantage but also to an enhanced customer experience (Williamson & Akeren, 2021).
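A minimal sketch of how such text-based occasion or emotion detection could feed a gift
recommendation is shown below; the training examples, labels and gift mapping are entirely
invented for illustration and do not reflect any specific company's system.

```python
# Hypothetical sketch: detecting the occasion/emotion behind a card text
# and mapping it to a gift recommendation. Training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "happy birthday to the best friend ever",
    "congratulations on your wedding day",
    "so sorry for your loss, thinking of you",
    "get well soon, we miss you",
]
occasions = ["birthday", "wedding", "condolence", "get_well"]

gift_for = {"birthday": "chocolates", "wedding": "champagne",
            "condolence": "white lilies", "get_well": "fruit basket"}

# TF-IDF features feeding a naive Bayes text classifier.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, occasions)

card = "wishing you a wonderful birthday full of joy"
occasion = model.predict([card])[0]
print(occasion, "->", gift_for[occasion])   # e.g. birthday -> chocolates
```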

3.2 Human- AI collaboration (or conversational AI)

Human-AI collaboration can take place through the use of conversational agents or AI-based
digital assistants. Digital assistants have a certain degree of interactivity and intelligence that
helps users perform tasks. AI-based assistants rely on a conversational user interface
(e.g. speech-based, text-based or visual input) for receiving input and delivering output to
users on the basis of natural language processing or machine-learning algorithms. AI-based
assistants have a representation of the domain knowledge and the capability to acquire new
knowledge using machine learning algorithms (Maedche et al., 2019).

They are not only a next-generation paradigm for customer service and everyday domestic
interactions: AI assistants are increasingly integrated in workflows and interactions on
enterprise platforms (e.g. Teams, Slack) and thus shape the future of work and collaboration.

Different platforms incorporate AI in the form of assistants: Siri by Apple, Alexa by Amazon,
Google Home by Google or Cortana by Microsoft. These assistants combine natural language
processing capabilities with AI-powered search and the Internet of Things (IoT). Conversational
AI coupled with AI-powered search assists users in various ways, including finding favorite
music tunes, informing them on demand about the weather forecast, activating smart home
features or searching for information on the web (Schmidt, Alt, & Zimmermann, 2021).

Chatbots represent an increasingly popular technology for supporting customer service. “A
chatbot is a virtual person who can effectively talk to any human being using interactive
textual as well as verbal skills” (Trivedi & Thakkar, 2019). A chatbot can be designed and
integrated in a system to fulfil different purposes, e.g., customer support or augmenting
the software development process in open-source communities (see Github2). More recently,
chatbots have even been used to support well-being or to digitize dead persons and assist the
grieving process (Brown, 2021).
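As a simple illustration of one common way such customer-service chatbots are built, the
sketch below matches a customer message to the closest known question by TF-IDF cosine
similarity and falls back to a human agent below a confidence threshold; the FAQ entries and
the threshold value are hypothetical.

```python
# Minimal sketch of a retrieval-based customer-support chatbot.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = [
    ("how can I reset my password", "Use the 'Forgot password' link on the login page."),
    ("what are your opening hours", "Our support line is open 24/7."),
    ("how do I track my order", "You can track your order under 'My orders'."),
]

questions = [q for q, _ in faq]
vectorizer = TfidfVectorizer().fit(questions)
q_vectors = vectorizer.transform(questions)

def reply(message: str, threshold: float = 0.2) -> str:
    """Answer with the best-matching FAQ entry, or escalate to a human."""
    sims = cosine_similarity(vectorizer.transform([message]), q_vectors)[0]
    best = sims.argmax()
    if sims[best] < threshold:
        return "Let me connect you to a human agent."
    return faq[best][1]

print(reply("I forgot my password, what can I do?"))
```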

Agents have started to take part in basic interactions on social media platforms. Agents, also
called chatbots, can be given different tasks, such as coordination of knowledge work,
brokerage of knowledge, communication and knowledge collaboration. These assistants are
designed to act or react as a human would in a dialogue session. Natural language processing,
including dialogue processing, is at the core of these assistants (Nuruzzaman & Hussain, 2018).
One remaining challenge in making the collaboration more intuitive and useful is to design
adaptive scenarios (Colace, de Santo, Lombardi, & Santaniello, 2019).

Furthermore, in order to behave in a human-like manner, different forms of human
intelligence need to be implemented. Among the different forms of intelligence (e.g. abstract,
practical, kinesthetic), social intelligence is particularly important, as it relates to “the ability
to get along with others and get them to cooperate with you”. The authenticity of agents
represents another important element to consider, as it is crucial in preventing
manipulation and in establishing cooperation and trust (Neururer, Schlögl, Brinkschulte, &
Groth, 2018).

3.3 Human – human collaboration where AI can intervene

In this section we provide an overview of the use of AI as systems designed to support
collaboration, and of how they can be used at three levels: enhancing the collaborative
process and making it more fluid; providing more advanced and proactive mechanisms
supporting collaborative social processes (e.g. trust, motivation, stimulation); and
informing the organization design of more collaborative organizations.

AI can also be used to change the workplace culture (e.g. to help design organizations
that are collaborative or to help organizations become more collaborative). The integration
of AI into workflows and business processes also contributes to the emergence of analytics-
driven organizations and transforms organizational culture towards a culture of analytics.
Such an organizational culture relies on data, rather than on the intuition and experience of
human managers. A culture of analytics fosters data literacy and uses insights from data to
support business decisions in various activities, including organizational design. We
outline below the emergent role of AI and analytics for collaboration and organizational
design.

2
Making a Github bot (https://www.geeksforgeeks.org/making-a-github-bot/)
3.3.1. Data science and analytics for organizational design

The availability of data and progress in machine learning have considerably augmented the
possibility to study and analyze the functioning of organizations, and to use tools to inform the
design of organizations in a more objective or scientific manner, i.e. by relying on factual data
analysis rather than only on mere human intuition and experience.
The term organizational design analytics has been introduced as a branch of computational
social science that combines data science and analytics methods for organizational design.
Computational social science, defined as the study of society and human behavior through
the prism of computational analyses and the sensing of human signals, is becoming more
and more of a reality, and it aims to be applied to the design and operation of more effective
organizations.
“Scientific” methods such as psychometric analysis (i.e. personality tests) have been used
for a long time, notably to help organizations recruit the right profiles. Their use has been
limited by the effort they require: respondents must fill out questionnaires, and recruiters
must employ staff possessing specific qualifications to analyze the results. Besides, employee
surveys and questionnaires have significant shortcomings, since employee self-reports are
often tainted with cognitive bias (Corritore, Goldberg, & Srivastava, 2020). The values and
beliefs that people proclaim may be significantly different from how they behave in real life
and who they really are. The validity of psychological analysis methods has also been
questioned for the inability to validate these methods in real-world settings and, more
specifically, to assess the reality of their predictive power.

Scholars and practitioners have started to make the link between AI and organizational design
(Morrison, 2015; Puranam & Clément, 2020) by adopting a data science approach to design
more effective organizations. This approach is referred to as “organizational analytics”
(Morrison 2015) or “organizational design analytics” (Puranam & Clément 2020). It relies on
the use of algorithms to analyze data from organizations and on using the results of this
analysis as input for building a better-performing organization. More specifically, Puranam &
Clément (2020) propose the use of a suite of methodologies and tools helping to better
perceive the organizational environments of a company, make predictions and experiment
quickly with new practices.

The term “Org 2.0” that they suggest refers to a radical evolution of organizational design:
it involves enabling designers to make much more sophisticated design decisions than in
the past, and moving away from merely copying other designs in favor of “haute couture”
designs specifically adapted to the situation. Their analytics-driven approach can be conducted at
three levels:
 perception, which is based on the combination of big data and traditional statistical
methods to capture the current situation;
 prediction, which is based on the application of machine learning and AI to Big Data, to
forecast what is going to happen;
 prototyping, which is based on agent-based modelling for testing hypotheses.

Morrison (2015) proposes a number of analytical tools for organization design teams and HR,
to provide them with a new and better way to design, transform and operate their
organizations.
A cultural analysis can be derived from the digital traces of people’s interactions (Corritore,
Goldberg & Srivastava 2020). Researchers have previously mined millions of e-mail messages
exchanged among employees of a high-technology firm to assess the cultural fit of its
employees and to monitor its evolution (Goldberg et al. 2015).

Applying artificial intelligence techniques such as natural language understanding (NLU)
to the information that employees provide in electronic communication (email, Slack
messages, and Glassdoor comments) offers new ways to gain insights into the culture of
an organization and into how people actually behave, rather than what they claim they do
(Corritore, Goldberg & Srivastava 2020).

AI provides the means to collect and analyze information much more effectively than humans
can. AI can also guide organizations in the formation of teams that are likely to collaborate
optimally, by encapsulating the expertise of potential members and making it easily available
for the composition of teams. Effective teams should be composed of people with different
profiles that complement each other’s strengths. More specifically, AI can help:
 To collect data: for instance, web agents are used to scrape social media data (e.g.
LinkedIn, Twitter, Glassdoor) that will be available for the analysis.
 To analyze and make sense of this data (profiling): for instance, machine learning
algorithms (supervised or unsupervised learning; classical machine learning or deep
learning) are used to analyze data that can be available in a large variety of forms (e.g.
numbers, natural language). Dimension reduction, natural language understanding, and
clustering are examples of means that are now available for data analysis (a minimal
sketch of this profiling step follows the list below).
 To provide guidance: AI systems, including analytics, recommender systems and expert
systems, can help organizational managers design more effective organizations or support
the transformation of organizations (fixing misfunctioning organizations, or the fusion of
organizations).
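Below is the minimal sketch announced in the list above, illustrating the profiling step:
hypothetical collaboration-related features per employee are standardized and clustered with
k-means. The features, data and number of clusters are invented for illustration.

```python
# Illustrative sketch of the profiling step: clustering employees by
# collaboration-related features. The features and data are invented.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical features per employee: messages sent per week,
# distinct collaborators, and share of cross-team interactions.
employees = np.column_stack([
    rng.poisson(40, 200),          # messages per week
    rng.poisson(8, 200),           # distinct collaborators
    rng.uniform(0, 1, 200),        # share of cross-team interactions
]).astype(float)

X = StandardScaler().fit_transform(employees)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Cluster centers (in standardized units) can be read as collaboration
# profiles, e.g. "hub", "local collaborator", "bridge between teams".
print(kmeans.cluster_centers_)
```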

3.3.2 Team members’ personality, team composition, sociology and collaborative culture

Information about people’s personality, group sociology or culture can be exploited to guide
the design or the transformation of more collaboration-effective organizations.

First, the personality of a team member as an individual has some impact on the likelihood
that this team member will collaborate with others. Good team players are often defined in
terms of traits like being dependable, flexible, or cooperative (Driskell, Goodwin, Salas, &
O’Shea, 2006). Different researchers have explored the link between the personality of team
members and teamwork effectiveness (Curşeu, Ilies, Vîrgă, Maricuţoiu, & Sava, 2019; Driskell
et al., 2006). For instance, they have examined, using the Big Five personality model, how
extraversion, agreeableness and conscientiousness can be associated with peer-rated
contributions to teamwork. A good level of extraversion can be positively associated
with collaboration since it favors the establishment of social interaction.
Agreeableness is particularly useful in situations involving interpersonal conflict, alleviating
conflicts that may arise in collaborative endeavors and interactions. And very conscientious
members are likely to be highly committed to group tasks. However, these traits should not
be present in excess, in which case they may have a disruptive effect (Curşeu et al. 2019). An
excessive level of extraversion may originate from a dominant personality, resulting in
interaction based on power relationships and competition rather than cooperation. Members
very high on conscientiousness may be perfectionists, overly focused on their individual goals
and prone to relationship tensions, whereas overly agreeable members may be too averse to
conflict, leading to a reduction in the quality of the interaction.

Second, at a collective level, the composition of a team will also impact the quality of
collaboration. The creation of highly effective teams therefore involves assembling
members with the expertise and competencies required to tackle the problems for which the
team was created, but also filling the different roles that are necessary when solving problems.
Belbin’s (1981) work on the composition of teams proposes that highly effective teams
should be built by assembling in the same team a combination of members having
preferences for fulfilling certain roles. For example, the Plant role describes a creative,
unorthodox generator of ideas, while the Teamworker role is about acting as the "oil"
between the cogs that keeps the machine that is the team running smoothly (Alberola, Del
Val, Sanchez-Anguix, Palomares, & Dolores Teruel, 2016; Mostert, 2015).

Finally, organizational design can take into consideration the social and cultural levels to make
organizations more collaborative. Social roles and social norms, which refer to the mostly
unwritten rules that govern how human agents interact with one another or in groups, also
have a strong influence on the way people collaborate. Some societies and organizations that
rely heavily on status impose strong constraints on the level of interaction,
for instance by limiting the expression of dissident views based on factors like seniority,
position, and gender. A lack of trust between the members of an organization may also
limit the willingness to engage in interaction and take risks. Sociological theories such as the
work of Boltanski and Thévenot (1987) on how people justify their actions in a social context
can be used to describe the functioning of organizations from a sociological point of view
(Fridenson, 1989) and provide guidance about how to improve them. For instance, their
theory identifies six categories of “worlds”, to which one can associate different sets of
justifications of social action, such as the domestic world, driven by values of tradition
or family, or the civic world, which relies on democratic values and consensus.

Organizational culture can be defined as a collection of shared values, expectations, and
internalized practices that guide the behaviors of the members. Some organizational cultures
favor collaboration, whereas others make it very difficult. An important stream of
literature exists on this subject. Previous research on cross-cultural communication has
shown how cultural differences can create barriers to interactions (Trompenaars, 2006).

3.3.3 Data science, Organizational design and Collaboration

Personality-related information is not something new in the design of more collaborative
teams (e.g. the Belbin teamwork inventory was used well before the generalization of
data science techniques), but the advent of artificial intelligence promises to considerably
augment its utilization in the design of more collaboration-effective organizations. AI and
machine learning (ML) enable us to derive insights from different types of data (e.g. social
media, enterprise data, or other digital communications). Social media analytics allow us to
infer individuals' personality characteristics from digital data such as emails, text messages,
tweets, or forum posts. However, user profiling, or the inference of personality traits based
on data, online interactions or digital text, raises ethical concerns. In particular, such
algorithms may embed biases or may give rise to discrimination. An example of such a tool
was IBM Watson Personality Insights3, although it has recently been discontinued.

AI may support group or team formation in organizations. Creating heterogeneous, diverse
groups is important for performance, creativity and learning. Collaborative outcomes (e.g.
quality of learning) depend on the characteristics of the group and the group composition.
Algorithms can support the formation of groups taking into account specific characteristics
and certain criteria (e.g. culture, gender, personality). Previous work has examined the
use of AI to support groups in classrooms and to support diverse or heterogeneous teams,
e.g., (Alberola et al., 2016; L. Razmerita & Brun, 2011).
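The following is a simplified greedy sketch of such criteria-aware group formation, not the
algorithm used in the cited works: each person joins the smallest group where they add the
most attribute diversity. The names, attributes and group count are invented.

```python
# Simplified greedy sketch of diversity-aware group formation.
people = [
    {"name": "Ana",    "culture": "DK", "gender": "F", "trait": "extravert"},
    {"name": "Bo",     "culture": "DK", "gender": "M", "trait": "introvert"},
    {"name": "Chen",   "culture": "CN", "gender": "M", "trait": "extravert"},
    {"name": "Dana",   "culture": "RO", "gender": "F", "trait": "introvert"},
    {"name": "Emil",   "culture": "FR", "gender": "M", "trait": "extravert"},
    {"name": "Fatima", "culture": "EG", "gender": "F", "trait": "introvert"},
]

def diversity_gain(group, person):
    """Count the attributes the newcomer adds that the group lacks."""
    return sum(
        person[attr] not in {m[attr] for m in group}
        for attr in ("culture", "gender", "trait")
    )

n_groups = 2
groups = [[] for _ in range(n_groups)]
for person in people:
    # Keep group sizes balanced, then maximize added diversity.
    min_size = min(len(g) for g in groups)
    candidates = [g for g in groups if len(g) == min_size]
    best = max(candidates, key=lambda g: diversity_gain(g, person))
    best.append(person)

for i, g in enumerate(groups, 1):
    print(f"group {i}:", [m["name"] for m in g])
```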

In relation to data collection, recent research based on the processing of 25,104 job
advertisements published on online job platforms (Monster and Glassdoor) proposes a
text mining approach combining topic modeling, clustering, and expert assessment in order
to identify and characterize six job roles in data science (Michalczyk, Nadj,
Maedche, & Gröger, 2021).
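To give a flavor of the topic-modeling component of such a pipeline, the sketch below runs
LDA over a few invented job-ad snippets; it does not reproduce the cited study's corpus,
preprocessing, clustering or expert-assessment steps.

```python
# Sketch of topic modeling on invented job-ad snippets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

ads = [
    "build machine learning models and deploy pipelines in production",
    "design dashboards and reports to communicate business insights",
    "develop data pipelines and maintain the data warehouse",
    "train deep learning models and tune hyperparameters",
    "create visualizations and present KPIs to stakeholders",
    "operate the ETL infrastructure and ensure data quality",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(ads)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)

# Print the top words per topic; with real ads these word groups would
# be the raw material for characterizing job roles.
words = vectorizer.get_feature_names_out()
for t, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {t}:", ", ".join(top))
```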

Consulting companies have designed a set of analytical tools that can help the design or the
transformation of organizations:

Crystal4 offers a set of tools that integrate DISC personality insights and can be used
to identify team strengths and weaknesses. These tools make use of machine learning
techniques, which have been employed in profiling people’s personality based on their
LinkedIn profiles (D'Agostino & Skloot 2019). For instance, the tool “Crystal for Teams” offers
teams personality-based insights to navigate important conversations, including one-on-
ones, performance reviews, group meetings, or conflicts.

Talentoday5 offers a people-analytics platform based on the analysis of personalities, used
both at the individual and collective levels. While it seems to be primarily used by human
resource professionals, it is also used by organizational designers to guide the fusion of
organizations. One of the tools that this company offers to its clients, the “MyPrint
Collaboration report”, is aimed at improving one-to-one collaboration and consists of an
automatically generated 10-page report that includes an analysis of the areas of synergy and
risk between two individuals in terms of personality.

P-Val conseil6 has developed “la méthode monde”, a method based on the work of the
sociologists Boltanski and Thévenot (1987) on people’s justifications of their actions in a social
context. Boltanski and Thévenot have identified six worlds (‘Inspiration’, ‘Merchant’,
‘Industrial’, ‘Civic’, ‘Domestic’ and ‘Opinion’) that are governed by different values and, based
on these, have derived a set of sociological profiles. P-Val has derived from this work a set
of sociological profiles that are used both at the individual and collective levels to help decode
and anticipate resistance to change in organizational transformations. P-Val uses this approach
to facilitate the merging of organizations by assessing the cultural proximity of the
organizations to be merged, and it uses artificial intelligence as a means to profile and analyze
sociological data, using methods such as clustering or natural language understanding. More
specifically, a ‘civic’-oriented person may receive in a personalized report some advice about
how to deal more effectively with a ‘merchant’-oriented person.

3
IBM Watson Personality Insights (https://cloud.ibm.com/apidocs/personality-insights)
4
Crystal (https://www.crystalknows.com/)
5
Talentoday (https://www.talentoday.com/)
6
P-Val conseil (https://pval.com/)

Assessment tools can be used to identify the different ways in which people are inclined to
make an impact and a contribution. For example, the firm GC Index7 proposes five categories
of individuals, among which the Game Changers are individuals who generate the ideas and
possibilities that have the potential to be transformational, and the Play Makers focus on
getting the best from others, individually and collectively, in support of agreed objectives.

3.3.4 AI collaboration and the management of attention

The advent of social media has put a strong emphasis on online social interaction. With
Web 2.0 technology, the Internet is no longer used only to access a massive amount of
information, but also to interact and collaborate with others on a global scale. At the
organizational level, the technology is also increasingly used to share and collaborate with
others, a trend that has been considerably reinforced by the Covid-19 pandemic, which
forced people to work at home (part time, or even full time during the more difficult times of
the pandemic). Thus, a variety of tools (e.g. email, collaborative platforms, video-
conferencing systems) are now used on a regular basis to work and “collaborate with others”.
This phenomenon has created the conditions for a massive social interaction overload, where
people are overwhelmed by solicitations and opportunities to engage in social exchanges but
have little means to deal effectively with this new level of interaction (Nabeth &
Maisonneuve, 2011). Different studies have shown that instant messaging notifications on the
desktop create distraction and disruption, which is detrimental to productivity (Czerwinski,
Cutrell, & Horvitz, 2000).

More generally, the individualization of work has made it more difficult for knowledge
workers to manage their time, with the risk of having a substantial part of their time
consumed in interaction, and the difficulty of separating work life from private life. A
solution to this problem has been proposed in the form of systems that help users
manage their attention, e.g., the time that they dedicate to each category of task (e.g.
how much time they work on their own in a text editor, how much time they spend writing
emails, or how much they interact or collaborate). An example of such a system is Microsoft
MyAnalytics in Office 365, which “allows people to work smarter with productivity insights”.
Even though such tools are dedicated to self-monitoring and individual feedback, they may
also be used in ways that are concerning for individuals.

Attention aware systems rely largely on advanced analytics, which are implemented based
on a variety of AI techniques, such as the collection of attention data and their analysis using
machine learning or deep learning techniques.
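A minimal sketch of the kind of aggregation underlying such attention-management analytics
is given below: a hypothetical activity log is mapped to task categories and summed into
time-per-category insights. The log, the app-to-category mapping and the category names are
assumptions for illustration.

```python
# Minimal sketch: aggregating a (hypothetical) activity log into
# time spent per task category.
import pandas as pd

log = pd.DataFrame({
    "app":     ["editor", "email", "video_call", "email", "editor", "chat"],
    "minutes": [95, 30, 60, 25, 110, 15],
})

# Map each application to a task category (assumed mapping).
category = {"editor": "focused work", "email": "communication",
            "chat": "communication", "video_call": "meetings"}
log["category"] = log["app"].map(category)

summary = log.groupby("category")["minutes"].sum()
print(summary)  # e.g. communication 70, focused work 205, meetings 60
print("share of focused work: "
      f"{summary['focused work'] / summary.sum():.0%}")
```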

7
GC Index (https://www.thegcindex.com/)
4. Challenges of using AI: toward a Trustworthy AI

AI is becoming embedded in different forms of human-computer interaction. Furthermore,
big data associated with the use of AI leads to new opportunities for businesses, but also
raises ethical and privacy concerns. Important challenges are associated with the
use of AI on a large scale; among them, the most important are the lack of
transparency of algorithms, systems fragmentation, data privacy regulations, and data
quantity and quality.

Most AI technologies rely on data, but these technologies can also make data smart and can
be used in a range of scenarios. People are increasingly aware of and concerned about the
power of big data and its potential use and misuse in both organizations and society. Our
digital traces expose everything we do and like in the digital world. Hence, transparency and
the management of “visibilities” become crucial to consider in a digital world (Flyverbom, 2019).
As new technologies, including artificial intelligence and robotics, “have the potential to
infringe on human dignity and compromise core values of being human” (Jasanoff, 2016),
ethics and privacy concerns must be considered for a truly trustworthy human-AI
collaboration.

Recent research has emphasized the fact that learning algorithms are distinguished by four
consequential aspects: “black-boxed performance, comprehensive digitization,
anticipatory quantification, and hidden politics” (Faraj et al., 2018). Trustworthy AI needs to
overcome such negative consequential aspects, under which humans are reluctant to engage
with AI or regard AI as a competitor rather than an assistant.

Trustworthy AI is an emerging topic for both academics and industry (Thiebes, Lins, &
Sunyaev, 2021). Trustworthy AI aims to contribute to the well-being of individuals as well as
the prosperity and advancement of organizations and societies (Thiebes et al., 2021), while
avoiding the risks of infringing on individuals’ privacy, discriminating against parts of the
population and being unfair. To reach this goal, legal and ethical dimensions are at the core of
work on trustworthy AI; we can, for example, cite the principles of beneficence,
non-maleficence, autonomy, justice, and explicability proposed by Floridi and Cowls (2019).

More generally, trustworthy AI is fully human-centered and aims to offer high levels of
human control, with the goal of leading to wider adoption and increased human performance,
while supporting human self-efficacy, mastery, creativity, and responsibility. Notice here that
there is a twofold danger: excessive human control and excessive computer control. One
challenge lies in the trade-off between the two. From this point of view, trustworthy AI
has a significant role to play. Indeed, not only does AI have to foster collaboration, it also has
to guarantee that this collaboration will be beneficial for both parties. Specifically, it should
not only focus on increasing collaboration when AI is used, but should foster long-term
collaboration regardless of AI use. Indeed, designing an AI that is temporally myopic can be
counterproductive in the long term: obtaining a positive benefit in the short term, with no
control over or knowledge of the long-term impact, may come down to designing an AI that
merely compensates for the consequences of its past decisions. In addition,
AI has to be fair: each person in the collaboration deserves to be equally considered, so that
there is no discrimination between those persons.
The adoption of AI by society, both at the individual and global levels, depends to a large
extent on the trust that people develop toward AI technologies and their applications.
People should believe in the ability of AI to deliver value without at the same time
representing a risk that would lessen this value. In management and organizational science,
trust can be approached from different perspectives.

First, different factors of perceived trust in AI intervene in the formation of trust (Mayer, Davis,
& Schoorman, 1995; Puranam & Vanneste, 2021); a minimal sketch after this list illustrates how
the first and third factors can be addressed:
1) The ability of the system to fulfill its role. The lack of explainability of AI can, for instance,
generate suspicion about the workability of an AI solution.
2) The benevolence of the system. People have to be convinced that AI is used for their own
good, and not for the benefit of third parties. The belief that AI systems are controlled by a
small group of organizations, such as the GAFAM companies (Google, Apple, Facebook, Amazon
and Microsoft), or by government agencies (even in democratic countries), may create rejection.
3) The integrity of the system, and in particular the perception that the rules on which it is
based are clear. For instance, if the rules that explain the acceptance of a loan by a bank using
AI to support its decisions are too obscure to citizens, AI is unlikely to gain their trust.
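As a minimal, hypothetical illustration of how the first (ability) and third (integrity) factors can be addressed, the sketch below shows a loan-decision aid that exposes the transparent rules behind its output; the rules and thresholds are invented and not taken from any real system:

```python
# Hypothetical, rule-based loan-decision aid that explains its output.
def assess_loan(income, debt, amount):
    """Return a decision together with a plain-language explanation.

    income: applicant's annual income; debt: existing debt; amount: requested loan.
    """
    reasons = []
    if debt > 0.4 * income:
        reasons.append(f"existing debt ({debt}) exceeds 40% of income ({income})")
    if amount > 5 * income:
        reasons.append(f"requested amount ({amount}) exceeds five times income")
    decision = "rejected" if reasons else "accepted"
    explanation = "; ".join(reasons) or "all transparent criteria are satisfied"
    return decision, explanation

decision, why = assess_loan(income=40_000, debt=20_000, amount=150_000)
print(decision)  # rejected
print(why)       # existing debt (20000) exceeds 40% of income (40000)
```

Because every decision is accompanied by the rule that triggered it, a user can verify both that the system works as intended and that its rules are clear, which is precisely what the ability and integrity factors demand.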

Second, trust can be based either on rational reasoning, i.e. the cold, calculative assessment
formalized in agency theory in economics (Puranam & Vanneste, 2021), or on gut feeling. In
the latter case, people are subject to biases and may develop irrational negative feelings that
are disproportionate to reality. For instance, they may fear that AI machines will take over the
world (as in the movie Terminator), even though this is for now only a very distant possibility.

5. Conclusion

The focus of our chapter has been to discuss the emergent role of AI in shaping collaboration
in its different forms. We have highlighted the benefits and pitfalls of human-AI collaboration
and introduced the concept of trustworthy AI. This chapter has presented a vision of the role
of ICT in the machine age that has shifted from traditional ICT tools fully dedicated to the
realization of specific functions (controlled by humans, or automating processes), to AI
systems with some cognitive capabilities, autonomy and individual goals, which support or
collaborate with humans to complete certain tasks. These AI entities may be at the direct
service of the humans they assist, but they may also be controlled by, and serve the goals of,
other entities (e.g. GAFAM or tech-savvy organizations).

We have seen in this chapter that this vision can be observed at different levels:
 At the individual level, with personalized services that have developed a certain
understanding of the users (e.g. a user profile) and that can be delivered by an agent or
chatbot, by personalized web services (e.g. in e-commerce), and also via cell phones,
becoming in the latter case a cognitive extension of the human being.
 At the organizational level, with the augmented/smart organization that constantly
monitors its environment (analyzing data) and acts with some level of autonomy to solve
problems and adapt, or that helps design more effective organizations thanks to new
AI-powered organizational analytics.

Yet this chapter has also raised concerns of this machine age, such as the loss of control and
explainability, and the risk that these AI systems are controlled by third-party entities that
may not be benevolent toward their users (e.g. authoritarian regimes, or business-oriented
companies driven first by their own commercial goals).

In sum, we distinguish between three types of collaboration with AI. First, AI can be used to
support personalized interaction, where it operates in the background. Second, AI can
intervene in the collaborative process in knowledge and social platforms to support the social
mechanisms associated with collaboration: it can provide feedback on collaborators, help in
the construction of trust, and provide recommendations contributing to team formation by
suggesting whom to connect to (e.g. people with more affinity). Third, AI can be used more
indirectly, by informing the design and transformation of organizations so that they are more
likely to be collaborative. AI may be used to suggest the formation of highly functioning teams
or to guide the cultural transformation of an organization towards a more collaborative or
knowledge-sharing oriented culture.

Acknowledgement We would like to thank Inger Mees for proofreading and Daniel Hardt
for providing feedback and comments on this chapter.

References

Alberola, J. M., Del Val, E., Sanchez-Anguix, V., Palomares, A., & Dolores Teruel, M. (2016).
An artificial intelligence tool for heterogeneous team formation in the classroom.
Knowledge-Based Systems. https://doi.org/10.1016/j.knosys.2016.02.010
Angehrn, A., Nabeth, T., Razmerita, L., & Roda, C. (2001). K-InCA: Using artificial agents to
help people learn and adopt new behaviours. In Proceedings - IEEE International
Conference on Advanced Learning Technologies, ICALT 2001.
https://doi.org/10.1109/ICALT.2001.943906
Brna, P., Cooper, B., & Razmerita, L. (2001). Marching to the wrong distant drum: pedagogic
agents, emotion and student modeling. In Proceedings of Workshop on Attitude,
Personality and Emotions in User-Adapted Interaction. Sonthofen, Germany.
Brown, D. (2021). AI chat bots can bring you back from the dead. The Washington Post.
Retrieved from https://www.washingtonpost.com/technology/2021/02/04/chat-bots-
reincarnation-dead/
Brun, A., Boyer, A., & Razmerita, L. (2010). Compass to locate the user model I need:
Building the bridge between researchers and practitioners in user modeling. In Lecture
Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence
and Lecture Notes in Bioinformatics). https://doi.org/10.1007/978-3-642-13470-8_28
Brusilovsky, P., & He, D. (2018). Introduction to social information access. In Lecture Notes in
Computer Science (including subseries Lecture Notes in Artificial Intelligence and
Lecture Notes in Bioinformatics). https://doi.org/10.1007/978-3-319-90092-6_1
Castellano, S., Khelladi, I., Charlemagne, J., & Susini, J.-P. (2018). Uncovering the role of
virtual agents in co-creation contexts. Management Decision, 56(6), 1232–1246.
https://doi.org/10.1108/MD-04-2017-0444
Colace, F., de Santo, M., Lombardi, M., & Santaniello, D. (2019). Chars: A cultural heritage
adaptive recommender system. In TESCA 2019 - Proceedings of the 2019 1st ACM
International Workshop on Technology Enablers and Innovative Applications for Smart
Cities and Communities, co-located with the 6th ACM International Conference on
Systems for Energy-Efficient Buildings, Cities (pp. 58–61).
https://doi.org/10.1145/3364544.3364830
Corritore, M., Goldberg, A., & Srivastava, S. (2020). The new analytics of culture. Harvard
Business Review.
Curşeu, P. L., Ilies, R., Vîrgă, D., Maricuţoiu, L., & Sava, F. A. (2019). Personality
characteristics that are valued in teams: Not always “more is better”? International
Journal of Psychology, 54, 638–649. https://doi.org/10.1002/ijop.12511
Davenport, T. H., & Ronanki, R. (2019). Artificial Intelligence for the Real World. In On AI,
Analytics and the New Machine Age (Harvard, pp. 1–17). Boston, MA.
Driskell, J. E., Goodwin, G. F., Salas, E., & O’Shea, P. G. (2006). What makes a good team
player? Personality and team effectiveness. Group Dynamics.
https://doi.org/10.1037/1089-2699.10.4.249
Faraj, S., Pachidi, S., & Sayegh, K. (2018). Working and organizing in the age of the learning
algorithm. Information and Organization, 28(1), 62–70.
https://doi.org/10.1016/j.infoandorg.2018.02.005
Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society.
Harvard Data Science Review, 1(1), 1–15. https://doi.org/10.1162/99608f92.8cd550d1
Flyverbom, M. (2019). The digital prism transparency and managed visibilities in a datafied
world. The Digital Prism: Transparency and Managed Visibilities in a Datafied World.
https://doi.org/10.1017/9781316442692
Fridenson, P. (1989). Luc Boltanski et Laurent Thévenot, Les économies de la grandeur,
Paris, Presses Universitaires de France, « Cahiers du Centre d’études de l’emploi »,
1987, XVI- 367 p. Annales. Histoire, Sciences Sociales.
https://doi.org/10.1017/s0395264900063204
Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past,
present, and future of artificial intelligence. California Management Review.
https://doi.org/10.1177/0008125619864925
Jasanoff, S. (2016). The Power of Technology. In The Ethics of Invention: Technology and the
Human Future. (pp. 4–58). W.W.Norton.
Kobsa, A., Cho, H., & Knijnenburg, B. P. (2016). The effect of personalization provider
characteristics on privacy attitudes and behaviors: An Elaboration Likelihood Model
approach. Journal of the Association for Information Science and Technology.
https://doi.org/10.1002/asi.23629
Kunkel, J., Donkers, T., Michael, L., Barbu, C. M., & Ziegler, J. (2019). Let me explain: Impact
of personal and impersonal explanations on trust in recommender systems. In
Conference on Human Factors in Computing Systems - Proceedings (pp. 1–12).
https://doi.org/10.1145/3290605.3300717
Leonardi, P. M. (2021). COVID-19 and the New Technologies of Organizing: Digital Exhaust,
Digital Footprints, and Artificial Intelligence in the Wake of Remote Work. Journal of
Management Studies. https://doi.org/10.1111/joms.12648
Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., … Söllner, M. (2019). AI-
Based Digital Assistants. Business & Information Systems Engineering.
https://doi.org/10.1007/s12599-019-00600-8
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational
trust. Academy of Management Review.
https://doi.org/10.5465/amr.1995.9508080335
Michalczyk, S., Nadj, M., Maedche, A., & Gröger, C. (2021). “Demystifying Job Roles in Data
Science: A Text Mining Approach.” In ECIS (p. 115). Retrieved from
https://aisel.aisnet.org/ecis2021_rp/115/
Minsky, M. (1986). The Society of Mind. New York: Simon & Schuster.
Morrison, R. (2015). Data-driven Organization Design: Sustaining the Competitive Edge
Through Organizational Analytics.
Mostert, N. M. (2015). Belbin-the way forward for innovation teams. Journal of Creativity
and Business Innovation.
Nabeth, T., & Maisonneuve, N. (2011). Managing attention in the social web: the AtGentNet
approach. In Human Attention in Digital Environments (pp. 281–310). Cambridge
University Press. https://doi.org/10.1017/cbo9780511974519.012
Neururer, M., Schlögl, S., Brinkschulte, L., & Groth, A. (2018). Perceptions on authenticity in
chat bots. Multimodal Technologies and Interaction, 2(60), 2–19.
https://doi.org/10.3390/mti2030060
Nuruzzaman, M., & Hussain, O. K. (2018). A Survey on Chatbot Implementation in Customer
Service Industry through Deep Neural Networks. In Proceedings - 2018 IEEE 15th
International Conference on e-Business Engineering, ICEBE 2018 (pp. 54–61).
https://doi.org/10.1109/ICEBE.2018.00019
Orji, F. A., Oyibo, K., Greer, J., & Vassileva, J. (2019). Drivers of competitive behavior in
persuasive technology in education. In ACM UMAP 2019 Adjunct - Adjunct Publication
of the 27th Conference on User Modeling, Adaptation and Personalization (pp. 127–
134). https://doi.org/10.1145/3314183.3323850
Orji, R., & Moffatt, K. (2018). Persuasive technology for health and wellness: State-of-the-art
and emerging trends. Health Informatics Journal.
https://doi.org/10.1177/1460458216650979
Picard, R. W. (1997). Affective Computing. MIT press.
Puranam, P., & Clément, J. (2020). The Organisational Analytics eBook: A Guide to Data-
Driven Organisation Design. Version 1.0. Retrieved from
https://knowledge.insead.edu/blog/insead-blog/organisational-data-the-silver-lining-
in-the-covid-19-cloud-15516
Puranam, P., & Vanneste, B. (2021). Artificial Intelligence, Trust, and Perceptions of Agency.
SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3897704
Rai, A., Constantinides, P., & Sarker, S. (2019). Editor’s Comments: Next-Generation Digital
Platforms: Toward Human–AI Hybrids. Management Information Systems Quarterly,
43(1), 9.
Razmerita, L., & Brun, A. (2011). Collaborative learning in heterogeneous classes: Towards a
group formation methodology. In CSEDU 2011 - Proceedings of the 3rd International
Conference on Computer Supported Education (Vol. 2).
Razmerita, Liana, Kirchner, K., Hockerts, K., & Tan, C.-W. (2020). Modeling collaborative
intentions and behavior in digital environments: The case of a Massive Open Online
Course (MOOC). Academy of Management Learning & Education, 19(4), 469–502.
Razmerita, Liana, Kirchner, K., & Nabeth, T. (2014). Social media in organizations: leveraging
personal and collective knowledge processes. Journal of Organizational Computing and
Electronic Commerce, 24(1), 74–93.
Razmerita, Liana, Nabeth, T., Angehrn, A., & Roda, C. (2004). Inca: An Intelligent Cognitive
Agent-Based Framework for Adaptive and Interactive Learning.
Razmerita, Liana, Nabeth, T., & Kirchner, K. (2012). User Modeling and Attention Support. In
Centric- The fith International Conference on Advances of Human Oriented and
Personalized Mechanisms (pp. 27–33). Lisbon.
Roda, C., Angehrn, A., Nabeth, T., & Razmerita, L. (2003). Using conversational agents to
support the adoption of knowledge sharing practices. Interacting with Computers,
15(1), 57–89. https://doi.org/10.1016/S0953-5438(02)00029-2
Roda, Claudia, & Nabeth, T. (2008). Attention management in organizations: Four levels of
support in information systems. In Organisational Capital: Modelling, Measuring and
Contextualising. https://doi.org/10.4324/9780203885215
Schmidt, R., Alt, R., & Zimmermann, A. (2021). A conceptual model for assistant platforms.
In Proceedings of the Annual Hawaii International Conference on System Sciences (pp.
4024–4033). https://doi.org/10.24251/hicss.2021.490
Tecuci, G. (2012). Artificial intelligence. Wiley Interdisciplinary Reviews: Computational
Statistics. https://doi.org/10.1002/wics.200
Thiebes, S., Lins, S., & Sunyaev, A. (2021). Trustworthy artificial intelligence. Electronic
Markets. https://doi.org/10.1007/s12525-020-00441-4
Tredinnick, L. (2017). Artificial intelligence and professional roles. Business Information
Review, 34(1), 37–41. https://doi.org/10.1177/0266382117692621
Trivedi, A., & Thakkar, Z. (2019). Chatbot generation and integration: A review. International
Journal of Advance Research, 5(2), 1308–1312.
Trompenaars, F. (2006). Managing people across cultures. Proceedings of 20th IPMA World
Congress on Project Management.
Vinge, V. (1993). The Coming Technological Singularity: How to Survive in the Post-Human Era.
Weizenbaum, J. (1966). ELIZA-A computer program for the study of natural language
communication between man and machine. Communications of the ACM.
https://doi.org/10.1145/365153.365168
Williamson, A. M., & Akeren, K. M. (2021). Artificial Intelligence in Digital Marketing.
Copenhagen Business School.
Zhang, S., Yao, L., Sun, A., & Tay, Y. (2019). Deep learning based recommender system: A
survey and new perspectives. ACM Computing Surveys, 52(1), 1–38.
https://doi.org/10.1145/3285029

Short Bios:

Liana Razmerita is associate professor in learning technologies at Copenhagen Business School. Her
research investigates new ways of organizing, collaborating and learning in the digital age. She is
interested in how emerging technologies (such as AI) and ICT shape new ways of working, learning
and co-creating value for organizational and social change or innovation. She holds a PhD from
University of Toulouse, France and an engineering degree in automation and computer science from
University of Galati, Romania. She has previously worked at INSEAD Fontainebleau, INRIA Sophia-
Antipolis, France and University of Leeds, UK. She has published over 100 scholarly articles in
refereed journals, conference proceedings and book chapters. Her work has been published in
journals such as: Academy of Management Learning & Education, Journal of Knowledge Management,
Online Information Review, Journal of Organizational Computing and Electronic Commerce, Journal
of Applied Artificial Intelligence, Interacting with Computers and IEEE Systems, Man and Cybernetics.

Armelle Brun (PhD, HDR) works on recommender systems, data mining, explainable algorithms, user
privacy and ethics. She is involved in several European, national and regional projects, where she is in
charge of work packages. She leads a work package in the French eFran METAL project (2016-2021)
dedicated to mining the logs of learners’ activities and recommending resources. She coordinates the
national PEACE Numerilab project (for the French Ministry of Education) (2019-2022). She currently
holds the scientific excellence distinction for research and doctoral supervision. She has published
over 90 articles and is regularly involved in the organisation of national and international conferences
and workshops (including as chair). She received the best paper award at the ASONAM 2009
conference, and her recent paper on grey-sheep user modeling was nominated as an outstanding
paper at the ACM UMAP 2016 conference.

Thierry Nabeth is a senior AI scientist at P-Val Conseil working on a variety of Artificial Intelligence
projects, such as advanced organizational analytics, personalized chatbots, and natural language
generation of reports for the banking sector. He previously worked at the INSEAD Centre for
Advanced Learning Technologies in Fontainebleau, France as a senior research fellow, in the domains
of advanced knowledge management systems, advanced social platforms, and learning technologies.
