Abstract
Organizations increasingly introduce collaborative technologies in the form of virtual assistants (VAs) to save valuable resources, especially when employees are assisted with work-related tasks. However, the effect of VAs on virtual teams and collaboration remains uncertain, particularly whether employees show social loafing (SL) tendencies, i.e., apply less effort to collective tasks than when working alone. While extant research indicates that VAs working collaboratively in teams achieve greater results, less is known about SL in virtual collaboration and how responsibility attribution changes. We conducted an online experiment with N = 102 participants who were assisted by a VA in solving a task. The results indicate SL tendencies in virtual collaboration with VAs and show that participants tend to cede responsibility to the VA. This study makes a first foray into analyzing SL and responsibility attribution and thereby extends the information systems (IS) literature and updates our knowledge on virtual collaboration with VAs.
1 Introduction
In today’s business world, technological advancements constantly reshape organizational efforts to remain competitive (Cetindamar Kozanoglu & Abedin, 2020; N. Frick & Marx, 2021; Soto Setzke et al., 2021), transforming digital workplaces in enterprises to exploit relative advantages (Junglas et al., 2019; Majhi et al., 2021; Meske & Junglas, 2020). This also implies that employees and teams increasingly collaborate with and via technology (Changizi & Lanz, 2019; de Vreede & Briggs, 2005). Collaboration with technologies enabled by artificial intelligence (AI), such as virtual assistants (VAs), is shifting from simply using a tool for virtual collaboration with other employees to shaping a novel and independent virtual environment in which employees collaborate with VAs (Maedche et al., 2019; Mirbabaie et al., 2020; Seeber, Waizenegger, et al. 2020a). VAs are software dialog systems that simulate human behavior, can be addressed via voice- or text-based commands, and respond appropriately to the user’s input (Brachten et al., 2020; Mirbabaie et al., 2020, 2021b). The application possibilities in organizations are manifold, but VAs are foremost used as work facilitators (Brachten et al., 2020; Luger & Sellen, 2016; Mirbabaie et al., 2020). VAs collaborate with employees to optimize internal processes (Norman, 2017), generate additional revenue or cost savings (Quarteroni, 2018), and increase customer satisfaction (Behera et al., 2021; Verhagen et al., 2014), thereby aiming to establish substantial advantages over market competitors (Benbya & Leidner, 2018; Yan et al., 2018). Even though VAs do not provide physical interaction or a human representation (Maniscalco et al., 2020), they are increasingly used in virtual collaboration (Panganiban et al., 2020; Seeber et al. 2020a), and their distribution in enterprises is likely to grow (Maedche et al., 2019).
Many teams in organizations collaborate in virtual teams, which might even be globally distributed (Andres & Shipps, 2019; Hassell & Cotton, 2017; Massey et al., 2003; Plotnick et al., 2016). Virtual collaboration ranges from using simple tools such as Microsoft Teams or Slack (N. R. J. Frick, et al., 2021a, 2021b, 2021c) to computer-generated virtual realities (Fromm et al., 2020; Litvinova et al., 2018). Existing research takes the standpoint that employees and VAs working collaboratively in virtual teams achieve greater results (Seeber, et al., 2020b; Waizenegger et al., 2020). However, less research is concerned with potential downsides. In virtual as in physical teams, working with other members should inspire individuals to maximize their potential and to work particularly hard (Harkins & Petty, 1982). Nevertheless, research on teamwork has also identified social loafing (SL), i.e., individuals working less on collective tasks than on individual tasks (Hardy & Latané, 1988; Karau & Williams, 1993). Employees might apply less effort to achieve a goal in a team than when working alone, as the individual contribution is perceived as unnecessary and/or responsibility is distributed among team members (Jassawalla et al., 2009; Karau & Williams, 1993; Latané et al., 1979). In corporate contexts, a possible decrease in motivation is one of the biggest obstacles to teamwork (George, 1992); it is thus vital to minimize individual tendencies to loaf (Schippers, 2014), especially for maintaining the cohesiveness of teams (Taylor et al., 1983).
SL has been observed multiple times in physical teams, providing scientific evidence that individuals working in teams show less effort than when working independently (Karau & Williams, 1993; Latané et al., 1979). However, individuals increasingly collaborate in virtual teams with VAs (Maedche et al., 2019; Mirbabaie et al., 2020; Seeber, et al., 2020a, 2020b, 2020c), so scholars need to adjust their understanding of the different aspects of collaborative settings (Mirbabaie et al., 2020). To gain a deeper understanding of virtual collaboration with VAs, knowledge from human-to-human collaboration research needs to be leveraged (Demir et al., 2020). As collaboration with VAs in organizations is likely to become commonplace, and VAs are increasingly perceived as human-like actors since they are inherently anthropomorphic (Feine et al., 2019; Hussain et al., 2019; Pfeuffer et al., 2019; Porra et al., 2020), there is an urgent demand for the information systems (IS) discipline to conduct further research on virtual collaboration to reveal differences and similarities to human teams (Mirbabaie et al., 2021a). Associated consequences might significantly alter theoretical and practical viewpoints of how and for what purpose VAs are applied in virtual collaboration. Employees might no longer identify with a decision made by a VA, neither questioning it nor taking responsibility for it, which possibly means making poor or even wrong choices (Trocin et al., 2021). This generates enormous risks for both enterprises (i.e., diminishing reputation or profitability) and individuals (i.e., feeling less valued and more stressed) (Chidambaram & Tung, 2005; Grimes et al., 2021). Missing responsibility attribution in human-VA teams thus decisively impacts the way VAs are implemented and applied in organizations, restricting their deployment to certain industries, departments, teams, and purposes. Moreno et al. (2001) clarified that there are generally contradictory findings on positive (constructivist hypothesis) and negative (interference hypothesis) outcomes when collaborating with VAs. On the one hand, VAs assist employees in virtual collaboration, freeing them from unwanted duties and allowing them to focus on their key responsibilities (Brachten et al., 2020). On the other hand, intensified collaboration with VAs might result in employees becoming dependent, showing greater SL tendencies, and missing responsibility attribution (Mirbabaie et al., 2021b). To examine the role of VAs in virtual collaboration and related consequences regarding SL and responsibility attribution, our research is guided by the following questions:
RQ1
To what extent do virtual assistants cause social loafing in virtual collaboration?
RQ2
How does responsibility attribution differ in virtual teams?
To answer these questions, we conducted an online experiment with 102 participants who were assisted in the execution of a work-related task by a text-based VA. We measured and compared general SL behavior and SL behavior in virtual collaboration with a VA, as well as the attribution of responsibility. This study contributes to theory and practice by structuring our understanding of collaboration with VAs and related negative effects. Researchers will find the novel insights fruitful in understanding what consequences are related to virtual collaboration with VAs assisting in work-related tasks. Practitioners, such as managers and software developers, will be able to comprehend possible downsides for human-VA teams and which measures might be rewarding to cope with and/or prevent negative outcomes. This article seeks to extend the IS literature by attempting to make a first foray into the examination of SL tendencies and responsibility attribution in virtual collaboration with VAs to drive future research in this field of high relevance.
2 Related Work
Collaboration with technology has been examined from multiple perspectives within the IS discipline (Bajwa et al., 2007; Bednar & Welch, 2020; Beer et al., 2005; Frohberg & Schwabe, 2006; Schwabe, 2003; You & Robert, 2018). Throughout decades of fundamental research, several terms have evolved for systems that behave alike. Popular terms that are related to VAs and found in theory and practice are chatbots (Stieglitz et al., 2018), digital assistants (Maedche et al., 2019), and conversational agents (Frick et al., 2021a, 2021b, 2021c). Scholars have examined VAs from different perspectives (e.g., Luger & Sellen, 2016; Saffarizadeh et al. 2017; Seeber et al., 2018). Gnewuch et al. (2017) differentiated VAs by their primary mode of communication (how users interact with VAs) and their main purpose (whether a VA covers a narrow or broad task). Knote et al. (2019) characterized VAs by their design characteristics, for instance, adaptivity (how VAs learn and adjust to changing environments) and assistance domain (in which area VAs provide assistance). While research lacks a consistent classification of VAs due to possible overlaps in their capabilities (Mirbabaie et al., 2021b), multiple definitions have emerged that somewhat limit the manifold functionalities of VAs. In this research we follow the interpretation of Mirbabaie et al. (2021b), who defined VAs as “dialog systems simulating the behavior of humans via different modes of communication (e.g., written or spoken natural language, haptics, gestures, facial expressions, graphics), continuously learn and develop over time by analyzing and interpreting a given input combined with additional data sources for assisting with divergent tasks or execute them autonomously” (p. 4). Figure 1 outlines two examples of VAs that were used in earlier studies (Brachten et al., 2020; Mirbabaie et al., 2020).
VAs have become increasingly relevant as work facilitators in organizations in recent years due to constant technological enhancements (Berg, 2015; Spohrer & Banavar, 2015), turning regular information technology (IT) or decision support systems into smart systems (Alter, 2020). The development of VAs is based on AI and underlying machine learning (ML) algorithms attempting to create intelligent systems that augment the abilities of humans (McTear, 2017). However, there are certain distinctions of intelligent behavior (Mirbabaie et al., 2020). On the one hand, there are agents that respond to a certain input based on predefined rules (Russel & Norvig, 2016). On the other hand, there are adaptive systems that behave reactively, proactively, and autonomously (von Wolff et al., 2019) while continuously learning and developing over time by processing different types of digital information (Mitchell et al., 2018). VAs aim to combine the complementary strengths of humans and AI (Kamar, 2016), for example, associating the emotional intelligence of humans with the processing of huge amounts of data (Dellermann et al., 2019a, 2019b). Thereby, VAs supply humans with feedback to assist in decision-making (e.g., AI in the loop of human intelligence), and humans supply VAs with feedback to optimize their capabilities (e.g., human intelligence in the loop of AI) (Dellermann et al., 2019a, 2019b; Mirbabaie et al., 2021b).
In virtual collaboration, VAs use multiple communication styles, such as written or spoken natural language, and are capable of interpreting and reacting to gestures or facial expressions (Laumer et al., 2019; Nakano & Fukuhara, 2012). They adapt to users with varying roles by addressing them in different languages (Gnewuch et al., 2020; Pfeuffer et al., 2019), interpret the emotions of individual team members (McDuff & Czerwinski, 2018), and foster the disclosure of relevant information (Frick et al., 2021a, 2021b, 2021c). VAs are furthermore inherently anthropomorphic, since they use verbal, visual, auditory, and invisible social cues, for instance jokes, facial expressions, laughing, and response time, which make users feel like they are talking to another human (Feine et al., 2019; Hussain et al., 2019; Pfeuffer et al., 2019). Within the research stream of CASA (Computers Are Social Actors), it has been identified that individuals exhibit social reactions when collaborating with technology (Nass & Moon, 2000). As explained by Social Response Theory, individuals subconsciously apply social rules to technology that uses human-like traits or behavior (Nass & Moon, 2000; Reeves & Nass, 1996). Furthermore, individuals have certain expectations towards VAs that use human-like design characteristics, which might not be in line with their actual capabilities or purpose (Luger & Sellen, 2016). Nevertheless, the still limited conversational skills of VAs remind individuals that they are collaborating with a machine, raising feelings of strangeness and possibly leading to discontinued usage (Diederich et al., 2020). Furthermore, there are certain challenges and dangers related to security and privacy that heavily impact trust in VAs (de Barcelos Silva et al., 2020; Lee et al., 2020). For example, the feeling of being surveilled, or that data is misused or retrieved for dubious purposes, decreases trust in the agent and hinders the distribution of information (Frick et al. 2021).
Moreover, interacting with VAs increases the transparency of working methods, roles, and tasks, which causes discomfort (Wünderlich & Paluch, 2017) and results in a feeling of being monitored and of losing personal value and control as well as individual knowledge (Mirbabaie et al., 2020).
The application of VAs in organizational surroundings is fruitful on many levels and is already implemented across industries. VAs are used internally and externally to optimize processes by assisting in the execution of work-related tasks (Norman, 2017), increase overall customer satisfaction (Cui et al., 2017; Verhagen et al., 2014), establish substantial competitive advantage (Benbya & Leidner, 2018), and generate additional revenue or cost savings (Quarteroni, 2018). For example, VAs are used in the form of a virtual colleague from the human resources department to enhance the onboarding processes of new hires by providing flawless guidance (Shamekhi et al., 2018). Furthermore, VAs are applied in customer service and assist in answering customer inquiries more quickly (Gnewuch et al., 2017; Hu et al., 2018). In principle, the application of VAs in organizations aims to reduce the workload of employees by assisting with repetitive tasks (Brachten et al., 2020; McTear, 2017; Mirbabaie et al., 2020).
Recent research demonstrates that VAs are able to assist in the decision-making process (Seeber, et al., 2020b; Waizenegger et al., 2020) and tackle collaborative issues that frequently arise (Mirbabaie et al., 2021b). However, since VAs are becoming more human-like (Feine et al., 2019; Hussain et al., 2019; Pfeuffer et al., 2019) and are already perceived as legitimate team members (Seeber et al., 2018), it can be assumed that knowledge from human teams is transferable to virtual collaboration with VAs (Mirbabaie et al., 2020). This includes positive and negative aspects and possible disadvantages known from human–human collaboration.
3 Theoretical Background
3.1 The Phenomenon Social Loafing
In 1913, Maximilien Ringelmann conducted an experiment suggesting a possible decrement in individual motivation as a result of working in a group (Kravitz & Martin, 1986). Male volunteers were asked to pull on a rope as hard as they could (tug-of-war) in groups of varying sizes while a measuring device determined the total effort. The results showed that as group size increased, the group performance fell increasingly short of the sum of the individual performances. This behavior was later explained as SL, “the reduction in motivation and effort when individuals work collectively compared with when they work individually” (Karau & Williams, 1993, p. 681). SL might arise when individuals perceive their contribution to be unnecessary and/or responsibility is distributed among members of a team (Karau & Williams, 1993; Latané et al., 1979). SL is also expected to rise as team sizes increase, because responsibility is distributed among several humans (Dennis et al., 2005). SL can decrease when individuals believe that they are being monitored and that their personal performance is measured (Karau & Williams, 1993). Thus, the individual contribution is reduced when members of nominal groups believe that they are collectively working in teams.
While the observability of SL is hardly contestable, there is still no clear explanation for this social phenomenon. An extensive body of literature has focused on different determinants (Vaghefi & Lapointe, 2012), such as individual characteristics and differences, group sizes and memberships, or task contribution efforts and visibility (e.g., Gavala-González et al., 2020; ONeill et al., 2020; Zhu et al., 2019). For example, Smith et al. (2001) observed that a lower need for cognition is associated with SL. Individuals who enjoy and engage in effortful cognitive tasks are less likely to reduce their efforts in groups. Schippers (2014) examined personality traits and indicated that a high occurrence of conscientiousness and agreeableness within a group might compensate for SL tendencies. In contrast, a recent study by Hou et al. (2021) examined SL in online brand communities and revealed that agreeableness and conscientiousness do not have a direct impact on SL. Nevertheless, the authors explained that personality traits are indirectly related to SL behavior. Research has further identified apathy and social disconnectedness as antecedents of SL and explained that participants took compensatory actions when members of their teams loafed (Jassawalla et al., 2009). Individuals with low self-efficacy and self-confidence, who consider themselves more valuable than others, show SL tendencies (Hart et al., 2004). Moreover, competency, emotional relationship, and collective identity were found to be key determinants of SL (Luo et al., 2021).
Studies have further researched SL in organizational environments and showed that measuring and disclosing the individual contribution of team members increases individual performance (Lount & Wilk, 2014) and that a task’s attractiveness impacts SL (Zaccaro, 1984). Byun et al. (2020) examined individual and situational factors and found that a lower exchange ideology might significantly reduce employees’ SL tendencies. Furthermore, supervisory factors are related to lower SL, suggesting that managerial guidelines are a reasonable means to reduce SL in organizations. Khan et al. (2020) specifically identified that transformational leadership has a significant positive relationship with employees’ intrinsic motivation and is thus capable of decreasing SL. The authors clarified that managers should possess transformational attributes for informing and inspiring their employees to achieve greater outcomes.
Besides observing certain determinants, many different theories have emerged on how SL behavior can be explained. Although several scholars have offered theories on SL, these are generally limited to explaining one of several possible causal mechanisms and do not attempt to include the wide range of variables (Karau & Williams, 1993). The explanations provided by researchers can usually be assigned to one of the following three main theories. Within the Social Impact Theory, a diffusion of responsibility leads to SL. Latané et al. (1979) explained that individuals feel deindividualized when the size of a group increases, distancing themselves from their own individual performance, which decreases personal responsibility. Thus, as the number of team members increases, the feeling of pressure is reduced because the burden is distributed among multiple individuals. The Free Rider Theory explains that team members recognize that completing a task is achievable without using their full potential and that the benefit of performing the task does not increase with additional individual effort (Albanese & van Fleet, 1985). Finally, within the Theory of Social Comparison, SL arises because individuals consider outside reference points to assess their own effort, abilities, and skills. Thereby, individuals match their performance to that of their team members (Harkins & Szymanski, 1989). When a team member perceives that others in the group are slacking or showing laziness, they are likely to reduce their own efforts (Jackson & Williams, 1985).
3.2 Social Loafing in IS Research
Research on SL within the IS discipline can generally be divided into two main streams (Vaghefi & Lapointe, 2012). On the one hand, IS scholars applied SL theories in IT-related contexts and assessed whether existing assumptions are still valid (e.g., Dennis et al., 2005). On the other hand, studies focused on possible negative outcomes when using IT and how SL might be decreased (e.g., Suleiman & Watson, 2008). For example, an early study by Suleiman and Watson (2008) examined the diminishing of SL in technology-supported teams. The authors indicated that SL occurs in teams supported by technology and argued that further research is urgently needed as mixed virtual teams become more prevalent and employees increasingly gain the opportunity to loaf. Alnuaimi et al. (2010) argued that, due to new possibilities in IT, teams no longer need to be tied to a specific location but can collaborate across national borders. The authors assigned 140 participants to 32 teams which had to solve a brainstorming task assisted by group system software. The results revealed that diffusion of responsibility, attribution of blame, and dehumanization mediate the effect of team size and dispersion on SL. Current research further found that the type of trust is important to understand how scrutiny changes the influence of trust on individual SL in virtual teams (Robert, 2020). Thereby, Robert (2020) identified that cognitive trust negatively influences SL, while affective trust tends to strengthen SL. Furthermore, Lv et al. (2021) described that SL is related to decreased perceived justice and functional benefits, which leads to negative word-of-mouth, switching behavior, and counterproductive work behavior.
Recent IS research is also concerned with modern teamwork and the applied methods and working models. Chen and Cheng (2018) analyzed Lean-Kanban (i.e., the constant improvement of work processes across resources to avoid bottlenecks) as a means to solve SL. Based on the results of a case study within a non-profit organization, the authors suggest that Lean-Kanban improves the production environment in organizations and prevents situations that may cause SL. Another study by Fronza and Wang (2021) determined rules to prevent SL behavior in agile software development teams using a mixed-method approach, including an action case study of software development teams and an analysis of secondary data. The results indicate that formulating team expectation agreements, for instance on meeting attendance and contribution, respect of tasks, roles, and teammates, and collaboration, has the potential to prevent SL.
In principle, SL is generalizable across tasks and populations (Karau & Williams, 1993). SL tendencies and the diffusion of responsibility do not necessarily have to occur in human teams but might also arise in virtual collaboration with VAs (Mirbabaie et al., 2021b). Thus, VAs might even encourage SL, as the individual contribution is no longer measurable (Vaghefi & Lapointe, 2012) and individuals tend to rely on technical assistance (Mirbabaie et al., 2021b).
3.3 Derivation of Hypotheses
SL is a well-known phenomenon emerging in physical and virtual teams (Alnuaimi et al., 2010; Karau & Williams, 1993; Latané et al., 1979; Suleiman & Watson, 2008). Employees in organizations increasingly collaborate with technology (Changizi & Lanz, 2019; de Vreede & Briggs, 2005), where VAs shape novel and independent collaborative environments (Maedche et al., 2019; Mirbabaie et al., 2020; Seeber, et al., 2020b). CASA research has shown that many findings from human–human interaction also occur in human–computer interaction, for example, the feeling of team spirit after being grouped with technology (Nass & Moon, 2000). Previous research has stated that VAs alter the way employees collaborate in organizations (Dias et al., 2019; Wang & Siau, 2018), explaining the pressing need for theory and practice to understand how employees collaborate with VAs within their virtual teams (Mirbabaie et al., 2020; Seeber et al., 2018). VAs are already perceived as legitimate team members (Seeber et al., 2018) and are inherently anthropomorphic, making individuals feel as if they were interacting with another human (Feine et al., 2019; Hussain et al., 2019; Pfeuffer et al., 2019). Therefore, it can be assumed that employees show SL tendencies when collaborating with a VA in a virtual team. We thus developed the following hypothesis:
H1
There is a positive correlation between general SL tendencies and SL tendencies in virtual collaboration with VAs.
Latané et al. (1979) explained that increasing the number of members in a team reduces the pressure on individuals, as the burden is divided among more people. This decrease of social forces on individuals results in less participation. In terms of SL, this is referred to as diffusion of responsibility. Individuals feel deindividualized when the size of the group increases, distancing themselves from their own individual performance and decreasing personal responsibility (Latané, 1981; Latané et al., 1979). In virtual collaboration, employees might apply less effort to achieve a goal in a team than when working alone, as responsibility is distributed among team members (Jassawalla et al., 2009; Karau & Williams, 1993; Latané et al., 1979). Furthermore, recent studies explained that employees might become dependent and gain a false sense of security (Mirbabaie et al., 2021b). Therefore, in the context of this study, it can be assumed that individuals who show stronger SL tendencies are more likely to hand over responsibility to VAs. We derived the following hypothesis:
H2
There is a positive correlation between SL tendencies in virtual collaboration with VAs and responsibility attribution to the VAs.
Conscientiousness describes an individual’s organization and self-discipline (Berry et al., 2007), while agreeableness refers to the degree of being helpful and cooperative (Kurylo & Stevenson, 2011). Individuals with a high level of conscientiousness and agreeableness are likely to maintain cooperation in group settings and preserve self-discipline (Liao & Chuang, 2004; Tangney et al., 2004). When individuals in teams notice the reduced efforts of other team members, they tend to increase their own efforts to compensate for the slacking of others and still achieve proper results (Williams & Karau, 1991). Both personality traits have been investigated in the context of collaborative technology and their association with teamwork and task accomplishment (Devaraj et al., 2008; Mouakket & Sun, 2020; Soltani et al., 2013). Earlier studies identified that conscientiousness and agreeableness negatively influence SL tendencies in human–human as well as in technology-supported teams (Bolin & Neuman, 2006; Hoon & Tan, 2008; Morgeson et al., 2005; Schippers, 2014). A recent study by Hou et al. (2021) revealed that agreeableness and conscientiousness do not have a direct impact on SL in online brand communities. Nevertheless, personality traits indirectly impact SL behavior due to a high level of social distance in technological surroundings, resulting in dehumanization (Hou et al., 2021). The mere question of whether SL occurs in virtual collaboration with VAs cannot be answered in isolation. Personality traits are repeatedly mentioned as relevant aspects in extant literature (e.g., Hoon & Tan, 2008; Schippers, 2014); however, their impact on SL in virtual teams remains indistinct (Byun et al., 2020). We thus aim to identify parallels with human teams and claim that the conscientiousness and agreeableness of individuals in teams lower SL tendencies when collaborating with a VA. We derived the following hypotheses:
H3a
There is a negative correlation between conscientiousness and SL tendencies in virtual collaboration with VA.
H3b
There is a negative correlation between agreeableness and SL tendencies in virtual collaboration with VA.
Besides conscientiousness and agreeableness, research has further shown that individuals who enjoy and engage in effortful cognitive tasks are less likely to reduce their efforts in group tasks (Smith et al., 2001). Need for cognition refers to “an individual's tendency to engage in and enjoy effortful cognitive endeavors” (Cacioppo et al., 1996, p. 197) and “a need to structure relevant situations in meaningful, integrated ways” (Cohen et al., 1955, p. 291). The broad familiarity of this construct serves as an additional factor for comparing virtual collaboration with a VA to human teams. We proposed the following hypothesis:
H4
There is a negative correlation between need for cognition and SL tendencies in virtual collaboration with VA.
Systems such as VAs are ultimately supposed to relieve human workers and free them from unwanted tasks to release cognitive resources (Dang et al., 2020) that can be used for more creative and strategic duties. Brachten et al. (2020) indicated that reducing the cognitive load of employees, i.e., the amount of working memory used while processing a task (Sweller, 1988), is achievable by using a VA in virtual collaboration. However, this might also lead to individuals no longer questioning the outcomes of a task (Mirbabaie et al., 2021b). We argue that relieving employees too much, or taking over too many or too extensive tasks, might promote SL tendencies (Bluhm, 2009). We therefore hypothesized:
H5
There is a negative correlation between cognitive load and SL tendencies in virtual collaboration with VA.
Finally, research has shown that the more team members rate a task as engaging and relevant, the less likely SL tendencies are to occur in collaborative settings (Karau & Williams, 1993). Accordingly, a team member who believes they cannot contribute to the completion of the task is more likely to show SL tendencies. Current studies also indicated that individuals might no longer identify with a certain task and that cognitive skills might become superfluous (Mirbabaie et al., 2021b). To consider these aspects in the context of our study, we assessed how participants rate their individual knowledge regarding the task they are being assisted with. Building on earlier research, we assume that individuals with poor knowledge are more likely to show SL tendencies in completing the task. We derived our final hypothesis (cf. Table 1):
H6
There is a negative correlation between knowledge about the task and SL tendencies in virtual collaboration with VA.
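Since all six hypotheses are correlational, the statistical test behind them can be illustrated with a short sketch. This is not the authors' analysis code; the variable names and scores below are hypothetical stand-ins for the questionnaire measures.

```python
# Illustrative sketch: testing a correlational hypothesis such as H1
# with a Pearson correlation. Data and variable names are hypothetical.
from scipy import stats

# Hypothetical per-participant questionnaire scores (e.g., mean Likert ratings)
general_sl = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.8, 2.9]
sl_with_va = [2.3, 3.6, 2.5, 4.2, 3.0, 2.7, 3.9, 3.1]

r, p = stats.pearsonr(general_sl, sl_with_va)
print(f"r = {r:.2f}, p = {p:.3f}")  # a positive r with p < .05 would support H1
```

For the negatively phrased hypotheses (H3a through H6), the same test applies; support would correspond to a significantly negative r.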
4 Method
4.1 Participants
Since this study coincided with the COVID-19 pandemic, we decided to conduct an online experiment rather than using a laboratory setting to examine SL and responsibility attribution in virtual collaboration with VAs. The participants were recruited via SurveyCircle, enabling researchers to identify suitable subjects while guaranteeing a diversified sample (SurveyCircle, 2021). The platform allows students and researchers in German-speaking countries to collect points by participating in studies, which, in turn, can be passed on to participants in their own studies. SurveyCircle further ensures that surveys are carried out correctly; for example, subjects who fall significantly short of the expected processing time are penalized or even banned. Participants are thus more likely to provide complete answers instead of performing the study hastily or incorrectly. Nevertheless, we screened the data manually for anomalies and suspicious responses (i.e., very short processing times and similar or identical answers) but did not need to exclude any participants. In terms of sample size, Onwuegbuzie and Leech (2005) advise including at least 64 participants for one-tailed, and 82 participants for two-tailed hypotheses. To determine the necessary sample size, we conducted a power analysis using G*Power (Version 3.1.9.6), suggesting a minimum sample size of 84. In total, 102 people aged 19 to 57 years (M = 26.67, SD = 6.59) participated in our study, 68 of them female and 34 male, providing sufficient statistical power. Most of the respondents already had a university degree (60.8%) or the highest school-leaving qualification (38.2%). The vast majority (81.4%) indicated that they were still studying, while the minority were already employed (18.6%). Since the study is intended to only consider individuals who have already worked professionally, individuals who indicated that they were still studying were also asked to state whether they were employed part-time.
Participants who did not fulfil this requirement were not included in the analysis. Students are generally considered suitable subjects as, compared to experienced professionals, they may be less biased due to less work experience (Brachten et al., 2020). Additionally, they might have fewer issues operating VAs than an average adult since younger individuals are more familiar with modern communication technology (Brachten et al., 2020).
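The minimal sample size reported above can be approximated without G*Power. The sketch below performs an a-priori power analysis for a two-tailed test of a Pearson correlation via the Fisher z approximation; the assumed effect size (r = .30, a conventional medium effect), α = .05 and power = .80 are our assumptions, as the paper does not report the parameters entered into G*Power.

```python
from math import ceil, log
from scipy.stats import norm

def correlation_sample_size(r, alpha=0.05, power=0.80):
    """A-priori sample size for a two-tailed test of a Pearson r,
    using the Fisher z approximation (similar to G*Power's computation)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for two-tailed alpha
    z_beta = norm.ppf(power)            # quantile for the desired power
    c = 0.5 * log((1 + r) / (1 - r))    # Fisher z-transform of r
    return ceil(((z_alpha + z_beta) / c) ** 2 + 3)

# Assumed medium effect r = .30 (not reported in the paper):
print(correlation_sample_size(0.30))  # → 85, close to G*Power's exact 84
```

The small discrepancy to G*Power's 84 stems from G*Power using an exact test for the bivariate normal model rather than the Fisher approximation.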
4.2 Materials
4.2.1 Virtual Assistant
We developed a VA using Google's cloud-based platform Dialogflow (https://dialogflow.com). The platform allows the development of VAs without extensive programming knowledge and includes several features for providing natural and rich conversational interfaces (Canonico & De Russis, 2018). VAs developed using Dialogflow can be integrated into various applications or simply embedded as an iframe on websites. We embedded the VA into LimeSurvey’s online platform so that subjects were able to interact using a question–answer component (Lamontagne et al., 2014; Morrissey & Kirakowski, 2013) without switching applications. We used a text-based interface to maintain a simplified interaction with the VA (Araujo, 2018) and avoid overwhelming participants with multiple input possibilities. The VA was designed to be simple rather than sophisticated, as the development of a highly complex VA using numerous social cues would have been beyond the scope of this research. Thereby, the VA simulates intelligent behavior by selecting a prefabricated answer based on distinct keywords used in the participant’s input (detailed examples of prefabricated answers and their corresponding keywords can be found in the "Appendix"). Initially, the VA welcomes participants and explains its functionalities. Since a VA should also be able to participate in social discussions (Brachten et al., 2020; Feine et al., 2019), it is able to engage in a whole range of different small talk topics, e.g., questions about mood and feelings. For example, when asking the VA “How old are you?”, it answers “I am 0 years old. I was developed only this year.”. When telling the VA “You are really no help”, it replies with “Hey, give me a chance!”. Finally, to make participants aware of interacting with a VA, we displayed an icon of a bot and named our VA “ConsultantBot” (cf. Fig. 2).
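The keyword-based answer selection described above can be illustrated with a minimal sketch. The keyword sets, the fallback message and the `reply` function are illustrative stand-ins, not the actual Dialogflow intents used in the study; only the two quoted small-talk answers are taken from the text.

```python
# Minimal sketch of the described VA logic: a prefabricated answer is
# selected based on distinct keywords in the participant's input.
RESPONSES = [
    ({"old", "age"}, "I am 0 years old. I was developed only this year."),
    ({"no help", "useless"}, "Hey, give me a chance!"),
    ({"recommend", "buy"}, "Based on the review, I would recommend this car."),
]
# Hypothetical fallback for inputs matching no keyword:
FALLBACK = "Sorry, I did not understand that. Could you rephrase?"

def reply(user_input: str) -> str:
    text = user_input.lower()
    for keywords, answer in RESPONSES:
        # First rule whose keyword occurs in the input wins.
        if any(k in text for k in keywords):
            return answer
    return FALLBACK

print(reply("How old are you?"))  # → "I am 0 years old. I was developed only this year."
```

In Dialogflow itself this mapping corresponds to intents with training phrases and static responses; the sketch merely mirrors the observable behavior.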
A pre-test was conducted to uncover possible ambiguities or problems during the implementation and to provide participants with a realistic but manageable task in the subsequent main experiment. We compared a brainstorming task with a classification task, as both were used in earlier studies and are commonly performed in organizations (Alnuaimi et al., 2010; Brachten et al., 2020; Mirbabaie et al., 2020; Schmidt et al., 2020). The tasks were chosen to be challenging but not too difficult, as SL is more likely to occur when tasks are perceived as simple and the invested effort is not measured (Karau & Williams, 1993; Latané et al., 1979). As we were not interested in how fast the tasks were solved but whether SL and the ceding of responsibility were observable, the actual outcome of each task did not have any effect on our analysis. There was thus no particular benefit for successfully completing the task. The sample for our pre-test of 10 participants (5 female, 5 male) consisted of randomly selected individuals, of whom 6 were full-time employed, 3 were still studying and 1 did not provide any information.
The brainstorming task was adapted from Alnuaimi et al. (2010), who examined SL in technology-supported teams using group support system software. The authors randomly assigned 140 students to 32 teams which needed to generate as many ideas as possible to improve a company’s image. Similar to that study, participants in our brainstorming task were asked to generate ideas to increase the sustainability of a fictional company. The VA could be asked for additional ideas and was thus intended to inspire participants by providing suggestions. The classification task was based on recent studies by Brachten et al. (2020) as well as Mirbabaie et al. (2020). In those studies, participants were asked to apply the critical path method to arrange certain activities according to their dependencies for calculating an overall duration, whereby a VA provided subjects with guidance on solving the task. In our classification task, participants were asked to read car reviews and rate whether they amounted to a recommendation to buy. Accordingly, a VA was able to assist in evaluating the reviews. The verdict was not actually based on an intelligent analysis by the VA but on the assessments of a real person who was unaware of the intention of the study at the time of the evaluation. It should be noted that the VA’s assessment was not always correct but corresponded to the authentic assessment of the real person. The responses also varied in terms of certainty; the VA might have given confident and definite or uncertain and tentative estimates. In addition, the VA was able to provide answers to detailed questions, for instance, reassuring participants to stick with a decision even if the review contained opposing or deviating information.
Our analyses indicated higher interaction in the classification task. On average, participants interacted 41.13 s (SD = 34.08) with the VA and made 5.10 queries (SD = 2.08). Dialogflow’s usage protocol revealed that every participant used the VA in the classification task. In the brainstorming task, the interaction time was 19.80 s on average (SD = 18.66) with 2.10 queries (SD = 2.38). The usage protocol of Dialogflow showed that 4 of the 10 participants did not use the VA at all in this task. Furthermore, 3 other participants stated that they did use the VA, but only after they had already noted down their ideas. Overall, subjects were more likely to attribute a shared responsibility for the task in the classification task (M = 4.10, SD = 3.11) compared to the brainstorming task (M = 2.90, SD = 1.85). In total, 8 out of 10 participants evaluated the VA within the classification task as more helpful compared to the brainstorming task. The results also indicated higher interaction with the VA in the classification task even where it was not inevitably needed to complete the assignment (i.e., queries about details of the assessment). Since the VA needed to be perceived as supportive by its users for SL tendencies to be observable, and less interaction with the VA in the brainstorming task might have led to less meaningful insights, we chose the classification task for the subsequent main study. Detailed information on the analyses can be found in the "Appendix".
4.2.2 Social Loafing and Responsibility Attribution
To measure SL tendencies, a distinction was made between general SL tendencies and SL in virtual collaboration with the VA. General SL tendencies were measured using four items adapted from Schippers (2014), who examined SL in student group work. A higher score on this scale indicates stronger tendencies towards SL. The items were rated on a five-point Likert scale (disagree completely to agree completely) and originally achieved a high reliability (α = 0.85). One example item is "I put less effort into the task than other members of my team". In our study, the subscale of the modified general SL behavior had a high reliability (α = 0.87).
As, to our knowledge, no research on SL in virtual collaboration with VAs has been conducted yet, a new scale for that purpose was created and validated in advance. Building on Schippers (2014), the scale measured one's own work behavior within a team context and concrete components of the work. We narrowed down our validated 8 questions to 6 items, achieving a satisfying reliability (Cronbach’s α = 0.77). The items are: "I read the evaluations completely and attentively", "I only skimmed the evaluations", "I adopted the evaluations of the VA", "The VA enabled me to complete the task more quickly", "The VA facilitated the processing of the task" and "I first asked for the VA’s evaluation before I dealt with the evaluation myself". In accordance with the SL scale adapted from Schippers (2014), we measured the items on a five-point Likert scale (disagree completely to agree completely). It is reasonable to query general SL tendencies as well as SL tendencies in the task via self-rating, since Conway and Lance (2010) state that correlations collected between different test methods tend to be weaker and less accurate. Karau and Williams (1993) suggested that self-ratings of SL affect self-reported effect sizes only when a cover story was used to inform participants that the study was about effort versus performance or when maximizing rather than optimizing tasks were used (Schippers, 2014). Neither is the case in this study. Our newly developed scale reached high reliability in the main experiment (α = 0.86).
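Cronbach's α, reported for each scale in this section, relates the number of items, the individual item variances and the variance of the respondents' sum scores. A minimal sketch with fabricated five-point Likert data (not the study's data):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    k = scores.shape[1]
    sum_item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of the sum scores
    return k / (k - 1) * (1 - sum_item_var / total_var)

# Fabricated responses for illustration: 6 items, 6 respondents.
items = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 5, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 1, 1],
    [4, 4, 5, 4, 4, 4],
], dtype=float)
print(round(cronbach_alpha(items), 2))  # → 0.98 (highly consistent items)
```

Perfectly redundant items yield α = 1; uncorrelated items push α towards 0, which is why the reported values between 0.64 and 0.87 are interpreted as medium to high internal consistency.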
Finally, we assessed the participants’ perceptions of attributing responsibility to the VA. It is common to measure responsibility with self-ratings, as participants can easily report their own perceived responsibility. The scale chosen here differs from common scales, such as the felt responsibility scale (Pearce & Gregersen, 1991), in that participants in our experiment were not only asked to report their own perceived responsibility, but decided whether the VA or they themselves were more responsible for the outcome. We created a five-point Likert scale ranging from “I was fully responsible” to “the VA was fully responsible”.
4.2.3 Agreeableness, Conscientiousness and Need for Cognition
For measuring personality traits, we adapted the Big Five scales for agreeableness and conscientiousness as well as the short version of the need for cognition scale (Beißert et al., 2014). The Big Five is a model of personality psychology consisting of five main dimensions: openness to experience (open-mindedness), conscientiousness (perfectionism), extraversion (sociability), agreeableness (consideration, cooperativeness, empathy), and neuroticism (emotional lability and vulnerability). The Big Five model has been substantiated by many studies and is internationally regarded as the universal standard model in personality research (John et al., 2008). Agreeableness and conscientiousness were each measured with twelve items using a five-point Likert scale (disagree completely to agree completely). Example items for agreeableness are "I am interested in people" and "I make people feel at ease". Conscientiousness included, for instance, "I am always prepared" and "I often forget to put things back in their proper place". The items proved a high reliability in our study both for agreeableness (α = 0.79) and conscientiousness (α = 0.85).
Cacioppo et al. (1984) originally developed a standardized questionnaire comprising 45 items to assess need for cognition, which was further shortened by several researchers (e.g., Bless et al., 1994). We used the short version of the need for cognition scale consisting of four items (Beißert et al., 2014): "I would prefer more complicated problems to simple problems", "First and foremost I think because I have to", "It is enough for me simply to know the answer without understanding the reasons for answering a problem" and "I like my life to be full of tricky tasks that I have to solve". The items were measured using a five-point Likert scale (disagree completely to agree completely) and achieved a medium reliability (α = 0.64).
4.2.4 Cognitive Load
The NASA task load index (NASA-TLX) was used to measure the cognitive load of the participants. Although analyzing cognitive load in virtual collaboration with a VA was not the main objective of this study, it provides profound indications of whether participants are relieved when collaborating with a VA. The NASA-TLX scale was developed by the American National Aeronautics and Space Administration (Hart & Staveland, 1988) and determines the perceived workload of a task (Galy et al., 2012). It has successfully been applied in several experimental settings (Cao et al., 2009; Noyes & Bruneau, 2007; Rubio et al., 2004) and even in the context of VAs (Brachten et al., 2020; Mirbabaie et al., 2020), achieving high reliability (α = 0.89). The scale includes six subscales, (1) mental demands, (2) physical demands, (3) temporal demands, (4) performance, (5) effort and (6) frustration, which were measured on a five-point Likert scale (very low to very high). Mental demand assesses cognitive efforts and physical demand manual efforts. Temporal demand determines the perceived time pressure when executing a task. Performance captures the individual perception of accomplishment. Effort estimates the expense which had to be undertaken to reach a result, and frustration captures the level of discouragement when solving a task. In this study, the scale reached a medium reliability (α = 0.71).
4.3 Procedure
The participants were given a brief introduction and informed about data privacy protection. However, we deliberately did not explain the specific research context to avoid any bias. We reminded the participants to imagine a real-world working scenario and answer the questions related to their current teamwork.
Initially, the participants were asked to provide socio-demographic data such as age, gender and educational background as well as their current professional activity and industry. Afterwards, the participants were told to imagine being employees of a large company who were entrusted with the task of determining the new fleet of company cars from the small car segment. The participants were requested to read various test reports on different car models and needed to judge whether they would recommend purchasing a specific car or not. In the test reports, the evaluation was rated on a six-point scale (1 = very good; 2 = good; 3 = satisfactory; 4 = sufficient; 5 = poor; 6 = insufficient). However, participants were told that they were not alone in completing the task but were assisted by a VA. Before rating the cars, we explained how to interact with the VA, that the communication is like talking to a human being, and that the VA achieves results equal to those of a real individual. Furthermore, subjects were able to perform an exemplary interaction to become familiar with the VA. After the participants confirmed that they had understood how to interact with the VA, they needed to perform the actual task. We outlined the brand, name, and model of the cars as well as the text-based reports and asked whether they would recommend buying the car or not. In total, the participants needed to provide five recommendations assisted by the VA. After the task, we collected information on general SL tendencies, SL in virtual collaboration with the VA, attribution of responsibility, personality traits (agreeableness, conscientiousness and need for cognition), and finally cognitive load. The online experiment concluded with a debriefing including an explanation of the purpose of this study. The major steps of our experiment are visualized in Fig. 3.
5 Results
To assess whether there was any interaction with the VA, serving as a foundation for validating our derived hypotheses, we initially determined the interaction time with the VA. Since this measure might not be robust enough to interpret the actual utilization, we used Dialogflow’s usage protocol to assess the number of interactions with the VA. Overall, we identified an average interaction time of 61.60 s with 5.85 queries for every car the participants needed to evaluate. Table 2 outlines the usage of the VA within the experiment.
In the following, we present the results of the observed scales (social loafing, responsibility attribution, agreeableness, conscientiousness, need for cognition and cognitive load) including their reliability and validity measures (Cronbach & Meehl, 1955; O’Leary et al., 2017; Peters, 2018). We used the Pearson correlation coefficient as a reliable and widely accepted statistical metric allowing scholars to measure the strength of a linear relationship between two normally distributed variables (Schober et al., 2018; Zhou et al., 2016). Table 3 summarizes the Pearson correlations of the specific constructs (Field, 2013). The analyses were calculated using SPSS Statistics (Version 25).
In order to examine the extent to which a positive relationship between general SL tendencies and SL in collaboration with a VA can be observed, a correlation was calculated between the general SL scale and the scale for SL in collaboration with a VA. According to Field (2013), a Pearson correlation is the most suitable way to investigate a relationship between two metric variables. The calculation showed a moderately strong positive correlation (r = 0.344, p < 0.001) (Cohen et al., 1955). The effect was further examined using a linear regression to draw conclusions about a predictive relationship (Field, 2013), which was significant with F(1, 100) = 13.381, p < 0.001, R2 = 0.118. This confirmed hypothesis H1, that there is a positive correlation between general SL tendencies and SL in virtual collaboration with VAs. Furthermore, the correlation between SL tendencies in virtual collaboration with VAs and responsibility attribution to the VA (Field, 2013) indicated a significantly strong positive relationship (r = 0.710, p < 0.001) (Cohen et al., 1955), confirming H2.
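The analysis pipeline used here, a Pearson correlation followed by a simple linear regression, can be reproduced in principle as follows. The data are simulated for illustration only, since the study's raw data are not available; note that in a simple regression, R2 equals the squared Pearson r.

```python
import numpy as np
from scipy import stats

# Fabricated illustration: simulate 102 participants' general SL scores
# and SL-with-VA scores with a built-in positive relationship.
rng = np.random.default_rng(1)
general_sl = rng.uniform(1, 5, size=102)
sl_with_va = 0.4 * general_sl + rng.normal(0, 0.8, size=102)

r, p = stats.pearsonr(general_sl, sl_with_va)   # correlation (H1-style test)
reg = stats.linregress(general_sl, sl_with_va)  # simple linear regression
print(f"r = {r:.3f}, p = {p:.4f}, R^2 = {reg.rvalue ** 2:.3f}")
```

Because the regression has a single predictor, `reg.rvalue` coincides with the Pearson r, so reporting both (as the paper does with r = 0.344 and R2 = 0.118) conveys the same underlying effect in two forms.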
To check whether certain personality traits influence SL tendencies, we analyzed agreeableness and conscientiousness as well as need for cognition. We assessed correlations between these characteristics and general SL as well as SL in virtual collaboration with VAs. Agreeableness and general SL showed a weak negative correlation (r = -0.237, p < 0.001). Conscientiousness and general SL showed a strong negative correlation (r = -0.496, p < 0.001), and need for cognition correlated moderately strongly with general SL (r = -0.406, p < 0.001). However, the correlations between SL in virtual collaboration with VAs and agreeableness (r = 0.079, p = 0.429), conscientiousness (r = -0.125, p = 0.211) and need for cognition (r = -0.187, p = 0.060) were not significant, thus H3a, H3b and H4 were rejected.
To test whether virtual collaboration with VAs is negatively related to cognitive load, we again calculated a correlation using the NASA-TLX. The result indicated a weak negative correlation (r = -0.228, p < 0.05). Therefore, H5 could be confirmed. Table 4 outlines the correlations between SL in virtual collaboration with a VA and the items of the NASA-TLX. We finally hypothesized a correlation between prior knowledge of a task and SL in virtual collaboration with a VA. The results suggested a just significant, weak negative correlation (r = -0.196, p = 0.049), confirming H6. A summary of supported and rejected hypotheses is depicted in Table 5.
6 Discussion
6.1 Key Findings
In this study, we examined whether SL tendencies are observable in virtual collaboration with VAs. The first key finding of this paper is that SL tendencies are not limited to physical and virtual collaboration with other humans, but are also detectable in virtual collaboration with VAs (H1). The results revealed that individuals who show general SL tendencies also tend to reduce their efforts when collaborating with a VA (as measured with items adapted from Schippers, 2014). Participants do not seem to have been indifferent towards the task, as completing the study would have been possible without reading and evaluating the test reports and even without collaborating with the VA. Instead, participants strove to complete the task in a reasonable manner but seemed to rely on the VA, as indicated by the average interaction time of 61.60 s with 5.85 queries per car. Since a successful execution is possible without exploiting one's own full potential (Albanese & van Fleet, 1985), this corresponds to the actual understanding of the phenomenon of SL (Kravitz & Martin, 1986) as examined in early research on human group settings (e.g., Albanese & van Fleet, 1985; Latané et al., 1979; Williams & Karau, 1991). Thus, similar to the collaboration in physical and virtual teams consisting of human actors, individuals collaborating virtually with VAs are likely to take the opportunity to loaf and might show less effort compared to working alone (Karau & Williams, 1993; Latané et al., 1979). This may be explained by the fact that VAs are increasingly perceived as legitimate team members equal to human colleagues (Seeber et al., 2018). This result is in line with earlier research explaining that VAs are becoming more human-like (Feine et al., 2019; Hussain et al., 2019; Pfeuffer et al., 2019; Porra et al., 2020) and that the unique capabilities of humans are increasingly difficult to differentiate from VAs’ characteristics (Cha et al., 2020).
This raises several questions for IS scholars: In which tasks jointly solved in virtual teams does social loafing occur? How should VAs be designed in order to avoid or diminish SL tendencies in virtual collaboration? What effect does SL have on the team spirit of virtual teams?
The second key finding of this study is that the responsibility for solving a task in virtual collaboration is likely to be attributed to the VA (H2). Thus, responsibility in virtual collaboration seems to be distributed regardless of whether collaborators are humans or VAs. In our experiment, individuals might have been distancing themselves from their own individual performance or have perceived their contribution to be unnecessary, since the VA might have been perceived as a collaborator capable of solving the task almost alone (Karau & Williams, 1993; Latané et al., 1979). The level of interaction with the VA (61.60 s with 5.85 queries per car on average) further indicates that the participants valued the VA’s evaluation of the test reports. Our results have also shown that SL correlates negatively with cognitive load (H5). These results must be interpreted from an interference as well as a constructivist perspective (Moreno et al., 2001). From an interference perspective, we interpret this to mean that members of a virtual team might fail to identify with overall team outcomes and joint accomplishments. This could lead to virtual teams failing to gain a mutual understanding of shared objectives and deteriorate the spirit and cohesiveness of teams (George, 1992; Taylor et al., 1983). Mirbabaie et al. (2021b) argued that this might result in “employees no longer identifying themselves with a decision and not questioning or taking responsibility for it” (p. 11) and further explained that essential information might be forgotten, and cognitive skills become expendable. Interpreting the findings from a constructivist standpoint, VAs in virtual collaboration are able to assist in the execution of tasks, which might lead to more efficient and effective results (Seeber et al., 2020a, 2020b, 2020c; Waizenegger et al., 2020).
This has already been confirmed by existing research (e.g., Brachten et al., 2020; Moreno et al., 2001) demonstrating that individuals collaborating with VAs in virtual teams outperform humans in solving a task. Furthermore, Mirbabaie et al. (2020) argued that the support provided by a VA is equal to that of a human being. Interpreting this within the context of SL, VAs in virtual teams are perceived as equal and human-like actors, and findings of existing research on human teams are apparently transferable to virtual collaboration with VAs (Mirbabaie et al., 2021b).
There was no significant correlation between personality traits (conscientiousness, agreeableness and need for cognition) and SL tendencies in virtual collaboration with VAs (H3a, H3b, H4). This is in line with Hou et al. (2021), who identified that conscientiousness and agreeableness are not directly related to SL. However, it is contrary to Schippers (2014), who showed that conscientiousness and agreeableness impact SL tendencies in teams. Furthermore, Smith et al. (2001) indicated that individuals who are likely to enjoy elaborating cognitive tasks do not tend to reduce efforts in group tasks. Since our online experiment was constructed such that individuals were supported exclusively by a VA and not by other team members, there might not have been any correlation with personality traits comparable to conventional teamwork, and participants did not need to compensate for the slacking of others (Williams & Karau, 1991). In addition, our results uncovered a negative correlation between knowledge about the performed task and SL tendencies in virtual collaboration with VAs (H6). This might even increase the effect, as individuals who do not perceive the task as challenging show SL tendencies (Karau & Williams, 1993). Thus, individuals are less likely to compensate for the slacking of others since the task is perceived as very basic and/or less important.
6.2 Implications: Smart Loafing
Our study revealed that the increasing collaboration with VAs (Changizi & Lanz, 2019; de Vreede & Briggs, 2005) is creating virtual environments (Maedche et al., 2019; Mirbabaie et al., 2020; Seeber et al. 2020a) in which it may soon no longer matter whether individuals are collaborating with other humans or with human-like machines. Research on SL is thus not only applicable to physical teams, virtual teams, or technology-supported teams (Karau & Williams, 1993; Robert, 2020; Suleiman & Watson, 2008), but also to virtual collaboration with VAs.
In theory and practice, there is a rather negative attitude towards SL, especially since there is a strong connection to ceding responsibility (e.g., Dennis et al., 2005; Suleiman & Watson, 2008). Transferring these negative aspects to virtual collaboration with VAs, errors or mistakes of VAs might not be recognized, the quality of work results might deteriorate, and reduced cognitive load might decrease team performance (Mirbabaie et al., 2021b). However, SL in virtual collaboration with VAs somewhat differs from traditional SL. In contrast to human teams, when the individual effort is reduced, no other human team member needs to compensate for the emerging slacking behavior. The lower effort is compensated by the VA, possibly explaining the missing correlation between SL and personality traits in our study. Furthermore, lower cognitive load and the presence of SL in virtual collaboration with VAs reflect the main purpose of VAs: relieving individuals and assisting in the execution of tasks (Brachten et al., 2020; Mirbabaie et al., 2020; Norman, 2017; Seeber et al., 2020a, 2020b, 2020c). Even though we motivated this study by elaborating that SL is unfavorable, responsibility attribution might even be beneficial since technologies enabled by AI outperform humans in certain domains, for instance, tackling repetitive tasks and interpreting complex interdependencies (Dellermann et al., 2019a, 2019b). We therefore derive, in the context of virtual collaboration with VAs in organizations, the term smart loafing and define it as “the reduction of effort in human-VA collaboration to maintain cognitive resources and enhance efficiency in work”.
Smart loafing certainly has implications for organizations. Employees, and especially knowledge workers, tend to avoid repetitive tasks and rather seek ways to automate inconvenient procedures. Circumstances related to repetitive tasks or certain overhead negatively impact the work-related wellbeing of individuals and, for instance, cause stress, poor concentration or the feeling of exploitation (Pace et al., 2019). Furthermore, executing tasks that are cognitively appealing or challenging conveys a feeling of satisfaction and enhances the perception of one’s unique value. Collaborating with VAs might even augment the cognition, intelligence, and capabilities of individuals (Siddike et al., 2018). Moreover, the reduction of effort in human-VA collaboration unleashes cognitive resources (Brachten et al., 2020) which, in turn, can be used for more meaningful purposes, thus improving overall organizational performance and revenue (Frick et al., 2019). Therefore, smart loafing comes in handy in organizations if individuals’ slacking does not outweigh the advantages or result in errors or faulty decisions. Nevertheless, VAs certainly possess the ability to recognize and prevent or counteract emerging negative behavior in virtual collaboration with humans. VAs might, for example, use certain social cues to prevent humans from blindly relying on certain outcomes while promoting collaborative decision-making and enhancing accountability for tasks (Mirbabaie et al., 2021b).
6.3 Limitations and Further Research
Since this study took a first foray into the identification of SL in virtual collaboration with VAs, we deliberately kept our experimental setting as well as the VA and the task quite simple. Participants were assisted by a VA in the execution of one task, and the task itself was not jointly solved within a larger team. Furthermore, we used a text-based VA with a limited level of anthropomorphism. Interacting with a VA using voice commands and integrating certain social cues might change the perceived usefulness of the assistance and related effects (Edwards et al., 2019; Feine et al., 2019). Further studies may develop more complex settings and survey participants in experimental surroundings. We further recommend extending our findings to more complex and cognitively challenging tasks to determine SL tendencies in mixed virtual teams with multiple humans and VAs. We also consider the investigation of the impact on the individual’s cognitive load to be highly relevant. On the upside, using VAs in virtual collaboration might free humans from unwanted duties and enable them to focus on more relevant tasks (Brachten et al., 2020). On the downside, individuals might blindly rely on results derived by VAs and not take responsibility for tasks, which might lead to inferior results (Mirbabaie et al., 2021b). We also invite scholars to analyze social cues in virtual collaboration with VAs and how they relate to SL behavior. It might be appealing to adapt existing findings from management science and equip a VA with a specific set of social cues that are beneficial to enhance the intrinsic motivation of human collaborators and reduce SL tendencies (Khan et al., 2020).
As we were interested in the existence of SL tendencies in virtual collaboration with VAs, we initially focused on a single cultural background (Central European). Future research should analyze a diversified and larger sample as well as include certain control variables, for instance, the number of questions asked by individuals, to reveal yet unrecognized effects. In addition, the information provided by the participants is based on self-reported data. Karau and Williams (1993) suggested that self-ratings of SL affect self-reported effect sizes only when a cover story was used to inform participants that the study was about effort versus performance or when maximizing rather than optimizing tasks were used (Schippers, 2014). Therefore, it would be interesting to identify differences from objective evidence. Especially data from experimental group settings seem fitting to shed additional light on the phenomenon of SL in virtual collaboration with VAs. Moreover, questioning a participant about the given task that has been solved assisted by a VA could reveal the actual engagement in the task. Still, the results of this first foray offer empirical insights for gaining a deeper understanding of virtual collaboration with VAs and indicate that knowledge from human-to-human collaboration in terms of SL is transferable.
7 Conclusion
This study provides several insights regarding SL when collaborating with VAs. First, our results showed that SL occurs in virtual collaboration with VAs. Second, this study highlights that the responsibility for solving a task in virtual collaboration is likely to be attributed to a VA. Third, SL in virtual collaboration with VAs somewhat differs from traditional SL. It can be concluded that previous insights on teamwork are transferable to virtual teams and that existing assumptions remain valid (Dennis et al., 2005; Mirbabaie et al., 2020). Furthermore, the increasing collaboration with VAs in virtual environments (Maedche et al., 2019; Seeber et al., 2020a, 2020b, 2020c) is blurring the boundaries between human–human and human-VA teams (Seeber et al., 2018). However, reducing individual efforts might not impact team performance and cohesiveness, as lower efforts of human collaborators are compensated by VAs.
This research contributes to theory by suggesting the new construct of smart loafing, describing the purposeful reduction of individual effort in human-VA collaboration to save cognitive resources and enhance efficiency at work. Our study provides evidence that SL is observable in virtual collaboration with VAs but does not necessarily result in disadvantages for team performance. Thus, this construct might be better suited to describe SL tendencies in virtual collaboration with VAs. However, the construct applies to organizational work settings and must be interpreted with caution in other contexts. Smart loafing in, for instance, learning environments might be hindering and not "smart".
On a practical level, this study indicates that human-VA teams evoke similar effects as human teams. Practitioners should be aware that the application of VAs in virtual teams might involve certain drawbacks, especially when VAs cause SL behavior and a lack of responsibility attribution. However, this does not inevitably have to be considered negative, but certain countermeasures are advisable to prevent emerging disadvantages. For example, analyzing and disclosing the individual contributions of human team members might increase individual performance and avoid slacking behavior. Moreover, VAs might even be developed to detect SL tendencies of humans, or to highlight critical decisions and integrate an appropriate approval process.
References
Albanese, R., & van Fleet, D. D. (1985). Rational behavior in groups: The free-riding tendency. The Academy of Management Review, 10(2), 244. https://doi.org/10.2307/257966
Alnuaimi, O. A., Robert, L. P., & Maruping, L. M. (2010). Team size, dispersion, and social loafing in technology-supported teams: A perspective on the theory of moral disengagement. Journal of Management Information Systems, 27(1), 203–230. https://doi.org/10.2753/MIS0742-1222270109
Alter, S. (2020). Making sense of smartness in the context of smart devices and smart systems. Information Systems Frontiers, 22(2), 381–393. https://doi.org/10.1007/s10796-019-09919-9
Andres, H. P., & Shipps, B. P. (2019). Team learning in technology-mediated distributed teams. Journal of Information Systems Education, 21(2), 10.
Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051
Bajwa, D., Lewis, L., Pervan, G., Lai, V., Munkvold, B., & Schwabe, G. (2007). Organizational assimilation of collaborative information technologies: Global comparisons. In 2007 40th Annual Hawaii International Conference on System Sciences (HICSS'07) (p. 41). Waikoloa, HI, USA: IEEE. https://doi.org/10.1109/HICSS.2007.415
Bednar, P. M., & Welch, C. (2020). Socio-technical perspectives on smart working: Creating meaningful and sustainable systems. Information Systems Frontiers, 22(2), 281–298. https://doi.org/10.1007/s10796-019-09921-1
Beer, M., Slack, F., & Armitt, G. (2005). Collaboration and teamwork: Immersion and presence in an online learning environment. Information Systems Frontiers, 7(1), 27–37. https://doi.org/10.1007/s10796-005-5336-9
Behera, R. K., Bala, P. K., & Ray, A. (2021). Cognitive Chatbot for personalised contextual customer service: Behind the scene and beyond the hype. Information Systems Frontiers. https://doi.org/10.1007/s10796-021-10168-y
Beißert, H., Köhler, M., Rempel, M., & Beierlein, C. (2014). Eine deutschsprachige Kurzskala zur Messung des Konstrukts Need for Cognition: Die Need for Cognition Kurzskala (NFC-K). GESIS-Working Paper, (32).
Benbya, H., & Leidner, D. E. (2018). How Allianz UK used an idea management platform to harness employee innovation. MIS Quarterly Executive, 17(2), 139–155.
Berg, M. M. (2015). NADIA: A simplified approach towards the development of natural dialogue systems. In C. Biemann, S. Handschuh, A. Freitas, F. Meziane, & E. Métais (Eds.), Natural language processing and information systems (1st ed., pp. 144–150). Springer, Berlin. https://doi.org/10.1007/978-3-319-19581-0_12
Berry, C. M., Ones, D. S., & Sackett, P. R. (2007). Interpersonal deviance, organizational deviance, and their common correlates: A review and meta-analysis. Journal of Applied Psychology, 92(2), 410–424. https://doi.org/10.1037/0021-9010.92.2.410
Bless, H., Wänke, M., Bohner, G., Fellhauer, R., & Schwarz, N. (1994). Need for Cognition : Eine Skala zur Erfassung von Freude und Engagement bei Denkaufgaben. Zeitschrift Für Sozialpsychologie, 25, 147–154.
Bluhm, D. J. (2009). Adaptive consequences of social loafing. Academy of Management Proceedings, 2009(1), 1–6. https://doi.org/10.5465/ambpp.2009.44256422
Bolin, A. U., & Neuman, G. A. (2006). Personality, process, and performance in interactive brainstorming groups. Journal of Business and Psychology, 20(4), 565–585. https://doi.org/10.1007/s10869-005-9000-7
Brachten, F., Brünker, F., Frick, N. R. J., Ross, B., & Stieglitz, S. (2020). On the ability of virtual agents to decrease cognitive load: An experimental study. Information Systems and e-Business Management. https://doi.org/10.1007/s10257-020-00471-7
Byun, G., Lee, S., Karau, S. J., & Dai, Y. (2020). Sustaining collaborative effort in work teams: Exchange ideology and employee social loafing. Sustainability, 12(15), 6241. https://doi.org/10.3390/su12156241
Cacioppo, J. T., Petty, R. E., Feinstein, J. A., & Jarvis, W. B. G. (1996). Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin, 119(2), 197–253. https://doi.org/10.1037/0033-2909.119.2.197
Cacioppo, J. T., Petty, R. E., & Kao, C. F. (1984). The efficient assessment of need for cognition. Journal of Personality Assessment, 48(3), 306–307. https://doi.org/10.1037/t06602-000
Canonico, M., & De Russis, L. (2018). A comparison and critique of natural language understanding tools. In Cloud Computing 2018: The ninth international conference on cloud computing, GRIDs, and Virtualization (pp. 110–115).
Cao, A., Chintamani, K. K., Pandya, A. K., & Ellis, R. D. (2009). NASA TLX: Software for assessing subjective mental workload. Behavior Research Methods, 41(1), 113–117. https://doi.org/10.3758/BRM.41.1.113
Cetindamar Kozanoglu, D., & Abedin, B. (2020). Understanding the role of employees in digital transformation: Conceptualization of digital literacy of employees as a multi-dimensional organizational affordance. Journal of Enterprise Information Management, ahead-of-print. https://doi.org/10.1108/JEIM-01-2020-0010
Cha, Y.-J., Baek, S., Ahn, G., Lee, H., Lee, B., Shin, J., & Jang, D. (2020). Compensating for the loss of human distinctiveness: The use of social creativity under Human-Machine comparisons. Computers in Human Behavior, 103, 80–90. https://doi.org/10.1016/j.chb.2019.08.027
Changizi, A., & Lanz, M. (2019). The comfort zone concept in a human-robot cooperative task. In S. Ratchev (Ed.), Precision assembly in the digital age (Vol. 530, pp. 82–91). Springer. https://doi.org/10.1007/978-3-030-05931-6_8
Chen, J.C.-C., & Cheng, C.-Y. (2018). Solving social loafing phenomenon through Lean-Kanban. Journal of Organizational Change Management, 31(5), 984–1000. https://doi.org/10.1108/JOCM-12-2016-0299
Chidambaram, L., & Tung, L. L. (2005). Is out of sight, out of mind? An empirical study of social loafing in technology-supported groups. Information Systems Research, 16(2), 149–168. https://doi.org/10.1287/isre.1050.0051
Cohen, A. R., Stotland, E., & Wolfe, D. M. (1955). An experimental investigation of need for cognition. The Journal of Abnormal and Social Psychology, 51(2), 291–294. https://doi.org/10.1037/h0042761
Conway, J. M., & Lance, C. E. (2010). What reviewers should expect from authors regarding common method bias in organizational research. Journal of Business and Psychology, 25(3), 325–334. https://doi.org/10.1007/s10869-010-9181-6
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302. https://doi.org/10.1037/h0040957
Cui, L., Huang, S., Wei, F., Tan, C., Duan, C., & Zhou, M. (2017). SuperAgent: A customer service chatbot for e-commerce websites. In Proceedings of ACL 2017, system demonstrations (pp. 97–102). Association for computational linguistics. https://doi.org/10.18653/v1/P17-4017
Dang, Y. (Mandy), Zhang, Y. (Gavin), Brown, S. A., & Chen, H. (2020). Examining the impacts of mental workload and task-technology fit on user acceptance of the social media search system. Information Systems Frontiers, 22(3), 697–718. https://doi.org/10.1007/s10796-018-9879-y
de Barcelos Silva, A., Gomes, M. M., da Costa, C. A., da Rosa Righi, R., Barbosa, J. L. V., Pessin, G., et al. (2020). Intelligent personal assistants: A systematic literature review. Expert Systems with Applications, 147, 113193. https://doi.org/10.1016/j.eswa.2020.113193
de Vreede, G.-J., & Briggs, R. O. (2005). Collaboration engineering: Designing repeatable processes for high-value collaborative tasks. In Proceedings of the 38th annual Hawaii international conference on system sciences (pp. 17c-17c). IEEE. https://doi.org/10.1109/HICSS.2005.144
Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019a). Hybrid Intelligence. Business & Information Systems Engineering, 61(5), 637–643. https://doi.org/10.1007/s12599-019-00595-2
Dellermann, D., Lipusch, N., Ebel, P., & Leimeister, J. M. (2019b). Design principles for a hybrid intelligence decision support system for business model validation. Electronic Markets, 29(3), 423–441. https://doi.org/10.1007/s12525-018-0309-2
Demir, M., McNeese, N. J., & Cooke, N. J. (2020). Understanding human-robot teams in light of all-human teams: Aspects of team interaction and shared cognition. International Journal of Human-Computer Studies, 140, 102436. https://doi.org/10.1016/j.ijhcs.2020.102436
Dennis, A. R., Pinsonneault, A., McNamara Hilmer, K., Barki, H., Gallupe, B., Huber, M., & Bellavance, F. (2005). Patterns in electronic brainstorming. International Journal of e-Collaboration, 1(4), 38–57. https://doi.org/10.4018/978-1-59904-393-7.ch010
Devaraj, S., Easley, R. F., & Crant, J. M. (2008). How does personality matter? Relating the five-factor model to technology acceptance and use. Information Systems Research, 19(1), 93–105. https://doi.org/10.1287/isre.1070.0153
Dias, M., Pan, S., & Tim, Y. (2019). Knowledge embodiment of human and machine interactions: Robotic process automation at the Finland government. In Proceedings of the 27th European Conference on Information Systems (p. 12). Stockholm.
Diederich, S., Brendel, A. B., & Kolbe, L. M. (2020). Designing anthropomorphic enterprise conversational agents. Business & Information Systems Engineering, 62(3), 193–209. https://doi.org/10.1007/s12599-020-00639-y
Edwards, C., Edwards, A., Stoll, B., Lin, X., & Massey, N. (2019). Evaluations of an artificial intelligence instructor’s voice: Social Identity Theory in human-robot interactions. Computers in Human Behavior, 90, 357–362. https://doi.org/10.1016/j.chb.2018.08.027
Feine, J., Gnewuch, U., Morana, S., & Maedche, A. (2019). A taxonomy of social cues for conversational agents. International Journal of Human-Computer Studies, 132, 138–161. https://doi.org/10.1016/j.ijhcs.2019.07.009
Field, A. (2013). Discovering statistics using IBM SPSS statistics: And sex and drugs and rock "n" roll (4th ed.). Sage.
Frick, N., Brünker, F., Ross, B., & Stieglitz, S. (2019). The utilization of artificial intelligence for improving incident management. HMD, 56(2), 357–369. https://doi.org/10.1365/s40702-019-00505-w
Frick, N. R., Brünker, F., Ross, B., & Stieglitz, S. (2021a). Comparison of disclosure/concealment of medical information given to conversational agents or to physicians. Health Informatics Journal, 27(1), 146045822199486. https://doi.org/10.1177/1460458221994861
Frick, N., & Marx, J. (2021). Integrating digital nomads in corporate structures: Managerial contemplations. In Hawaii international conference on system sciences. https://doi.org/10.24251/HICSS.2021.260
Frick, N. R. J., Möllmann, H. L., Mirbabaie, M., & Stieglitz, S. (2021b). Driving digital transformation during a pandemic: Case study of virtual collaboration in a German Hospital. JMIR Medical Informatics, 9(2), e25183. https://doi.org/10.2196/25183
Frick, N. R. J., Wilms, K. L., Brachten, F., Hetjens, T., Stieglitz, S., & Ross, B. (2021). The perceived surveillance of conversations through smart devices. Electronic Commerce Research and Applications, 47, 101046. https://doi.org/10.1016/j.elerap.2021.101046
Frohberg, D., & Schwabe, G. (2006). Skills and motivation in ad-hoc-collaboration. CollECTeR: Collaborative electronic commerce technology and research, 157–172.
Fromm, J., Mirbabaie, M., & Stieglitz, S. (2020). The effects of virtual reality affordances and constraints on negative group effects during brainstorming sessions. In International conference on Wirtschaftsinformatik.
Fronza, I., & Wang, X. (2021). Social loafing prevention in agile software development teams using team expectations agreements. IET Software. https://doi.org/10.1049/sfw2.12019
Galy, E., Cariou, M., & Mélan, C. (2012). What is the relationship between mental workload factors and cognitive load types? International Journal of Psychophysiology: Official Journal of the International Organization of Psychophysiology, 83(3), 269–275. https://doi.org/10.1016/j.ijpsycho.2011.09.023
Gavala-González, J., Martins, B., Ponseti, F. J., & Garcia-Mas, A. (2020). Studying well and performing well: A Bayesian analysis on team and individual rowing performance in dual career athletes. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2020.583409
George, J. M. (1992). Extrinsic and intrinsic origins of perceived social loafing in organizations. Academy of Management Journal, 35(1), 191–202. https://doi.org/10.5465/256478
Gnewuch, U., Morana, S., & Maedche, A. (2017). Towards designing cooperative and social conversational agents for customer service. ICIS.
Gnewuch, U., Yu, M., & Maedche, A. (2020). The effect of perceived similarity in dominance on customer self-disclosure to chatbots in conversational commerce. In 28th European conference on information systems.
Grimes, G. M., Schuetzler, R. M., & Giboney, J. S. (2021). Mental models and expectation violations in conversational AI interactions. Decision Support Systems, 144, 113515. https://doi.org/10.1016/j.dss.2021.113515
Hardy, C. J., & Latané, B. (1988). Social loafing in cheerleaders: Effects of team membership and competition. Journal of Sport and Exercise Psychology, 10(1), 109–114. https://doi.org/10.1123/jsep.10.1.109
Harkins, S. G., & Petty, R. E. (1982). Effects of task difficulty and task uniqueness on social loafing. Journal of Personality and Social Psychology, 43(6), 1214–1229. https://doi.org/10.1037/0022-3514.43.6.1214
Harkins, S. G., & Szymanski, K. (1989). Social loafing and group evaluation. Journal of Personality and Social Psychology, 56(6), 934–941. https://doi.org/10.1037/0022-3514.56.6.934
Hart, J. W., Karau, S. J., Stasson, M. F., & Kerr, N. A. (2004). Achievement motivation, expected coworker performance, and collective task motivation: Working hard or hardly working? Journal of Applied Social Psychology, 34(5), 984–1000. https://doi.org/10.1111/j.1559-1816.2004.tb02580.x
Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Human mental workload (Vol. 52, pp. 139–183). Elsevier, New York
Hassell, M. D., & Cotton, J. L. (2017). Some things are better left unseen: Toward more effective communication and team performance in video-mediated interactions. Computers in Human Behavior, 73, 200–208. https://doi.org/10.1016/j.chb.2017.03.039
Hoon, H., & Tan, T. M. L. (2008). Organizational citizenship behavior and social loafing: The role of personality, motives, and contextual factors. The Journal of Psychology, 142(1), 89–108. https://doi.org/10.3200/JRLP.142.1.89-112
Hou, R.-J., Han, S.-H., Zhang, Y.-D., Zhang, C.-B., & Wang, K. (2021). Social loafing in online brand communities: The roles of personality and dehumanization. Online Information Review, ahead-of-print. https://doi.org/10.1108/OIR-06-2020-0255
Hu, T., Xu, A., Liu, Z., You, Q., Guo, Y., Sinha, V., et al. (2018). Touch your heart: A tone-aware chatbot for customer care on social media. In Proceedings of the 2018 CHI conference on human factors in computing systems - CHI ’18.
Hussain, S., Ameri Sianaki, O., & Ababneh, N. (2019). A survey on conversational agents/chatbots classification and design techniques. In L. Barolli, M. Takizawa, F. Xhafa, & T. Enokido (Eds.), Web, artificial intelligence and network applications (Vol. 927, pp. 946–956). Springer. https://doi.org/10.1007/978-3-030-15035-8_93
Jackson, J. M., & Williams, K. D. (1985). Social loafing on difficult tasks: Working collectively can improve performance. Journal of Personality and Social Psychology, 49(4), 937–942. https://doi.org/10.1037/0022-3514.49.4.937
Jassawalla, A., Sashittal, H., & Sashittal, A. (2009). Students’ perceptions of social loafing: Its antecedents and consequences in undergraduate business classroom teams. Academy of Management Learning & Education, 8(1), 42–54. https://doi.org/10.5465/amle.2009.37012178
John, O. P., Naumann, L., & Soto, C. J. (2008). Paradigm shift to the integrative Big Five taxonomy: History, measurement, and conceptual issues. In Handbook of personality: Theory and research (3rd ed., pp. 114–158). Guilford.
Junglas, I., Goel, L., Ives, B., & Harris, J. (2019). Innovation at work: The relative advantage of using consumer IT in the workplace. Information Systems Journal, 29(2), 317–339. https://doi.org/10.1111/isj.12198
Kamar, E. (2016). Directions in hybrid intelligence: Complementing AI systems with human intelligence. International Joint Conference on Artificial Intelligence, 4070–4073.
Karau, S. J., & Williams, K. D. (1993). Social loafing: A meta-analytic review and theoretical integration. Journal of Personality and Social Psychology, 65(4), 681–706. https://doi.org/10.1037/0022-3514.65.4.681
Khan, H., Rehmat, M., Butt, T. H., Farooqi, S., & Asim, J. (2020). Impact of transformational leadership on work performance, burnout and social loafing: A mediation model. Future Business Journal, 6(1), 40. https://doi.org/10.1186/s43093-020-00043-8
Knote, R., Janson, A., Söllner, M., & Leimeister, J. M. (2019). Classifying smart personal assistants: An empirical cluster analysis. In Proceedings of the 52nd Hawaii international conference on system sciences (pp. 2024–2033). https://doi.org/10.24251/hicss.2019.245
Kravitz, D. A., & Martin, B. (1986). Ringelmann rediscovered: The original article. Journal of Personality and Social Psychology, 50(5), 936–941. https://doi.org/10.1037/0022-3514.50.5.936
Kurylo, M., & Stevenson, J. (2011). NEO personality inventory. In Encyclopedia of clinical neuropsychology (pp. 1728–1731). Springer. https://doi.org/10.1007/978-0-387-79948-3_2000
Lamontagne, L., Laviolette, F., Khoury, R., & Bergeron-Guyard, A. (2014). A Framework for building adaptive intelligent virtual assistants. In Artificial intelligence and applications. https://doi.org/10.2316/P.2014.816-018
Latané, B. (1981). The psychology of social impact. American Psychologist, 36(4), 343–356. https://doi.org/10.1037/0003-066X.36.4.343
Latané, B., Williams, K., & Harkins, S. (1979). Many hands make light the work: The causes and consequences of social loafing. Journal of Personality and Social Psychology, 37(6), 822–832. https://doi.org/10.1037/0022-3514.37.6.822
Laumer, S., Maier, C., & Gubler, F. T. (2019). Chatbot acceptance in healthcare: Explaining user adoption of conversational agents for disease diagnosis. Twenty-Seventh European Conference on Information Systems, 0–18.
Lee, K., Lee, K. Y., & Sheehan, L. (2020). Hey Alexa! A magic spell of social glue?: Sharing a smart voice assistant speaker and its impact on users’ perception of group harmony. Information Systems Frontiers, 22(3), 563–583. https://doi.org/10.1007/s10796-019-09975-1
Liao, H., & Chuang, A. (2004). A multilevel investigation of factors influencing employee service performance and customer outcomes. Academy of Management Journal, 47(1), 41–58. https://doi.org/10.5465/20159559
Litvinova, Y., Rehm, S.-V., Goel, L. C., & Junglas, I. (2018). Collaborating in virtual reality by using digital Twins. In ISPIM innovation symposium: The international society for professional innovation management (ISPIM).
Lount, R. B., & Wilk, S. L. (2014). Working harder or hardly working? Posting performance eliminates social loafing and promotes social laboring in workgroups. Management Science, 60(5), 1098–1106. https://doi.org/10.1287/mnsc.2013.1820
Luger, E., & Sellen, A. (2016). “Like having a really bad PA”: The gulf between user expectation and experience of conversational agents. In Proceedings of the 2016 CHI conference on human factors in computing systems - CHI ’16 (pp. 5286–5297). https://doi.org/10.1145/2858036.2858288
Luo, Z., Marnburg, E., Øgaard, T., & Okumus, F. (2021). Exploring antecedents of social loafing in students’ group work: A mixed-methods approach. Journal of Hospitality, Leisure, Sport & Tourism Education, 28(December 2019), 100314. https://doi.org/10.1016/j.jhlste.2021.100314
Lv, X., Zhang, R., & Li, Q. (2021). Value co-destruction: The influence of failed interactions on members’ behaviors in online travel communities. Computers in Human Behavior, 122, 106829. https://doi.org/10.1016/j.chb.2021.106829
Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., et al. (2019). AI-based digital assistants: Opportunities, threats, and research perspectives. Business and Information Systems Engineering, 61(4), 535–544. https://doi.org/10.1007/s12599-019-00600-8
Majhi, S. G., Anand, A., Mukherjee, A., & Rana, N. P. (2021). The optimal configuration of IT-enabled dynamic capabilities in a firm’s capabilities portfolio: A strategic alignment perspective. Information Systems Frontiers. https://doi.org/10.1007/s10796-021-10145-5
Maniscalco, U., Messina, A., & Storniolo, P. (2020). The human–robot interaction in robot-aided medical care. In Smart innovation, systems and technologies (Vol. 189, pp. 233–242). Split.
Massey, A. P., Montoya-Weiss, M. M., & Hung, Y.-T. (2003). Because time matters: Temporal coordination in global virtual project teams. Journal of Management Information Systems, 19(4), 129–155. https://doi.org/10.1080/07421222.2003.11045742
McDuff, D., & Czerwinski, M. (2018). Designing emotionally sentient agents. Communications of the ACM, 61(12), 74–83. https://doi.org/10.1145/3186591
McTear, M. F. (2017). The rise of the conversational interface: A new kid on the block? In Future and emerging trends in language technology. Machine Learning and Big Data (Vol. 10341). Cham: Springer.
Meske, C., & Junglas, I. (2020). Investigating the elicitation of employees’ support towards digital workplace transformation. Behaviour & Information Technology. https://doi.org/10.1080/0144929X.2020.1742382
Mirbabaie, M., Stieglitz, S., Brünker, F., Hofeditz, L., Ross, B., & Frick, N. R. J. (2020). Understanding collaboration with virtual assistants—The role of social identity and the extended self. Business & Information Systems Engineering. https://doi.org/10.1007/s12599-020-00672-x
Mirbabaie, M., Brünker, F., Möllmann (Frick), N. R. J., et al. (2021a). The rise of artificial intelligence – Understanding the AI identity threat at the workplace. Electronic Markets. https://doi.org/10.1007/s12525-021-00496-x
Mirbabaie, M., Stieglitz, S., & Frick, N. R. J. (2021b). Hybrid intelligence in hospitals: Towards a research agenda for collaboration. Electronic Markets. https://doi.org/10.1007/s12525-021-00457-4
Mitchell, T., Cohen, W., Hruschka, E., Talukdar, P., Yang, B., Betteridge, J., et al. (2018). Never-ending learning. Communications of the ACM, 61(5), 103–115. https://doi.org/10.1145/3191513
Moreno, R., Mayer, R. E., Spires, H. A., & Lester, J. C. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19(2), 177–213. https://doi.org/10.1207/S1532690XCI1902_02
Morgeson, F. P., Reider, M. W., & Campion, M. A. (2005). Selecting individuals in team settings: The importance of social skills, personality characteristics, and teamwork knowledge. Personnel Psychology, 58(3), 583–611. https://doi.org/10.1111/j.1744-6570.2005.655.x
Morrissey, K., & Kirakowski, J. (2013). “Realness” in Chatbots: Establishing quantifiable criteria. In M. Kurosu (Ed.), 5th international conference on human-computer interaction: Interaction modalities and techniques - Volume Part IV (pp. 87–96). https://doi.org/10.1007/978-3-642-39330-3_10
Mouakket, S., & Sun, Y. (2020). Investigating the impact of personality traits of social network sites users on information disclosure in China: The moderating role of gender. Information Systems Frontiers, 22(6), 1305–1321. https://doi.org/10.1007/s10796-019-09933-x
Nakano, Y., & Fukuhara, Y. (2012). Estimating conversational dominance in multiparty interaction. In Proceedings of the 14th ACM international conference on multimodal interaction - ICMI ’12 (p. 77). ACM Press. https://doi.org/10.1145/2388676.2388699
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Norman, D. (2017). Design, business models, and human-technology teamwork. Research-Technology Management, 60(1), 26–30. https://doi.org/10.1080/08956308.2017.1255051
Noyes, J. M., & Bruneau, D. P. J. (2007). A self-analysis of the NASA-TLX workload measure. Ergonomics, 50(4), 514–519. https://doi.org/10.1080/00140130701235232
O’Leary, K., O’Reilly, P., Feller, J., Gleasure, R., Li, S., & Cristoforo, J. (2017). Exploring the application of blockchain technology to combat the effects of social loafing in cross functional group projects. In L. Morgan (Ed.), Proceedings of the 13th international symposium on open collaboration - OpenSym ’17 (pp. 1–8). ACM Press. https://doi.org/10.1145/3125433.3125464
O'Neill, T. A., Boyce, M., & McLarnon, M. J. W. (2020). Team health and project quality are improved when peer evaluation scores affect grades on team projects. Frontiers in Education. https://doi.org/10.3389/feduc.2020.00049
Onwuegbuzie, A. J., & Leech, N. L. (2005). Taking the “Q” out of research: Teaching research methodology courses without the divide between quantitative and qualitative paradigms. Quality & Quantity, 39(3), 267–295. https://doi.org/10.1007/s11135-004-1670-0
Pace, F., D’Urso, G., Zappulla, C., & Pace, U. (2019). The relation between workload and personal well-being among university professors. Current Psychology. https://doi.org/10.1007/s12144-019-00294-x
Panganiban, A. R., Matthews, G., & Long, M. D. (2020). Transparency in autonomous teammates. Journal of Cognitive Engineering and Decision Making, 14(2), 174–190. https://doi.org/10.1177/1555343419881563
Pearce, J. L., & Gregersen, H. B. (1991). Task interdependence and extrarole behavior: A test of the mediating effects of felt responsibility. Journal of Applied Psychology, 76(6), 838–844. https://doi.org/10.1037/0021-9010.76.6.838
Peters, G.-J. Y. (2018). The alpha and the omega of scale reliability and validity: Why and how to abandon Cronbach's alpha and the route towards more comprehensive assessment of scale quality. European Health Psychologist, 16(2), 56–59. https://doi.org/10.31234/osf.io/h47fv
Pfeuffer, N., Benlian, A., Gimpel, H., & Hinz, O. (2019). Anthropomorphic information systems. Business & Information Systems Engineering, 61(4), 523–533. https://doi.org/10.1007/s12599-019-00599-y
Plotnick, L., Hiltz, S. R., & Privman, R. (2016). Ingroup dynamics and perceived effectiveness of partially distributed teams. IEEE Transactions on Professional Communication, 59(3), 203–229. https://doi.org/10.1109/TPC.2016.2583258
Porra, J., Lacity, M., & Parks, M. S. (2020). Can computer based human-likeness endanger humanness?” – A philosophical and ethical perspective on digital assistants expressing feelings they can’t have. Information Systems Frontiers, 22(3), 533–547. https://doi.org/10.1007/s10796-019-09969-z
Quarteroni, S. (2018). Natural language processing for industry. Informatik-Spektrum, 41(2), 105–112. https://doi.org/10.1007/s00287-018-1094-1
Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television and new media like real people and places. Cambridge University Press.
Robert, L. P. (2020). Behavior-output control theory, trust and social loafing in virtual teams. Multimodal Technologies and Interaction, 4(3), 39. https://doi.org/10.3390/mti4030039
Rubio, S., Diaz, E., Martin, J., & Puente, J. M. (2004). Evaluation of subjective mental workload: A comparison of SWAT, NASA-TLX, and workload profile methods. Applied Psychology, 53(1), 61–86. https://doi.org/10.1111/j.1464-0597.2004.00161.x
Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach. Addison Wesley.
Schippers, M. C. (2014). Social loafing tendencies and team performance: The compensating effect of agreeableness and conscientiousness. Academy of Management Learning & Education, 13(1), 62–81. https://doi.org/10.5465/amle.2012.0191
Schmidt, P., Biessmann, F., & Teubner, T. (2020). Transparency and trust in artificial intelligence systems. Journal of Decision Systems, 29(4), 260–278. https://doi.org/10.1080/12460125.2020.1819094
Schober, P., Boer, C., & Schwarte, L. A. (2018). Correlation coefficients: Appropriate use and interpretation. Anesthesia & Analgesia, 126(5), 1763–1768. https://doi.org/10.1213/ANE.0000000000002864
Schwabe, G. (2003). Growing an application from collaboration to management support – The example of Cupark. https://doi.org/10.5167/uzh-57166
Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G.-J., Elkins, A., et al. (2020a). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174. https://doi.org/10.1016/j.im.2019.103174
Seeber, I., Bittner, E., Briggs, R., Druckenmiller, D., de Vreede, G., de Vreede, T., et al. (2018). Machines as teammates: A collaboration research agenda. In Hawaii international conference on system sciences (Vol. 9, pp. 420–429).
Seeber, I., Waizenegger, L., Seidel, S., Morana, S., Benbasat, I., & Lowry, P. B. (2020b). Collaborating with technology-based autonomous agents. Internet Research, 30(1), 1–18. https://doi.org/10.1108/INTR-12-2019-0503
Seeber, I., Waizenegger, L., Seidel, S., Morana, S., Benbasat, I., & Lowry, P. B. (2020c). Collaborating with technology-based autonomous agents: Issues and research opportunities. Internet Research, 30(1), 1–18. https://doi.org/10.1108/INTR-12-2019-0503
Shamekhi, A., Liao, Q., Wang, D., Bellamy, R., & Erickson, T. (2018). Face value? Exploring the effects of embodiment for a group facilitation agent. Proceedings of the 2018 CHI conference on human factors in computing systems - CHI ’18, 1–13. https://doi.org/10.1145/3173574.3173965
Siddike, M. A. K., Spohrer, J., Demirkan, H., & Kohda, Y. (2018). People’s interactions with cognitive assistants for enhanced performances. In T. Bui (Ed.), Proceedings of the 51st Hawaii international conference on system sciences. Hawaii International Conference on System Sciences. https://doi.org/10.24251/HICSS.2018.205
Smith, B. N., Kerr, N. A., Markus, M. J., & Stasson, M. F. (2001). Individual differences in social loafing: Need for cognition as a motivator in collective performance. Group Dynamics: Theory, Research, and Practice, 5(2), 150–158. https://doi.org/10.1037/1089-2699.5.2.150
Soltani, S., Elkhani, N., Ahmad, M. N., & Taghia, J. (2013). The effects of perceived organizational support and personality traits on user involvement. In Proceedings of the 24th Australasian conference on information systems.
Soto Setzke, D., Riasanow, T., Böhm, M., & Krcmar, H. (2021). Pathways to digital service innovation: The role of digital transformation strategies in established organizations. Information Systems Frontiers. https://doi.org/10.1007/s10796-021-10112-0
Spohrer, J., & Banavar, G. (2015). Cognition as a service: An industry perspective. AI Magazine, 36(4), 71–86. https://doi.org/10.1609/AIMAG.V36I4.2618
Stieglitz, S., Brachten, F., & Kissmer, T. (2018). Defining bots in an enterprise context. In Thirty Ninth international conference on information systems.
Suleiman, J., & Watson, R. T. (2008). Social loafing in technology-supported teams. Computer Supported Cooperative Work (CSCW), 17(4), 291–309. https://doi.org/10.1007/s10606-008-9075-6
SurveyCircle. (2021). SurveyCircle.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285. https://doi.org/10.1207/s15516709cog1202_4
Tangney, J. P., Baumeister, R. F., & Boone, A. L. (2004). High self-control predicts good adjustment, less pathology, better grades, and interpersonal success. Journal of Personality, 72(2), 271–324. https://doi.org/10.1111/j.0022-3506.2004.00263.x
Taylor, D. M., Doria, J., & Tyler, J. K. (1983). Group performance and cohesiveness: An attribution analysis. The Journal of Social Psychology, 119(2), 187–198. https://doi.org/10.1080/00224545.1983.9922820
Trocin, C., Mikalef, P., Papamitsiou, Z., & Conboy, K. (2021). Responsible AI for digital health: A synthesis and a research agenda. Information Systems Frontiers. https://doi.org/10.1007/s10796-021-10146-4
Vaghefi, I., & Lapointe, L. (2012). Information technology and social loafing: A qualitative investigation. In Thirty Third international conference on information systems.
Verhagen, T., van Nes, J., Feldberg, F., & van Dolen, W. (2014). Virtual customer service agents: Using social presence and personalization to shape online service encounters. Journal of Computer-Mediated Communication, 19(3), 529–545.
von Wolff, R., Hobert, S., & Schumann, M. (2019). How may i help you? - State of the art and open research questions for chatbots at the digital workplace. In Proceedings of the 52nd Hawaii international conference on system sciences.
Waizenegger, L., Seeber, I., Dawson, G., & Desouza, K. C. (2020). Conversational agents: Exploring generative mechanisms and secondhand effects of actualized technology affordances. In Proceedings of the 53rd Hawaii international conference on system sciences.
Wang, W., & Siau, K. (2018). Artificial intelligence: A study on governance, policies, and regulations. In MWAIS 2018 Proceedings (p. 40).
Williams, K. D., & Karau, S. J. (1991). Social loafing and social compensation: The effects of expectations of co-worker performance. Journal of Personality and Social Psychology, 61(4), 570–581. https://doi.org/10.1037/0022-3514.61.4.570
Wünderlich, N., & Paluch, S. (2017). A nice and friendly chat with a bot: User perceptions of AI-based service agents. International Conference on Information Systems, 1, 1–11.
Yan, J. (Kevin), Leidner, D. E., & Benbya, H. (2018). Differential innovativeness outcomes of user and employee participation in an online user innovation community. Journal of Management Information Systems, 35(3), 900–933. https://doi.org/10.1080/07421222.2018.1481669
You, S., & Robert, L. (2018). Emotional attachment, performance, and viability in teams collaborating with embodied physical action (EPA) Robots. Journal of the Association for Information Systems, 19(5), 377–407. https://doi.org/10.17705/1jais.00496
Zaccaro, S. J. (1984). Social loafing. Personality & Social Psychology Bulletin, 10(1), 99–106. https://doi.org/10.1177/0146167284101011
Zhou, H., Deng, Z., Xia, Y., & Fu, M. (2016). A new sampling method in particle filter based on Pearson correlation coefficient. Neurocomputing, 216, 208–215.
Zhu, M., Singh, S., & Wang, H. (2019). Perceptions of social loafing during the process of group development. International Journal of Organization Theory & Behavior, 22(4), 350–368. https://doi.org/10.1108/IJOTB-04-2018-0049
Funding
Open Access funding enabled and organized by Projekt DEAL.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Stieglitz, S., Mirbabaie, M., Möllmann, N.R.J. et al. Collaborating with Virtual Assistants in Organizations: Analyzing Social Loafing Tendencies and Responsibility Attribution. Inf Syst Front 24, 745–770 (2022). https://doi.org/10.1007/s10796-021-10201-0