Social Dilemmas in the Sharing Economy*

Rense Corten †

Abstract
While it is widely acknowledged that trust plays an important role in the sharing economy, it is
often confused with other strategic considerations related to cooperation in sharing economy
interactions. In this chapter I propose that social dilemma research, rooted in game theory, offers a
rich and flexible theoretical framework for studying sharing economy interactions. I sketch
applicable models that may involve trust as an important feature of many sharing economy
interactions but also highlight other strategic considerations that superficially look similar to trust
problems but may be understood more fruitfully by alternative models. More specifically, I discuss
various types of two-person dilemmas such as the Prisoner’s Dilemma and the Trust Game, but also
N-person dilemmas, including common pool problems, exchange systems, and club goods. Solutions
to the cooperation problems implied by these dilemmas are also discussed. I conclude the chapter
by sketching some promising directions for future research.

* This is a draft chapter. The final version will be available in Handbook on the Sharing Economy, edited by
Belk, R., G. Eckhardt, and F. Bardhi, forthcoming 2019, Edward Elgar Publishing Ltd. The material cannot be
used for any other purpose without further permission of the publisher, and is for private use only.
† Department of Sociology/ICS, Utrecht University; [email protected]

Introduction
Trust is often considered crucial in the sharing economy (e.g., Botsman and Rogers 2010; Frenken
and Schor 2017; Puschmann and Alt 2016): for “stranger sharing” (Schor 2014), actors need to
engage in interactions with partners who might very well take advantage of them (e.g., Bardhi and
Eckhardt 2012). This ubiquitous emphasis on trust suggests a general acknowledgement of the risk
of opportunistic behavior in sharing economy exchanges. In turn, this suggests that sharing
economy exchanges, not unlike traditional market exchanges, include aspects of strategic
interaction between participants, in that their actions are interdependent and they may take
each other’s actions into account when making decisions. I propose that social dilemma research,
rooted in game theory and rational choice theory, offers a rich and flexible theoretical framework
for addressing such strategic interdependencies. Studying the sharing economy from this
perspective allows for the development of theoretical models that indeed include trust as an
important feature of many interactions, but also highlights other strategic considerations that look
similar to trust problems but may actually be more fruitfully understood through different models.

For the sake of consistency of the chapter in terms of applications and examples, I follow the
definition of the sharing economy by Frenken et al. (2015), who emphasize the utilization of idle
capacity. Importantly, this implies that platforms facilitating services (e.g., TaskRabbit or Uber) are
excluded for now, even if many of the arguments provided here – with appropriate modifications –
could also be applied to such platforms.

A social dilemma is, roughly, a situation in which a tension exists between individual rationality on
the one hand and collective rationality on the other (also see Kollock 1998; Rapoport 1974; Raub et
al. 2015 for more formal definitions). Social dilemmas are central to the study of cooperation
among goal-directed actors across a variety of disciplines in the social sciences, including
economics, sociology, and political science. While the Prisoner’s Dilemma (Rapoport and Chammah
1965) is by far the best-known example of a social dilemma, the closely related trust game (Camerer
and Weigelt 1988; Coleman 1990) appears implicitly or explicitly in many accounts of the sharing
economy, underlying the ubiquitous notion that trust is the prevailing logic of the sharing economy.
Likewise, common pool problems or public good problems are frequently associated with the
sharing economy (Bradley and Pargman 2017). In this chapter, I discuss the applicability of these
well-known models to the sharing economy, but also discuss a number of lesser known approaches
from the social dilemmas literature, including the helping game and the theory of social exchange
systems (Ekeh 1974).

The fact that accounts of social dilemmas, being rooted in game theory, assume actors to be both
rational and selfish, may seem contradictory to the very notion of sharing (e.g., Belk 2010).
However, neither of these assumptions is strictly necessary for a fruitful analysis of social
dilemma situations. First, while typical analyses do require that actors are at least goal-directed
(i.e., motivated by incentives), it is often not necessary or desirable to make heroic assumptions
about forward-looking behavior or complete information. Indeed, arguments on bounded
rationality (Rubinstein 1998) are by now part of the standard repertoire of social dilemmas
research (e.g., Macy and Flache 2002). Second, game theory per se does not assume selfishness.
While “payoffs” as modeled in games represent actors’ utilities (i.e., their motives), these utilities
do not necessarily exclusively represent actors’ own material or monetary payoffs; it is perfectly
feasible to include other actors’ outcomes in utility functions (Fehr and Gintis 2007).

Two-person social dilemmas

Prisoner’s Dilemma
By far the best-known example of a social dilemma is the Prisoner’s Dilemma (PD; Rapoport and
Chammah 1965), in which two players face incentives to take advantage of one another, although
mutual cooperation would actually make them both better off. The PD is two-sided in that the two
actors act simultaneously, and both face the same dilemma. Many platforms, such as Airbnb,
Couchsurfing and Blablacar, in principle have this two-sided nature, as both actors may face
incentives for opportunistic behavior, and the PD is indeed sometimes used as a model for sharing
economy interactions (e.g., Botsman and Rogers 2010:143). However, contrary to a common
misunderstanding, trust is not an issue in the PD: both actors face incentives to take advantage of
the other actor, regardless of the action of the other actor. This implies that beliefs about the
expected action of the other actor play no role in actors’ decision making: it is not the case that they
would be willing to cooperate on the condition that the other also cooperates.

Consider online peer-to-peer markets, such as eBay. There are typically two actors: the seller and
the buyer, who do not meet in person but trade via mail and wire transfers. To model their
interaction in terms of a PD, we define two actions for each actor. After the seller and buyer have
agreed on a price for an item, the seller has two options: to send the item to the buyer as agreed, or
to keep the item (or send an item of lower quality than agreed). The buyer also has two options: to
pay the agreed price for the item, or not. Obviously, the most attractive outcome for the seller is to
receive the money while keeping the item, while for the buyer the best outcome is receiving the
item for free. This combination of possible strategies would indeed lead to a PD, with the standard
prediction that both actors would take advantage of each other, even though they would both
prefer the cooperative outcome (Kollock 1999b).

Although this representation of a typical eBay interaction captures the main risks involved for
both players and satisfies our intuition that online exchanges are essentially cooperation problems,
the resulting model is less satisfying in other respects. Most importantly, it ignores that in most
online exchange markets, social conventions have emerged that require buyers to make the first
move and pay up front, after which the seller ships the item. This makes opportunistic behavior by
the buyer impossible, and thus shifts all the risk of the exchange to the buyer. Similar arguments
apply to some types of sharing economy interactions, such as hospitality exchanges on platforms
like Airbnb or Couchsurfing. These too involve risks for both actors: for the guest, the obvious
safety risk of spending the night in a stranger’s house, and for the host, the risk of theft or damage
to property (and possibly also a safety risk). Yet here, too, the fit is suboptimal, for example because
it does not allow for what is intuitively the most likely outcome of an Airbnb encounter: that the
guest will never enter the host’s house at all. Additionally, as in the eBay case, hospitality
interactions are often structured such that much of the risk is shifted to one party (for example, if
the accommodation is a separate apartment, the risk for the guest will be much smaller) to the
extent that the interaction is no longer a two-sided problem. If this is the case, a one-sided dilemma
such as the trust game remains (see the next section).

                 Actor 2                              Actor 2
              C        D                           C        D
Actor 1  C   R, R     S, T          Actor 1  C   3, 3     0, 5
         D   T, S     P, P                   D   5, 0     1, 1

Figure 1: Prisoner's Dilemma with generic payoffs (left: T>R>P>S) and numerical payoffs (right). Cells represent
payoffs to the actors in the form (Actor 1, Actor 2) resulting from the combination of their respective choices (C or D).
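
To make this logic explicit, the following minimal sketch (in Python, purely for illustration) verifies that, for the numerical payoffs in Figure 1, defection is the best response to either action of the partner, so that beliefs about the partner indeed play no role:

```python
# Minimal illustration of the numerical Prisoner's Dilemma in Figure 1.
# Payoffs are (Actor 1, Actor 2) with T=5, R=3, P=1, S=0.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(partner_action):
    """Actor 1's payoff-maximizing action, given the partner's action."""
    return max(("C", "D"), key=lambda a: payoffs[(a, partner_action)][0])

# Defection is the best response whatever the partner does, so beliefs about
# the partner's behavior play no role in the decision.
print(best_response("C"), best_response("D"))  # D D
```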

The trust game and the investment game


The trust game (Camerer and Weigelt 1988) can be understood as a sequential and one-sided
version of the PD in which one actor (the trustor) moves first, deciding whether or not to trust the
second actor (the trustee). If the trustor does place trust in the trustee, the trustee decides whether
or not to take advantage of the trustor (see Figure 2). As with the PD, the standard game-theoretical
prediction for this game is a pessimistic one. Anticipating that the trustee will abuse trust if given
the opportunity, the trustor will not trust the trustee, which is suboptimal for both. In contrast to
the PD, the problem has a one-sided nature, in that only the trustee has an incentive to take
advantage of the trustor. Another difference with the PD is that in the trust game, the trustor would
be willing to choose the cooperative option (to place trust) if and only if she believes that the
likelihood that the trustee will abuse trust is sufficiently low (this belief is what is also
commonly understood as “trust”). The Trust Game has been widely used to model economic
transactions that suffer from information asymmetry (Akerlof 1970), with recently also many
applications to online markets (e.g., Przepiorka 2013; Resnick and Zeckhauser 2002; Tadelis 2016),
and indeed features explicitly or implicitly in many accounts of the sharing economy.

Consider the case of hospitality services, such as Airbnb. Although this can be analyzed as a two-
sided dilemma, it seems reasonable to assume that the largest share of the risk lies on the side of
the host, while the host himself has few incentives or opportunities to take advantage of the guest.
After all, options for the host are somewhat limited: the characteristics of an accommodation are
more or less fixed, and after a host puts the accommodation on offer there is little the host could do
that would take advantage of the guest. This is particularly true for cases where the host is not
present during the stay, as seems increasingly common on Airbnb. The host, on the other hand,
runs the risk that the guest does not behave well and trashes the apartment. In such cases, the
exchange resembles a trust game, in which the possible actions are as follows: after the guest
requests a reservation of the accommodation for a certain period, the host faces the decision to
accept or reject the reservation, which respectively correspond to “placing trust” and “withholding
trust” in the trust game. Subsequently, the guest, upon arrival, may either take good care of the
accommodation, or alternatively, behave carelessly and damage the property, disturb the
neighbors, or even behave intentionally maliciously and for example engage in theft. While
certainly not all guests, and perhaps only relatively few, will actually have preferences to behave
opportunistically, it seems reasonable to assume that at least some guests do have such
preferences, and in this sense the host faces a trust problem, which could be modeled as a trust game
with incomplete information (Camerer and Weigelt 1988).

                         Actor 1
                  /                  \
       Do not place trust        Place trust
                 |                    |
               P, P                Actor 2
                              /               \
                       Abuse trust        Honor trust
                             |                  |
                           S, T               R, R

Figure 2: Decision tree for the Trust Game (with T>R>P>S). Payoffs to the actors resulting from possible
combinations of their respective choices are represented at the bottom in the form (Actor 1, Actor 2).
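
The pessimistic prediction follows from backward induction, as the following minimal sketch illustrates for the same numerical payoffs used in Figure 1:

```python
# Minimal illustration of backward induction in the Trust Game of Figure 2,
# using the same numerical payoffs as Figure 1 (T=5, R=3, P=1, S=0).
T, R, P, S = 5, 3, 1, 0

# If trust is placed, the trustee compares honoring trust (R) with abusing it (T).
trustee_abuses = T > R
outcome_if_trust = (S, T) if trustee_abuses else (R, R)

# Anticipating this, the trustor compares placing trust with withholding it (P, P).
trustor_places_trust = outcome_if_trust[0] > P

print(trustee_abuses, trustor_places_trust)  # True False: trust is withheld
```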

In some instances actors have the possibility to act in a more fine-grained manner, rather than in the
binary manner of the trust game. This would be the case, for example, if the trustor has the option to
use various levels of costly safeguards such as contracts or insurance to reduce risk. Such cases may
be analyzed as investment games (Berg et al. 1995), essentially a continuous version of the trust
game, in which the first mover decides on the degree to which she places trust in the trustee, upon
which the trustee decides on a degree of honoring or abusing trust. Engaging in the interaction
without any safeguards, thereby maximizing risk but minimizing transaction costs, would be
analogous to maximal investment in the investment game. Another application would be the case
where trustors may choose between different interactions that are more or less risky, such as in
peer-to-peer ridesharing where a driver may choose the length of the ride he or she is willing to
share. As in the trust game, the game theoretical prediction for the investment game is that the
trustor, anticipating that the trustee will return nothing (i.e., abuse trust), invests nothing (i.e., does
not trust).
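
A minimal sketch of this prediction, assuming the tripling of the transferred amount commonly used in experimental implementations of the investment game (the exact multiplier is not essential to the argument):

```python
# Minimal illustration of the investment game logic (Berg et al. 1995), assuming
# the commonly used design in which the transferred amount is tripled; the
# specific numbers are illustrative only.
endowment = 10
multiplier = 3

def trustee_returns(received):
    # A purely selfish trustee keeps everything, i.e., maximally abuses trust.
    return 0

def trustor_payoff(invested):
    received_by_trustee = multiplier * invested
    return endowment - invested + trustee_returns(received_by_trustee)

# Anticipating that nothing will be returned, the trustor maximizes her payoff
# by investing nothing: the continuous analogue of withholding trust.
best_investment = max(range(endowment + 1), key=trustor_payoff)
print(best_investment)  # 0
```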

N-person social dilemmas


Most of the examples so far involve exchanges in which one actor transfers some resource to
another actor and receives some benefit in return, such as in hospitality exchanges where the host
is paid by the guest in exchange for lodging. Many sharing economy interactions, however, involve
the pooling and redistribution of resources among more than two actors, or actors sharing a
resource without immediate (material) compensation. Such non-dyadic exchanges constitute
various types of N-person dilemmas, discussed in the next sections.

Public good problems and common pool resources


In a public goods game, actors choose whether to contribute part of their resources to the
production of a common resource (the public good). The sum of all contributions is then multiplied
by a factor m>1 and redistributed equally among all actors, irrespective of their individual
contributions. Due to this latter feature, the dominant strategy is to free-ride on the contributions of
others and not contribute anything (provided that the factor m is smaller than the number of actors,
so that the individual return on a contribution is less than its cost). Thus, the standard
game-theoretic prediction is that the public
good will not be produced. Classic examples include national defense, or, somewhat closer to the
topic at hand, open source software development (Kollock 1999a).
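
The free-rider logic can be made concrete with a minimal numerical sketch; the parameter values are arbitrary, chosen only such that the factor m exceeds 1 but remains below the number of actors:

```python
# Minimal illustration of the linear public goods game with arbitrary parameters,
# chosen such that 1 < m < N (so the per-capita return m/N is below 1).
N, m, endowment = 4, 2.0, 10.0

def payoff(own_contribution, contributions_of_others):
    pool = own_contribution + sum(contributions_of_others)
    return endowment - own_contribution + m * pool / N

others = [5.0, 5.0, 5.0]
# Each unit contributed costs 1 but returns only m/N = 0.5 to the contributor,
# so contributing nothing is the dominant strategy.
print(payoff(0.0, others))   # 17.5
print(payoff(10.0, others))  # 12.5
```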

Common goods games are similar to public goods games, but there the problem is use rather than
production (Kollock 1998). Typical examples include overfishing, where fishermen face individual
incentives to maximize their catch, but as a result the fish stock is depleted to an extent that it can
no longer recuperate. While the two dilemmas are conceptually similar, result in the same
suboptimal outcome, and are often discussed interchangeably (e.g., Ostrom 2006), a key theoretical
difference is that common goods are rivalrous in that use of the resource by one actor diminishes
the possible use by other actors, while public goods are not (see Gardner et al. 1990 for a more
formal treatment). At the same time, they share the feature of non-excludability, in that it is
impossible, or at least very difficult, to prevent actors from using the resource.

Public- and common goods problems frequently appear in discussions of the sharing economy and
collaborative consumption (Botsman and Rogers 2010; Bradley and Pargman 2017; Erickson and
Sørensen 2016; Kostakis and Bauwens 2014), and earlier in the more general context of online
collaboration (Hess and Ostrom 2003; Kollock 1999a; Kollock and Smith 1996). Indeed, some early
examples of collaborative consumption such as public bike sharing (Cohen and Kietzmann 2014),
some types of car sharing (Bardhi and Eckhardt 2012), Freecycling (Norbutas and Corten 2018) or
collective toolsheds seem to share characteristics of public- or common goods in that they are not
dyadic exchanges but aim at the creation of a common pool resource that is open to all. However, as
Fremstad (2016) points out, most modern examples of platform-based sharing economy
exchanges are excludable as access to such collective resources can be effectively limited through
membership of the platform. In this sense, the resources available via sharing platforms are
perhaps better characterized as club goods (Buchanan 1965), which to some extent provides a
solution for the provision of public goods.

Social exchange systems


Some types of exchanges are non-dyadic in that the actor sharing some resource does not receive
an immediate return. The hospitality equivalent of such an exchange is Couchsurfing, in which hosts
provide lodging without material compensation. Other examples include neighborhood sharing
platforms such as the Dutch platform Peerby,3 the now defunct Neighborgoods, or the Swiss-based
Pumpipumpe, but also Facebook groups aiming at “freecycling” (Norbutas and Corten 2018).

From a game-theoretic perspective, such exchanges may be modeled as Dictator Games (Forsythe
et al. 1994) in which actors can choose to transfer some of their resources to a second actor, or as
Helping Games, in which actors may produce a benefit to a second actor at a cost smaller than the
benefit (Nowak and Sigmund 1998). At first sight, these types of interactions do not appear to be
social dilemmas: the obvious unique Nash equilibrium in such cases is that the first actor gives
nothing to the second actor, and there is no alternative outcome that would be more beneficial to
both.

3 At the time of writing, Peerby just introduced a rental option, while maintaining the option to borrow
without compensation.

A dilemma emerges, however, when such interactions take place not in isolated dyads but in larger
systems, in which actors sometimes play the role of provider and sometimes play the role of
receiver, as is the case in the sharing economy examples mentioned above. Then, the outcome in
which no actor provides anything to another actor is still an equilibrium, but all actors would be
better off4 if “help” was provided in each dyadic interaction (assuming that actors are equally often
in the receiver role). Such systems, known as generalized exchange systems (Ekeh 1974), thus
constitute social dilemmas at the macro level (Kollock 1999b; Yamagishi and Cook 1993).
Exchanges in these systems are “generalized” in that actors do not receive immediate compensation
from their exchange partners, but instead are compensated by receiving some resource from a third
actor when they are in need themselves. Typical examples include assisting stranded drivers on
remote mountain roads, where one does not receive immediate compensation for helping but
instead hopes to be assisted in a similar manner when in need.
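
The following minimal sketch, with purely illustrative cost and benefit parameters, makes this macro-level dilemma concrete:

```python
# Minimal illustration of the macro-level dilemma in a generalized exchange
# system: providing help costs c and yields benefit b > c to the receiver.
# The numbers are purely illustrative.
b, c = 3.0, 1.0
n_encounters = 10  # each actor is provider and receiver equally often

# If nobody ever helps, everyone earns 0.
payoff_no_help = 0.0

# If help is provided in every encounter, each actor pays c in half of the
# encounters and receives b in the other half.
payoff_universal_help = (n_encounters / 2) * (b - c)

print(payoff_no_help, payoff_universal_help)  # 0.0 10.0
# Yet in any single encounter a selfish provider is better off not helping,
# so universal defection remains an equilibrium.
```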

While neighborhood lending platforms such as Peerby are exceptionally clear-cut instances of
generalized exchange systems, models like the Helping Game overlook that lending goods may also
include an element of trust, in that the borrower may return the item in poor shape or not at all.
This would suggest modeling this type of sharing economy interaction using variations of trust
games in which the trustor (the provider), in dyadic exchanges, does not gain or even loses some
utility when trust is placed and honored, in comparison to the situation in which trust is not
placed. The Lending Game (recently proposed by Kas et al. 2018) is a first attempt at
modeling such exchanges. As compared to the regular Trust Game, the Lending Game again clearly
highlights that, from a rational point of view, sharing on platforms such as Peerby can only be
explained by the fact that these exchanges take place in larger systems, since even if the lender
were sure about the trustworthiness of the borrower, lending would still make the lender worse off
in isolated encounters.
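
The following minimal sketch illustrates this intuition with hypothetical payoffs (the exact parameterization used by Kas et al. 2018 is not reproduced here):

```python
# Minimal illustration of the Lending Game intuition with hypothetical payoffs
# (the exact parameterization of Kas et al. 2018 is not reproduced here).
# Lending imposes a small cost on the lender even when the item comes back intact.
benefit_to_borrower = 5.0  # borrower's gain from using the item
wear_cost = 1.0            # lender's cost when trust is honored (wear and tear)
damage_cost = 4.0          # lender's additional loss when trust is abused

# (lender, borrower) payoffs for the three possible outcomes
not_lend = (0.0, 0.0)
lend_honored = (-wear_cost, benefit_to_borrower)
lend_abused = (-wear_cost - damage_cost, benefit_to_borrower + damage_cost)

# Even with a perfectly trustworthy borrower, the lender is worse off in an
# isolated encounter: lending only pays off within a larger exchange system.
print(lend_honored[0] < not_lend[0])  # True
```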

Solutions: institutions for cooperation


Explaining the emergence of cooperation in social dilemmas has been the focus of a by now
sizable literature in fields as diverse as social psychology, political science, economics, sociology,
and biology, resulting in various types of solutions. Kollock (1998) distinguishes motivational,
strategic, and structural solutions. Motivational solutions rely on psychological causes of
cooperation, such as other-regarding preferences (e.g., Fehr and Schmidt 1999) or bounded
rationality (e.g., Neyman 1985). Both the strategic and structural approaches prefer to maintain the
micro-level assumptions of selfishness and rationality, and instead look for social conditions that
may incentivize actors to cooperate (cf. Coleman 1964). While the strategic approach looks for
conditions that leave the basic social dilemma intact (e.g., repeated interaction), the structural
approach introduces modifications to the rules of the stage game (e.g., punishment options). These
structural mechanisms can also be understood as “institutions” in the sense of “rules of the game”
(North 1990) that govern interaction. As sharing economy interactions are typically mediated by
online platforms that to a large extent shape the possible actions of participants, the emphasis here
will be on such institutional solutions.5

4 Or at least not worse off on average, in the case of the Dictator Game.

Solutions for two-person dilemmas


A prominent type of explanation within the structural approach involves the embeddedness of the
social dilemma in ongoing social relations. For the Prisoner’s Dilemma, Axelrod (1984) famously
showed that if actors play the game repeatedly and can observe each other’s behavior, conditional
cooperation can be sustained because long-term benefits of mutual cooperation outweigh the
short-term incentive to defect. Similar results hold for the trust game (Kreps 1990) and other
dyadic social dilemmas (Friedman 1971). The logic of embeddedness extends also to social
networks, where the spread of information via networks about past behavior again allows for
conditional cooperation to emerge (Buskens 2002; Raub and Weesie 1990).
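
The underlying logic can be summarized by the standard condition for sustaining cooperation through trigger strategies in the indefinitely repeated PD, evaluated here for the numerical payoffs of Figure 1:

```python
# Standard repeated-game logic behind conditional cooperation (e.g., trigger
# strategies; Friedman 1971), evaluated for the Figure 1 payoffs. Cooperation by
# both players can be sustained if the discount factor delta satisfies
#   R / (1 - delta) >= T + delta * P / (1 - delta),
# which rearranges to delta >= (T - R) / (T - P).
T, R, P, S = 5, 3, 1, 0

delta_threshold = (T - R) / (T - P)
print(delta_threshold)  # 0.5: cooperation is sustainable if the future matters enough
```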

Typical of sharing economy interactions, however, is that social embeddedness tends to be low:
interactions typically take place outside established social contexts and are probably rarely
repeated6 (i.e., ‘stranger sharing’; Schor 2014). Consequently, sharing economy platforms have
implemented a range of mechanisms aimed at promoting cooperation in social dilemmas.

Perhaps the best-known example of an institutional mechanism to promote cooperation is the
widespread use of reputation systems or “rating systems”7, allowing users to publicly “rate” their
experience with their exchange partners through some type of fixed and often quantified format
(e.g., the ubiquitous “five stars”) or through a free-form qualitative format (e.g., written reviews).
Conceptually, reputation systems promote trust and cooperation in much the same way social
embeddedness does: by allowing actors to monitor each other’s past behavior, potential defectors
are incentivized to cooperate or deterred from entering the market (Bolton et al. 2004b; Przepiorka
2013; Resnick and Zeckhauser 2002; Tadelis 2016). By now there is ample empirical evidence
from lab experiments (Bolton et al. 2004b, 2004a), field experiments (Resnick and Zeckhauser
2006) and observational studies on online markets (Diekmann et al. 2014; Kollock 1999b;
Przepiorka et al. 2017; Resnick and Zeckhauser 2002) that such reputation systems indeed tend to
promote trust, and as such these systems are widely considered to be the key innovation that
makes trust in online interactions (including sharing economy interactions) feasible (e.g., Botsman
and Rogers 2010).
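
A stylized sketch of the basic mechanism, in which a trustor conditions her decision on publicly visible ratings, may be illustrative (this is not any particular platform's actual decision rule):

```python
# Stylized illustration (not any platform's actual decision rule): a trustor who
# places trust only if a trustee's publicly visible ratings are sufficiently positive.
def place_trust(ratings, threshold=0.9, min_ratings=3):
    """Place trust if enough ratings exist and the positive share meets the threshold."""
    if len(ratings) < min_ratings:
        return False  # no track record yet: the newcomer problem discussed below
    return sum(ratings) / len(ratings) >= threshold

print(place_trust([1, 1, 1, 0, 1]))  # False: 4/5 = 0.8 < 0.9
print(place_trust([1, 1, 1, 1, 1]))  # True
print(place_trust([1, 1]))           # False: too few ratings
```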

Nevertheless, from a theoretical perspective, reputation systems are not without problems, which
are only partly understood. I highlight three issues here. First, the effectiveness of reputation
systems relies on the sharing of information by participants, which creates a second-order social
dilemma (Heckathorn 1989) in the sense that reputational information itself is a public good.
Diekmann et al. (2014) propose that other-regarding preferences and strong reciprocity may
explain contributions to reputation systems, which, interestingly, reduces an initially structural
solution to a motivational solution. Second, an additional second-order dilemma arises, coined the
“informational dilemma” by Bolton et al. (2004a). Without a reputation system, interacting with
newcomers generates private information about the trustworthiness of these newcomers that may
be worthwhile in the longer run, but in the presence of a reputation system, this information is no
longer private and actors instead have an incentive to free-ride on others generating this
information, which in theory makes establishing a good reputation harder for newcomers. Third,
reputation systems may generate arbitrary inequality through “reputation cascades” (Frey and van
de Rijt 2016). Because actors prefer to cooperate with partners who already have established a
reputation of trustworthiness, interaction may concentrate on just a few actors who then receive a
disproportionate share of the benefits from exchange, at the expense of others who might be equally
trustworthy.

5 Naturally, cultural and political contexts may also impact cooperation. Within Kollock’s framework, they
may be seen as operating through actors’ motivations (e.g., culturally determined preferences) or by changing
the rules of the game (e.g., through the law).
6 Although I am not aware of any empirical research that has assessed the prevalence of repeated interaction
in the sharing economy.
7 For brevity, I ignore here other widely applied solutions such as insurances and payment systems.
Nevertheless, both can be understood in terms of social dilemmas: e.g., in trust games, insurances modify the
trustor’s payoffs, while payment systems limit the trustee’s actions.

Solutions for N-person dilemmas


The large and largely experimental literature on public goods suggests that, in the
presence of sufficiently many conditional cooperators, cooperation in public goods can be achieved
via various mechanisms, most prominently the possibility of costly punishment (e.g., Fehr and
Gintis 2007). Other mechanisms are feasible too, however (see Chaudhuri 2011 for a review), and
the most relevant of these for the sharing economy may be assortative mixing: when actors have
possibilities to choose who they want to play the public goods game with, production of the public
good becomes more likely (Ahn et al. 2009; Page et al. 2005). This speaks to the earlier observation
that many public goods problems in the sharing economy, being excludable, more closely resemble club
goods than pure public goods, but also raises the question of whether sharing economy exchanges of
this type can be scaled to large groups if they indeed rely on (self-)selection of participants of a
certain type.

Similarly, a large body of literature exists on the conditions under which common pool resources
can be sustained, including the seminal work by Ostrom (1990) who formulated eight “design
principles” for institutions aimed at sustaining common pool resources. The principles rely strongly
on clear group boundaries and on the participation and self-determination of the actors using the common
pool resource in creating and implementing such institutions. While these principles may to some
extent apply to online communities (Kollock 1999a; Kollock and Smith 1996), whether they apply
to sharing economy platforms is questionable (Bradley and Pargman 2017): these platforms often
have a clear separation between the platform, which creates and enforces the rules, and users, who have
little influence on these rules.

Research on generalized exchange systems often focuses on exchanges in fixed structures (i.e.,
‘network-generalized exchange’; Ekeh 1974; Yamagishi and Cook 1993), which is less relevant to
sharing economy interactions, where actors are typically free to choose who they exchange with.
Takahashi (2000) studies such “pure generalized exchange” from the perspective of evolutionary
game theory, and concludes that generalized reciprocity can emerge if actors can select recipients
based on recipients’ previous behavior, which suggests that some mechanisms must be present to
make this information available. Other research points to the importance of group solidarity (Molm
et al. 2007) or social learning (Tsvetkova and Macy 2014), but with few exceptions (Norbutas and
Corten 2018; Willer et al. 2012), these ideas have not been systematically studied in sharing
economy settings.

Conclusion
I hope this short chapter has demonstrated that social dilemma research has much to offer to the
study of the sharing economy. In particular, I hope to have shown that the social dilemma
perspective highlights strategic complexities of sharing economy interactions that are sometimes
obscured by a rather loose use of concepts such as “trust” or “commons.” For example, we have
seen that exchanges with direct compensation create different types of dilemmas than exchanges
without such compensation, and that N-person dilemmas can be expected to follow different logics
than two-person dilemmas, and may require different solutions. I also hope to have shown
that by framing sharing economy interactions in terms of social dilemmas, a wealth of theoretical
and empirical literature becomes available that may help explain sharing economy phenomena, as
well as inform future research. Conversely, the emergence of the sharing economy provides new
opportunities to study social dilemmas in field contexts (including field experiments), due to its
combination of, on the one hand, a wide variety of platforms and thereby institutional contexts and,
on the other hand, the fact that, given its online nature, behavior is often recorded in detail.

I end with some possible avenues for such research. First, the sharing economy provides many new
opportunities to study the important topic of reputation systems, and to address the issues
surrounding such systems as summarized above. Second, the widespread and sometimes
remarkable level of cooperation observed in the sharing economy raises the question of whether these
levels are due to the implementation of adequate institutions by sharing platforms, or to self-
selection by participants with certain social preferences. An answer to this question will have
important implications for the growth potential of the sharing economy. Third, the success of the
sharing economy raises new questions regarding the “governance of the commons” (Ostrom 1990),
given that sharing economy platforms that seem to share features of common pool resources at the
same time violate Ostrom’s celebrated design principles (Bradley and Pargman 2017). Fourth, the
sharing economy involves new types of social dilemmas that so far have rarely been studied, such
as the mixture of generalized exchange and trust that can be observed in neighborhood sharing
platforms (Kas et al. 2018). Finally, the emergence of large-scale “pure” generalized exchange
systems provides new opportunities to test both novel and existing hypotheses from the classic
tradition of social exchange theory.

References
Ahn, T. K., R. M. Isaac and T. C. Salmon (2009), ‘Coming and going: Experiments on endogenous
group sizes for excludable public goods’, Journal of Public Economics, 93 (1–2), 336–51.

Akerlof, G. A. (1970), ‘The Market for “Lemons”: Quality Uncertainty and the Market Mechanism’,
Quarterly Journal of Economics, 84, 488–500.

Axelrod, R. (1984), The Evolution of Cooperation, New York: Basic Books.

Bardhi, F. and G. M. Eckhardt (2012), ‘Access-Based Consumption : The Case of Car Sharing’, Journal
of Consumer Research, 39 (4), 881–98.

Belk, R. (2010), ‘Sharing’, Journal of Consumer Research, 35 (5), 715–734.

Berg, J., J. Dickhaut and K. McCabe (1995), ‘Trust, reciprocity, and social history’, Games and
Economic Behavior, 10 (1), 122–42.

Bolton, G. E., E. Katok and A. Ockenfels (2004a), ‘How Effective Are Electronic Reputation
Mechanisms? An Experimental Investigation’, Management Science, 50 (11), 1587–602.

Bolton, G. E., E. Katok and A. Ockenfels (2004b), ‘Trust among Internet Traders’, Analyse & Kritik, 26
(1), 185–202.

Botsman, R. and R. Rogers (2010), What’s Mine Is Yours, London: Collins.

Bradley, K. and D. Pargman (2017), ‘The sharing economy as the commons of the 21st century’,
Cambridge Journal of Regions, Economy and Society, 10 (2), 231–47.

Buchanan, J. M. (1965), ‘An Economic Theory of Clubs’, Economica, 32 (125), 1.

Buskens, V. (2002), Social Networks and Trust, Boston, MA: Kluwer Academic Publishers.

Camerer, C. and K. Weigelt (1988), ‘Experimental Tests of a Sequential Equilibrium Reputation
Model’, Econometrica, 56 (1), 1–36.

Chaudhuri, A. (2011), ‘Sustaining cooperation in laboratory public goods experiments: a selective
survey of the literature’, Experimental Economics, 14 (1), 47–83.

Cohen, B. and J. Kietzmann (2014), ‘Ride On! Mobility Business Models for the Sharing Economy’,
Organization and Environment, 27 (3), 279–96.

Coleman, J. S. (1964), ‘Collective Decisions’, Sociological Inquiry, 34 (1964), 166–81.

Coleman, J. S. (1990), Foundations of Social Theory, Cambridge, MA: Belknap.

Diekmann, A., B. Jann, W. Przepiorka and S. Wehrli (2014), ‘Reputation formation and the evolution
of cooperation in anonymous online markets’, American Sociological Review, 79 (1), 65–85.

Ekeh, P. P. (1974), Social Exchange Theory: The Two Traditions, London: Heinemann Educational
Books.

Erickson, K. and I. Sørensen (2016), ‘Regulating the sharing economy’, Internet Policy Review, 5 (2),
1–13.

Fehr, E. and H. Gintis (2007), ‘Human Motivation and Social Cooperation: Experimental and
Analytical Foundations’, Annual Review of Sociology, 33, 43–64.

Fehr, E. and K. M. Schmidt (1999), ‘A Theory of Fairness, Competition, and Cooperation’, Quarterly
Journal of Economics, 114 (3), 817–68.

Forsythe, R., J. L. Horowitz, N. E. Savin and M. Sefton (1994), ‘Fairness in Simple Bargaining
Experiments’, Games and Economic Behavior, 6 (3), 347–69.

Fremstad, A. (2016), ‘Sticky Norms, Endogenous Preferences, and Shareable Goods’, Review of
Social Economy, 74 (2), 194–214.

Frenken, K., T. Meelen, M. Arets and P. B. van de Glind (2015), ‘Smarter regulation for the sharing
economy’, The Guardian.

Frenken, K. and J. Schor (2017), ‘Putting the sharing economy into perspective’, Environmental
Innovation and Societal Transitions, accessed at https://doi.org/10.1016/j.eist.2017.01.003.

Frey, V. and A. van de Rijt (2016), ‘Arbitrary inequality in reputation systems’, Scientific Reports,
38304.

Friedman, J. W. (1971), ‘A non-cooperative equilibrium for supergames’, The Review of Economic
Studies, 38, 1–12.

Gardner, R., E. Ostrom and J. M. Walker (1990), ‘The Nature of Common-Pool Resource Problems’,
Rationality and Society, 2 (3), 335–58.

Heckathorn, D. D. (1989), ‘Collective Action and the Second-Order Free Rider Problem’, Rationality
and Society, 1 (1), 78–100.

Hess, C. and E. Ostrom (2003), ‘Ideas, Artifacts, and Facilities: Information as a Common-Pool
Resource’, Law and Contemporary Problems, 66 (1/2), 111–45.

Kas, J., R. Corten and A. van de Rijt (2018), Indirect and Generalized Reciprocity in an Investment
Game with Partial versus Universal Reputation Systems, Utrecht.

Kollock, P. (1998), ‘Social Dilemmas: The Anatomy of Cooperation’, Annual Review of Sociology, 24,
183–214.

Kollock, P. (1999a), ‘The economies of online cooperation’, in M. A. Smith and P. Kollock (eds),
Communities in Cyberspace, New York: Routledge, pp. 220–39.

Kollock, P. (1999b), ‘The production of trust in online markets’, Advances in Group Processes.

Kollock, P. and M. Smith (1996), ‘Managing the Virtual Commons: Cooperation and Conflict in
Computer Communities’, Computer-Mediated Communication: Linguistic, Social, and Cross-
Cultural Perspectives, 109–28.

Kostakis, V. and M. Bauwens (2014), Network Society and Future Scenarios for a Collaborative
Economy, London: Palgrave McMillan.

Kreps, D. M. (1990), ‘Corporate culture and economic theory’, in J. E. Alt and K. A. Shepsle (eds),
Perspectives on Positive Political Economy, Cambridge: Cambridge University Press, pp. 90–
143.

Macy, M. W. and A. Flache (2002), ‘Learning Dynamics in Social Dilemmas’, Proceedings of the
National Academy of Sciences, 99 (3), 7229–36.

Molm, L. D., J. Collett and D. Schaefer (2007), ‘Building solidarity through generalized exchange: A
theory of Reciprocity’, American Journal of Sociology, 113 (1), 205–42.

Neyman, A. (1985), ‘Bounded complexity justifies cooperation in the finitely repeated prisoners’
dilemma’, Economics Letters, 19 (3), 227–9.

Norbutas, L. and R. Corten (2018), ‘Sustainability of generalized exchange in the sharing economy:
The case of the “freecycling” facebook groups’, International Journal of the Commons, 12 (1),
accessed at https://doi.org/10.18352/ijc.789.

North, D. C. (1990), Institutions, Institutional Change and Economic Performance, Cambridge:
Cambridge University Press.

Nowak, M. A. and K. Sigmund (1998), ‘Evolution of indirect reciprocity by image scoring’, Nature,
393 (6685), 573–7.

Ostrom, E. (1990), Governing the Commons: The Evolution of Institutions for Collective Action,
Cambridge, MA: Cambridge University Press.

Ostrom, E. (2006), ‘The value-added of laboratory experiments for the study of institutions and
common-pool resources’, Journal of Economic Behavior & Organization.

Page, T., L. Putterman and B. Unel (2005), ‘Voluntary Association in Public Goods Experiments:
Reciprocity, Mimicry and Efficiency’, The Economic Journal, 115 (506), 1032–53.

Przepiorka, W. (2013), ‘Buyers pay for and sellers invest in a good reputation: More evidence from
eBay’, The Journal of Socio-Economics, 42, 31–42.

Przepiorka, W., L. Norbutas and R. Corten (2017), ‘Order without Law: Reputation Promotes
Cooperation in a Cryptomarket for Illegal Drugs’, European Sociological Review, 33 (6),
accessed at https://doi.org/10.1093/esr/jcx072.

Puschmann, T. and R. Alt (2016), ‘Sharing economy’, Business and Information Systems Engineering,
58 (1), 93–9.

Rapoport, A. (1974), ‘Prisoner’s Dilemma - Recollections and Observations’, in A. Rapoport (ed),
Game Theory as a Theory of Conflict Resolution, Dordrecht: Reidel, pp. 17–34.

Rapoport, A. and A. M. Chammah (1965), Prisoner’s Dilemma: A Study in Conflict and Cooperation,
Ann Arbor: University of Michigan Press.

Raub, W., V. W. Buskens and R. Corten (2015), ‘Social Dilemmas and Cooperation’, in N. Braun and
N. J. Saam (eds), Handbuch Modellbildung Und Simulation in Den Sozialwissenschaften,
Wiesbaden: Springer, pp. 597–626.

Raub, W. and J. Weesie (1990), ‘Reputation and Efficiency in Social Interactions: An Example of
Network Effects’, American Journal of Sociology, 96, 626–54.

Resnick, P. and R. Zeckhauser (2002), ‘Trust among strangers in Internet transactions: Empirical
analysis of eBay’s reputation system’, in M. R. Baye (ed), The Economics of the Internet and E-
Commerce. Vol. 11. Adv. Appl. Microeconomics., Amsterdam: Elsevier Science.

Resnick, P. and R. Zeckhauser (2006), ‘The value of reputation on eBay: A controlled experiment’,
Experimental Economics, 9 (2), 79–101.

Rubinstein, A. (1998), Modeling Bounded Rationality, Cambridge, MA: MIT Press.

Schor, J. (2014), Debating the Sharing Economy, accessed at www.greattransition.org.

Tadelis, S. (2016), ‘Reputation and feedback systems in online platform markets’, Annual Review of
Economics, 8, 231–340.

Takahashi, N. (2000), ‘The emergence of generalized exchange’, American Journal of Sociology, 105
(4), 1105–34.

Tsvetkova, M. and M. Macy (2014), ‘The Social Contagion of Generosity’, PloS One, 9 (2), e87275.

Willer, R., F. J. Flynn and S. Zak (2012), ‘Structure, Identity, and Solidarity’, Administrative Science
Quarterly, 57 (1), 119–55.

Yamagishi, T. and K. Cook (1993), ‘Generalized exchange and social dilemmas’, Social Psychology
Quarterly, 56 (4), 235–48.
