Dynamics of deceptive interactions in social networks

Rafael A. Barrio^{1,2}, Tzipe Govezensky^{3}, Robin Dunbar^{4,2}, Gerardo Iñiguez^{5,2}, and Kimmo Kaski^{2,4,6,7}

arXiv:1509.03918v1 [physics.soc-ph] 14 Sep 2015

1 Instituto de Física, Universidad Nacional Autónoma de México, 01000 México D.F., Mexico
2 Department of Computer Science, Aalto University School of Science, FI-00076 AALTO, Finland
3 Instituto de Investigaciones Biomédicas, Universidad Nacional Autónoma de México, 04510 México D.F., Mexico
4 Department of Experimental Psychology, University of Oxford, OX1 3UD, United Kingdom
5 Centro de Investigación y Docencia Económicas, Consejo Nacional de Ciencia y Tecnología, 01210 México D.F., Mexico
6 CABDyN Complexity Centre, Saïd Business School, University of Oxford, OX1 1HP, United Kingdom
7 Center for Complex Network Research (CCNR), Department of Physics, Northeastern University, Boston MA 02115, USA

September 15, 2015

Abstract
In this paper we examine the role of lies in human social relations by implementing some salient
characteristics of deceptive interactions into an opinion formation model, so as to describe the dynamical behaviour of a social network more realistically. In this model we take into account such
basic properties of social networks as the dynamics of the intensity of interactions, the influence of
public opinion, and the fact that in every human interaction it might be convenient to deceive or
withhold information depending on the instantaneous situation of each individual in the network.
We find that lies shape the topology of social networks, especially the formation of tightly linked,
small communities with loose connections between them. We also find that agents with a larger
proportion of deceptive interactions are the ones that connect communities of different opinion, and
in this sense they have substantial centrality in the network. We then discuss the consequences of
these results for the social behaviour of humans and predict the changes that could arise due to a
varying tolerance for lies in society.

keywords: social networks, anthropology, self-organised systems, mathematical modelling

Background

Deception, withholding information, making misleading statements, or blunt lies, are attitudes that most
societies abhor, and parents, mentors and educators invest a great deal of effort in teaching that such
behaviour is wrong and damages society [1, 2, 3, 4]. Yet it is also true that deception and lies are present

in practically all human interactions and societies [5, 6, 7, 8]. This being so, we must conclude that there
is a fundamental reason that prevents the social world from being totally honest.
Broadly speaking, trust-based exchange relationships play an important role in the emergence of
cooperation and complex structure in many social, economic, and biological systems [9, 10, 11]. In
human societies trust promotes people's willingness to engage in reciprocity [12], while deception is
likely to destroy the stability of such relationships by only favouring particular individuals [13, 14].
However, most research has been focused on how to detect and police deception [15, 16], rather than
on the mechanisms regulating the appearance of lies and their implications for the structure of social
networks.
Previously we have studied deception and its societal consequences by means of an agent-based opinion
formation model [17] where the state of an agent i is described with two time-dependent variables, i.e. its
true (but not public) opinion x_i and its public opinion y_i, in principle different from x_i. Their difference
d = |x_i - y_i| quantifies the lies that are told by agent i to its neighbours, which are linked by weighted
links Aij representing social interactions. Agents and links constitute a highly structured social network
where opinion formation takes place. Both state variables evolve with a characteristic time scale dt, while
link weights change on a different time scale D. In addition, the network structure co-evolves with the
opinion dynamics via a network rewiring process with its own slower time scale, such that the weakest
links are cut and the same number of new links are randomly created to conserve the average degree of
the network. In the model, deception is defined as a partially truthful exchange of information between
agents (that is, a linear combination of x_i and y_j) with the overall proportion of honesty in the system
regulated by a single parameter. Thus lies may be considered as pro- or anti-social interactions if the
information passed from agent i to agent j is proportional to y_j or -y_j, respectively. The selection of
pro- or anti-social deception mimics the agent's intention to be as similar or different as possible from
its neighbour [18]. In this context, pro-social lies are those that benefit the recipient rather than the
perpetrator, for example by continuing to reinforce the dyadic relationship between them. Common
examples might be liking something on someone's social media page even though one does not really
like it, or asserting that something is fine when in fact it is not.
This quite simple model already gives some hints about what the fundamental utility for lying might
be. We discovered that, although anti-social lies destroy the connectivity of the social network, a certain
frequency of pro-social deception actually enhances the centrality of liars (who serve as links between
small communities of honest people). However, in this model individuals are assumed to pursue a fixed
strategy: they are always honest individuals, pro-social liars or anti-social liars. In more realistic scenarios, of course, there are enormous fluctuations away from this simple fixed strategy set and individuals
vary their behaviour between the three strategies according to circumstances, even though they may
naturally tend towards one strategy most of the time. An important step in exploring the dynamics of
social deception, then, is to develop a model that incorporates a significant amount of strategic flexibility
at the individual level. Apart from adding more realism to the model, this has the important consequence
of allowing individuals and populations to evolve towards a natural equilibrium, as individuals adjust
their own behaviour in accordance with the cost and benefit regimes they encounter [13, 19, 20].
The fundamental question in modelling deception is: why do people lie? In human deception, the
key issue must be related to the benefits and costs when deciding what information to pass on in an
interaction with another person. From this point of view, lying is a decision-making problem with an
optimal solution dependent on the gains and risks of lying [21, 22]. It is therefore important to include
in the model some way of deciding the best possible answer in every instantaneous and directed dyadic
interaction. In this paper we propose a more realistic model for the dynamics of deception including

these features. First we describe the model in general, including the dynamics of link weights and the
decision-making process. Then we discuss the results of our numerical simulations and make concluding
remarks.

Methods

Like in our earlier study [23], the basic dynamical equation for the opinion of an agent can be written
as,
\frac{\partial x_i}{\partial t} = f_s(i)\,|x_i| + \alpha_i f_l(i), \qquad (1)

where the state variable xi is bounded by [-1,1] and represents the instantaneous opinion of agent i, such
that -1 corresponds to total disagreement and +1 to total agreement with a given topic of discussion.
The first term on the right-hand side describes an exchange of information between a pair of agents
through discussion, i.e. the interaction is short range. The second term stands for the influence of the
overall opinion in the network on agent i, and hence the interaction is long range. Both terms evolve
with a time scale dt called the transaction time. The parameter α_i is a random bounded variable that
represents the attitude of agent i to the overall opinion f_l(i), being near -1 if the agent is inclined to go
against the crowd and near +1 otherwise.
In accord with our earlier model of deceptive interactions between agents [17], we define a second
state variable y_i corresponding to other agents' public perception of the true but private x_i, from which
y_i may differ in value if agent i is not totally honest. The difference d = |x_i - y_i| stands for the amount
of dishonesty or the size of the lie. Hence the overall opinion f_l(i) should be formed with the publicly
available information (through social meetings, rumours, and news in the media) represented here by
the time-dependent variable y_i,
f_l(i) = \sum_{\ell=2}^{\ell_{\max}} \frac{1}{\ell} \sum_{j \in m_\ell(i)} y_j(t), \qquad (2)

where the second sum is over the set m_ℓ(i) of all agents j separated from agent i by a shortest-path
length ℓ = 2, ..., ℓ_max. We assume that the influence of an agent decays with the distance ℓ, i.e. the
smallest number of links needed to reach j from i in the network. Without loss of generality we also
consider a 1/ℓ dependence.
In Eq. (1) the short-range term is the direct interaction between agents with ℓ = 1,

f_s(i) = \sum_{j \in m_1(i)} w_{ij}(t), \qquad (3)

where w_ij(t) is the instantaneous information that agent j passes to i [see Eq. (5)]. Observe that in
general the matrix w is not symmetric; that is, the information that agent i gives to j, w_ji ≠ w_ij.
Therefore, the sum of the elements of a row in w gives f_s(i), while the sum of the elements of each
column in w is proportional to the average apparent opinion the agent holds in the network,

y_i(t) = \frac{1}{k_i} \sum_{j \in m_1(i)} w_{ji}(t), \qquad (4)

where k_i = |m_1(i)| is the degree of agent i. Explicitly, the public opinion y_i is the average of the
instantaneous information w_ji received by all neighbours j, and is thus bounded between -1 and +1.

Finally, we define the instantaneous information w_ij as,

w_{ij}(t) = \sigma_0(j, i, t), \qquad (5)

where the optimal opinion σ_0 that agent j shares with agent i (i.e. between truth and pro- or anti-social
lies) is the result of an individual decision-making process, as explained in section 2.2.
The nature of direct transactions is illustrated in Fig. 1. For example, the terms w_ij |x_i| in Eq. (1) imply
that if w_ij has the same sign as x_i, agent i will reinforce its position and get closer to the extreme opinion
sign(x_i). Next we introduce the dynamical processes involved in our deception model, as described in
the sections below.
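To make the transaction dynamics concrete, the following minimal Python sketch implements one update of Eqs. (1)-(4) on a weighted graph. It is an illustration under our own conventions: the function names (`opinion_step`, `long_range_field`), the cutoff `ell_max`, the time step `dt` and the clipping of opinions to [-1, 1] are assumptions, not part of the paper.

```python
import networkx as nx
import numpy as np

def long_range_field(G, i, y, ell_max=4):
    """Eq. (2): public opinions y_j summed over shells of shortest-path
    length ell = 2..ell_max around agent i, each shell weighted by 1/ell."""
    lengths = nx.single_source_shortest_path_length(G, i, cutoff=ell_max)
    return sum(y[j] / ell for j, ell in lengths.items() if ell >= 2)

def opinion_step(G, x, y, w, alpha, dt=0.01):
    """One Euler step of Eq. (1) for every agent; w[(sender, receiver)] is the
    instantaneous information of Eq. (5). Public opinions follow Eq. (4)."""
    x_new = {}
    for i in G.nodes():
        f_s = sum(w[(j, i)] for j in G.neighbors(i))             # Eq. (3)
        f_l = long_range_field(G, i, y)                          # Eq. (2)
        # opinions are kept in [-1, 1] by clipping (our simplification)
        x_new[i] = float(np.clip(x[i] + dt * (f_s * abs(x[i]) + alpha[i] * f_l), -1, 1))
    y_new = {i: sum(w[(i, j)] for j in G.neighbors(i)) / max(G.degree(i), 1)
             for i in G.nodes()}                                 # Eq. (4)
    return x_new, y_new
```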

2.1 Dynamics of link weights and rewiring scheme

In social networks individuals are connected by weighted links that vary over time in the course of their
dyadic interactions and decision making. We assume that bad decisions (not necessarily due to lies)
are punished by weakening the link weight Aij between agents i and j. This can be incorporated into
the model by introducing a simple dynamics for link weights,
\frac{\partial A_{ij}}{\partial t} = D\, T_{ij}(t), \qquad (6)

where D sets the time scale of change and Tij is a function of the four site variables associated with a
link, namely (xi , yi ) and (xj , yj ). Since Aij depends on two agents, we choose the following symmetric
form,



T_{ij}(t) = \left\{ [x_i(t)\, y_j(t)] + [y_i(t)\, x_j(t)] + 1 \right\} - P_{ij}(t), \qquad (7)

where the first square bracket represents similarity between agents according to the information agent
i has at its disposal, the second bracket is the corresponding term for agent j, and P_ij(t) is the instant
punishment for lying. Observe that the term in {...} varies between 3 and -1, such that links with T_ij < 0
are at risk of being cut, as A_ij approaches zero. The matrix T should be symmetric under exchange
of i and j, in which case the punishment the society imposes on liars reads as follows,
P_{ij}(t) = (1 - e)\left( |w_{ji}(t) - x_i(t)| + |w_{ij}(t) - x_j(t)| \right), \qquad (8)

where e is a parameter that measures the tolerance of society against lies, being 0 if it is intolerant, and
1 if it does not punish liars. Thus, the punishment Pij is proportional to the difference between the true
opinion of an agent and the instantaneous information it shares with its neighbour.
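A corresponding sketch of the link-weight dynamics of Eqs. (6)-(8), continuing the illustrative conventions above (the edge attribute name and the explicit Euler step are our choices):

```python
def link_weight_step(G, x, y, w, e=0.0, D=3.0, dt=0.01):
    """Euler step for the link weights A_ij (edge attribute 'weight'),
    following Eqs. (6)-(8); e is the social tolerance to lies."""
    for i, j in G.edges():
        # Eq. (8): punishment grows with the gap between what each agent
        # says across the link (w) and what it actually thinks (x)
        P = (1 - e) * (abs(w[(i, j)] - x[i]) + abs(w[(j, i)] - x[j]))
        # Eq. (7): perceived similarity (product form), shifted by +1, minus punishment
        T = (x[i] * y[j] + y[i] * x[j] + 1.0) - P
        G[i][j]["weight"] += D * T * dt                          # Eq. (6)
    return G
```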
The topology of a real social network may coevolve with the dyadic interactions between individuals [24]. Thus we introduce a rewiring scheme with dynamics dependent on link weights. We proceed by
cutting links with negative weight (A_ij < 0) and immediately creating a new link (with initial
weight 1) to substitute the cut link, in order to maintain the average degree of the network constant.
This makes the time scale for network rewiring longer than dt and variable, unlike in our former model in
which rewiring is performed at fixed intervals [17, 23].
The creation of new links is performed as follows. First we identify the two agents involved in the
cut link and choose the one with the least number of neighbours (i.e. the most isolated agent); then we
look at the second neighbours of this individual and create a link with the second neighbour (friend of
a [possibly lost] friend) that has the lowest degree. This bias for favouring agents with only a few links

assumes that such agents are more keen on making new friends. If there are no second neighbours, then
we create a link with one of the agents with the lowest degree in the whole network. As a further remark,
we note that the instantaneous information wij is not necessarily the same for everyone all the time [see
Eq. (5)], the net effect of which is that the rewiring time is variable and controlled for each link by the
slope Tij .
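The rewiring rule described above can be sketched as follows; the helper name and the tie-breaking by `min` are illustrative assumptions:

```python
def rewire(G):
    """Cut links whose weight has dropped to zero or below and replace each
    with a fresh link of weight 1, attached to a low-degree second neighbour
    of the more isolated endpoint, keeping the average degree constant."""
    dead = [(i, j) for i, j, d in G.edges(data=True) if d["weight"] <= 0]
    for i, j in dead:
        G.remove_edge(i, j)
        a = i if G.degree(i) <= G.degree(j) else j       # the most isolated agent
        second = {v for u in G.neighbors(a) for v in G.neighbors(u)}
        second -= {a} | set(G.neighbors(a))
        if second:
            b = min(second, key=G.degree)                # friend of a (possibly lost) friend
        else:
            others = [v for v in G.nodes() if v != a and not G.has_edge(a, v)]
            b = min(others, key=G.degree)                # fallback: lowest degree overall
        G.add_edge(a, b, weight=1.0)
    return G
```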

2.2 Decision-making process

In the Background we state that a key issue for human deceptive interactions is related to the benefit
and cost of lying, which an individual needs to evaluate in order to pass optimal information to others.
This means that in each transaction, acting agent j has to make a decision whether to lie or not to
neighbour i, by finding the extremal values of a utility function R that includes all gains and costs of
being deceitful or truthful,
R(\sigma) = H - L(\sigma) = G_H - C_H - G_L(\sigma) + C_L(\sigma), \qquad (9)

where σ is the opinion agent j decides to share with i, either the truth (σ = x_j) or a lie (σ ≠ x_j). Note
that the gain G_H and the cost C_H of being honest do not depend on σ, while the gain G_L and the
cost C_L of being dishonest depend on the particular opinion σ that agent j wishes to share. Then, the
optimal opinion σ_0(j, i, t) = w_ij(t) is a stationary point of R (either a maximum or a minimum) in the
permissible interval [-1, 1], implicitly defined by,

\left. \frac{\partial R}{\partial \sigma} \right|_{\sigma = \sigma_0} = 0. \qquad (10)

Under these conditions, the decision-making process for agent j is as follows. When interacting with
neighbour i, agent j finds the optimal opinion σ_0 by solving Eq. (10). If R(σ_0) > 0, then agent j ignores
σ_0 and shares its true opinion (i.e. w_ij = x_j), since being truthful is a better decision than not being
truthful. Otherwise, agent j shares the optimal opinion σ_0. Note that in general σ_0 stands for a lie,
except for the case when σ_0 = x_j. This particular case could be interpreted as a situation where an
agent (that has initially decided to lie) finds that the optimal decision is to be honest.
For the decision-making process to be complete, we need to find concrete expressions for the gains and
costs in Eq. (9), based on the available sociological knowledge about interactions between individuals.
The gain for being honest is considered to be prestige or reputation [8], which in our context is measured
by the degree kj . This is based on a previously studied sociological assumption [25, 26], namely that the
more connected you are, the more prestige you have, which means that you are considered trustworthy.
Therefore, we write the gain as,
G_H = \frac{k_j - \min(\{k\})}{\max(\{k\}) - \min(\{k\})}, \qquad (11)

where we have normalised the degree to compare agents within and between communities.
The risk associated with being honest is proportional to the apparent disparity of opinion, as this
distance increases antagonism between agents. In other words, people tend to use small, white lies
to protect social links rather than put them at risk, since the difference in opinion corresponding to
complete honesty may create tension in the relationship [27]. Then we write,
C_H = \frac{|x_j - y_i|}{2}, \qquad (12)

which is normalised to make the gain and cost terms comparable.


If the main aim of an agent's deception is to avoid rejection by strengthening its social links, then
everyday experience suggests that the gain due to lying has two components. First, the liar benefits by
not losing face, that is, by minimising the distance |σ - y_j| between its lie and its own public opinion so that the
lie is not discovered easily. Second, agent j gains by mimicking the response w_ji that agent i is
giving back, i.e. by pretending to be more similar to its peers than it is in reality. In this case we write,
G_L(\sigma) = \frac{1}{2}\left(1 - \frac{|\sigma - y_j|}{2}\right) + \frac{1}{2}\left(1 - \frac{|\sigma - w_{ji}|}{2}\right). \qquad (13)

The risk of lying is also two-fold. On the one hand, agent j could pass information that differs from its true opinion (x_j),
and the bigger this difference, the higher the penalty (or cost) that
the liar will incur from being found out [28]. On the other hand, the agent could try to mimic an agreement
with the public opinion of agent i, thereby risking a mismatch if agent i is deceptive: the bigger the
difference between the lie and this public opinion, the bigger the cost the liar bears from being found out.
This being so, the risk is the product of the two possibilities,
C_L(\sigma) = \beta\, \frac{|\sigma - x_j|\, |\sigma - y_i|}{4}. \qquad (14)

We have normalised Eqs. (11)-(13) such that all of them vary between zero and one. The coefficient β
in Eq. (14) is a quantity that controls the relative weight of the cost of lying, which could depend
on other social and cultural properties. We have examined the behaviour of the utility function R and
determined that β = 4 balances the gains and costs between lying and being honest (see Supplementary
Information).
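As an illustration of how the decision of Eqs. (9)-(14) can be evaluated, the sketch below scans σ on a grid in [-1, 1] instead of solving Eq. (10) analytically (see the Supplementary Information for the closed-form stationary points); the grid search and the function names are our simplifications, not the authors' procedure.

```python
import numpy as np

def utility(sigma, x_j, y_j, y_i, w_ji, k_j, k_min, k_max, beta=4.0):
    """R(sigma) of Eq. (9), assembled from Eqs. (11)-(14)."""
    G_H = (k_j - k_min) / max(k_max - k_min, 1)                   # Eq. (11), prestige
    C_H = abs(x_j - y_i) / 2                                      # Eq. (12)
    G_L = 0.5 * (1 - abs(sigma - y_j) / 2) + 0.5 * (1 - abs(sigma - w_ji) / 2)  # Eq. (13)
    C_L = beta * abs(sigma - x_j) * abs(sigma - y_i) / 4          # Eq. (14)
    return G_H - C_H - G_L + C_L

def decide(x_j, y_j, y_i, w_ji, k_j, k_min, k_max, beta=4.0):
    """Information w_ij that j passes to i: the truth x_j if even the most
    favourable lie has R > 0, otherwise that optimal sigma_0."""
    grid = np.linspace(-1, 1, 401)
    R = np.array([utility(s, x_j, y_j, y_i, w_ji, k_j, k_min, k_max, beta) for s in grid])
    sigma_0 = float(grid[np.argmin(R)])
    return x_j if R.min() > 0 else sigma_0
```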
To summarise, the dynamics of our model is highly non-linear: the elements of the adjacency matrix
A depend on the vector y (the other agents' perception of an agent's true opinion), which in turn is
calculated at every interaction using w. The matrix w, in turn, is the instantaneous flow of information
through each link, resulting from an agent's decision about the optimal information to pass on (σ_0). Our
new approach of casting transactions between agents as an optimised decision-making process constitutes
a major difference from our earlier model [17]. The benefit is that we now avoid predefining individuals as
either pro- or anti-social liars. Nevertheless, we can still classify the lie σ_0 in a binary way by comparing
the distances |σ_0 - y_i| and |σ_0 + y_i|, being pro-social if the former is smaller and anti-social otherwise.
Then, the threshold that classifies σ_0 as a pro- or anti-social lie is 0, the midpoint between ±y_i. We
emphasise that σ_0 = σ_0(j, i, t) is a function of j, i, and t, obtained by finding a stationary point of H - L
as given in Eq. (9). Allowing deception to vary in this way is more realistic than our previous approach
of having fixed, predefined phenotypes that do not vary in their behaviour: everyday experience tells us
that people do not say the same thing to everybody all the time.
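The binary classification just described amounts to a one-line check (illustrative naming):

```python
def lie_type(sigma_0, y_i):
    """Classify a lie as pro- or anti-social by whether it is closer to
    y_i or to -y_i (threshold at 0, the midpoint between the two)."""
    return "pro-social" if abs(sigma_0 - y_i) < abs(sigma_0 + y_i) else "anti-social"
```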

Results

Using the model described above, we perform extensive numerical simulations for networks of different
sizes (N = 100, 200 and 500), starting with random initial conditions for the state variables x_i, y_i and
attitude parameters α_i, and by following a simple Euler integration scheme for a sufficient number of
time steps to obtain extreme opinions (x_i = ±1) for all agents. In some cases, we find a small number
of agents remaining locked in an undecided state |x_i| < 1. This number depends on the values of the
parameters e (tolerance of society against lies) and D (time scale for the growth of link weights), the

only two parameters whose variation we consider. We can follow the time history of the process and
monitor the state variables and also the amount of instantaneous lying for all agents in the system. We
may also distinguish anti-social lies from pro-social ones by monitoring the optimal opinion σ_0(j, i, t). If
this quantity is nearer to y_i, then we consider it as a pro-social lie, and if it is nearer to -y_i we take it
as an anti-social lie [17]. As simulation results are qualitatively unaffected by network size, from now on
we only consider N = 100.
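For orientation, a minimal driver combining the sketches above might look like the following; N = 100, D = 3 and the 600-step horizon follow the text, while the average degree, time step and initialisation are our assumptions.

```python
import networkx as nx
import numpy as np

def simulate(N=100, k_avg=6, e=0.0, D=3.0, dt=0.01, max_steps=600, seed=0):
    """Minimal driver: random initial opinions, coupled updates of opinions,
    link weights and topology until the step budget or extreme opinions."""
    rng = np.random.default_rng(seed)
    G = nx.gnm_random_graph(N, N * k_avg // 2, seed=seed)
    nx.set_edge_attributes(G, 1.0, "weight")
    x = {i: rng.uniform(-1, 1) for i in G}          # true opinions
    y = {i: rng.uniform(-1, 1) for i in G}          # public opinions
    alpha = {i: rng.uniform(-1, 1) for i in G}      # attitude parameters
    w = {(j, i): y[j] for i in G for j in G.neighbors(i)}   # initial messages
    for _ in range(max_steps):
        k = dict(G.degree())
        k_min, k_max = min(k.values()), max(k.values())
        # each agent decides what to tell each neighbour (Eq. 5)
        w = {(j, i): decide(x[j], y[j], y[i], w.get((i, j), y[i]),
                            k[j], k_min, k_max)
             for i in G for j in G.neighbors(i)}
        x, y = opinion_step(G, x, y, w, alpha, dt)   # Eqs. (1)-(4)
        G = link_weight_step(G, x, y, w, e, D, dt)   # Eqs. (6)-(8)
        G = rewire(G)                                # rewiring scheme
        if all(abs(v) >= 1 for v in x.values()):     # extreme opinions reached
            break
    return G, x, y
```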
In Fig. 2 we show typical realisations of the dynamics while keeping D = 3 for the two extreme
values of the parameter e. Observe that honest agents with similar opinion form small clusters (i.e.
the network is assortative with respect to xi [29]), but there is also a large number of liars that serve
as weak links between these tightly bonded communities and can dwell within very small communities.
The effect of increasing social tolerance (e = 1) is small, but surprisingly the relative number of liars is
smaller when there is no punishment for lying. This result is in qualitative agreement with empirical
observations made in schools with and without punishment [30], where the authors report that a punitive
environment not only fosters increased dishonesty but also improves children's abilities to lie in order to conceal
their transgressions.
In Fig. 3 we show the proportion of pro- and anti-social lies in the instantaneous response of each
agent to all of its neighbours, for the case e = 1 of Fig. 2. Observe that many agents lie pro-socially
all the time and to all their neighbours. In contrast, there are very few anti-social lies and they do not
persist, but instead disappear completely for some agents while for others they become intermittent. If
we reduce the social tolerance for lying, anti-social behaviour disappears completely. Notice also that,
despite using ideal conditions for the appearance of big lies (e = 1), there are always some agents that
behave in a totally honest manner all the time.
To analyse these results further, we find it convenient to quantify separately various groups of agents.
We focus our attention on those agents who are totally honest throughout the time line of the model,
those who tell only pro-social lies, those who tell anti- or pro-social lies indiscriminately, and those who
only lie anti-socially. Notice that for this kind of analysis to succeed, we need many realisations to
obtain an average value for each case. Also, we need to look at probability distributions rather than
well-defined categories, since the freedom to decide produces strategy changes in all agents. The model
output suggests that agents who only lie anti-socially are very few in number, as can be seen from
Fig. 4 where we show the probability distribution of the proportion of anti-social lies for the case of zero
tolerance (e = 0) and no punishment (e = 1). Note that social tolerance to lying has very little effect on
the appearance of anti-social lies, and that most of the agents turn out to tell very few lies.
In Fig. 5 we show the probability distribution of the proportion of lies per dyadic interaction, r, for
agents who lie indiscriminately (Anti-Pro case) and for those who tell only pro-social lies (Pro case),
for the two extreme values of the social parameter e. Explicitly, r is the fraction of the total number
of interactions that are lies. These results suggest that nearly 50% of the agents lie pro-socially a
small amount of time (< 10% of the total time). However, there are always a few agents who lie more
frequently: ~20% of agents lie all the time, regardless of the level of social tolerance. This result implies
that it is disadvantageous to lie all the time. Fig. 5 also suggests that the Pro and Anti-Pro strategies
are qualitatively quite similar, in the sense that many agents lie sporadically (small r) and a few agents
(~20%) lie most of the time. Obviously, the relative numbers depend on the social tolerance parameter e
as well. An interesting observation here is that the lack of punishment for lies makes very little difference
to the appearance of anti- or pro-social lies.
Ref. [5] reports a statistical study of the number and nature of lies told by a group of 144 individuals,
the results of which are summarised in Table 2 therein for comparison with the results of our model.

For instance, the percentage of honest people (those who never tell lies) is 1.3% for individuals recruited
from a college population, and 8.6% for individuals from a local community. Our results show that 2.7%
of agents are honest if there is punishment, and 3.5% if there is not. Furthermore, the mean number
of lies per day measured was roughly 2, and the mean number of social interactions 6, of which only
61% were dyadic interactions [5]. This means that ~50% of the dyadic interactions were lies. The area
under the curve in Fig. 5 (without distinguishing between pro- and anti-social lies) gives about 53%,
thus roughly agreeing with the experimental findings. In addition, we predict that the number of lies
per social interaction (obtained by calculating the mean value for the amount of dishonesty or the size of
the lie, d) is 0.38, in close agreement with the value 0.31 ± 0.11 reported in Table 2 of the experimental
study [5].
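Reading these figures together (and assuming, as the comparison implies, that the reported lies occur within the dyadic interactions), the implied rate is

\[
\frac{\text{lies per day}}{\text{dyadic interactions per day}} \approx \frac{2}{6 \times 0.61} \approx 0.55,
\]

i.e. roughly half of the dyadic interactions involve a lie, which is the ~50% figure quoted above.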
We now investigate the social advantages of lying. This is done by examining network measures such
as the weighted clustering coefficient (WCC) and betweenness centrality (BC). In a weighted network or
graph, WCC is the geometric average of link weights in a subgraph, where weights are normalised by the
maximum weight in the network [31, 32]. BC is the sum of the fraction of all-pairs shortest paths that
pass through a node [33]. With these measures we see that liars serve as bridges between communities
(Fig. 2); hence they sacrifice their WCC (belonging to highly connected clusters) in order to improve
their BC (easy communication with all members of the network).
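Both measures are available in standard network libraries; for instance, networkx's weighted clustering follows the geometric-average definition of [31, 32] and its betweenness centrality that of [33], so the quantities used here can be obtained on the co-evolved graph as:

```python
import networkx as nx

def wcc_and_bc(G):
    """Weighted clustering coefficient [31, 32] and betweenness centrality [33]."""
    wcc = nx.clustering(G, weight="weight")   # geometric average of subgraph edge weights
    bc = nx.betweenness_centrality(G)         # fraction of all-pairs shortest paths through a node
    return wcc, bc
```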
It is possible for a deceptive agent to increase its clustering coefficient provided it tells small lies
(d ≲ 0.1), irrespective of whether these are anti- or pro-social, even in the face of social punishment. In
Fig. 6, we show WCC averaged over 300 runs of the model for N = 100 agents. The conclusion is that
from the perspective of clustering, there is no benefit to lying unless the lie is small. We also see that a
society with total tolerance to lying does not provide liars with much advantage. However, when there
is punishment, the agents who lie pro- and anti-socially have an advantage over totally honest agents,
provided the number of lies is small. This can be seen in Fig. 7, where we show the WCC probability
distribution for selected values of the proportion of lies per dyadic interaction r.
In conclusion, the real advantage to being a liar is that BC increases for pro-social liars, provided
they tell small lies. This could be interpreted as a mechanism people use to fit into a social network
better. In Fig 8 we show the BC median taken over 300 runs of the model as a function of the size of lies,
and for different groups of agents. We present the median, instead of the average, because for the form
of distribution functions we have here, the median is a more robust quantity. In Fig 8(a) we show the
case of zero tolerance (e = 0), where only pro-social liars have an advantage over honest agents, provided
their lies are very small. In Fig 8(b) we show the same for tolerance e = 1.

Discussion and conclusions

Our model for the dynamics of lying in a social network of humans incorporates the relevant fact that
the individual act of lying corresponds to a flexible, personal and instantaneous decision. Hence we have
mapped this action to a decision-making problem for individuals in society so that they can adjust their
behaviour to the situations they face. In contrast to our earlier model [17], where agents have fixed
behavioural strategies, the present model is more realistic as the information an agent passes on (either
as truth, or as a pro- or anti-social lie) is a function of the circumstances the agent encounters. In effect,
we assume that agents learn and adjust their behaviour in the light of experience. In this respect, the
present model lies at the opposite extreme from our previous model in that it does not assume that
agents have inherited psychological predispositions to behave in a particular way. In all likelihood, of

course, the real world lies somewhere between these two extremes. The fact that the findings
from the two models are in broad agreement is therefore comforting, in that it suggests that irrespective
of where reality actually lies our findings will be robust.
The model studied here does not have a network rewiring time scale that is proportional to the
fundamental transaction time scale dt. Nevertheless, the rewiring time scale can still be tuned by using
one of two parameters: D (the time scale for the growth of link weights) or e (the tolerance of society
against lies). In addition, we see that as the tolerance parameter e increases, society is more tolerant of
lies and the time at which bonds are deleted increases, thus making the process slower. Furthermore, in
Fig. 2 we see that communities are much better defined when e = 0; as a result intolerance to lies and a
potential for high punishment shortens the mean life of liars, segregating the network into communities
with strong internal links.
In all our simulations we find that the number of anti-social lies diminishes, while pro-social lies
persist in considerable numbers throughout the dynamical progress of the system. Here we see that the
social tolerance parameter e has little effect on the proportion of anti-social lies, although it regulates
the total number of lies. Most of the agents lie sporadically and only very few seem to lie all the time.
This indicates that true liars are very rare in society, although they are nonetheless very special since
they have large BC. We also find that liars who tell small lies (d < 0.1) have larger WCC. In addition,
we observe that the dynamics favours the formation of cliques of purely honest agents, and that liars are
usually found on the perimeter of cliques and connected by weak links.
We also show that in general being honest pays off, but in some circumstances liars acquire an
advantage over honest agents. For instance, agents who occasionally tell small lies have larger WCC
and BC than honest agents (see Fig. 7 for r ≤ 0.2). Moreover, an agent who tells a fair number of
medium-sized lies (d ≈ 1) could attain a larger BC than when it chooses to be honest.
In summary, it is interesting to note that for small lies, all liars are better off than honest agents.
Even more interesting is the fact that there is a maximal advantage for people who tell sizeable anti-social
lies. In short, anti-social lies yield considerable benefits for liars in appropriate circumstances. We know
that anti-social lies normally destroy the social network when they are widely distributed throughout
society [17]. However, our findings suggest that, in certain specific circumstances, they could have the
opposite effect and make the network more robust. This implies that we need to identify the conditions
under which such a situation arises, by examining the local circumstances of those agents who present
this peculiar property. Paradoxically, it might then be possible to increase the information flow in the
network by adding appropriate motifs that allow agents to have both high BC and WCC.
Competing interests: We have no competing interests.
Authors' contributions: All authors conceived, designed and coordinated the study. RAB and GI
developed and analysed the model. RAB and TG carried out the numerical and statistical analyses. All
authors helped draft the manuscript and gave final approval for publication.
Funding: RAB acknowledges support from Conacyt project No. 179616. RD's research is funded by
a European Research Council Advanced grant. GI and KK acknowledge support from EU's FP7 FET
Open STREP Project ICTeCollective No. 238597, and GI from the Academy of Finland.

References

[1] M. Lachmann, S. Számadó, and C. T. Bergstrom. Cost and conflict in animal signals and human language. Proc. Natl. Acad. Sci. USA, 98:13189-13194, 2001. DOI: 10.1073/pnas.231216498.

[2] C. S. Wang, A. D. Galinsky, and J. K. Murnighan. Bad drives psychological reactions, but good propels behavior: Responses to honesty and deception. Psychol. Sci., 20:634-644, 2009. DOI: 10.1111/j.1467-9280.2009.02344.x.

[3] F. Xu, X. Bao, G. Fu, V. Talwar, and K. Lee. Lying and truth-telling in children: From concept to action. Child Dev., 81:581-596, 2010. DOI: 10.1111/j.1467-8624.2009.01417.x.

[4] M. Popliger, V. Talwar, and A. Crossman. Predictors of children's prosocial lie-telling: Motivation, socialization variables, and moral understanding. J. Exp. Child Psychol., 110:373-392, 2011. DOI: 10.1016/j.jecp.2011.05.003.

[5] B. M. DePaulo, D. A. Kashy, S. E. Kirkendol, M. M. Wyer, and J. A. Epstein. Lying in everyday life. J. Pers. Soc. Psychol., 70:979-995, 1996. DOI: 10.1037/0022-3514.70.5.979.

[6] J. T. Hancock, J. Thom-Santelli, and T. Ritchie. Deception and design: The impact of communication technology on lying behavior. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 129-134. ACM, New York, 2004. DOI: 10.1145/985692.985709.

[7] K. B. Serota, T. R. Levine, and F. J. Boster. The prevalence of lying in America: Three studies of self-reported lies. Hum. Commun. Res., 36:2-25, 2010. DOI: 10.1111/j.1468-2958.2009.01366.x.

[8] M. S. Panasiti, E. F. Pavone, A. Merla, and S. M. Aglioti. Situational and dispositional determinants of intentional deceiving. PLoS ONE, 6:e19465, 2011. DOI: 10.1371/journal.pone.0019465.

[9] M. A. Nowak and K. Sigmund. Evolution of indirect reciprocity. Nature, 437:1291-1298, 2005. DOI: 10.1038/nature04131.

[10] G. Roberts and J. S. Renwick. The development of cooperative relationships: An experiment. Proc. R. Soc. B, 270:2279, 2003. DOI: 10.1098/rspb.2003.2491.

[11] W.-S. Kim. Effects of a trust mechanism on complex adaptive supply networks: An agent-based social simulation study. J. Artif. Soc. Soc. Simul., 12:4, 2009.

[12] S. V. Paunonen. You are honest, therefore I like you and find you attractive. J. Res. Pers., 40:237-249, 2006. DOI: 10.1016/j.jrp.2004.12.003.

[13] W. A. Searcy and S. Nowicki. The Evolution of Animal Communication: Reliability and Deception in Signaling Systems. Princeton University Press, Princeton, 2005.

[14] A. Sutcliffe and D. Wang. Computational modelling of trust and social relationships. J. Artif. Soc. Soc. Simul., 15:3, 2012.

[15] D. Nyberg. The Varnished Truth: Truth Telling and Deceiving in Ordinary Life. University of Chicago Press, Chicago, 1993.

[16] K. H. Craik. Reputation: A Network Interpretation. Oxford University Press, USA, 2008.

[17] G. Iñiguez, T. Govezensky, R. Dunbar, K. Kaski, and R. A. Barrio. Effects of deception in social networks. Proc. R. Soc. B, 281:20141195, 2014. DOI: 10.1098/rspb.2014.1195.

[18] F. Gino and L. Pierce. Dishonesty in the name of equity. Psychol. Sci., 20:1153-1160, 2009. DOI: 10.1111/j.1467-9280.2009.02421.x.

[19] S. Számadó. Cheating as a mixed strategy in a simple model of aggressive communication. Anim. Behav., 59:221-230, 2000. DOI: 10.1006/anbe.1999.1293.

[20] J. T. Rowell, S. P. Ellner, and H. K. Reeve. Why animals lie: How dishonesty and belief can coexist in a signaling system. Am. Nat., 168:E180-E204, 2006. DOI: 10.1086/508809.

[21] U. Gneezy. Deception: The role of consequences. Am. Econ. Rev., 95:384-394, 2005.

[22] S. Mitri, D. Floreano, and L. Keller. The evolution of information suppression in communicating robots with conflicting interests. Proc. Natl. Acad. Sci. USA, 106:15786-15790, 2009. DOI: 10.1073/pnas.0903152106.

[23] G. Iñiguez, J. Kertész, K. K. Kaski, and R. A. Barrio. Opinion and community formation in coevolving networks. Phys. Rev. E, 80:066119, 2009. DOI: 10.1103/PhysRevE.80.066119.

[24] T. Gross and H. Sayama. Adaptive Networks. Springer-Verlag, Berlin, 2009.

[25] J. Henrich and F. J. Gil-White. The evolution of prestige: Freely conferred deference as a mechanism for enhancing the benefits of cultural transmission. Evol. Hum. Behav., 22:165-196, 2001. DOI: 10.1016/S1090-5138(00)00071-4.

[26] T. Dávid-Barrett and R. I. M. Dunbar. Social elites can emerge naturally when interaction in networks is restricted. Behav. Ecol., 25:58-68, 2014. DOI: 10.1093/beheco/art085.

[27] R. I. M. Dunbar and A. Machin. Sex differences in relationship conflict and reconciliation. J. Evol. Psych., 12:109-133, 2014. DOI: 10.1556/JEP-D-13-00024.

[28] J. T. Hancock, C. Toma, and N. Ellison. The truth about lying in online dating profiles. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 449-452. ACM, New York, 2007. DOI: 10.1145/1240624.1240697.

[29] M. E. J. Newman. The structure and function of complex networks. SIAM Rev., 45:167-256, 2003. DOI: 10.1137/S003614450342480.

[30] V. Talwar and K. Lee. A punitive environment fosters children's dishonesty: A natural experiment. Child Dev., 82:1751-1758, 2011. DOI: 10.1111/j.1467-8624.2011.01663.x.

[31] J.-P. Onnela, J. Saramäki, J. Kertész, and K. Kaski. Intensity and coherence of motifs in weighted complex networks. Phys. Rev. E, 71:065103(R), 2005. DOI: 10.1103/PhysRevE.71.065103.

[32] J. Saramäki, M. Kivelä, J.-P. Onnela, K. Kaski, and J. Kertész. Generalizations of the clustering coefficient to weighted complex networks. Phys. Rev. E, 75:027105, 2007. DOI: 10.1103/PhysRevE.75.027105.

[33] U. Brandes. On variants of shortest-path betweenness centrality and their generic computation. Soc. Networks, 30:136-145, 2008. DOI: 10.1016/j.socnet.2007.11.001.

Short title for page headings: Dynamics of deception in social networks


Figure 1: (Online version in colour.) Diagram of the transaction dynamics of the model. Agent i
perceives the opinion wij from neighbour j and changes its own true opinion according to Eq. (1).

[Figure 2 panels: e = 0 and e = 1.]

Figure 2: Examples of networks with N = 100 agents after tf = 600 time iterations of the dynamics, for
e = 0, 1 and D = 3. The colour code for the agents is: honest and x_i = 1 (white), honest and x_i = -1
(black), and liars (light blue or dark green) for x_i = 1 or x_i = -1, respectively. The blue/red circles are
undecided agents with 0 < x_i < 1 and -1 < x_i < 0, respectively. Observe that the network is assortative
with respect to the opinion xi . The width of the links indicates their weight.


[Figure 3 panels: pro-social lies and anti-social lies.]
Figure 3: Time history of the proportion of pro- and anti-social lies in the instantaneous response of
each agent to all of its neighbours, for e = 1. Dark/light dots correspond to high/low proportions,
respectively. Observe that pro-social lies are abundant and persistent. On the contrary, anti-social lies
are very few and intermittent.


[Plot: cumulative probability versus proportion of anti-social lies, curves for e = 0 and e = 1.]

Figure 4: (Online version in colour.) Probability distribution of the proportion of anti-social lies, obtained
from 300 runs of networks with N = 100 agents. The two extreme cases of social tolerance to lies (e = 0,
1) are shown.

[Plot: cumulative probability versus proportion of lies per dyadic interaction; curves for AntiPro e = 0, Pro e = 0, AntiPro e = 1, Pro e = 1.]

Figure 5: (Online version in colour.) Probability distribution of the proportion of lies per dyadic interaction, obtained from 300 runs of networks with N = 100 agents. Dotted lines include pro-social lies
only, while continuous lines indicate both anti- and pro-social lies. Orange/blue (light/dark gray) lines
correspond to e = 1, 0, respectively. The normalisation factor is the total time span of each dynamics.


[Plots (a) and (b): average WCC versus lie size d, curves for AntiPro, Pro and Honest.]

Figure 6: (Online version in colour.) Average weighted clustering coefficient (WCC) as a function of the
size of the lie d for: (a) zero tolerance (e = 0), and (b) no punishment (e = 1). Triangles correspond to
pro-social lies only, and dots to both anti- and pro-social lies. The WCC for totally honest networks is
indicated by a large dot at d = 0.


[Plots (a) and (b): cumulative probability of WCC, curves for r ≤ 0.2, 0.2 < r ≤ 0.5, 0.5 < r ≤ 0.8, 0.8 < r, and Honest.]

Figure 7: (Online version in colour.) WCC probability distribution for several values of r in a society
with zero tolerance (e = 0), for the: (a) Anti-Pro and (b) Pro cases. Notice the advantage over honest
people that agents who lie indiscriminately gain, in contrast to those who only lie pro-socially.


[Plots (a) and (b): median BC versus lie size d, curves for AntiPro, Pro and Honest.]

Figure 8: (Online version in colour.) Median of betweenness centrality (BC), obtained from 300 runs in
networks with N = 100 agents. Symbols are as in Fig. 6. Notice the advantage over honest individuals
for agents who lie indiscriminately, in contrast to those who only lie pro-socially, for most lie sizes d.


Supplementary Information for
"Dynamics of deceptive interactions in social networks"
R. A. Barrio, T. Govezensky, R. Dunbar, G. Iñiguez, K. Kaski
Corresponding author email: [email protected]

Calculation of the optimal lie


The optimal lie σ_0 is an extremal of R given by Eq. (10), where,

R(\sigma) = G_H - \frac{|x_j - y_i|}{2} - \frac{1}{2}\left(1 - \frac{|\sigma - y_j|}{2}\right) - \frac{1}{2}\left(1 - \frac{|\sigma - w_{ji}|}{2}\right) + \beta\, \frac{|\sigma - y_i|\, |\sigma - x_j|}{4}, \qquad (15)

according to Eqs. (9) and (12)-(14). The first three terms on the right-hand side are normalised to one,
while the factor β could in principle have any value, depending on the cost we wish to apply to an act
of deception. Observe that the first two terms do not depend on σ. Then, taking the derivative,

\frac{\partial R}{\partial \sigma} = \frac{1}{4} \frac{\partial}{\partial \sigma} \left[ |\sigma - y_j| + |\sigma - w_{ji}| \right] + \frac{\beta}{4} \left[ |\sigma - y_i|\, \frac{\partial |\sigma - x_j|}{\partial \sigma} + |\sigma - x_j|\, \frac{\partial |\sigma - y_i|}{\partial \sigma} \right], \qquad (16)

which evaluated at σ_0 [as in Eq. (10)] gives,

\mathrm{sign}(\sigma_0 - y_j) + \mathrm{sign}(\sigma_0 - w_{ji}) = -\beta \left[ |\sigma_0 - y_i|\, \mathrm{sign}(\sigma_0 - x_j) + |\sigma_0 - x_j|\, \mathrm{sign}(\sigma_0 - y_i) \right]. \qquad (17)

The solution σ_0 depends on the values of the pair of signs on the left-hand side of Eq. (17), denoted
as (++), (+-), (-+), or (--).

Case 1

Let us first consider the case when the two terms on the left-hand side of Eq. (17) are (+-) or (-+). In
this situation, agent j detects that there is a difference between its public opinion (y_j) and what agent
i is saying back (w_ji). This implies that,

0 = |\sigma_0 - y_i|\, \mathrm{sign}(\sigma_0 - x_j) + |\sigma_0 - x_j|\, \mathrm{sign}(\sigma_0 - y_i). \qquad (18)

Squaring this equation we get (\sigma_0 - y_i)^2 = (\sigma_0 - x_j)^2, i.e.,

\sigma_0 = \frac{x_j + y_i}{2}. \qquad (19)

[Figure 9 panels, R = H - L versus σ: (a) x_j = 1.0, y_j = 1.0, y_i = -1.0, w_ji = -1.0; (b) x_j = 0.5, y_j = 0.7, y_i = -0.2, w_ji = -0.1; (c) x_j = 0.5, y_j = -0.6, y_i = -0.2, w_ji = -0.1; (d) x_j = 0.5, y_j = 0.6, y_i = 0.3, w_ji = 0.4.]

Figure 9: Examples of the utility function R(σ) = H - L(σ), for several values of the variables x_j, y_j,
y_i, w_ji, and β = 1, 4, 7 (dashed, continuous, and dotted lines, respectively). In all cases G_H = 0.5.

Eq. (19) is valid even for x_j = y_i, in which case agent j should be totally honest (σ_0 = x_j).
In other words, the lie that agent j considers to be the best is exactly in the middle between its true
opinion (x_j) and the public opinion of agent i (y_i). This seems reasonable, since it implies an action to
minimise confrontation as much as possible. Observe that in this case the solution does not depend on β.

Case 2

Now let us consider the remaining cases, (++) or (--), for the pair of signs on the left-hand side of
Eq. (17), which are only possible when agent i is telling agent j something very near the public opinion
of j. From Eq. (17) we have,

\pm 2 = -\beta \left[ |\sigma_0 - y_i|\, \mathrm{sign}(\sigma_0 - x_j) + |\sigma_0 - x_j|\, \mathrm{sign}(\sigma_0 - y_i) \right]. \qquad (20)

Squaring this equation and considering that \mathrm{sign}(\cdot)\,|\cdot| = \cdot,

4 = \beta^2 \left[ (\sigma_0 - y_i)^2 + (\sigma_0 - x_j)^2 + 2(\sigma_0 - y_i)(\sigma_0 - x_j) \right], \qquad (21)

which has solution,

\sigma_0 = \frac{x_j + y_i}{2} \pm \frac{1}{\beta}. \qquad (22)

According to Eq. (22), for larger β the lie σ_0 is smaller (as expected), and for smaller β the optimal
lie is always an extreme, σ_0 = ±1, since σ_0 ∈ [-1, 1]. The sign in Eq. (22) should be chosen so as to lower
the value of R(σ_0).
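A compact, simplified way to use Eqs. (19) and (22) numerically is to evaluate the three closed-form candidates, clip them to [-1, 1], and keep whichever gives the lowest utility; this shortcut sidesteps the explicit case analysis above, and the names below are our own.

```python
import numpy as np

def optimal_lie(x_j, y_j, y_i, w_ji, G_H, beta=4.0):
    """Candidate stationary points of R from Eqs. (19) and (22); the sign in
    Eq. (22) is chosen so as to lower R, as described in the text."""
    def R(s):                                   # Eq. (15)
        return (G_H - abs(x_j - y_i) / 2
                - 0.5 * (1 - abs(s - y_j) / 2) - 0.5 * (1 - abs(s - w_ji) / 2)
                + beta * abs(s - y_i) * abs(s - x_j) / 4)
    mid = (x_j + y_i) / 2
    candidates = np.clip([mid, mid + 1 / beta, mid - 1 / beta], -1, 1)
    return min(candidates, key=R)
```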
Balance in the cost of lying

We can use these results to estimate an appropriate value for β. The condition for this value should be
that, in the case of extreme opinion values for the agents, it balances the gains and costs between lying
and being honest. By extreme opinion values we mean the case when both agents have decided opposite
opinions and are totally honest, i.e. y_i = w_ji = ±1 and x_j = -y_i. Since in this case the cost of lying
should be greater than the cost of being honest (C_L ≥ C_H), we have,

\beta\, \frac{|\sigma_0 - x_j|\, |\sigma_0 - y_i|}{4} \geq \frac{|x_j - y_i|}{2}.

According to Eq. (19), σ_0 = 0 in this case, leading to,

\beta \geq 4. \qquad (23)

Eq. (23) sets a lower bound on the arbitrary parameter β, since for β < 4 there would be situations
in which telling a lie is more convenient than being honest, even against the agent's own beliefs, which
we do not consider reasonable. If we interpret β as a cultural parameter that regulates the cost of telling
lies in society, then as β increases the punishment for telling lies becomes larger and society will tend to act
more honestly as a whole.
In Fig. 9 we show examples of the utility function R [from Eqs. (9) and (15)] as a function of σ for
particular values of the variables x_j, y_j, y_i, w_ji and β. From Fig. 9(a) we see that the value β = 4
represents the point at which the normalised costs for lying or being honest balance, i.e. R(σ_0) = 0.
This means that if β is less than 4, R < 0 and agents lie all the time. Conversely, if β > 4 then agents
will tend to be honest at every opportunity. In Fig. 9(b)-(d) we show several other possible shapes for
R. There can be two minima with similar negative values of R, such that σ_0 = x_j and the optimal lie
is actually the truth (b). Agent j may also decide to lie if the minimum at σ_0 ≠ x_j is lower (c). Finally,
both agents can have similar opinions, leading to an honest interaction (d). Observe that in the last
three cases, the value of β has little effect on the stationary points of the utility function.
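As a quick sanity check of this balance, evaluating R at the Case 1 stationary point σ_0 = 0 for the panel (a) values of Fig. 9 (x_j = 1, y_j = 1, y_i = -1, w_ji = -1, G_H = 0.5) with β = 4 gives exactly zero:

```python
# panel (a) of Fig. 9: opposed, fully honest agents; beta at the balance value
x_j, y_j, y_i, w_ji, G_H, beta = 1.0, 1.0, -1.0, -1.0, 0.5, 4.0
sigma_0 = (x_j + y_i) / 2                            # Eq. (19): here 0.0
R = (G_H - abs(x_j - y_i) / 2                        # Eq. (15)
     - 0.5 * (1 - abs(sigma_0 - y_j) / 2)
     - 0.5 * (1 - abs(sigma_0 - w_ji) / 2)
     + beta * abs(sigma_0 - y_i) * abs(sigma_0 - x_j) / 4)
print(sigma_0, R)                                    # -> 0.0 0.0
```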
