Polarization on Social Media: Micro-Level Evidence and Macro-Level Implications
Abstract: We formally introduce and empirically test alternative micro-foundations of social influence in the
context of communication on social media. To this end, we first propose a general theoretical framework allowing us to represent, by different combinations of the parameters of influence-response functions, whether and to what extent exposure to online content leads to assimilative or repulsive (distancing) influence on a
user’s opinion. We show by means of an agent-based model of opinion dynamics on social media that these
influence-response functions indeed generate competing predictions about whether personalization (defined
as the extent to which users are shielded from foreign opinions) increases or decreases polarization. Next, we
conducted an online experiment with respondents recruited on Facebook to estimate model parameters empirically, using Bayesian models for censored data. In the experiment, participants' opinions were measured
before and after exposure to arguments reflecting different ideological positions and moral foundations in a
within-subject design. Our findings support that exposure to foreign opinions leads to assimilation towards
those opinions, moderated by the extent of (perceived) ideological similarity to the source of influence. We
also find weak evidence in support of repulsion (distancing), but only for very large disagreement. Feeding es-
timated parameter values back into the agent-based model suggests that reducing personalization would not
increase, but instead reduce the level of polarization generated by the opinion dynamics of the model. We con-
clude that the naive interpolation of micro-processes to macro-predictions can be misleading if models are not
sufficiently empirically calibrated.
Keywords: Social Influence, Filter Bubbles, Social Media, Opinion Dynamics, Polarization, Micro-Macro
This article is part of a special section on "Opinion Dynamics: 20 years later", guest-editors: Guillaume
Deffuant, Andreas Flache, Rainer Hegselmann, & Michael Mäs
Introduction
1.1 How can we explain the persistence of opinion diversity in settings where individuals exert social influence on
each other and should, thus, grow identical in the long run (Axelrod 1997)? Bounded-confidence models pro-
vide one of the most surprising solutions. If influence is limited to those interaction partners who do not dis-
agree too much, then opinions converge only locally. Internally, local clusters grow increasingly homogeneous
but, at the same time, differences between clusters are preserved. A large literature has demonstrated intrigu-
ing complexities of this seemingly simple solution, for example, that only small changes in micro-assumptions
Figure 1: Polarization inside and outside of filter bubbles as a function of the minimal distance at which repulsive influence occurs. All data points correspond to 10 runs with 50 agents, run for 1,000 steps. Bubble sizes are 20% (small) and 80% (large)
Micro-Foundations of Polarization
Assimilation
3.1 When discussing a political issue, individuals communicate information, potentially exerting assimilative in-
fluence (or positive influence) upon each other. According to classical models of social influence, after com-
munication of opinion position or arguments, interaction partners are assumed to move closer to each other
(French 1956; Friedkin & Johnsen 2011). There are many explanations for why people display such behavior (for
a review, see Wood 2000). For example, social exchange theory posits that people are reward seeking, and are
sensitive to positive reinforcements that arise from complying with the opinion position of the interaction partner (Homans 1958; Blau 1964). This reward-seeking behavior can arise either from learning through operant conditioning (Homans 1958; Akers et al. 1979) or from forward-looking rational behavior (Blau 1964). The positive reinforcements
from compliance appear in the form of between-individual social cues or as within-individual rewards for re-
solving cognitive dissonance (Festinger 1964; Groeber et al. 2014).
3.2 Others have stressed the importance of uncertainty about the individual’s initially held belief (Koslin et al. 1967;
Bikhchandani et al. 1992). Individuals learn from information communicated during social interaction and feel
that they are approaching true judgement, hence adjusting their opinion towards a given argument. The more
uncertain an individual is about their position in the opinion spectrum, the more they will rely upon social cues
(Denrell 2008). These cues may, in turn, co-vary with the attractiveness of the source based on ex-ante opinion
difference or knowledgeability/status of the interaction partner (Koslin et al. 1967; Bikhchandani et al. 1992).
Initially held beliefs thus feed back into the individual inclination to adopt the other’s position (Byrne 1961).
This assumption is known as the moderated assimilative influence hypothesis: the larger the ex ante opinion
difference between a source and a receiver, the smaller the portion of disagreement that the receiving party
wants to minimize (Takács et al. 2016).
3.3 Various studies have reported empirical support for assimilative influence (for reviews, see Falk & Scholz 2018; Cialdini & Goldstein 2004; Wood 2000). Tests of the moderated assimilative influence hypothesis are scarce and have yielded mixed results (Koslin et al. 1967; Takács et al. 2016).
3.4 In an attempt to understand the opinion dynamics and cultural clustering observed in many modern societies,
scholars wondered “If people tend to become more alike in their beliefs, attitudes, and behavior when they
interact, why do not all such differences eventually disappear?” (Axelrod 1997, p. 203). For models assuming
only assimilative influence between nodes integrated in a well-connected graph, consensus is the only attain-
able outcome (French 1956; Harary 1959; Abelson 1964; DeGroot 1974; Friedkin & Johnsen 2011). Therefore,
many micro-level assumptions have been developed that impede or alter interpersonal influence. Generally, they fall into mechanisms of distancing (discussed in Section 3.1) and of bounded confidence.
3.5 Bounded confidence is the notion that people tend to accept information that is more or less in line with their
initially held belief, and ignore information that deviates too much from their own view. It follows from social
judgment theory (Sherif & Hovland 1961) and the notion of confirmation bias (Nickerson 1998). As with moderated assimilative influence, the probability of accepting an argument decreases with opinion difference on the
Distancing
3.7 In addition to the proposed mechanisms of assimilative influence, under certain circumstances discussants
may increase their opinion distance. We discuss here the most prominent distancing mechanism: repulsion (or
negative influence / rejection).
3.8 When two discussants hold opinions that are too dissimilar, interaction may result in divergence rather than assimilation or stability. This mechanism is known as repulsion (Macy et al. 2003). Along similar lines as the arguments for bounded confidence, it has been reasoned that an individual may encounter arguments so different from their own opinion that the argument falls in their latitude of rejection (Sherif & Hovland 1961; Perloff 1993; Jager & Amblard 2005). The individual will not only reject the given argument, but value it so low that it is taken as indicative of the validity of the initially held belief, strengthening commitment to this belief.
3.9 Empirical evidence for repulsion is not unequivocal since some studies do (e.g. Berscheid 1966; Liu & Srivas-
tava 2015; Bail et al. 2018) and others do not (e.g. Lemaine 1975; Van Knippenberg et al. 1990; Takács et al.
2016) find support for distancing effects from interactions between strongly dissimilar discussants. A couple of
(methodological) issues have been raised that might explain this inconsistency (an extensive discussion of the
critique can be found in Krizan & Baron 2007; Takács et al. 2016). Some studies do not allow for separating posi-
tive from repulsive influence, either because they use an in-group/out-group block design (Lemaine 1975; Hogg
et al. 1990)3 or because they use a statistical method not suitable for explicitly testing the repulsive influence
hypothesis (Levendusky 2013; Takács et al. 2016). Repulsion should only occur amongst very dissimilar discussants, so studies that use convenience samples of, for example, college students might not observe repulsion because such homogeneous populations lack opinion heterogeneity. Lastly, what issue is chosen for the
manipulation could also bias the results. Issues that the respondents find too trivial could be unable to trigger
repulsion because the respondent simply does not care enough.4 On the other hand, issues that respondents
are highly engaged in might not be suited to measure influence at all, since the respondent experiences a high
level of certainty about their initial opinion. Recent studies on political polarization in the context of online social networks, however, did provide indications of divergence resulting from argument exchange between people who identify with dissimilar political parties (Democrats and Republicans) in the US (Bail et al. 2018; Anspach 2017).
3.10 Studies that did find repulsion, however, are also prone to methodological biases. Berger & Heath (2008) showed that conformity as well as nonconformity to an outgroup may occur as a result of social desirability, while leaving the actual ‘private’ opinion untouched. Moreover, opinion shifts may be observed in an immediate pretest-posttest design, but are not always long-lasting or stable (Mazen & Leventhal 1972).
Alignment
3.11 Observing opinion shifts away from an information source does not necessarily imply that individuals are influenced by the position of their interaction partner on the given topic. An alternative process of influence
is possible: an interaction partner may convey a signal about the state of opinion coherency in the world. This
line of reasoning argues that opinion shifts in either direction result from a tendency to align (dis-align) with
in-group (out-group) members (e.g. Baldassarri & Gelman 2008; Banisch & Olbrich 2019).
3.12 People generally strive for coherence in terms of beliefs, preferences, and opinions, and in doing so strive to belong to a distinct social group (Bourdieu 1984). Adapting behavior and beliefs, a form of social mimicry, has been identified as a mechanism through which individuals aim to affiliate themselves with certain others or social groups (Cialdini & Goldstein 2004). We therefore expect a stronger tendency to conform to, or diverge from, the
opinion of the sending actor once the social group that the sending actor belongs to is known.
The model
4.4 The model is inspired by Flache (2018a), but variations on the same idea have appeared elsewhere in the social
simulation literature as well (e.g. Mäs et al. 2014; Jager & Amblard 2005; Flache 2018b). The model deviates
from its predecessors in its capacity to disentangle the influence weight and threshold for repulsion, as we will
see later in this section.
4.5 Consider a population of N agents, in which each individual i at time point t holds an opinion o_{i,t} in the opinion spectrum [0, 1]. When presented with the opinion o_j of any alter or source j, i forms the new, updated opinion o_{i,t+1}:

o_{i,t+1} = o_{i,t} + Δo_{i,t} = o_{i,t} + α w_{ij,t} (o_j − o_{i,t})    (1)
4.7 Equation 2 defines the influence weight as a monotone function of opinion distance. The moderating effect of opinion distance is captured by γ, which can take any real value. It is likely, however, that its value in many real-life settings is positive and in the single-digit range, representing a reduction of influence strength with opinion distance. Figure 2a shows how different values of γ affect the influence weight as a function of initial opinion distance, assuming o_j = 0 and α = 1. To test for the type of social influence, γ offers a straightforward interpretation. When γ = 0, there is only assimilative influence: the expected opinion shift is proportional to the size of disagreement at the outset. This encompasses, for example, dynamics of averaging (when α = 0.5) and copying (when α = 1). When 0 < γ < 1 we speak of moderated assimilative influence: the expected opinion shift is weaker the larger the a priori opinion difference. When γ > 1 we can observe both moderated assimilative influence and repulsion. In this case, the expected opinion shift is weaker the larger the opinion difference, up to the critical distance of 1/γ, after which social influence becomes negative.
Figure 2: Weight function and expected post-stimulus opinion as a function of pre-stimulus opinion, for different values of γ and with α = 1; o_j is set to 0
4.8 Finally, o_{i,t+1} is truncated such that it does not exceed the opinion bounds [0, 1]:

o^T_{i,t+1} = 1 if o_{i,t+1} ≥ 1;  0 if o_{i,t+1} ≤ 0;  o_{i,t+1} otherwise    (3)
4.9 Notice that for convenience we henceforth use o_{i,t+1} as a shortcut for the truncated result o^T_{i,t+1}. This truncation is a necessary step for two reasons. First, without truncation, runaway dynamics could arise in which agents who diverge become inaccessible early on in the process. Second, and perhaps more importantly, there is a clear relation to opinion measurement. When opinions are measured (e.g., in a survey) or inferred (e.g., from text analysis), they have to be expressed on a scale with an upper and a lower bound. As such, we cannot observe a strong persuasive ‘pull’ when opinions are similar, nor a strong ‘push’ when opinions are (almost) maximally different.5 Figure 2b shows the predicted opinion at time t + 1 as a function of o_{i,t}, for different values of γ.
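To make the response function concrete, the update of Equations 1 and 3 can be sketched in a few lines of Python. Note one assumption on our part: the weight is taken to be the linear form w = 1 − γ·|o_j − o_i|, which is consistent with the behavior described for Equation 2 (constant weight when γ = 0, a sign change at distance 1/γ), but is a sketch rather than the exact published specification.

```python
def update_opinion(o_i: float, o_j: float, alpha: float = 1.0, gamma: float = 0.0) -> float:
    """One influence step: Eq. 1 with truncation to [0, 1] as in Eq. 3.

    The weight w = 1 - gamma * |o_j - o_i| is an assumed linear form,
    consistent with the described sign change at distance 1/gamma.
    """
    w = 1.0 - gamma * abs(o_j - o_i)       # influence weight (assumed linear)
    o_new = o_i + alpha * w * (o_j - o_i)  # weighted shift towards (or away from) o_j
    return min(1.0, max(0.0, o_new))       # truncation to the opinion bounds (Eq. 3)
```

With γ = 0 and α = 0.5 the step reduces to plain averaging, while with γ = 2 two agents at a distance greater than 0.5 repel each other.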
4.12 The moral foundations literature suggests that arguments built on certain moral foundations are more persuasive for some than for others. In particular, individuals who identify as politically more liberal are more responsive to arguments rooted in fairness and harm. To model the difference in perceived persuasiveness of the argument, we separate the persuasiveness parameter into a general term (α₀) and one that depends on the agent’s group membership e (α₁):

o_{i,t+1} = o_{i,t} + α₀ w_{ij,t} (o_j − o_{i,t}) + α₁ (1 − δ_{e_i e_j}) w_{ij,t} (o_j − o_{i,t})    (5)
4.13 Equation 5 contains the same weight term twice, once for each of the two persuasiveness levels modeled by α₀ and α₁.
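Under the same assumed linear weight as above, the group-moderated update of Equation 5 can be sketched as follows; the α₁ term is switched on by the Kronecker delta over the group labels e_i and e_j. The function and argument names here are ours and purely illustrative.

```python
def update_opinion_grouped(o_i: float, o_j: float, e_i: str, e_j: str,
                           alpha0: float, alpha1: float, gamma: float) -> float:
    """Group-moderated influence step (Eq. 5).

    The alpha1 term is multiplied by (1 - delta), where delta is the
    Kronecker delta of the group labels e_i and e_j. The linear weight
    form is an assumption carried over from the previous sketch.
    """
    w = 1.0 - gamma * abs(o_j - o_i)        # assumed linear influence weight
    delta = 1.0 if e_i == e_j else 0.0      # Kronecker delta on group membership
    shift = (alpha0 + alpha1 * (1.0 - delta)) * w * (o_j - o_i)
    return min(1.0, max(0.0, o_i + shift))  # truncation to [0, 1] (Eq. 3)
```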
Hypotheses
4.14 The classification of behavioral responses from the micro-foundations discussed in Section 3, and the subse-
quent integration of those foundations into a single formal model in Section 4 allows the formulation of a series
of hypotheses that describe the shape of the response function. In other words, the values of α and γ as described in our model allow us to differentiate between the types of influence experienced by the actors subject
to influence. For models that separate the values of our parameters for different groups (e.g. when α is split
into α0 and α1 ), the hypotheses refer to the combined value for the respective group. All but Hypothesis 4 have
been pre-registered with the Open Science Framework: https://osf.io/dy2z6/.
H1 Individuals decrease distance between their own opinion and the opinion of the source of influence, such
that α > 0 and γ ≥ 0 (assimilative influence)
H2 The distance between an individual’s own opinion and the opinion of the source moderates the degree of social influence, such that γ > 0 (moderated assimilative influence)
H3 If the distance between an individual’s own opinion and the opinion of the source is sufficiently large, influence is negative, such that γ > 1 (assimilative and repulsive influence)
H4 Argument persuasiveness is increased by alignment of the moral foundation of the given argument with
ideological orientation, such that γ for arguments of a moral foundation aligned with an individual’s po-
litical ideology is larger (moral foundations)7
H5 If an individual belongs to the same opinion cluster as the source, the individual proves more receptive to
the message, such that α is larger in interactions with others who are perceived to be ideologically closer
(social influence)
5.5 The arguments in Study 1 were constructed by us rather than by participants, which may have affected how believable the arguments appeared to the respondents. Furthermore, to avoid deception, we were unable to vary social distance. We therefore asked all our respondents from Study 1 to express their opinion as a short Facebook-post-like statement. These statements, in combination with the expressed party identification of the respondent, were then used as stimuli for participants in Study 2. In order to measure the perceived social distance between the sender and receiver of the argument, we asked the receivers to indicate where they believe the sender of the argument would place themselves on an ideological identification scale, between ‘Very leftist’ and ‘Very rightist’. Social distance was calculated as the absolute difference between the self-reported ideological identification of the receiver and the perceived ideological identification of the sender.
5.6 The same sampling procedure was used as in Study 1. This time, a subset of the respondents was routed to
a control condition in one of the two topics, to quantify the accuracy of the opinion measures. In total, 272
respondents took part in the survey. We did not administer a second survey testing for lasting effects.
5.7 Two diverse sets of respondents were obtained in two recruitment periods in February and October of 2020.
Figure 7 in Appendix A provides an overview of the two studies, and the sequence in which respondents were
exposed to stimuli and provided their answers.
Table 1: Descriptive statistics of the cleaned and pooled samples used in the analysis.
5.10 When conducting survey experiments online, data quality is a concern. To check whether respondents filled in the survey conscientiously and paid attention to the texts and directions included in it, we monitored the time they spent on the survey in general, and on the pages with the stimuli in particular. We also included a quiz at the end of the survey that assessed whether they had paid sufficient attention, and asked the respondents about their enjoyment of the survey. Using a combination of all measures of engagement (timing, enjoyment, and the attention quiz), we formulated five criteria for exclusion of responses. We excluded everyone who had (1) completed the
Statistical approach
5.11 We used a statistical approach that allows for direct testing of the theoretical model on the empirical data ob-
tained in our experiment. In order to do so, we had to solve two issues of measurement and inference: censor-
ing of extreme opinions, and interdependence of persuasiveness and influence weights. First, the theoretical
model assumes that there exists a latent opinion dimension that is larger than the measurable opinion dimen-
sion (which is bounded by extreme values). In other words, repulsion can be ‘felt’ by agents with large opinion
differences, but not expressed due to the bounds of the opinion measurement instrument. Traditional methods
will therefore sometimes underestimate the true influence weights. Second, in the theoretical model, the two
core parameters, for persuasiveness of an argument and shape of the influence weight, cannot be interpreted
independently. The model thus requires reformulation to account for this interdependence. An in-depth de-
scription of how to link theoretical models for social influence to empirically estimable models is given in Ap-
pendix B.
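The censoring step can be illustrated with a tobit-style likelihood: observations at a scale bound contribute the probability mass of the latent opinion lying beyond that bound, while interior observations contribute the ordinary density. The sketch below is a generic illustration of that idea under a normal latent error with standard deviation sigma; it is not the authors' exact specification from Appendix B.

```python
from math import erf, log, pi, sqrt

def _phi_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def censored_loglik(y, mu, sigma, lower=0.0, upper=1.0):
    """Tobit-style log-likelihood for opinions censored at the scale bounds."""
    total = 0.0
    for y_i, mu_i in zip(y, mu):
        if y_i <= lower:      # observed at the floor: latent opinion may lie below it
            total += log(_phi_cdf((lower - mu_i) / sigma))
        elif y_i >= upper:    # observed at the ceiling: latent opinion may lie above it
            total += log(1.0 - _phi_cdf((upper - mu_i) / sigma))
        else:                 # interior observation: ordinary normal density
            z = (y_i - mu_i) / sigma
            total += -0.5 * z * z - log(sigma * sqrt(2.0 * pi))
    return total
```

Ignoring the censoring terms, i.e., treating bound observations as interior ones, is what leads standard methods to underestimate the influence weights for extreme opinion differences.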
5.12 The statistical models were estimated using Bayesian estimation for censored data and evaluated using the
Watanabe–Akaike (or ‘widely applicable’) information criterion (WAIC) and the expected log pointwise predic-
tive density (elpd) of the Leave One Out cross-validation procedure (LOO) (Vehtari et al. 2017).
Results
Study 1
6.1 The respondents in Study 1 received arguments infused with words from the moral foundations dictionary be-
longing to either the foundations harm and fairness, or to loyalty, authority and purity. The recruitment proce-
dure aimed to recruit people with strong opinions, leading to large opinion differences and sufficient opinion
change. Figure 3 shows that this approach was successful. The (perceived) opinion distance to the stimulus follows a bimodal distribution that is slightly right-skewed.9 There appeared to be sufficient variance to model the effect of distance. The absolute opinion change was 13 percent on average, but distributed with considerable right skew. The difference between the indicated opinion before and after the stimulus increased slightly at larger distances to the stimulus.
6.2 The first model was fitted with the pooled data from the argument exchange on development aid and tax deals
where only the general effect of argument distance is considered (see Table 2, Model 1). At most, the predicted
opinion shifts in either direction are about 22% of the distance between the opinion position of the receiver
before the argument and the perceived position of the argument. The actual observed shift will depend on the
perceived distance between the own position and the position of the argument, as modeled by the distance
parameter. According to the simple model 1, the distance at which the receiver is unaffected by the argument is
at β/α = 0.221/0.276 ≈ 80% of the opinion range. Opinion differences beyond that point may have resulted in
negative opinion shifts / distancing. This means that only interactions between individuals with rather extreme
opinions resulted in opinion adjustments away from the sender.
Table 2: Posterior distribution and model fit for Bayesian weighted linear influence models with stimulus moral-
ity and ideological identification.
6.3 Model 1 averages over all responses, but the set-up of Study 1 allows us to differentiate the persuasiveness of an argument as a function of the moral foundation of the argument in relation to the ideological identification of the respondent. As discussed in Section 3, arguments can be used to signal why it is important to support a position,
extending beyond a simple piece of ‘objective’ information. By appealing to one of five moral foundations, they
present reasons to support a certain position in relation to a value system that the receiver supports. Previous
work showed that ideologically leftist individuals are more likely to respond to arguments that appeal to the
dimensions harm and fairness. Model 2 was used to test precisely this claim, as one would expect that in this
response model, the moral foundation of an argument will affect the perceived persuasiveness. The single α
term for argument persuasiveness is therefore supplemented with three extra terms: rightist-appealing moral-
ity of the argument (moral), respondent’s placement on the left-right spectrum (right), and an interaction of the
two former terms (moral × right).
6.4 The goodness-of-fit statistics for Model 2 in Table 2 indicate that the model is an improvement upon the first. Furthermore, the estimates suggest that moral foundations indeed mattered for the subjects’ responses to an argument. The general persuasiveness of the argument increased substantially, and the βs that define individual responses followed suit.10 The unadjusted value of α indicates the persuasiveness of an argument with leftist moral foundations (i.e., those that appeal to harm and fairness) for someone who identifies as very leftist.11 In relation to the distance term, we see that those individuals are predicted to always adjust their
Figure 4: Predicted argument response function by political orientation and alignment of moral foundation. Opinion shifts are predicted relative to an argument at o_j = 0
6.6 One explanation for the relatively large effects of moral foundations is that they affect the perceived position of
the argument. If the infusion of aligned moral foundation actually means that the same argument is perceived
as closer to the receiver, their co-variance could bias the results of the model overall. A simple way to test this
is by means of a Kolmogorov–Smirnov (KS) test for the equality of two continuous distributions. We checked whether the perceived opinion distance (|o_j − o_{i,t=1}|), operationalized as the absolute difference between the expressed opinion at t = 1 and the perceived opinion of the sender of the argument, differed between
respondents whose ideological identification was or was not aligned with the moral foundations infused in the
argument. Alignment was decided based on a strict cutoff of the ideological identification continuum at the
midpoint of the scale. That is, the respondents were grouped into politically left- or right-wing oriented groups.
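For reference, the two-sample KS statistic used here is simply the largest vertical gap between the two empirical distribution functions. A minimal implementation of the statistic (ours, omitting the p-value computation) looks like:

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic D: the largest absolute
    difference between the empirical CDFs of the two samples."""
    def ecdf(sample, t):
        # fraction of observations less than or equal to t
        return sum(1 for x in sample if x <= t) / len(sample)
    evaluation_points = sorted(set(sample_a) | set(sample_b))
    return max(abs(ecdf(sample_a, t) - ecdf(sample_b, t)) for t in evaluation_points)
```

In practice, scipy.stats.ks_2samp computes the same statistic together with its p-value.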
6.7 Though there was a small numerical difference in perceived distance between respondents whose ideological identification was and was not aligned with the moral foundation of the argument, the KS test did not consider this difference significant (D(311) = 0.080, p = 0.709). Hence, the remarkably strong effects of moral foundation infusion cannot be accounted for
through spill-over effects from perceived distance. Rather, it seems as though the morally charged arguments
actually provided a signal to the respondents to adjust their opinion away from or towards the source.
6.8 In the evaluation of a model fit to censored data, it is wise to compare the amount of censoring that was observed with the amount that was predicted. The agreement between the two gives an indication of the goodness of fit.
6.9 Before stimulus, at t = 1, 59 respondents (19 percent) placed themselves at the end of the opinion spectrum.
This number was reduced to 52 (17 percent) post stimulus. In light of the results from the statistical models, this
fits the expectation that most interactions will lead to distance minimization. Model 2 from Table 2, though, is
slightly more optimistic and predicts that 14 people would place themselves on one of the extreme ends of the
scale. Out of those 14, only one respondent was incorrectly placed on an extreme. Considering all responses,
the predictive accuracy for placing an observation on an extreme end of the opinion scale at t = 2 is 87 percent.
6.10 Based on the low number of censored observations, and the reasonably high accuracy in predicting censoring,
censoring did not appear to have introduced a problematic bias that rendered the results unreliable.
Table 3: Posterior distribution and model fit for lagged Bayesian weighted linear influence models with stimulus
morality and ideological identification. The opinion at t=3 (one week after stimulus) is predicted with measure-
ments from t=1 (before stimulus).
6.13 The models testing for lasting change, however, revealed a remarkable pattern. As we see in Table 3, the change in opinion appears to have grown, not shrunk, if anything, during the one-week incubation period. What is
more, the effects of argument morality, and its appeal to respondents of certain ideological identity, were no
longer meaningful additions to the model, as suggested by the goodness-of-fit statistics. Model 1, the model
with argument persuasiveness dependent on perceived distance to the argument, was the preferred model
by comparison of the elpdLOO and WAIC statistics. This model predicts that respondents wanted to halve the
distance (at most), and that the persuasiveness of the argument was moderated by distance to the argument.
The repulsion threshold is expected to lie at 72.9 percent difference, comparable to the threshold in the non-
lagged model 1 of Table 2.
6.14 Taking all evidence together, we believe there is sufficient support for opinion distance minimization (Hypothesis 1) and the moderating role of opinion distance (Hypothesis 2). We found weak evidence in support of distancing (Hypothesis 3), but not enough to reject the null hypothesis in that case. The influential role of moral
foundations of argumentation is clear, and in line with Hypothesis 4. The lasting effects of (social) influence
were perhaps the most surprising in Study 1. It appears that influence effects may amplify over time, though
the mechanism responsible for that effect remains unknown.
Study 2
6.15 Unlike Study 1, Study 2 used participant-generated arguments as stimuli. The objective was to relieve the respondents of the suspicion that the arguments might be fake and used to provoke a certain response, which would trigger answers based on social desirability.14 What is more, the arguments provided by the previous respondents could be used to measure more accurately the receiver’s perceived social distance to the sender.
6.16 As a first sanity check for the magnitude of expected opinion shifts, we determined whether any opinion change was induced by the arguments by comparing the average opinion shift among subjects who did and did not receive a stimulus. About a third of the respondents were routed to a control condition in which they did not receive an argument on one of the two topics. A KS test for the equality of continuous distributions revealed no discernible difference between the control and experimental conditions (D = 0.087, p = 0.393). In absolute terms, the amount of variation observed in the experimental condition was not distinguishable from the amount of measurement error.
Figure 5: Observed absolute opinion change (|o_{i,t=2} − o_{i,t=1}|) and loess curve by distance to stimulus. Shape indicates whether the observation at time t = 2 is censored; the dashed line indicates the average opinion shift in the control group (N_control = 165)
6.18 The lack of variation in opinion change between the two conditions provided a first indication that the manip-
ulation might not have been effective, but for a more conclusive analysis we turned to the Bayesian weighted
linear influence models for censored data.
6.19 The baseline model in Table 4 shows no basic tendency towards distance minimization and a considerable amount of variation around the estimate. The more complex Model 1, moreover, does not appear to improve the fit to the data much. Though the average estimates in the posterior distribution take on values comparable to those of the fitted models from Study 1, their full distributions have substantial variation around these estimates. The evidence for successful manipulation and distance minimization is too weak to support Hypothesis 1 (α > 0 and γ ≥ 0).
6.20 At this point, fitting a model in which the distance parameter is further adjusted for the perceived ideological distance between the respondent and the author of the stimulus was not likely to result in a much better fit than the previous models. We see from Model 2 in Table 4 that this is indeed the case. The model is a slight improvement upon the baseline model and Model 1, but the estimates remain small and indistinguishable from zero.16
6.21 In sum, the analyses presented here did not strengthen our belief in the results from Study 1. Rather, they paint a picture of weak to no influence measured immediately after the stimulus. It appears that the respondents were not persuaded by the more realistic stimuli, but remained unaffected by the short arguments of their peers.
Discussion
8.1 This study sought to explain the complex link between informational filter bubbles and opinion polarization
through investigation of the micro-foundations of social influence. Using a model of interpersonal influence
and algorithmic selection of interaction partners, we showed that well-intentioned attempts to pop the filter bubble may have counterproductive effects, and that these effects critically depend on the shape of the response functions that specify how individuals adapt opinions in response to novel information. An integration of ideas
from the social-influence literature yielded a formal model of such individual responses. This model was then
calibrated with data from two vignette experiments, building on moral foundations for argumentation and per-
ceived social distance. The observed amount of opinion change was marginal, but the Bayesian models for
censored data still provided support for moderated assimilative influence and indicative evidence of repulsion
at large opinion distances. The results were fed into the agent-based model, which showed that the amount of
repulsion is sufficiently low to avoid polarization in a filter bubble-free world.
8.2 One of our most surprising empirical observations was made in the lagged model. Initially, we tested for
lasting effects to check how much the effect diminishes over time. Would any trace of the stimulus still be left
after one week? Would people actually change their mind because of a single argument? Remarkably, the effect
did not disappear after a week; if anything, it appears to have grown stronger. This strengthens our confidence
in the method: at the time of the third measurement, respondents were less likely to remember their answers
from one week before, alleviating concerns about reactivity and anchoring effects, and yet the effect of the
stimulus persisted or even strengthened. While the absence of hypotheses on long-term opinion change in this
study prevents us from drawing strong conclusions, we believe that this unexpected finding is an interesting
one, and might provide an avenue for future research. Not least because these results echo those of Bail et al.
(2018), who found that prolonged exposure to tweets from supporters of the opposite party increased opinion
distance. Rather than direct repulsion, the effect could have been mediated by motivated reasoning or cognitive
dissonance minimization: the stimulus gave the respondents some food for thought, and some more time was
simply needed to internalize the argument. Perhaps merely alerting the respondents to the political themes
discussed during the experiment was enough to trigger a chain of events that ultimately strengthened their
opinions.
8.3 Relatively large effects were found for the persuasive power of moral foundations. For the same argument, the
infusion of adjectives from the moral foundations dictionary—signalling a certain moral foundation—strongly
increased its persuasive power for people to whom that moral foundation is resonant. Alignment alone was
enough to predict purely assimilative influence for subjects whose ideological identification aligned with the
assumed preferred moral foundation, and a high occurrence of repulsion when the two were perfectly mis-aligned.
Interestingly, though, the effects had largely disappeared by the measurement of opinion change one week
after exposure to the vignette. These results suggest that the vignettes might have induced moralization of a
topic in the short term, which is known to trigger attitude extremization (Clifford 2019).
8.4 The novel experimental design and method of analysis were not without imperfections, which provide interest-
ing avenues for future research. First and foremost, the problem of censoring on opinion scales that this study
touched upon is not simple to solve, but is ubiquitous in survey research. Of all studies that investigated
repulsion, only a few observe such distancing effects, yet all are troubled by the boundedness of the opinion scale.
Acknowledgements
We thank Marion Hoffman for sharing her insights on statistical modeling of opinion changes in the Bayesian
framework. Thanks to Guillaume Deffuant, Kathleen Carley and Arnout van de Rijt for their comments on an
earlier version of this paper. We are very grateful to all research subjects for their participation in our study.
Funding for participant compensation was provided by the Department of Sociology at the Faculty of Behav-
ioral and Social Sciences, University of Groningen. Marijn Keijzer acknowledges IAST funding from the French
National Research Agency (ANR) under grant ANR-17-EURE-0010 (Investissements d’Avenir program). Andreas
Flache acknowledges financial support by the Netherlands Organization for Scientific Research (NWO) under
the 2018 ORA grant ToRealSim (464.18.112).
Figure 7: Data collection stages. The figure shows the relationship between study 1 and study 1 follow-up (using
the same sample) and study 1 and study 2 (using written opinion statements from study 1 as stimuli in study 2).
Let now:
β := γ × α (7)
The term β thus captures the effect of γ dependent on the observed distance between the stimulus oj and pre-
stimulus opinion of the respondent oi1 . We estimate a model where the opinion after the influence event oi2 is
drawn from a normal distribution with a mean µoi2 and a standard deviation of 1/σ 2 :
Retrieving γ is done by taking γ = β/α, which gives us a direct interpretation for the shape of the response
function. The value of γ can be interpreted by itself as the shape of the response function to foreign information,
and as such, is comparable across interactions with arguments of varying persuasive strength.
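Because β is the coefficient actually estimated, the transformation γ = β/α can be applied draw by draw to the posterior samples, yielding a full posterior distribution for γ that propagates the joint uncertainty in α and β. A minimal sketch in Python with numpy; the arrays stand in for flattened MCMC chains and the variable names are illustrative, not those of our runjags scripts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative posterior draws for alpha and beta (e.g., flattened MCMC chains).
alpha_draws = rng.normal(0.4, 0.05, size=4000)
beta_draws = rng.normal(0.2, 0.05, size=4000)

# gamma = beta / alpha, computed per draw, keeps the joint uncertainty intact.
gamma_draws = beta_draws / alpha_draws

# Posterior summaries for the shape parameter gamma.
gamma_mean = gamma_draws.mean()
gamma_ci = np.quantile(gamma_draws, [0.025, 0.975])
print(gamma_mean, gamma_ci)
```

Summarizing the transformed draws, rather than dividing the point estimates, is what makes the credible interval for γ honest about the correlation between the two parameters.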
To study how the persuasiveness of an argument varies with the infusion of different moral foundations and
their appeal to subjects with different ideological identities, we extend the model by adding terms that
affect only the value of persuasiveness (α)—the ‘strength’ of the experienced opinion shift. To account for the
main effect of respondent ideology, the main effect of the moral foundation of the argument, and the interaction
of the two, we add three extra terms to the model. The β parameter moral yields the persuasiveness main effect
of arguments with leftist (0 = harm and fairness) or rightist (1 = loyalty, authority and purity) moral
foundations. Second, right represents the main effect of rightist political ideology of the respondent.
Finally, the interaction term moral × right captures the effect of rightist respondent
Where mj ∈ {0, 1} is the moral foundation of argument j and ri ∈ [0, 1] is the degree to which the subject
identifies as politically right-wing.
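Under the linear parameterization described above, the argument- and subject-specific persuasiveness combines a baseline with the two main effects and their interaction. A hypothetical sketch (the coefficient names and values are ours, for illustration only):

```python
def persuasiveness(alpha0, b_moral, b_right, b_inter, m_j, r_i):
    """Argument- and subject-specific persuasiveness (hypothetical linear form).

    m_j in {0, 1}: moral foundation of argument j (0 = leftist, 1 = rightist).
    r_i in [0, 1]: degree to which subject i identifies as right-wing.
    """
    return alpha0 + b_moral * m_j + b_right * r_i + b_inter * m_j * r_i

# A rightist argument read by a fully right-wing subject activates all terms:
print(persuasiveness(0.3, -0.1, 0.05, 0.2, m_j=1, r_i=1.0))
```

For a leftist argument (m_j = 0) the moral and interaction terms drop out, so only the baseline and the respondent's ideology main effect remain.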
Analogous to the implementation of factors that impact the persuasiveness of arguments, the alignment effects
that are believed to affect the shape of the response function alone are modeled as effects that change the value
of β for different groups in the data. We thus add a β term identity to the model, to capture the differences in
the main distance parameter for a respondent’s perceived distance to the source:20
Where ri ∈ [0, 1] is, again, the degree to which the respondent identifies as politically right-wing and rj ∈ [0, 1]
is the perception of the ideological identification of the source of the argument.
Within studies, the data are pooled to maximize statistical power. Although we could have chosen to account for
nesting within subject or topic, the intra-class correlations are sufficiently low to justify the simpler approach
(ICCsubject = 0.129 and ICCtopic = 0.246 for Study 1, and ICCsubject = 0.020 and ICCtopic = 0.082 for Study 2). All
analyses have been replicated using the topics separately too, accounting for the nested structure, without any
noteworthy differences.
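The ICC values reported above can be read as the share of total variance attributable to between-group (subject or topic) differences. A minimal one-way variance-components illustration on made-up data; the paper's exact estimator may differ:

```python
import numpy as np

def icc(values, groups):
    """One-way ICC: between-group variance over total variance.

    A simple variance-components approximation for balanced data.
    """
    values, groups = np.asarray(values, float), np.asarray(groups)
    labels = np.unique(groups)
    group_means = np.array([values[groups == g].mean() for g in labels])
    var_between = group_means.var()
    var_within = np.mean([values[groups == g].var() for g in labels])
    return var_between / (var_between + var_within)

# Two groups with identical means: no clustering, so the ICC is zero.
print(icc([1.0, 2.0, 3.0, 1.0, 2.0, 3.0], ["a", "a", "a", "b", "b", "b"]))
```

Values near zero, like those observed in Study 2, indicate that observations within the same subject or topic are barely more alike than observations across groups, which is what justifies pooling.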
Throughout the modeling, we used conservative priors for all effects, with a mean of zero and a standard de-
viation tuned to the amount of variance in the data. These priors effectively encode the null hypothesis, enabling
hypothesis testing through falsifying the null in a Bayesian framework (Baig 2020).
The estimation procedure is implemented in R 4.0.3 using the runjags package (Denwood 2016). To assess the
goodness of fit of each model, we estimate log-likelihood values for all responses. In obtaining log-
likelihoods, we have to take the truncation of our dependent variable into account. We do so by considering
the likelihood that an observation is censored within the log-likelihood function.
LL = P(x | σ) / P(0 ≤ x ≤ 1 | σ)    (12)
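Equation 12 divides the density of an observation by the probability mass the normal distribution places on the unit interval, i.e., a truncated-normal log-density. A sketch of the same quantity using scipy (our actual implementation handles this inside the JAGS model; the function name here is illustrative):

```python
import numpy as np
from scipy.stats import norm

def truncated_loglik(x, mu, sigma):
    """Log-likelihood of x under a normal(mu, sigma) truncated to [0, 1],
    following Equation 12: log P(x | .) - log P(0 <= x <= 1 | .)."""
    log_density = norm.logpdf(x, loc=mu, scale=sigma)
    log_mass = np.log(norm.cdf(1, mu, sigma) - norm.cdf(0, mu, sigma))
    return log_density - log_mass

# Renormalizing to [0, 1] raises the density inside the interval,
# so the truncated log-likelihood exceeds the plain normal one:
print(truncated_loglik(0.5, mu=0.5, sigma=0.3) > norm.logpdf(0.5, 0.5, 0.3))
```

The denominator is what keeps responses piled up at the scale boundaries from biasing the estimates of the influence parameters.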
The posterior distribution of all log-likelihood values can be used to estimate goodness-of-fit statistics that
indicate whether the model is appropriate, and to compare performance across nested models. We esti-
mate the Watanabe–Akaike (or ‘widely applicable’) information criterion (WAIC) and the expected log pointwise
predictive density (elpd) of the leave-one-out cross-validation procedure (LOO) (Vehtari et al. 2017).
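Given an S × N matrix of pointwise log-likelihoods (S posterior draws, N observations), WAIC follows Vehtari et al. (2017): the log pointwise predictive density minus an effective-parameter penalty. A sketch on simulated log-likelihoods (the data are illustrative, not ours):

```python
import numpy as np
from scipy.special import logsumexp

def waic(loglik):
    """WAIC from an (S draws x N observations) log-likelihood matrix,
    following Vehtari et al. (2017)."""
    S = loglik.shape[0]
    # lppd: log pointwise predictive density, averaged over posterior draws.
    lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))
    # p_waic: effective number of parameters (pointwise posterior variance).
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
    return -2 * (lppd - p_waic)  # on the deviance scale

rng = np.random.default_rng(1)
loglik_matrix = rng.normal(-1.0, 0.1, size=(2000, 50))
print(waic(loglik_matrix))
```

Lower values indicate better expected out-of-sample fit, which is how the nested models in Table 4 can be ranked against one another.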
Equation 13 introduces the weight function for bounded confidence referenced in the base influence function
of Equation 1. To grasp the idea of bounded confidence, the influence weight function is adapted slightly from
the weighted linear influence weight function introduced in Equation 2. As shown in Figure 8, β still defines the
shape of the curve, but influence will never become negative. For values of β > 1, the decrease of the influence
weight ‘slows down’ with opinion difference, and for values of β < 1 the decrease of the influence weight grows
as |oj − oi | → 1.
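Equation 13 itself is not reproduced in this excerpt, but a weight of the form w(Δ) = (1 − Δ)^β with Δ = |oj − oi| behaves exactly as described: it never turns negative, its decline flattens near Δ = 1 when β > 1, and steepens there when β < 1. A sketch under that assumption (the functional form is our reconstruction from the description, not necessarily the paper's Equation 13):

```python
def bc_weight(delta, beta):
    """Bounded-confidence style influence weight: nonnegative on [0, 1]."""
    return (1.0 - delta) ** beta

def opinion_shift(o_i, o_j, alpha, beta):
    """Expected shift of o_i toward o_j, scaled by persuasiveness alpha."""
    delta = abs(o_j - o_i)
    return alpha * bc_weight(delta, beta) * (o_j - o_i)

# The weight stays nonnegative even at maximal disagreement:
print(bc_weight(1.0, 2.0), bc_weight(0.5, 2.0), bc_weight(0.5, 0.5))
```

Because the weight is bounded below by zero, this variant can dampen influence at large opinion distances but can never produce repulsion, in contrast to the weighted linear model of Equation 2.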
Similar to the extension for social influence in the weighted linear influence model (Equation 5), we can extend
the model here to include this term as well:
Figure 8: Weight function and expected post-stimulus opinion of the non-linear model as a function of pre-
stimulus opinion, for different values of β and α = 1; oj is set to 0.
Notes
1 Here, we consider the third category of ‘similarity-biased influence’ (Flache et al. 2017) a higher-order category as both models of assimilative and repulsive influence can include similarity-bias.
2 The model is presented formally in Section 4.
3 Studies that use such a block design might underestimate repulsive influence effects because they do not take into account the variance in in-group identification, as well as experienced opinion differences, but test for differences in net influence between two blocks (Krizan & Baron 2007).
4 Studies that look at interpersonal influence on political topics with strong (party) identification seem to capture repulsion quite systematically (Levendusky 2013; Liu & Srivastava 2015), whereas studies that use more trivial topics do not (Takács et al. 2016; Mäs & Flache 2013).
5 This of course does not mean that such a shift does not occur on the individual’s latent opinion dimension. The bounded opinion scale, however, makes it unobservable.
6 Group membership can be operationalized as a dichotomous variable that captures whether i and j belong to the same group, ei = ej ∈ {0, 1}. More generally, one could think of e as a vector of positions on some dimension that captures social distance. In that case, distance is captured with a single value |ei − ej| ∈ [0, 1].
7 Method and argumentation were explicitly pre-registered, but the hypothesis was not.
8 Facebook’s targeted advertisement feature allows sampling for individuals’ interests as inferred from their liked pages and platform usage behavior. The total number of users who fit these criteria and could potentially be targeted was 320,000.
9 Keep in mind that the distribution of the (perceived) distance to the stimulus is the product of two distributions and hence not expected to be uniform when constructed completely at random.
10 We included the general, or un-interacted, terms of the infused moral foundation and the ideological identification of the respondents to account for any differences that may exist based on those parameters alone, which would spill over into the interacted term of moral foundation with ideological identification.
11 Keep in mind that we experimentally manipulated the moral foundation of an argument to avoid systematic co-variance of perceived distance to an argument and moral foundation.
12 The four groups plotted there are created by taking the predicted responses after exposure to an argument oj = 0 for individuals at the extreme ends of ideological orientation.
13 The response rate to this survey was 39.7 percent (N = 62).
14 Please note that both studies were carried out without any form of deception, in line with the code of the IRB that approved this research.
15 This should not be taken as proof of any discernible difference between the control and experimental group. Pure measurement error is expected to reveal a similar pattern, as observations towards the boundaries of the scale have less room for variation. Visual inspection of the amount of opinion change by opinions at t = 1 appears to be in line with that expectation.
References
Abelson, R. P. (1964). Mathematical models of the distribution of attitudes under controversy. In N. Frederiksen
& H. Gulliksen (Eds.), Contributions to Mathematical Psychology, (pp. 142–60). New York, NY: Rinehart Winston
Akers, R. L., Krohn, M. D., Lanza-Kaduce, L. & Radosevich, M. (1979). Social learning and deviant behavior: A
specific test of a general theory. American Sociological Review, 44(4), 636–655
Allcott, H. & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Per-
spectives, 31(2), 211–236
Anspach, N. M. (2017). The new personal influence: How our Facebook friends influence the news we read.
Political Communication, 34(4), 590–606
Axelrod, R. M. (1997). The dissemination of culture: A model with local convergence and global polarization.
Journal of Conflict Resolution, 41(2), 203–226
Baig, S. A. (2020). Bayesian inference: An introduction to hypothesis testing using Bayes factors. Nicotine and
Tobacco Research, 22(7), 1244–1246
Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Hunzaker, M. B. F., Lee, J., Mann, M., Merhout,
F. & Volfovsky, A. (2018). Exposure to opposing views on social media can increase political
polarization. Proceedings of the National Academy of Sciences, 115(37), 9216–9221
Baldassarri, D. & Gelman, A. (2008). Partisans without constraint: Political polarization and trends in American
public opinion. American Journal of Sociology, 114(2), 408–446
Banisch, S. & Olbrich, E. (2019). Opinion polarization by learning from social feedback. The Journal of Mathe-
matical Sociology, 43(2), 76–103
Berger, J. & Heath, C. (2008). Who drives divergence? Identity signaling, outgroup dissimilarity, and the aban-
donment of cultural tastes. Journal of Personality and Social Psychology, 95(3), 593–607
Berscheid, E. (1966). Opinion change and communicator-communicatee similarity and dissimilarity. Journal
of Personality and Social Psychology, 4(6), 670–680
Bikhchandani, S., Hirshleifer, D. & Welch, I. (1992). A theory of fads, fashion, custom, and cultural change as
information cascades. Journal of Political Economy, 100(5), 992–1026
Blau, P. M. (1964). Exchange and Power in Social Life. Hoboken, NJ: John Wiley & Sons
Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. London: Routledge
Bozdag, E. & van den Hoven, J. (2015). Breaking the filter bubble: Democracy and design. Ethics and Information
Technology, 17(4), 249–265
Bramson, A., Grim, P., Singer, D. J., Fisher, S., Berger, W., Sack, G. & Flocken, C. (2016). Disambiguation of social
polarization concepts and measures. The Journal of Mathematical Sociology, 40, 80–111
Cialdini, R. B. & Goldstein, N. J. (2004). Social influence: Compliance and conformity. Annual Review of Psychol-
ogy, 55(1), 591–621
Clifford, S. (2019). How emotional frames moralize and polarize political attitudes. Political Psychology, 40(1),
75–91
Deffuant, G., Amblard, F. & Weisbuch, G. (2004). Modelling group opinion shift to extreme: The smooth bounded
confidence model. arXiv preprint. Available at: https://doi.org/10.48550/arXiv.cond-mat/0410199
Deffuant, G., Amblard, F., Weisbuch, G. & Faure, T. (2002). How can extremism prevail? A study based on the
relative agreement interaction model. Journal of Artificial Societies and Social Simulation, 5(4), 1
Deffuant, G., Keijzer, M. A. & Banisch, S. (2023). Regular access to constantly renewed online content favors
radicalization of opinions. arXiv preprint. Available at: https://doi.org/10.48550/arXiv.2305.16855
Deffuant, G., Neau, D., Amblard, F. & Weisbuch, G. (2000). Mixing beliefs among interacting agents. Advances in
Complex Systems, 03(01n04), 87–98
DeGroot, M. H. (1974). Reaching a consensus. Journal of the American Statistical Association, 69(345), 118–121
Festinger, L. (1964). Behavioral support for opinion change. The Public Opinion Quarterly, 28(3), 404–417
Flache, A. (2018a). About renegades and outgroup-haters: Modelling the link between social influence and
intergroup attitudes. Advances in Complex Systems, 21(06n07)
Flache, A. (2018b). Between monoculture and cultural polarization: Agent-based models of the interplay of
social influence and cultural diversity. Journal of Archaeological Method and Theory, 25(4), 996–1023
Flache, A. & Mäs, M. (2008). How to get the timing right. A computational model of the effects of the timing of
contacts on team cohesion in demographically diverse teams. Computational and Mathematical Organization
Theory, 14(1), 23–51
Flache, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S. & Lorenz, J. (2017). Models of social
influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4), 2
Flache, A., Mäs, M. & Keijzer, M. A. (2022). Computational Approaches in Rigorous Sociology: Agent-Based Com-
putational Sociology and Computational Social Science. In K. Gërxhani, N.-D. De Graaf & W. Raub (Eds.),
Handbook of Sociological Science. Contributions to Rigorous Sociology. Cheltenham: Edward Elgar Publishing
French, J. R. (1956). Formal theory of social power. Psychological Review, 63(3), 181–194
Friedkin, N. E. & Johnsen, E. C. (2011). Social Influence Network Theory: A Sociological Examination of Small
Group Dynamics. Cambridge: Cambridge University Press
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P. & Ditto, P. H. (2013). Moral Foundations Theory:
The Pragmatic Validity of Moral Pluralism. Amsterdam: Elsevier
Graham, J., Haidt, J. & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations.
Journal of Personality and Social Psychology, 96(5), 1029–1046
Groeber, P., Lorenz, J. & Schweitzer, F. (2014). Dissonance minimization as a microfoundation of social influence
in models of opinion formation. Journal of Mathematical Sociology, 38(3), 147–174