The International Encyclopedia of Communication Research Methods - 2017 - Esser - Comparative Research Methods
FRANK ESSER
University of Zurich, Switzerland
RENS VLIEGENTHART
University of Amsterdam, The Netherlands
Introduction
The International Encyclopedia of Communication Research Methods. Jörg Matthes (General Editor),
Christine S. Davis and Robert F. Potter (Associate Editors).
© 2017 John Wiley & Sons, Inc. Published 2017 by John Wiley & Sons, Inc.
DOI: 10.1002/9781118901731.iecrm0035
Foundations
Research goals
For all five research goals, the selection of which cases are included in the comparison
is crucial. Hantrais (1999, pp. 100–101) makes an important point by arguing, “any
similarities or differences revealed by a cross-national study may be no more than an
artifact of the choice of countries.” The rationale for the case selection must be linked to
a conceptual framework that justifies all design decisions made by the comparativist. In
reality, however, investigators of comparative communication projects sometimes fail
to present such a rationale, as their case selection is driven by the availability of data.
Furthermore, they only select countries to which they have access, which predictably
results in an overrepresentation of wealthier countries with better access to academic
resources. While this is not problematic per se, it does limit the generalizability of the
findings and thus the opportunities for prediction.
Presenting a justification for case selection is particularly important for smaller sam-
ples. The smaller the sample, the more important it is that a convincing theoretical
justification be provided that explicitly states the basis of each case. As an inexpen-
sive shortcut, scholars increasingly, albeit thoughtlessly, refer to existing typologies of
media systems, such as the three models of media/politics relationships in Western
Europe and North America by Hallin and Mancini (2004), without any deeper engage-
ment and without proving, in detail, that the variables of their own study are directly
linked to Hallin and Mancini's dimensions. Neither do they link their selection to rationales of most similar or most different systems design (see below), which suggest careful case selection based on the research question the researcher seeks to answer.
Many scholars are not only unaware of the many alternative comparative frameworks
to Hallin and Mancini (see later), but they are also unfamiliar with the many biases
involved with uninformed case selection. Depending on the sample size, the following
research strategies are available.
be shown that the findings are building blocks for revising or expanding an existing
comparative typology or theory. Case studies that meet these criteria and follow the
method of structured, focused comparison can even accomplish the important step
from “description” to “explanation.” They do so by employing tools of causal inference
from qualitative methodology like “analytic narratives” or “process tracing.” Drawing
on concepts like detailed narrative, sequencing, path dependence and critical events,
“analytic narratives” and “process tracing” provide an explanation based on causal
chains rather than general laws or statistical relationships (for details, see George &
Bennett, 2005).
often forbid a strongly causal attribution of explanatory factors for the determined
variance of the dependent variable. However, “soft control” of the variance can be
achieved by supplementing with qualitative tools of causal inference like process
tracing or analytical narratives. Additionally, careful theoretical argumentation is
crucial.
A sophisticated extension of the most different and most similar logic was devel-
oped by Charles Ragin (1987, 2008). His approach, Qualitative Comparative Analysis
(QCA), is a configurational or holistic comparative method which considers each case
(system, culture) as a complex entity, as a “whole,” which needs to be studied in a case-
sensitive way. It combines quantitative, variable-based logic and qualitative, case-based
interpretation. It is important to understand that QCA uses a more complex under-
standing of causality than the most different and most similar logic. As Rihoux (2006,
p. 682) points out, QCA assumes that (a) causality is often a combination of “conditions”
(explanatory variables) that in interaction eventually produces a phenomenon—the
“outcome” (phenomenon to be explained); (b) several different combinations of con-
ditions may produce the same outcome; and (c) depending on the context a given
condition may very well have a different impact on the outcome. Thus different causal
paths—each path being relevant, in a distinct way—may lead to the same outcome. We return to this method below.
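The configurational logic behind QCA can be made concrete with a small sketch. The following Python fragment (our own illustration, not Ragin's software; the conditions, cases, and scores are invented) builds the truth table at the heart of crisp-set QCA and shows how several different combinations of conditions can be linked to the same outcome:

```python
# A minimal crisp-set QCA truth table in plain Python. All cases,
# conditions, and scores are invented for illustration only.
cases = {
    # name: ((condition scores), outcome)
    "A": ((1, 1, 0), 1),
    "B": ((1, 1, 1), 1),
    "C": ((0, 0, 1), 1),
    "D": ((0, 0, 0), 0),
    "E": ((1, 0, 0), 0),
}

def truth_table(cases):
    """Group cases by their configuration of conditions."""
    table = {}
    for name, (config, outcome) in cases.items():
        table.setdefault(config, []).append((name, outcome))
    return table

# Configurations whose member cases all show the outcome: several
# distinct combinations of conditions lead to the same outcome.
paths = sorted(cfg for cfg, members in truth_table(cases).items()
               if all(out == 1 for _, out in members))
print(paths)  # → [(0, 0, 1), (1, 1, 0), (1, 1, 1)]
```

A full QCA would next minimize these configurations with Boolean algebra; the point here is only that three distinct causal paths coexist with the same outcome.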
Holtz-Bacha and Kaid (2011) note that in comparative communication research the
“study designs and methods are often compromised by the inability to develop con-
sistent methodologies and data-gathering techniques across countries” (pp. 397–398).
Consequently, they call for “harmonization of the research object and the research
method” across studies to guarantee the best possible comparability and generalizabil-
ity. But even within the same comparative study, achieving comparability across data
gathered in various countries can be challenging. This question of comparability leads
us to the problem of equivalence, as differences and similarities between cases can
only be established if equivalence has been secured at various levels. We distinguish
equivalence at the level of constructs, measurements, samples, and administration.
equivalence after the fact, primarily by means of statistical analysis. The second option
is to develop the key concepts collaboratively by incorporating the collective expertise
of international researchers at the outset of a comparative study. An ideal study com-
bines both options by first developing a conceptual framework based on multinational
input and then identifying the extent to which conceptual equivalence can be assumed
on the basis of the investigated empirical material.
For testing conceptual equivalence post hoc, several statistical techniques can be
used. For example, scholars can calculate and compare Cronbach’s alphas to check
whether a battery of questions forms a reliable scale for each separate system/culture.
A similar logic applies when the researcher anticipates more dimensions to be
present in the data. That is, if exploratory factor analyses result in similar factors and
factor loadings for various items across countries, it is interpreted as a good sign
(Vliegenthart, 2012). Alternatively, multidimensional scaling can be used to check
the cross-cultural validity of a survey scale. That is, if the value items yield similar
patterns of correlations across all countries under study, external construct equivalence
is assumed to be ensured (Wirth & Kolb, 2012). Those interested in more advanced
techniques for testing and for optimizing equivalence, such as congruence coefficient
analysis, multigroup confirmatory factor analysis or latent class analysis, may refer to
the work of Wirth and Kolb (2012).
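As a rough illustration of such a post hoc check, the following sketch computes Cronbach's alpha separately for two hypothetical country samples (the data are simulated: a coherent scale in one country, unrelated items in the other):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of scale scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Invented example: the same 3-item battery administered in two countries.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
country_a = latent + rng.normal(scale=0.5, size=(200, 3))  # coherent scale
country_b = rng.normal(size=(200, 3))                      # unrelated items

print(round(cronbach_alpha(country_a), 2))  # high: scale reliable here
print(round(cronbach_alpha(country_b), 2))  # near zero: equivalence doubtful
```

A marked gap in alpha between countries, as simulated here, would signal that the battery does not form an equivalent scale in every system.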
However, as these authors also note, though the techniques just mentioned work
well for multi-country surveys, they are less efficient for comparative content analy-
sis. The first reason for this is that content analyses are usually based on categorical
rather than metric data, and the second reason is that in content analysis a single item
often represents a construct. Both characteristics prevent higher-level statistics from
being used effectively for testing construct equivalence in cross-national content anal-
yses. As an alternative, Wirth and Kolb (2012) suggest that scholars offer qualitative
discussions of functional equivalence based on explorations of the concept’s dimen-
sions, theoretical considerations, additional information, and expert advice. They also
suggest working more often with multiple indicators (instead of just one indicator) for
concepts addressed in comparative content analyses.
questionnaire or codebook is first produced and is then back-translated into the original
language. The result from the back-translation is then compared with the original ver-
sion to evaluate the quality of the translation. Ideally, this procedure is iterated until
a reliable match of the two versions is achieved (Wirth & Kolb, 2004). An important
motivation for such procedures is cultural decentering, meaning the removal of culture-
specific words, phrases, and concepts that are difficult to translate from the original
version of the instrument. An important tool may be the committee approach, in which
an interdisciplinary multicultural team of individuals who have expert knowledge of
the cultures, languages, and specific research fields jointly develop the research tools
(Van de Vijver & Leung, 1997).
The language issue has particular implications for calculating reliability in
cross-national content analyses. Thus, a native language approach, in which all coding instruments are translated into the various native languages, is less than ideal because it is essentially impossible to determine meaningful reliability coefficients
among the coder groups in different languages. A workable alternative is the project
language approach where all researchers and coders agree upon one common lingua
franca—usually English—for instruments, training, and reliability testing. Peter and
Lauf (2002) calculated intercoder reliability for the native language approach and
project language approach and found that reliability scores are generally somewhat
lower for coding in a project language, probably due to variations in individual
linguistic proficiencies. Though this appears to support opting for the native language
procedure, one cannot turn a blind eye to the fact that the native language option
enhances the risk that differences found in the results are confounded with differences
among coder groups in varying languages (Rössler, 2012).
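To illustrate the kind of reliability coefficient at stake, the sketch below computes Cohen's kappa, a chance-corrected agreement measure, for two hypothetical coders; the codes and items are invented, and real projects would typically report such a coefficient per coder pair within each language group:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders (nominal codes)."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented codes for 12 news items ("issue" vs. "game" framing), as if
# produced by two coders trained in a common project language.
coder1 = ["issue", "game", "game", "issue", "issue", "game",
          "issue", "game", "issue", "issue", "game", "issue"]
coder2 = ["issue", "game", "issue", "issue", "issue", "game",
          "issue", "game", "issue", "game", "game", "issue"]
print(round(cohens_kappa(coder1, coder2), 2))  # → 0.66
```

Comparing such coefficients across language groups is exactly what the native language approach makes difficult, since each group codes different material in a different language.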
In order to further check whether all survey items and codebook categories were
indeed measured the same way across all countries, additional statistical strategies
have been developed to test and enhance measurement equivalence once the data
are collected. Although still rarely done, measurement invariance should generally
be tested in all comparative communication studies. Of the various strategies that
are available for that purpose (see Davidov, Meuleman, Cieciuch, Schmidt, & Billiet,
2014; Wirth & Kolb, 2012), multigroup confirmatory factor analysis is probably the
most important for cross-national survey analyses (Kühne, 2018). The communication
field is likely to see more comparative survey research in the future, partly due to
the growing data availability on media use from projects such as the World Internet
Project. But even in this project, which was designed with a comparative goal from the
outset, meaningful conclusions can only be drawn after careful tests of measurement
invariance (see Büchi, 2016). And the same applies to comparative research based on
content analyses; here similar tests of measurement equivalence can be conducted
(Wirth & Kolb, 2012).
Descriptive comparisons
The most basic research questions are often descriptive in nature, and seek to describe
the occurrences of certain phenomena and how these occurrences vary between cases.
For example, a study may examine how newspaper and television reports differ between two countries, Sweden and Belgium, with respect to the framing of an election
campaign (Strömbäck & Van Aelst, 2010). In this example, the cases being compared
are two countries. In this study and in similar studies, the analyses are descriptive in
nature, and as such, they involve comparisons regarding the presence of issue framing
and game framing in various newspapers. However, very often, the overarching question of the study is (implicitly or explicitly) framed in an explanatory way: How can one account for differences and similarities across cases? This is also the case in
the study by Strömbäck and Van Aelst. They hypothesize that due to the similarities
between political and media system characteristics and their selection of cases based on
a most similar system design, differences between Belgium and Sweden will be minimal.
When they do find differences, the authors find it hard to explain them, and they do
not advance much beyond noting that country or political communication system is
“what matters” (pp. 56–57). Similarities and differences in election campaign coverage
between the two countries might be consistent with expectations derived theoretically
from different political and media system characteristics, a relationship that is not tested
statistically. As a result, there may be multiple explanations for differences between two
cases, even if they are comparable, as in the case of Sweden and Belgium. Additionally,
similarities might occur due to general journalistic practices rather than similar system
characteristics.
Statistically, the descriptive comparison of two (or more) countries is not too difficult,
and comparisons of means (e.g., t-tests) and analyses of variance (e.g., ANOVAs) are
often sufficient. In some instances, especially those with a mid-range number of cases,
one might be interested in a more systematic grouping of cases, for example, to identify
two or more clusters of countries that are highly similar. In those instances, techniques
such as multidimensional scaling, correspondence analysis, or cluster analysis may be
warranted. These three techniques share the underlying logic of positioning cases in
comparison to each other and highlighting those cases that are similar or different based
on a specific set of criteria or variables.
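A minimal sketch of such descriptive comparisons, using simulated framing scores for three hypothetical country samples (SciPy's standard independent-samples t-test and one-way ANOVA; all values are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Invented data: share of "game framing" per news outlet, three countries.
sweden  = rng.normal(0.40, 0.05, 30)
belgium = rng.normal(0.42, 0.05, 30)
france  = rng.normal(0.55, 0.05, 30)

# Two-country descriptive comparison: independent-samples t-test.
t, p_t = stats.ttest_ind(sweden, belgium)

# Three or more countries: one-way analysis of variance.
f, p_f = stats.f_oneway(sweden, belgium, france)

print(f"t = {t:.2f}, p = {p_t:.3f}")
print(f"F = {f:.1f}, p = {p_f:.4f}")
```

With the values simulated here, the third country is built to deviate strongly, so the ANOVA flags a clear between-country difference.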
While multidimensional scaling has similarities with more widely applied factor
analysis, it has fewer restrictions on the data. In factor analysis, correlation matri-
ces are used, and interval variables with (roughly) normal distribution and linear
association are required. Multidimensional scaling, however, can be based on any
similarity/dissimilarity matrix. Most commonly, the outcome of a multidimensional
scaling analysis is two or three dimensions on which each individual case can be
positioned. Each dimension must be interpreted post hoc based on its underlying
variables. An application of the technique can be found in a comparative survey of
journalists by Hanitzsch and colleagues (2010), in which they compare similarities in
perceived influences on journalists across 17 countries.
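As an illustration of the underlying logic, the sketch below implements classical (Torgerson) multidimensional scaling directly in NumPy; the country labels and dissimilarity values are invented, and published studies would of course derive the dissimilarity matrix from empirical data:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) scaling: embed cases in k dimensions
    from a symmetric dissimilarity matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (d ** 2) @ j                  # double-centered matrix
    vals, vecs = np.linalg.eigh(b)
    idx = np.argsort(vals)[::-1][:k]             # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Invented dissimilarities between four countries' journalism cultures
# (0 = identical, 1 = maximally different); purely illustrative.
countries = ["DE", "AT", "US", "UK"]
d = np.array([[0.0, 0.2, 0.9, 0.8],
              [0.2, 0.0, 0.8, 0.9],
              [0.9, 0.8, 0.0, 0.3],
              [0.8, 0.9, 0.3, 0.0]])
coords = classical_mds(d)
# Similar countries (DE/AT, US/UK) end up close together on the map;
# the two dimensions are then interpreted post hoc.
```

Unlike factor analysis, nothing here requires metric, normally distributed input: any sensible dissimilarity matrix can be scaled.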
A similar technique is correspondence analysis, a technique in which specifically
nominal variables are used to construct dimensions. Esser (2008) applies this technique
and identifies, based on a comparative content analysis of television election news, three
different political news cultures across five Western countries. In a follow-up correspon-
dence analysis of six national press systems over a 40-year time span, he finds similar
political news cultures (Esser & Umbricht, 2013).
A somewhat different technique, cluster analysis, aims to divide cases into several similar groups. Again, the input consists of the scores of a mid-range number of cases on a predefined set of variables. A common application of
this technique is found in political science, where it is used, for example, to compare
party manifestos and investigate which parties take similar stances on certain issues
(see, for example, Pennings, Keman, & Kleinnijenhuis, 2006). Various techniques
can be used to calculate the distance between the cases and the best way to cluster
them into groups. The study by Brüggemann and colleagues (2014) provides a good
application of a cluster analysis wherein, based on a cluster analysis of 17 Western
countries and relying on a wide variety of data sources, they suggest an adjustment
to Hallin and Mancini’s (2004) classification of Western countries in media systems.
Among other things, the analysis points to the existence of four rather than three groups of countries.
and the second is low party identification combined with professionalized campaigns
and strong tabloid media.
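A minimal sketch of this grouping logic, using Ward's hierarchical clustering on invented indicator scores for six countries (names and values are purely illustrative, not taken from any of the studies cited above):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Invented scores for six countries on two media-system indicators
# (e.g., press-market reach and political parallelism).
names = ["SE", "NO", "DE", "CH", "IT", "ES"]
x = np.array([[0.90, 0.20], [0.85, 0.25], [0.60, 0.50],
              [0.65, 0.45], [0.30, 0.80], [0.35, 0.75]])

# Ward's method merges cases so within-cluster variance stays small.
z = linkage(x, method="ward")
groups = fcluster(z, t=3, criterion="maxclust")
print(dict(zip(names, groups)))  # three clusters of similar countries
```

The choice of distance measure and linkage method can change the grouping, which is why studies such as Brüggemann et al. (2014) report these decisions explicitly.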
Comparison of relation
A third type of research question is the comparison of relation, which involves investi-
gating in different contexts the relationship between an independent and a dependent
variable. The comparison of contexts serves as a robustness check to determine whether
a relationship holds in various situations. Holtz-Bacha and Norris (2001), for example,
test the effects of public television preferences on political knowledge and find that
in 10 out of the 14 countries they studied, a positive and significant relationship was
present. They rely on a set of regression analyses, one for each single country. Alterna-
tively, one can pool the data and use dummy variables for the countries and interaction
terms between the independent variable of interest and the dummy variables. If these
interaction terms are not significant, the relationship is similar across countries.
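The pooled dummy-and-interaction strategy can be sketched with ordinary least squares on simulated data; the variables, countries, and effect sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Invented pooled data from two countries: does the effect of public-TV
# viewing (x) on political knowledge (y) differ between countries?
country_b = np.repeat([0, 1], n)                 # dummy: 0 = A, 1 = B
x = rng.normal(size=2 * n)
# True slopes built in for illustration: 0.5 in country A, 0.9 in B.
y = (1.0 + 0.5 * x + 0.4 * country_b * x + 0.2 * country_b
     + rng.normal(scale=0.1, size=2 * n))

# Design matrix: intercept, x, country dummy, x-by-country interaction.
X = np.column_stack([np.ones(2 * n), x, country_b, x * country_b])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# A sizeable interaction coefficient (here built in as 0.4) indicates
# that the relationship is not the same across the two countries.
print(coef.round(2))
```

In practice one would test the interaction term's significance; a non-significant interaction is consistent with a similar relationship across countries.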
Comparative explanatory
The final type of question is labeled comparative explanatory. It goes one step beyond
the comparison of relation question in that it addresses explanations for different
relationships across units by taking characteristics of those units into consideration.
An example of a comparative explanatory question is found in the study by Schuck,
Vliegenthart, and de Vreese (2016). They investigate the effect of exposure to conflict
framing on turnout for the 2009 European parliamentary elections campaign. This
relationship is positioned at the individual level wherein the individuals are nested
within the various EU member states. Schuck et al. hypothesize and find that the
strength of the effect depends on a country characteristic, namely, the overall evalu-
ation of the EU in media coverage. More specifically, the more positive the coverage,
the stronger the effect of conflict framing. In this case, two levels are combined, the
individual (micro) level and the macro (country) level, where the first is nested in the
latter. In comparable cases, even additional levels can be considered such as journalists
nested in organizations nested in countries (Hanitzsch & Berganza, 2012). In these
instances, it makes sense to rely on multilevel modeling, though alternative strategies
can also be considered when the number of higher level units is limited (e.g., clustered
standard errors).
As with pooled time series, the main challenge posed by the nested structure of many
comparative datasets is that observations are not independent, which is one of the main
assumptions for many multivariate analyses, such as regression analysis. If we take the
example of citizens in various EU countries where respondents from the same coun-
tries are likely to have many commonalities, their scores on certain variables will be
more highly correlated, and they might display certain particular relationships between
variables. It is exactly these relationships, and how they potentially differ across coun-
tries, that lead scholars conducting comparative research to rely on multilevel modeling.
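The degree of non-independence can be quantified with the intraclass correlation (ICC), the share of total variance located at the country level. The sketch below estimates it from simulated nested data (all values invented; a high ICC signals that a multilevel approach is warranted):

```python
import numpy as np

rng = np.random.default_rng(7)
# Invented survey scores for respondents nested in 20 countries:
# each country shifts its citizens' scores by a shared country effect.
n_countries, n_per = 20, 50
country_effect = rng.normal(scale=1.0, size=n_countries)
scores = country_effect[:, None] + rng.normal(size=(n_countries, n_per))

# Intraclass correlation via one-way variance decomposition.
between = scores.mean(axis=1).var(ddof=1)   # variance of country means
within = scores.var(axis=1, ddof=1).mean()  # average within-country var
sigma_b2 = between - within / n_per         # estimated country-level var
icc = sigma_b2 / (sigma_b2 + within)
print(round(icc, 2))  # well above 0: observations are not independent
```

When the ICC is close to zero, ordinary regression with country dummies may suffice; substantial values, as simulated here, call for multilevel modeling.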
To estimate a multilevel model, one needs a reasonable number of higher level units, specifically a minimum of 10–15. Otherwise, it is more appropriate
either to work with dummies for the higher level units (see previous section) and rea-
son theoretically about different relations across countries, or to use standard errors
clustered at the higher level. When conducting a multilevel analysis, two basic specifications exist.
The first model relies on fixed effects of the independent variables, wherein the effects
are modeled as being the same across all higher level units and only the intercept varies
across these higher level units (random intercept). The second model relies on random
effects wherein relationships across variables measured at the lower level differ across
higher level units. When the independent variable interacts with a variable measured
at the higher level, a comparative explanatory question is addressed.
A final important assumption in multilevel modeling is that the selection of higher
level units resembles a true random sample of the larger population. In many instances,
it may be possible to obtain data for a relatively solid sample of Western countries, but
it may be more difficult for countries from other parts of the world. This is important to
keep in mind when considering the generalizability of findings. A hands-on primer on
how to conduct multilevel models is available in Hayes (2005).
The micro-macro links must be integrated into the theoretical framework that under-
lies the comparative analysis. Norris (2009) states that without a guiding theoretical
map, comparativists “remain stranded in Babel” (p. 323) and that only the develop-
ment of widely shared core theoretical concepts and standardized operational mea-
sures can reduce the “cacophonous Babel” in comparative communication research
(p. 327). Unfortunately, even today, many comparativists fail to explicate their objec-
tives and theoretical foundations, and hence, they end up with little more than merely
descriptive findings. There are, of course, exceptions. Esser and Hanitzsch (2012b), in
their Handbook of Comparative Communication Research, introduce several suitable
frameworks, such as the political communication system, media system, media market,
media audience, media culture, journalism culture, election communication system,
and news-making system, among others. It is now vital that these concepts and frame-
works be used, criticized, amended, and refined as beginning from scratch in each new
publication will not advance the field of comparative communication research. More-
over, using complaints about the alleged immaturity of the field as an excuse for deliv-
ering yet more immature studies will only serve to negatively affect the advancement
of research in the field. Accordingly, given that considerable progress has been made in
this field, we must continue to build on it.
Norris and Inglehart (2009) produced a ranking that vividly illustrates the extent
to which the world’s countries have become cosmopolitan, that is, absorbent of
transborder influences. Esser and Pfetsch concluded some time ago that “[i]n times of
designs, namely, those that represent international relations. These external influences
can express structural power or dependency relations between media systems (i.e.,
the hegemonic impact of core powers on peripheral systems in a given network),
cultural imperialism of values (“Americanization”), penetration of ideological or
economic values (from the West to the South or East), or more neutral processes of
interconnectivity and diffusion of ideas. Here, the longitudinal aspect might become
more important compared to the cross-sectional aspect.
Second, in addition to incorporating the linking mechanisms between individual
cases and transnational structures, comparativists must study the interplay between
external (border-transgressing) and internal (domestic) factors, as it will help them
understand how media systems respond to transnational influences. Media systems are
not empty containers, and journalists and news organizations are not passive receivers
of outside stimuli. Thus, the manner in which the various media systems respond is likely to reveal valuable information about the specific conditions of the media
system in question. Put differently, transborder influences are likely to trigger cultural
shifts and structural transformations within media systems. However, as these processes
still occur within national contexts, these national pathways can still be subjected to
comparative analysis. This notion of path-dependency is also reflected in the concept
of glocalized hybrid cultures and hybrid media formats.
An early framework that attempted to account for the complex relationships
between supranational forces and individual cases is Tilly’s (1984) idea of encompass-
ing comparisons. It is a concept that requires the researcher to explicitly detail the
relationship of an individual system to a larger, more potent connecting structure,
such as its membership in a European film industry, a shared and border-transgressing
journalism culture, an Asian media market, and so on, that affects the behaviors of
its parts. “With this logic,” as Comstock (2012) explains, “the encompassing method
selects cases on the basis of their representativeness of common positions in the overall
system” (p. 376). Thus, the goal of the analysis is “to identify patterns of difference in
how hierarchically related localities respond to the same system-level dynamics and
perpetuate systemic inequality” (p. 376), for example, between more and less powerful
components.
A third innovation, in addition to incorporating external variables and examining
their interplay with domestic variables, is to integrate de-territorialization into com-
parative designs. In this sense, it may no longer suffice to compare one nation’s jour-
nalists with another national sample. Rather, it may also be necessary to compare both
to a third emerging type, specifically, a transnationally oriented community of jour-
nalists working in different countries for transnationally oriented media, including Al
Jazeera, Financial Times, The Wall Street Journal, International Herald Tribune, TIME, The
Economist, BBC World Service, and so on. Thus, comparativists may need to increase
the number of cases in their designs by including additional globalized control groups
to allow for a better assessment of how relevant the national is in relation to the transna-
tional (Reese, 2008).
A fourth innovation, which was mentioned previously, is the adoption of a multilevel
approach in comparative communication research where the national level is merely
one among many levels. As the nation-state has long ceased to be the only meaningful
category, additional levels of analysis, both above and below the nation-state, must be included, depending on the research question at hand. With these modifications, the
comparative approach will continue to contribute substantially to the progression of
knowledge in the communication discipline.
Outlook
Only in the past decade have communication scientists slowly begun to more widely
integrate comparative elements into their research. Nonetheless, substantial progress
has been made due to the increased application of comparative conceptual frameworks
and the availability of comparative, mainly cross-national, data. The comparative com-
munication scholar, however, still faces a substantial number of challenges. One such
challenge is the rigorous application of comparative logics, in line with work on most similar and most different systems designs in political science. The second challenge is to increase the number of cases included in the analyses by moving beyond two- or three-country comparisons. This would offer opportunities to
respond to comparative explanatory questions and to more fully understand the role of
context on the effects of communicative processes. Finally, trends such as internation-
alization and globalization require the researcher to consider multiple units of analysis
and integrate them into a single empirical design to better understand more of today’s
complex reality.
References
Davidov, E., Meuleman, B., Cieciuch, J., Schmidt, P., & Billiet, J. (2014). Measurement equiva-
lence in cross-national research. Annual Review of Sociology, 40, 55–75. doi:10.1146/annurev-
soc-071913-043137
Downey, J., & Stanyer, J. (2010). Comparative media analysis: Why some fuzzy thinking
might help. Applying fuzzy set qualitative comparative analysis to the personalization of
mediated political communication. European Journal of Communication, 25(4), 331–347.
doi:10.1177/0267323110384256
Esser, F. (2008). Dimensions of political news cultures: Sound bite and image bite news in France,
Germany, Great Britain, and the United States. International Journal of Press/Politics, 13(4),
401–428. doi:10.1177/1940161208323691
Esser, F., & Hanitzsch, T. (2012a). On the why and how of comparative inquiry in communication
studies. In F. Esser & T. Hanitzsch (Eds.), Handbook of comparative communication research
(pp. 3–22). London: Routledge.
Esser, F. & Hanitzsch, T. (Eds.). (2012b). Handbook of comparative communication research. Lon-
don: Routledge.
Esser, F., & Pfetsch, B. (2004). Meeting the challenges of global communication and political inte-
gration: The significance of comparative research in a changing world. In F. Esser & B. Pfetsch
(Eds.), Comparing political communication. Theories, cases, and challenges (pp. 384–410). New
York: Cambridge University Press.
Esser, F., & Umbricht, A. (2013). Competing models of journalism? Political affairs coverage
in U.S., British, German, Swiss, French and Italian newspapers. Journalism, 15(8), 989–1007.
doi:10.1177/1464884913482551
George, A. L., & Bennett, A. (2005). Case studies and theory development in the social sciences.
Cambridge, MA: MIT Press.
Hallin, D., & Mancini P. (2004). Comparing media systems: Three models of media and politics.
Cambridge, UK: Cambridge University Press.
Hanitzsch, T., Anikina, M., Berganza, R., Cangoz, I., Coman, M., Hamada, B. I., … Yuen,
K. W. (2010). Modeling perceived influences on journalism: Evidence from a cross-
national survey of journalists. Journalism & Mass Communication Quarterly, 87(1), 7–24.
doi:10.1177/107769901008700101
Hanitzsch, T., & Berganza, R. (2012). Explaining journalists’ trust in public institutions across 20
countries: Media freedom, corruption and ownership matter most. Journal of Communication,
62(5), 794–814. doi:10.1111/j.1460-2466.2012.01663.x
Hantrais, L. (1999). Contextualization in cross-national comparative research. International Journal of Social Research Methodology, 2(2), 93–108. doi:10.1080/136455799295078
Hasebrink, U. (2012). Comparing media use and reception. In F. Esser & T. Hanitzsch (Eds.),
Handbook of comparative communication research (pp. 382–399). London: Routledge.
Hayes, A. F. (2005). Statistical methods for communication science. Mahwah, NJ: Erlbaum.
Hellman, M., & Riegert, K. (2012). Emerging transnational news spheres in global crisis
reporting: A research agenda. In I. Volkmer (Ed.), The handbook of global media research
(pp. 156–174). Malden, MA: Wiley-Blackwell.
Holtz-Bacha, C., & Kaid, L. L. (2011). Political communication across the world: Methodological issues involved in international comparisons. In E. P. Bucy & R. L. Holbert (Eds.), Sourcebook for political communication research: Methods, measures, and analytical techniques (pp. 395–416). New York: Routledge.
Holtz-Bacha, C., & Norris, P. (2001). To entertain, inform, and educate: Still the role of public
television? Political Communication, 18(2), 123–140. doi:10.1080/105846001750322943
Kohn, M. L. (1989). Cross-national research as an analytic strategy. In M. L. Kohn (Ed.), Cross-
national research in sociology (pp. 77–102). Newbury Park, CA: SAGE.
Kühne, R. (2018). Measurement invariance. In J. Matthes (Gen. Ed.), C. S. Davis & R. F. Potter
(Assoc. Eds.), The international encyclopedia of communication research methods. Malden, MA:
John Wiley & Sons, Inc.
Landman, T. (2008). Issues and methods in comparative politics (3rd ed.). London: Routledge.
Mancini, P., & Hallin, D. C. (2012). Some caveats about comparative research in media studies. In H. A. Semetko & M. Scammell (Eds.), The SAGE handbook of political communication (pp. 509–517). Thousand Oaks, CA: SAGE.
Mill, J. S. (1843). A system of logic. London: Longman.
Norris, P. (2009). Comparative political communications: Common frameworks or Babelian confusion? Government and Opposition, 44(3), 321–340. doi:10.1111/j.1477-7053.2009.01290.x
Norris, P., & Inglehart, R. (2009). Cosmopolitan communications: Cultural diversity in a globalized
world. New York: Cambridge University Press.
Pennings, P., Keman, H., & Kleinnijenhuis, J. (Eds.). (2006). Doing research in political science: An introduction to comparative methods and statistics (2nd ed.). London: SAGE.
Peter, J., & Lauf, E. (2002). Reliability in cross-national content analysis. Journalism & Mass Communication Quarterly, 79(4), 815–832. doi:10.1177/107769900207900404
Przeworski, A., & Teune, H. (1970). The logic of comparative social inquiry. New York: John Wiley
& Sons, Inc.
Ragin, C. C. (1987). The comparative method: Moving beyond qualitative and quantitative strategies. Berkeley and Los Angeles, CA: University of California Press.
Ragin, C. C. (2008). Qualitative comparative analysis using fuzzy sets (fsQCA). In B. Rihoux & C. C. Ragin (Eds.), Configurational comparative methods: Qualitative comparative analysis and related techniques (pp. 87–122). Thousand Oaks, CA: SAGE.
Reese, S. (2001). Understanding the global journalist: A hierarchy-of-influences approach. Journalism Studies, 2(2), 173–187. doi:10.1080/14616700118394
Reese, S. D. (2008). Theorizing a globalized journalism. In M. Loeffelholz & D. Weaver (Eds.),
Global journalism research: Theories, methods, findings, future (pp. 240–252). Chichester, UK:
Wiley-Blackwell.
Rihoux, B. (2006). Qualitative comparative analysis (QCA) and related systematic comparative methods: Recent advances and remaining challenges for social science research. International Sociology, 21(5), 679–706. doi:10.1177/0268580906067836
Rössler, P. (2012). Comparative content analysis. In F. Esser & T. Hanitzsch (Eds.), Handbook of
comparative communication research (pp. 459–468). London: Routledge.
Sartori, G. (1994). Compare why and how? In M. Dogan & A. Kazancigil (Eds.), Comparing
nations: Concepts, strategies, substance (pp. 14–34). Oxford: Blackwell.
Schuck, A., Vliegenthart, R., & de Vreese, C. (2016). Who’s afraid of conflict? The mobilizing
effect of conflict framing in campaign news. British Journal of Political Science, 46(1), 177–194.
doi:10.1017/S0007123413000525
Schuck, A. R. T., Vliegenthart, R., Boomgaarden, H. G., Elenbaas, M., Azrout, R., Van Spanje, J., &
de Vreese, C. H. (2013). Explaining campaign news coverage: How medium, time and context
explain variation in the media framing of the 2009 European Parliamentary elections. Journal
of Political Marketing, 12(1), 8–28. doi:10.1080/15377857.2013.752192
Strömbäck, J., & Van Aelst, P. (2010). Exploring some antecedents of the media’s framing of
election news: A comparison of Swedish and Belgian election news. International Journal of
Press/Politics, 15(1), 41–59. doi:10.1177/1940161209351004
Tilly, C. (1984). Big structures, large processes, huge comparisons. New York: Russell Sage Foundation.
Van de Vijver, F., & Leung, K. (1997). Methods and data analysis of comparative research. In
J. W. Berry, Y. P. Poortinga, & J. Pandey (Eds.), Handbook of cross-cultural psychology (2nd ed.,
Vol. 1, pp. 257–300). Needham Heights, MA: Allyn & Bacon.
Vliegenthart, R. (2012). Advanced strategies for data analysis: Opportunities and challenges of
comparative data. In F. Esser & T. Hanitzsch (Eds.), Handbook of comparative communication
research (pp. 486–500). New York: Routledge.
Vliegenthart, R., Schuck, A. R. T., Boomgaarden, H. G., & de Vreese, C. H. (2008). News coverage and support for European integration, 1990–2006. International Journal of Public Opinion Research, 20(4), 415–439. doi:10.1093/ijpor/edn044
Voltmer, K. (2012). The media in transitional democracies. Cambridge, UK: Polity Press.
Wilson, S. E., & Butler, D. M. (2007). A lot more to do: The sensitivity of time-series
cross-section analyses to simple alternative specifications. Political Analysis, 15(2), 101–123.
doi:10.1093/pan/mpl012
Wirth, W., & Kolb, S. (2004). Designs and methods of comparative political communication
research. In F. Esser & B. Pfetsch (Eds.), Comparing political communication: Theories, cases,
and challenges (pp. 87–111). New York: Cambridge University Press.
Wirth, W., & Kolb, S. (2012). Securing equivalence: Problems and solutions. In F. Esser & T. Hanitzsch (Eds.), Handbook of comparative communication research (pp. 469–485). London: Routledge.
Further reading
Esser, F., & Hanitzsch, T. (Eds.). (2012). Handbook of comparative communication research.
London: Routledge.
Landman, T. (2008). Issues and methods in comparative politics (3rd ed.). London: Routledge.
Frank Esser is professor of international and comparative media research at the University of Zurich, where he co-directs the 80-person National Research Center on the Challenges to Democracy in the 21st Century (NCCR Democracy). He has held visiting positions at the Universities of Oklahoma, Texas–Austin, and California–San Diego. His research focuses on cross-national studies of news journalism and political communication. His books include Comparing Political Communication (2004, with B. Pfetsch), Handbook of Comparative Communication Research (2012, with T. Hanitzsch), and Comparing Political Journalism (2016, with C. de Vreese and D. Hopmann).