
Quantifying subjective data using online Q-methodology software


Susan Lutfallah and Lori Buchanan
University of Windsor

The Q-Sort methodology has been used to study participants’ subjective views on various topics (Brown, 1996). The task has historically been
completed by manually sorting cards into categories that force responses
into a normal distribution (Brown, 1996). Data collection using this method
is time consuming and manual data entry is prone to human error. We
describe here QMethod Software – a computerized web-based application
that allows participants to sort and record their responses online. This
online application eliminates the need for researchers to attend the study
sessions and to manually enter data. The QMethod Software described here is
currently being used in both applied and cognitive psychology studies,
including a clinical study that evaluates participants’ perception of behav-
iours seen as most characteristic or most uncharacteristic of psychological
aggression or coercive control in situations of intimate partner violence. In a
health psychology study, it is being used to examine people’s perceptions of
food allergy, and in a psycholinguistics lab it was used to evaluate the affec-
tive valence, abstractness, and semantic richness ratings of words. We will
show here that the data obtained from one of these psycholinguistic studies
(abstractness/concreteness) correlates highly with existing measures
(Brysbaert, Warriner & Kuperman, 2014), thus demonstrating that the Q-sort methodology and this particular implementation, the QMethod Software app, reproduce more typical evaluations/assessments in the psycholinguistics literature.

Keywords: Q-Method, software, online, psycholinguistics, concreteness, ratings, psychology

Assessing the variability in opinions toward the numerous characteristics of language is an ongoing focus of psycholinguistics research (e.g., Hollis & Westbury,
2018; Marmolejo-Ramos et al., 2017; Warriner, Kuperman & Brysbaert, 2013). This
is an important focus because researchers care how people’s opinions vary, by how much those opinions vary (Ho, 2017), and because the manipulation of certain word-level characteristics is often how we uncover the processing requirements of specific tasks. However, individual differences make it difficult to quantify the subjective views of participants in a useful way.

https://doi.org/10.1075/ml.20002.lut
The Mental Lexicon 14:3 (2019), pp. 415–423. issn 1871-1340 | e-issn 1871-1375
© John Benjamins Publishing Company
One way that researchers gather these data is through the use of Likert-type
scales with which participants rate their opinions along a continuum (e.g., Very
Strongly Disagree to Very Strongly Agree). The responses for each item in the stim-
ulus set are averaged to produce a mean rating for each item (Ho, 2017). A draw-
back to this method is that the data can be difficult to interpret (Ho, 2017) and the
responses frequently gather at the extremes of the continuum or otherwise fail to
represent the full spectrum of opinions. One solution to this problem is to use a
Q-Sort rating task.
Q-Sort methodology was developed by William Stephenson in the 1930s as a way for researchers to collect the subjective views of participants on various
topics (Brown, 1996). The stimulus sorting task forces responses into a normal
distribution, with each response along the continuum rated relative to every other
response (Brown, 1996). This method avoids the extremes or truncated scales that
can be found with Likert-type scale questionnaires.
The Q-Method design has been used in studies conducted on various topics
across disciplines. For example, in educational psychology, it was used to find
effective motivational techniques for school counsellors (Goodrich, 2016), and
in health sciences it has been used to assess opinions about healthy nutrition
(Yarar & Orth, 2018) and to assess perfectionistic traits associated with Anorexia
Nervosa (Ghandour et al., 2018). See Block (2008) for a comprehensive review of
Q-Sort use. The goal of this paper is not to defend this methodology but rather
to offer an automated version of it to people who are interested in using it but are
looking for an online option.

Traditional Q-Methodology

Traditional Q-Methodology consists of two separate in-lab card sorting tasks. The
first sort involves placing cards into three piles according to whether participants
agree, disagree, or are neutral about each statement, word, or image (e.g., does this image make you feel happy?). The second sort involves placing cards from
each pile onto a grid that is set up to look like a normal distribution (e.g., with bins
ranging from very happy to very unhappy). The placement of each card reflects a
more fine-grained subjective opinion than the first sort (Brown, 1996).
This study method also uses a Likert-type scale distribution, meaning that in
a distribution ranging from −4 to +4, participants would have 9 columns in which
to place their cards. One advantage of Q-methodology is that all of the responses
have to fit in a card-matrix or distribution grid that can be designed to prevent the
floor and ceiling effects often found in Likert-type studies. The number of rows
can vary based on how many cells the researcher chooses to add to each column.
For example, in a normal distribution grid, the fewest cells appear in the most extreme end columns (i.e., −4, +4) of the Likert-type scale, and the most rows appear in the center column (i.e., 0) (Barker, 2008; Serfass & Sherman, 2013).
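As an illustration of how such a forced distribution can be laid out, the sketch below encodes column capacities for a hypothetical 9-column quasi-normal grid. The specific cell counts are our own example, not values prescribed by Q-methodology or taken from this paper.

```python
# Hypothetical forced-distribution grid for a 9-column Q-sort (-4 .. +4).
# Capacities rise toward the center column, approximating a normal shape;
# the exact counts here are illustrative only.
GRID_CAPACITY = {-4: 2, -3: 3, -2: 4, -1: 5, 0: 6, 1: 5, 2: 4, 3: 3, 4: 2}

def total_cards(capacity):
    """Number of cards the grid can hold -- must equal the QSet size."""
    return sum(capacity.values())

def is_symmetric(capacity):
    """A quasi-normal grid holds as many cards at -k as at +k."""
    return all(capacity[c] == capacity[-c] for c in capacity)

print(total_cards(GRID_CAPACITY))   # 34 cards fit this example grid
print(is_symmetric(GRID_CAPACITY))  # True
```

Because every card must land in exactly one cell, the QSet for this example grid would need exactly 34 items; the researcher shapes the distribution by choosing these per-column counts.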
To quantify the results, each card (and each empty slot) is assigned a number, which researchers manually record according to its position along the distribution grid. Researchers are then able to view the data in a table, with the statements as columns and the participants as rows. This allows
the data to be viewed in a way that not only determines the individual placement
of each card, but also allows the researcher to find similarities or patterns in the
cards’ placement. This requires the use of an external analysis program to inter-
pret the data but the results have proven to be informative (Block, 2008).
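The participants-by-statements table described above can be sketched as a simple nested mapping. The participant IDs, statement labels, and placements below are made-up illustrations, not data from any study in this paper.

```python
# Sketch of the resulting data table: one row per participant, one column
# per statement, each cell holding the grid column (e.g., -4 .. +4) where
# that participant placed the card. All values here are invented examples.
sorts = {
    "P1": {"s1": -4, "s2": 0, "s3": 3},
    "P2": {"s1": -2, "s2": 1, "s3": 4},
}

def mean_placement(sorts, statement):
    """Average grid position of one statement across all participants."""
    values = [row[statement] for row in sorts.values()]
    return sum(values) / len(values)

print(mean_placement(sorts, "s1"))  # -3.0
print(mean_placement(sorts, "s3"))  # 3.5
```

Row-wise comparisons across this table are what let the researcher look for shared sorting patterns rather than only item-level averages.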

The app for that

The traditional Q-Methodology process and data collection are time-consuming and, as with any human-performed data entry, prone to human error. We wanted
to find a way to streamline the process, allow multiple users to participate at once,
reduce the risk of error, and be able to perform calculations on the data without
having to run it through external analysis programs. The QMethod Software app
was developed to meet all of these goals.
QMethod Software is a complete study management tool that allows
researchers to mimic the traditional in-lab Q-sort study method. As in the traditional Q-sort method, this application provides participants with opportunities
to answer relevant post-sort questions about their unique sorting pattern, and
to complete any study-related surveys within the test environment. The software
provides a secure and automated way to create instructions, send invitation
emails, and obtain consent from participants using an internet-connected
computer/laptop in any location. QSets can be developed using images or text and
are not limited by the number of statements. For example, an ongoing personality study is using QMethod Software with over 100 statements from the California QSet (CQS; Block, 1961; Freberg, Lutfallah, Saling, & Freberg, 2019), which highlights the ability of the software to handle large stimulus sets. The size and shape of the distribution are also configurable, allowing the use of a flat distribution to, for example, assess word ratings equally along a continuum (as in the current study), a normal distribution, or even a bimodal distribution.
In addition to the above testing options, the platform offers a choice of
analysis – data can be analyzed using the software, or raw data can be exported for
use in other statistical analysis programs. QMethod Software analyzes data using
an R-based analysis engine to provide real-time reports as participants submit their
QSorts. The in-app analysis allows researchers to choose the number of factors to extract (i.e., from 2–7), the type of rotation (i.e., varimax, quartimax, promax, oblimin, or cluster; Ahmed, Bryant, Tizro & Shickle, 2012; Barker, 2008; Brown, 1996), and the type of correlation (i.e., Pearson, Kendall, or Spearman; Alberts & Ankenmann, 2001; Brown, 1996).
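The first step of a Q analysis is intercorrelating participants’ sorts. As a minimal sketch of the Pearson option, the function below correlates two sorts in pure Python; it is our own illustration, not the app’s actual R analysis engine.

```python
import math

def pearson(x, y):
    """Pearson correlation between two participants' Q-sorts
    (lists of grid positions for the same statements, in the same order)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two identical sorts correlate near +1; a fully reversed sort near -1.
sort_a = [-4, -2, 0, 2, 4]
r_same = pearson(sort_a, sort_a)        # approximately 1.0
r_flip = pearson(sort_a, sort_a[::-1])  # approximately -1.0
```

Factor extraction and rotation (varimax, promax, etc.) then operate on the full matrix of such person-by-person correlations, which is the step the in-app engine automates.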

Psycholinguistic values using Q-Method

To assess the validity of this new online software application, we used a well-measured characteristic of language (concreteness; Brysbaert et al., 2014; Snefjella, Généreux & Kuperman, 2018) to conduct two separate web-based Q-
Methodology studies (i.e., Study 1 & Study 2). These studies measured partic-
ipants’ concreteness ratings for a list of words for which concreteness ratings
obtained from a more traditional method are known (Brysbaert et al., 2014).
Concreteness describes the relative tangibility of words in the English language
(Warriner et al., 2013). Values for each characteristic of language are generated through human ratings of words, as in the corpus of 40 thousand English lemmas (Brysbaert et al., 2014), as well as through computer-generated norms such as the 400-million-token Corpus of Historical American English (COHA; Davies, 2010).
Human-rated corpora have been collected using methods such as a 5-point Likert-
type scale (Brysbaert et al., 2014) while the larger corpora have relied on programs
such as SocialSent (Snefjella et al., 2018) to estimate concreteness ratings for words
across time. Our goal here is to test the app against these well-established values as
a simple demonstration of its effectiveness.

Method

We conducted an in-lab study using two lists of words. In this study participants
used QMethod Software on lab computers to sort a set of 55 words on concrete-
ness. Both lists used a unique set of words that were randomly selected from
the approximately 40,000 English lemmas that had previously been rated for
concreteness by Brysbaert and colleagues (2014).

Forty-six English-as-first-language undergraduate students from the University of Windsor were recruited using the Psychology department’s participant pool website. The first 31 students were assigned to List 1 and the next 15 were assigned to List 2 to verify that the effect was not list specific.
Participants rated the items on a concreteness/abstractness continuum. For the purpose of this study, abstractness was defined as something that does not have physical properties (i.e., “When you think of this word, is it something that you can hear, see, feel, taste, or smell? If the answer is no, this should be considered an abstract word.”). Concreteness was defined as having physical properties (i.e., “When thinking of this word, is it something you can hear, see, feel, taste, or smell? If the answer is yes, it should be considered a concrete word.”). Neutral words were defined as words that participants felt did not meet the definition of either abstractness or concreteness as defined in this study.
Participants sat in front of a PC and were given a unique participation code. Prior to entering this code, participants were told that the instructions
they would see after entering this code were lengthy, but that it was important for
them to read them carefully to be sure that they understood what was being asked
of them in the two parts of the task that would follow. For part 1 the participants
were directed to a Pre-Sort screen where they sorted a group of 55 words into
three piles depending on their opinion of the word in terms of concreteness: If the
participant thought the word was concrete they clicked on the “thumbs up” icon,
if they thought it was not concrete they clicked on the “thumbs down” icon, and if
they were unsure or felt neutral about it, they clicked on the “question mark” icon.
See Figure 1 for an example of the Pre-Sort page.

Figure 1. Pre-Sort page in QMethod software (https://app.qmethodsoftware.com)


After all words were sorted in the pre-sort phase, the participants saw the Final-Sort page, where cards from each of the three piles were placed into columns within the distribution. If a participant found a word to be highly abstract, they placed it in the −5 column; if a little less abstract, in the −4 column. Similarly, if they found a word to be highly concrete, they placed it in the +5 column, whereas if they saw it as only marginally concrete, they placed it in the +1 column. All of the cards were placed in columns according to the abstractness or concreteness of each word relative to all of the other words, and participants could adjust their ratings if they wanted. Each column in the distribution contained 5 rows. See Figure 2 for an example of the Final-Sort page.

Figure 2. Final-Sort page in QMethod software (https://app.qmethodsoftware.com)
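The flat distribution used in this final sort can be checked arithmetically: 11 columns (−5 to +5) with 5 rows each yields exactly one slot per word in the 55-word QSet. A quick sketch of that check:

```python
# The final sort uses a flat distribution: 11 columns (-5 .. +5)
# with 5 rows each, which exactly fits the 55-word QSet.
columns = list(range(-5, 6))  # -5, -4, ..., +4, +5
rows_per_column = 5

capacity = len(columns) * rows_per_column
print(len(columns))  # 11 columns
print(capacity)      # 55 slots, one per word
```

A flat grid like this rates every word along the continuum with equal weight, in contrast to the quasi-normal grids used in traditional Q-sorts.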

Results

Pearson correlations were performed to determine whether the web-based Q-Method word ratings approximate the concreteness ratings found in the corpus of 40,000 English lemmas (Brysbaert et al., 2014). The mean of the ratings in Study 1 was 2.34 (min = −3.29, max = 4.13) and the standard deviation was 2.03. The mean of the ratings in Study 2 was −1.17 (min = −3.13, max = 3.73) and the standard deviation was 0.52. Of particular interest in this study was the extent to
which these ratings mapped onto the well-accepted concreteness ratings of Brysbaert and colleagues. The results revealed very high correlations in both studies. Our ratings in Study 1 (r = .91, p < .001) predicted 82% of the variance of the ratings in the Brysbaert et al. (2014) data set, and for Study 2 (r = .92, p < .001) our ratings predicted 85% of the variance of the ratings in the Brysbaert et al. (2014) data set for those same words.
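The variance-explained figures follow directly from the correlations, since the proportion of variance one variable predicts in another is r squared. A sketch of that arithmetic (with the two-decimal correlations reported above, r² comes out near .83 and .85; the exact 82%/85% in the text presumably reflect the unrounded correlations):

```python
def variance_explained(r):
    """Proportion of variance in one set of ratings predicted by the
    other: the square of their Pearson correlation."""
    return r ** 2

# Using the two-decimal correlations reported in the Results.
v1 = variance_explained(0.91)  # approximately 0.83
v2 = variance_explained(0.92)  # approximately 0.85
```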

Discussion

This online Q-Method app provides a valid new way to measure subjective views
in psychological studies. The web application has been shown to be a reliable
way to collect concreteness ratings, and we anticipate that the same would be true across a range of psycholinguistic variables where Likert-type scales would traditionally be used. The benefits of this method over Likert-type scales include the fact that the items can be considered in relative terms, with less extreme or truncated ratings.
This paper marks the introduction of the QMethod Software using a well-
studied psycholinguistic characteristic as a test case. We have shown that
QMethod Software provides ratings that correlate strongly with ratings obtained
by Brysbaert et al. (2014). The app is easy to use, the data analyses are reliable, and, importantly, feedback from our participants indicated that they found the app engaging. Participants enjoyed the rating activity in a way that suggests there is a certain amount of gamification present in the app.
Researchers interested in using this app can find it at https://qmethodsoftware.com. The appropriate citation is this paper.

Funding

This research was supported by a Social Sciences and Humanities Research Council Partner-
ship grant (Words in the World – 895-2016-1008), http://www.sshrc-crsh.gc.ca/

References

Ahmed, S., Bryant, L., Tizro, Z., & Shickle, D. (2012). Interpretations of informed choice in
antenatal screening: A cross-cultural, Q-methodology study. Social Science & Medicine,
74(7), 997–1004. https://doi.org/10.1016/j.socscimed.2011.12.021
Alberts, K., & Ankenmann, B. (2001). Simulating Pearson’s and Spearman’s Correlations in Q-
Sorts Using Excel. Social Science Computer Review, 19, 221–226.
https://doi.org/10.1177/089443930101900208
Barker, J. (2008). Q-methodology: An alternative approach to research in nurse education.
Nurse Education Today, 28, 917–925. https://doi.org/10.1016/j.nedt.2008.05.010
Block, J. (1961). The Q-sort method in personality assessment and psychiatric research. Springfield, IL: Charles C Thomas. https://doi.org/10.1037/13141-000
Block, J. (2008). The Q-sort in character appraisal: Encoding subjective impressions of persons quantitatively. Washington, DC: American Psychological Association. https://doi.org/10.1037/11748-000
Brown, S. (1996). Q methodology and qualitative research. Qualitative Health Research, 6, 561–567. https://doi.org/10.1177/104973239600600408
Brysbaert, M., Warriner, A., & Kuperman, V. (2014). Concreteness ratings for 40 thousand
generally known English word lemmas. Behavior Research Methods, 46, 904–911.
https://doi.org/10.3758/s13428‑013‑0403‑5
Davies, M. (2010). The Corpus of Historical American English: 400 million words, 1810–2009. Retrieved from http://corpus.byu.edu/coha/
Freberg, K., Lutfallah, S., Saling, K., & Freberg, L. (2019). What Makes an Influencer
Influential? Using the California Q-sort to Predict Social Media Influence. Manuscript in
preparation.
Ghandour, B., Donner, M., Ross-Nash, Z., Hayward, M., Pinto, M., & DeAngelis, T. (2018). Perfectionism in past and present anorexia nervosa. North American Journal of Psychology, 20(3), 671–690.
Goodrich, M. (2016). Exploring school counselors’ motivations to work with LGBTQQI students in schools: A Q methodology study. Professional School Counseling, 21(1a). https://doi.org/10.5330/1096-2409-20.1a.5
Ho, G. W. K. (2017). Examining perceptions and attitudes: A review of Likert-type scales versus
Q-methodology. Western Journal of Nursing Research, 39(5), 674–689.
https://doi.org/10.1177/0193945916661302
Marmolejo-Ramos, F., Correa, J. C., Sakarkar, G., Ngo, G., Ruiz-Fernández, S., Butcher, N., &
Yamada, Y. (2017). Placing joy, surprise and sadness in space: A cross-linguistic study.
Psychological Research, 81(4), 750–763. https://doi.org/10.1007/s00426‑016‑0787‑9
Serfass, D., & Sherman, R. (2013). A methodological note on ordered Q-Sort ratings. Journal
Of Research In Personality, 47(6), 853–858. https://doi.org/10.1016/j.jrp.2013.08.013
Snefjella, B., Généreux, M., & Kuperman, V. (2018). Historical evolution of concrete and abstract language revisited. Behavior Research Methods. https://doi.org/10.3758/s13428-018-1071-2
Warriner, A., Kuperman, V., & Brysbaert, M. (2013). Norms of valence, arousal, and
dominance for 13,915 English lemmas. Behavior Research Methods, 45(4), 1191–1207.
https://doi.org/10.3758/s13428‑012‑0314‑x
Yarar, N., & Orth, U. R. (2018). Consumer lay theories on healthy nutrition: A Q methodology
application in Germany. Appetite, 120(Complete), 145–157.
https://doi.org/10.1016/j.appet.2017.08.026

Address for correspondence

Lori Buchanan
Department of Psychology
University of Windsor
401 Sunset Ave
Windsor, ON N9B 3P4
Canada
[email protected]

Co-author information

Susan Lutfallah
Department of Psychology
University of Windsor
[email protected]
