Digital Literacy For Secondary School Students Usi
https://www.scirp.org/journal/ce
ISSN Online: 2151-4771
ISSN Print: 2151-4755
Robin Cohen1, Alexandre Parmentier1, Glaucia Melo1, Gaurav Sahu1, Aswin Annamalai1,
Sheldon Chi1, Trevor Clokie1, Amir Farrag1, Abdul Naik1, Syed Naseem1, Shikhar Sakhuja1,
Jean Wang1, Rich Clausi2, Anita Santin2
1. Introduction
In this paper, we present novel approaches for educating secondary school students about the origins of online misinformation, promoting greater scrutiny of the credibility of sources. In particular, the proposals offered integrate techniques for analyzing content, enabling engaging interaction, and revealing key examples.
Background
Establishing digital literacy has quickly become an important concern for today’s
youth. According to a study by the Pew Research Center in the USA in 2018 (M.
Anderson & Jiang, 2018), 95% of teens now report that they have a smartphone
or access to one, and 45% say that they are online on a near-constant basis.
There has, as well, been a significant increase in teenage users of social media
(more than a 100% increase over the past four years). The preponderance of misinformation is also alarming. A study by Vosoughi, Roy, & Aral (2018), conducted by analyzing the diffusion of news stories on Twitter from 2006 to 2017, suggests that false news spreads farther, faster, deeper, and more broadly than the truth, especially for political news. And yet a report released by the Stanford
History Education Group (SHEG) in 2015 (Wineburg, McGrew, Breakstone, &
Ortega, 2016) shows a dismaying inability by students to reason about information that they see on the Internet, including an absence of effort to do fact-checking by visiting alternate sites (Wineburg & McGrew, 2016). Students, for example, had a hard time distinguishing advertisements from news articles or identifying where information came from. The report mentions “...students may fo-
cus more on the content of social media posts than on their sources” and also
“...despite their fluency with social media, many students are unaware of basic
conventions for indicating verified digital information”.
This report chronicles the Stanford effort to engage middle school, high school
and college students in determining whether sources of information were credi-
ble or not. This includes both asking students to judge posts in social media en-
vironments such as Twitter and Facebook, and challenging them to identify wheth-
Figure 1. Teenagers prefer to shop online according to Pew research (M. Anderson & Jiang, 2018).
Figure 2. Most commonly used social media according to Pew research (M. Anderson & Jiang, 2018).
Figure 9. Working of the “Help” function; this figure is best read in colour.
• Each student will be asked both to assess the truthfulness of a statement made online and to reflect on the origin of their conclusions. Basically, given a claim and the three evidence articles, the student now needs to classify the claim as fake or true and select an evidence link from the provided list that they believe is valid evidence that can back their decision. To avoid confusion, there will be only one whitelisted evidence link among the provided list of articles.
For each claim, there will be a “Help” checkbox (described in more detail in
the following subsection), which when chosen, will highlight important chunks
in the provided text as shown in Figure 9. A scoring system could be introduced
if running the quiz as a game. For instance: each correct answer scores +2 points,
whereas each incorrect answer scores 0; furthermore, students lose 1 point if they select a link from the blacklist, and 0.5 points if they choose a link that is neither in the blacklist nor the whitelist. A leaderboard will be maintained for the entire
class with the student with the most points on top.
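The scoring rules above can be sketched as follows; the function name and data structures are illustrative assumptions rather than part of the proposed implementation.

```python
# A minimal sketch of the scoring rules described above. The helper name
# and the whitelist/blacklist sets are hypothetical, not the paper's code.

def score_response(correct_label, chosen_label, chosen_link,
                   whitelist, blacklist):
    """Score one (claim classification, evidence link) response."""
    score = 0.0
    # +2 for a correct fake/true classification, 0 otherwise.
    if chosen_label == correct_label:
        score += 2.0
    # Penalties depend on where the chosen evidence link comes from.
    if chosen_link in blacklist:
        score -= 1.0      # blacklisted source: lose 1 point
    elif chosen_link not in whitelist:
        score -= 0.5      # neither whitelisted nor blacklisted: lose 0.5
    return score
```

For example, a correct classification backed by the whitelisted link scores 2.0, while a correct classification backed by an unlisted link scores 1.5.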
A student is, therefore, forced to develop the required “fact-checking” skills
while competing with their classmates. We believe that gamification of the idea
will encourage students to really focus on the given task, which in turn, will help
them identify the attributes of fake content online. We now briefly discuss the
NLP-based backend for the proposed game.
As training examples are fed to this network, the system progressively determines the basis for deciding whether an article (a test case) is true, so that judgement can be passed on an entirely new example later. Our backend is made up of Long Short-Term Memory (LSTM) cells (Hochreiter & Schmidhuber, 1996) coupled with an attention mechanism (Bahdanau, Cho, & Bengio, 2014). An LSTM is a special type of recurrent neural network (RNN) which can
learn a meaningful representation4 of a given piece of text (claims and articles, in
our case). Amongst the different types of RNNs available, we chose LSTMs, as
web articles can be quite long and LSTMs are known to capture long term con-
text in a given piece of text. However, there is still a limit to the range of text
they can cover. Therefore, we add an attention mechanism (attn.) to the network, which provides a soft pointer telling the model which part of the text to focus on while making a prediction.
We train our network on the Leaders Prize dataset5, which is a collection of
articles published on various fact-checking websites, such as Politifact, Snopes, and The Weekly Standard. Each data sample contains the following fields: a claim, a claimant, some evidence articles, and finally, a label (among 0, 1, and 2, where 0 denotes False, 2 denotes True, and 1 denotes a Neutral claim) that was scraped from the article’s webpage. In order to run our NLP-based backend, we feed the
claim and evidence articles as input to our network and the provided labels as
output. We perform some preprocessing on the text, lowercasing all words and normalizing them to remove any non-ASCII characters. Our algorithm is intentionally a simplistic baseline for predicting truthfulness, but the inclusion of the attention mechanism then enables us to highlight a subset of text to students, as outlined below. We achieve an F1-score of 48.21% on the dataset.

4) Vectors, to be more precise.
5) Coauthor G. Sahu is involved in running the Leaders Prize competition through the AI Institute of the University of Waterloo. Entrants are tasked with properly identifying whether a source is true or false, on a large set of sample texts. See https://leadersprize.truenorthwaterloo.com/en/.
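The preprocessing step (lowercasing and removing non-ASCII characters) can be sketched as follows; the exact pipeline the backend uses may differ.

```python
import re
import unicodedata

# An illustrative sketch of the described preprocessing: lowercase the
# text, normalize it, and strip any non-ASCII characters. This is an
# assumption about the pipeline, not the authors' exact code.

def preprocess(text: str) -> str:
    text = text.lower()
    # Decompose accented characters (e.g. "é" -> "e" + combining accent)
    # so the base letter survives when non-ASCII bytes are dropped.
    text = unicodedata.normalize("NFKD", text)
    text = text.encode("ascii", "ignore").decode("ascii")
    # Collapse any runs of whitespace left behind.
    return re.sub(r"\s+", " ", text).strip()
```

For instance, `preprocess("Café  NEWS!")` yields `"cafe news!"`.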
We use PyTorch (Paszke et al., 2019) to implement the LSTM + attn. network
and train our architecture on a TitanXp GPU for ~3 hours.
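The LSTM + attn. architecture might be sketched in PyTorch roughly as follows; the class name, hyperparameters, and layer choices are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

# A minimal LSTM + attention claim classifier over tokenized text.
# Hyperparameters and structure are hypothetical placeholders.

class LSTMAttnClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256,
                 num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)           # scores each time step
        self.fc = nn.Linear(hidden_dim, num_classes)   # 0=False, 1=Neutral, 2=True

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))                   # (B, T, H)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (B, T)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)          # (B, H)
        return self.fc(context), weights  # class logits + attention weights

model = LSTMAttnClassifier(vocab_size=10000)
logits, attn_weights = model(torch.randint(0, 10000, (4, 50)))
```

Returning the attention weights alongside the logits is what later lets the "Help" feature highlight the text positions the model attended to.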
We can now use the trained LSTM + attn. network in the backend of our app.
To be more precise, we will use it in the implementation of the “Help” checkbox.
Note that we only use the attention module of the trained network, even though it can also predict the truth rating of a claim; we already know the truth-ratings when choosing claims and articles for the game. We
only need an automated system that can identify important chunks of text in a
large body of text. The working algorithm of the “Help” box for a given claim
and a set of evidence articles is shown in Figure 9 and described in Figure 11.
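Once attention weights are available, the "Help" highlighting can be sketched as below; the top-k selection scheme and function name are illustrative assumptions, not the exact algorithm of the figures.

```python
# A sketch of using attention weights to pick chunks to highlight for the
# "Help" checkbox. Selecting the top-k attended tokens is an assumed
# heuristic, not the paper's exact method.

def chunks_to_highlight(tokens, attn_weights, top_k=3):
    """Return indices of the top_k tokens by attention weight,
    sorted back into reading order, as candidate highlights."""
    ranked = sorted(range(len(tokens)), key=lambda i: attn_weights[i],
                    reverse=True)
    return sorted(ranked[:top_k])

tokens = ["the", "mayor", "denied", "the", "claim", "entirely"]
weights = [0.05, 0.30, 0.35, 0.05, 0.20, 0.05]
print([tokens[i] for i in chunks_to_highlight(tokens, weights)])
# → ['mayor', 'denied', 'claim']
```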
The flow chart of the complete game is shown in Figure 8. Note that the final
response box here is the output provided by the student, both deciding true or
false and identifying the article that best drives this conclusion.
We note from its performance that our LSTM + attn. model does not give us the best F1-score, but it can still be used to get some indication of the important chunks in a given piece of text. With a suitable user interface for the described game, we can make the content more engaging and will have succeeded in fitting current technology into the education system so that the young generation can leverage its benefits.
In this section, we elaborate on how technology can be used as part of the educa-
tion of students about digital misinformation. More specifically, we propose us-
ing games in order to deliver some of the lessons to students, in a setting where
students are asked to respond to questions and educators are responsible for di-
recting class discussion, once the results from the games are revealed. This discussion gives students an opportunity to put the learned theory into practice and to clarify possible remaining doubts regarding the content presented. We also believe implementing this game quiz
can spark students’ further interest in the topic. The idea is to present questions
regarding the correct identification of elements that characterize a paid adver-
tisement. At the end of the round of questions, there should be a ranking pre-
senting which user (student) scored the highest points. When presenting the
quiz in class, teachers should act as moderators, explaining what should be done
during the quiz, how much time students have to answer each question, how many points are awarded for correct answers, and how to answer the questions, i.e., select the correct answer. When there are many incorrect answers or considerable disagreement on the correct response, the teacher should
take more time to explain why that answer is incorrect and what the rationale
should be to answer that question, using the same example in the question or
others. Students should answer the questions as they appear. Altogether, this
would be a very valuable opportunity for discussion and peer-based learning, as
proposed in (Crouch & Mazur, 2001) and (Mazur & Hilborn, 1997). Students
could, for example, confer before selecting their individual choices.
Kahoot! (http://www.kahoot.com/) is a game-based learning platform where
users can create a quiz with customized questions, and respondents can interact
using an internet browser (e.g., Chrome, Firefox), allowing the interaction in
desktops, mobile phones, tablets or any device that supports internet browsers.
Kahoot! is an interesting option to be presented as an example and even used in
classrooms, as it can be easily adopted and maintained. Using Kahoot! as an example, an image of a question we would put to students is presented in Figure 12;
students would be asked to reflect on whether the content is a paid ad.
Figure 13. Mock up of the app: The user is shown a fact on the screen followed by some
evidence articles. In the current mock-up, the user needs to scroll down the screen to see
the rest of the evidence articles. Note that only one of the provided evidence articles
comes from a trustworthy (whitelisted) source. Given this information, the user now
needs to select a source and tap the red “X” if they think the shown fact is fake, or the green heart otherwise.
• Students are not allowed to use electronic devices: In this setting, we can
use an iClicker-type device (Bruff, 2009) which has a unique ID that can be
used to “add” a student in the class and receive their responses. Once all the
students are added to “DCR-01,” the teacher now projects the facts one-by-one
onto a screen using a projector; the screen-layout in this case would be iden-
tical to the previous case, only the question is now being shown as a presen-
tation as opposed to in a game. Once the students go through the entire con-
tent of the screen, they need to register their two responses using the iClick-
er-type device.
Finally, when the game concludes, the teacher will have access to a leaderboard
corresponding to the game session that can be shown to the students. Addition-
ally, the teacher would also receive a detailed report which would contain every
student’s submitted response so that, in the future, they can help the students
improve by educating them about their mistakes.
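The end-of-game leaderboard and per-student report might be assembled from recorded responses as sketched below; the data layout and function name are illustrative assumptions.

```python
from collections import defaultdict

# A sketch of aggregating recorded responses into (a) a leaderboard for
# the class and (b) a per-student report for the teacher. The tuple
# layout is a hypothetical choice for illustration.

def summarize_session(responses):
    """responses: list of (student, claim_id, points, answer) tuples."""
    totals = defaultdict(float)
    report = defaultdict(list)
    for student, claim_id, points, answer in responses:
        totals[student] += points
        report[student].append((claim_id, answer, points))
    # Leaderboard: students sorted by total points, highest first.
    leaderboard = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return leaderboard, dict(report)

session = [("Ana", 1, 2.0, "true"), ("Ben", 1, -1.0, "fake"),
           ("Ana", 2, 1.5, "fake"), ("Ben", 2, 2.0, "true")]
leaderboard, report = summarize_session(session)
# leaderboard → [("Ana", 3.5), ("Ben", 1.0)]
```

The per-student report retains each submitted answer, which is what allows the teacher to revisit individual mistakes after the game.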
Educators at the national level have already begun to discuss an important agenda
within schools for the general topic of digital and media literacy, where good ci-
tizenship online may be promoted but where consumer awareness is also a cru-
cial consideration (MediaSmarts, n.d.). Educating about the credibility of infor-
mation would fit nicely here. A final obvious home for educating students about
digital misinformation within the Ontario curriculum would be as part of Tech-
nological Education and the Inherent Roles and Responsibilities which are out-
lined. The Computer Technology outline for Grade 11 and 12, for instance, lists
expectations surrounding Technology and Society, including discussing drawbacks for society.
Another possible option for delivery is an online course dedicated to digital
literacy, one that could perhaps be mandatory for all students to complete as
part of their graduation requirements. There is already some precedent for ob-
liging all students to pass the Ontario Secondary School Literacy Test (OSSLT)
(Education Quality and Accountability Office, 2020) in Grade 10, so that expand-
ing the vision of literacy to the digital world is not unreasonable. Online delivery
of the topics proposed in this paper could also be achieved in a course that is op-
tional and that may be selected as part of the looming mandate of the Ontario
government that all secondary students complete at least 2 courses online (Miller,
2020). An e-learning course on cyber safety might actually appeal to students.
This could still be arranged as an informational course, with online grading as each module is completed, and would not be meant as a prerequisite to other courses. Aiming for an exposure-level half-credit course to be completed by graduation, so that students will have contact with the essential basic information, may in fact be ideal.
The course mandate could even expand to continue to educate students about
cybersecurity as well, a topic recently outlined in Blackberry’s 2020 Threat Re-
port (BlackBerry, 2020).
Several cautions apply for the suggestions of running competitive games in a
classroom setting. The focus should not be on whether a student has won or lost,
but instead retention of the primary lessons about misinformation and credibility
should be paramount. Whether gamification in education is a positive or a nega-
tive is still a topic of much discussion amongst educators (Walker, 2014; Zaino,
2013; Cantador & Conde, 2010). It is also important to note that most course
curricula are already quite packed, leaving little room for expansion of content.
In order for these additions to be contemplated, educators, school boards and
the Ontario Ministry of Education will need to examine the larger picture for
that subject’s curriculum.
The primary advantage of the methods we propose for educating Ontario Sec-
ondary School students about credibility of online information, is to cover a
breadth of contexts that are particularly relevant to this demographic, as well as
to reveal specific issues such as fake reviews and targeted advertising that go
beyond the concerns drawn out by the Stanford project (Wineburg et al., 2016,
2020). Our approach also introduces some gamification in order to engage the
students, and supports running this education on mobile devices.
It is important to note that the proposals presented in this paper were created
by a team of computer scientists and educators. One of the coauthors in fact is a
computer science graduate student who previously was a teacher in the Ontario
secondary school system. One very significant hallmark of our work is clarifying
how to leverage technical solutions towards the teaching of this very important
topic (which also has a specific technical bent). But we integrate as well reflec-
tion on best practices in education, when clarifying how we propose to deliver
each of the different student experiences that we are creating.
We cannot emphasize enough the need to educate young people today specif-
ically about the topic of digital misinformation. According to a Pew Research
Report in 2017 (J. Anderson & Rainie, 2017), experts are divided about whether false narratives online will increase or diminish. Some promise may even poten-
tially come from automated approaches such as using artificial intelligence trust
modeling methods to identify questionable sources through an analysis of con-
tent (Ohashi, Cohen, & Fu, 2017). But an entire subgroup of the prominent res-
pondents of the Pew survey believed that the information environment will im-
prove explicitly because people will become more adept at sorting through ma-
terial. They went on to explain that information literacy must be elevated to be a
primary goal at all levels of education. Outlining the steps that can be taken to
realize this important achievement is the primary topic of this paper.
New platforms for consuming information and for communicating may become the order of the day, tomorrow. While
the methods and approaches presented in this paper are designed to still be func-
tional in entirely new contexts, it will always be useful for educators and tech-
nology experts to be aware of the current attention of youth. Valuable opportunities to bring together researchers from different disciplines have occurred recently, such as Joel Breakstone’s introduction to the Stanford History project at the 2017 Weblogs and Social Media conference (ICWSM 2017 Workshop on Digital Misinformation, 2017; Ciampaglia, Mantzarlis, Maus, & Menczer, 2018), a venue with heavy participation from computer scientists, and efforts such as these must continue.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this pa-
per.
References
Alexander, J. (2018). The Bizarre Justin Bieber Burrito Incident Reminds Us Not to Be-
lieve Everything Online. The Verge.
https://www.theverge.com/2018/10/29/18037402/justin-bieber-burrito-yes-theory-pran
k-youtube
Anderson, J., & Rainie, L. (2017). The Future of Truth and Misinformation Online. Pew
Research Center.
https://www.pewresearch.org/internet/2017/10/19/the-future-of-truth-and-misinforma
tion-online
Anderson, M., & Jiang, J. (2018). Teens, Social Media & Technology. Pew Research Center.
https://www.pewresearch.org/internet/2018/05/31/teens-social-media-technology-2018
Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473.
BlackBerry, C. T. (2020). The Blackberry Cylance 2020 Threat Report.
https://www.cylance.com/en-us/resources/knowledge-center/2020-threat-report.html
Bruff, D. (2009). Teaching with Classroom Response Systems: Creating Active Learning
Environments. Hoboken, NJ: John Wiley & Sons.
Cantador, I., & Conde, J. M. (2010). Effects of Competition in Education: A Case Study in
an e-Learning Environment. In Proceedings of the 2010 IADIS International Confe-
rence on e-Learning (pp. 11-18). Freiburg, Germany.
Ciampaglia, G. L., Mantzarlis, A., Maus, G., & Menczer, F. (2018). Research Challenges of
Digital Misinformation: Toward a Trustworthy Web. AI Magazine, 39, 65-74.
https://doi.org/10.1609/aimag.v39i1.2783
Crouch, C. H., & Mazur, E. (2001). Peer Instruction: Ten Years of Experience and Re-
sults. American Journal of Physics, 69, 970-977. https://doi.org/10.1119/1.1374249
Das, R., Munkhdalai, T., Yuan, X., Trischler, A., & McCallum, A. (2018). Building Dy-
namic Knowledge Graphs from Text Using Machine Reading Comprehension.
Davis, C. A., Varol, O., Ferrara, E., Flammini, A., & Menczer, F. (2016). Botornot: A Sys-
tem to Evaluate Social Bots. In Proceedings of the 25th International Conference on
World Wide Web (pp. 273-274). Montreal, Canada. Geneva, Switzerland: International
World Wide Web Conferences Steering Committee.
https://doi.org/10.1145/2872518.2889302
Dellos, R. (2015). Kahoot! A Digital Game Resource for Learning. International Journal of
Instructional Technology and Distance Learning, 12, 49-52.
Donald, B. (2016). Stanford Researchers Find Students Have Trouble Judging the Credi-
bility of Information Online. Stanford Graduate School of Education: Research Stories.
https://ed.stanford.edu/news/stanford-researchers-find-students-have-trouble-judging-
credibility-information-online
Education Quality and Accountability Office (2020). The Ontario School Literacy Test.
https://www.eqao.com/en/assessments/osslt
Elmurngi, E., & Gherbi, A. (2017). Detecting Fake Reviews through Sentiment Analysis
Using Machine Learning Techniques. In IARIA 2017 Conference on Data Analytics
(pp. 65-72). Barcelona, Spain: IARIA XPS Press.
Global News (2018). Here’s How a Fake Photo of Justin Bieber Eating a Burrito Fooled
the World. https://globalnews.ca/news/4606909/justin-bieber-burrito
Hochreiter, S., & Schmidhuber, J. (1996). LSTM Can Solve Hard Long Time Lag Prob-
lems. In Proceedings of 1996 Neural Information Processing Systems (pp. 473-479).
Denver, United States: MIT Press.
ICWSM 2017 Workshop on Digital Misinformation (2017).
https://cnets.indiana.edu/blog/2016/12/29/icwsm-2017-misinformation-workshop
Jurca, R., & Faltings, B. (2009). Mechanisms for Making Crowds Truthful. Journal of Ar-
tificial Intelligence Research, 34, 209-253. https://doi.org/10.1613/jair.2621
Mazur, E., & Hilborn, R. C. (1997). Peer Instruction: A User’s Manual. Upper Saddle
River, NJ: Prentice Hall. https://doi.org/10.1063/1.881735
MediaSmarts (n.d.). Digital Literacy Fundamentals. Canada’s Centre for Digital and Me-
dia Literacy.
https://mediasmarts.ca/digital-media-literacy/general-information/digital-media-litera
cy-fundamentals/digital-literacy-fundamentals
Miller, J. (2020). Plan for Mandatory Online High School Courses Shrouded in Mystery.
Ottawa Citizen.
https://ottawacitizen.com/news/local-news/plan-for-mandatory-online-high-courses-s
hrouded-in-controversy-and-mystery
Mukherjee, A., Liu, B., & Glance, N. (2012). Spotting Fake Reviewer Groups in Consumer
Reviews. In Proceedings of the 21st International Conference on World Wide Web (pp.
191-200). Lyon, France: Association for Computing Machinery.
https://doi.org/10.1145/2187836.2187863
Nguyen, T., Rosenberg, M., Song, X., Gao, J., Tiwary, S., Majumder, R., & Deng, L. (2016). MS MARCO: A Human-Generated Machine Reading Comprehension Dataset. arXiv preprint arXiv:1611.09268.
Ohashi, D., Cohen, R., & Fu, X. (2017). The Current State of Online Social Networking
for the Health Community: Where Trust Modeling Research May Be of Value. In Pro-
ceedings of the ACM 2017 International Conference on Digital Health (pp. 23-32).
London, UK: Association for Computing Machinery.
https://doi.org/10.1145/3079452.3079462
Ontario Ministry of Education (2020). The Ontario Curriculum: Secondary.
http://www.edu.gov.on.ca/eng/curriculum/secondary
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Chintala, S. et al.
(2019). Pytorch: An Imperative Style, High-Performance Deep Learning Library. In H.
Wallach, H. Larochelle, A. Beygelzimer, F. D. Alche’-Buc, E. Fox, & R. Garnett (Eds.),
Advances in Neural Information Processing Systems 32 (pp. 8024-8035). New York:
Curran Associates, Inc.
Pesce, N. L. (2019). Kylie Jenner Can Make More Money in One Instagram Post than
Many People Earn in a Lifetime. MarketWatch.
https://www.marketwatch.com/story/kylie-jenner-can-make-more-money-in-one-insta
gram-post-than-many-people-earn-in-a-lifetime-2019-07-23
Picchi, A. (2019). Buyer Beware: Scourge of Fake Reviews Hitting Amazon, Walmart and
Other Major Retailers. CBS Interactive.
https://www.cbsnews.com/news/buyer-beware-a-scourge-of-fake-online-reviews-is-hit
ting-amazon-walmart-and-other-major-retailers
Talwar, A., Jurca, R., & Faltings, B. (2007). Understanding User Behavior in Online
Feedback Reporting. In Proceedings of the 8th ACM Conference on Electronic Com-
merce (pp. 134-142). San Diego, United States: Association for Computing Machinery.
https://doi.org/10.1145/1250910.1250931
Vosoughi, S., Roy, D., & Aral, S. (2018). The Spread of True and False News Online. Science,
359, 1146-1151. https://doi.org/10.1126/science.aap9559
Walker, T. (2014). Gamification in the Classroom: The Right or Wrong Way to Motivate
Students. NeaToday: News and Features from the National Education Association.
Wineburg, S., & McGrew, S. (2016). Why Students Can’t Google Their Way to the Truth.
Education Week, 36, 22-28.
Wineburg, S., Breakstone, J., Smith, M., McGrew, S., Ortega, T., & Kerr, D. (2020). COR:
Civic Online Reasoning Curriculum Resources.
https://cor.stanford.edu/?page=0main-content
Wineburg, S., McGrew, S., Breakstone, J., & Ortega, T. (2016). Evaluating Information:
The Cornerstone of Civic Online Reasoning. Stanford Digital Repository.
Wong, Q. (2019). Deepfakes Are Coming. Facebook, Twitter and Youtube Might Not Be
Ready. CNET.
https://www.cnet.com/news/facebook-twitter-and-youtube-grapple-with-altered-video
s-ahead-of-the-2020-election
Zaino, J. (2013). The Pros and Cons of Gamification in the Classroom. EdTech Magazine.
https://edtechmagazine.com/higher/article/2013/07/pros-and-cons-gamification-classroom