Measuring Engagement: Learning Analytics in Online Learning
Griff Richards
Thompson Rivers University
Kamloops, BC, Canada
[email protected]
Abstract: The growing number of on-line interactions between learners and on-line learning
systems leaves a trail of data that can be analyzed at a number of levels of granularity and for
several purposes. Learner engagement in the program of studies, with the course or with a
specific learning activity is one variable thought to highly correlate with learner success.
Engagement may be with the learning content, or with other learners and faculty in the
socially constructed environment. This paper looks at three specific areas where learner
engagement can be measured, and discusses the possibilities for using such information in
advising learners and faculty on ways to strengthen course outcomes.
Keywords: learning analytics, learner engagement, student success, on-line learning
There is an idea in education that the more engaged a learner is with their
learning and with their peers, the better their learning will be. As early as 1980,
Richard Snow, having exhausted aptitude treatment interaction (ATI) studies,
remarked that the only thing that seemed to matter was the degree of interaction
the learner had with the content and with other learners. Interaction seemed to
increase internalization of the content and resulted in deeper learning and better
test scores. Of course, engagement is not the only factor for promoting and
predicting learning outcomes. Ability (aptitude), prior knowledge and prior success
in learning (achievement) are well regarded as major factors. However, engagement
seems to be the current focus, partly because of the extraordinary work of George
Kuh (2004) in developing and deploying instruments to measure student
engagement in colleges and high schools, and demonstrating positive correlations of
student engagement with student retention and academic success.
Measuring engagement is a key step towards improving it. But to be truly effective
with engagement data, learning analytic systems also need to trigger appropriate
responses for both learners, and the instructional systems. The goal of this paper is
to explore the concept of engagement in on-line learning, and to look at the growing
approach of learning analytics in monitoring, measuring and responding to learner
engagement.
What is Engagement?
Engagement is most often associated with personal involvement and commitment.
To be completely engaged in a book, a computer game or a conversation involves
intense attention and often the exclusion of other stimuli – a state often referred to
as “flow”. There are a number of definitions of engagement in the context of
learning. Chapman (2003) talks about active participation and cognitive investment.
Engagement can involve individual attention or it can reflect participation in a
group. Kuh, Kinzie, Buckley, et al. (2006) related learning engagement to student
effort in both learning activities and other non-academic campus activities. They
identified five factors of engagement: active and collaborative learning, student-
faculty interaction, supportive campus environment, enriching educational
experiences, and the level of academic challenge. Ross (2009) noted that certain
conditions and activities are needed for students to be engaged in learning. These
include structures such as classes, cultures and expectations of behaviour,
relationships between learners or between learners and their instructors,
motivation of the individuals, and factors outside the learning system such as family
responsibilities.
In the rapidly growing era of social media, Churchill (2010) noted, "Engagement is
more than the actions of a single actor; it is about social groups and reciprocal
action and responsibility..." Interaction with faculty and classmates can help
establish an identity. Participation in a campus club or sports group can create a
supportive social network. Possibly the most direct action to improve academic
engagement is to make learning activities more engaging by borrowing strategies
from the Cooperative Learning movement of the 1990s. Positive interdependence
in group work and small group sizes improve opportunities for peer interaction, and
active participation in learning activities (Johnson, Johnson & Smith, 1998).
However, to be successful, cooperative learning structures often require that
learners learn how to participate in a cooperative setting and instructors learn how
to teach in a facilitative style. Perhaps we now must also learn to be engaging.
Measuring Engagement
On-line learners can be just as engaged as on-campus learners, although, since most online
learners are older, they tend to be more goal driven and less interested in non-academic
campus activities. The yearly survey method means that the results are not timely
enough to flag and intervene with individual learners who are at risk for failure.
Learning management systems (LMS) capture a lot of data that can be of value in
mapping student engagement and predicting those at risk. Most LMSs
contain an asynchronous forum – a text discussion where participants engage in
online seminars about assigned topics. Such extended chat sessions leave traces that
can be used to analyze the transactions. Each posting is part of a response pattern of
who talks to whom, who provides high-value messages responded to by many, and
who provides low-value messages. By mining this data, the tacit social structure of
the class can be quickly visualized using open source tools such as SNAPP (Bakharia,
Heathcote & Dawson, 2009). SNAPP portrays learners as nodes, linked by lines representing the number
of interactions between them. Figure 1 shows a sample of data from a brief on-line
discussion. Highly active participants appear in the centre of the diagram festooned
with multiple connections, while late-comers and less active learners appear as
sparsely connected dots on the outside of the cloud. The relative size of the dot
increases with the number of connections. The same technique could be applied to
analyzing an instructor’s message traffic to see which learners are engaging with the
teacher. The plot makes it easy to see who is busy, but there is no indication of the
intellectual value of the postings. Once again, we might say that those with more
postings are more engaged, but we do not know the depth of that engagement.
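The bookkeeping behind such a plot is straightforward to sketch. The snippet below is a minimal illustration of the node-and-edge aggregation, not SNAPP's actual implementation; the post log and participant names are invented for the example.

```python
from collections import Counter

# Invented forum log: (author, replied_to) pairs extracted from an
# asynchronous discussion; None marks a thread-starting post.
posts = [
    ("ana", None), ("ben", "ana"), ("cara", "ana"),
    ("ana", "ben"), ("dev", "cara"), ("ben", "cara"),
    ("cara", "ben"),
]

# Edge weight = number of replies exchanged between a pair of learners.
edges = Counter((a, t) for a, t in posts if t is not None)

# Node size in the plot grows with degree: the total number of
# interactions a learner is involved in, as author or as target.
degree = Counter()
for (author, target), weight in edges.items():
    degree[author] += weight
    degree[target] += weight

# Sparsely connected learners (low degree) sit at the edge of the cloud.
for learner, d in degree.most_common():
    print(learner, d)
```

Counting degree says who is busy; as noted above, it says nothing about the intellectual value of the postings.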
Some measures of engagement can be misleading. For example, Beer, Clark and
Jones (2010) plotted the number of LMS page visits
(“hits”) by academic achievement for several thousand students (see Figure 2). They
noted a solid correlation between the number of LMS hits and the student grade. Did
this mean that students who clicked more web pages were more engaged? Perhaps,
but the explanation might simply be that they had to click on the web pages to take
their tests and submit their assignments, and students who completed their work
were more likely to succeed. Beer et al. also noted that students using the
Blackboard LMS generated far more hits than students using the Moodle LMS. Does that
mean Blackboard is a more engaging LMS? No, it reflects that Moodle has a more
efficient access architecture. Learning analytics is not simply about counting hits or
mapping discussions; it is about intelligent and thoughtful interpretation of data in
the context of human activity.
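The hits-by-grades pattern is easy to reproduce on toy data, which makes the interpretive caveat concrete. The figures below are invented for illustration; a high coefficient alone cannot distinguish genuine engagement from mandatory clicking.

```python
from statistics import mean, pstdev

# Invented data echoing the pattern Beer et al. report: LMS page hits
# and final grades for a handful of students.
hits   = [40, 55, 90, 120, 150, 200, 230, 260]
grades = [50, 52, 61, 65, 70, 78, 82, 88]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# A strong r here is consistent with "more engaged", but equally with
# "had to click through pages to submit the work that earned the grade".
r = pearson(hits, grades)
print(f"r = {r:.2f}")
```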
Such tracking data could also be turned back to the learners themselves. A student
logging into the learning management system could be presented with a display that
shows their progress and achievement in comparison to their classmates, or possibly
to all students who have ever taken the course. In the case of a self-paced or
independent study course, the comparison could be made to the historical progress
trends of all previously enrolled students. Of course, not all learners might want
this information; some might prefer to
remain blissfully in the dark rather than face the undue stress and competition that
plague high need-achievement personalities and learners at risk of failure.
However, early detection of risk should be the first line of defense in providing
remedial action. The ethical issue should not be “Should we inform learners of their
standing?” but “What form of remedial action should we counsel them to take?” Is it
possible to have a similar dashboard for engagement – perhaps one that shows the
lack of interaction with other students in group work and the consequences of the
lack of social capital on future income?
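A progress display of this kind reduces, at its simplest, to a percentile rank against the cohort. The sketch below uses invented completion data and an arbitrary cutoff; a real dashboard would need a validated risk model behind any flag it raises.

```python
# Invented data: percent of course activities each cohort member has
# completed by some checkpoint, plus one student's own figure.
cohort_progress = [10, 25, 30, 35, 40, 45, 55, 60, 70, 85]
my_progress = 30

def percentile_rank(value, population):
    """Share of the cohort at or below this student's progress."""
    at_or_below = sum(1 for p in population if p <= value)
    return at_or_below / len(population)

rank = percentile_rank(my_progress, cohort_progress)
print(f"At or ahead of {rank:.0%} of classmates")

# Pairing the display with counsel, not just a ranking, follows the
# ethical point above: detection should lead to remedial action.
if rank < 0.25:
    print("Flag for early outreach")
```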
Course administrators could also make use of learning analytics to monitor the
effectiveness of course learning activities, the perceived usefulness of examinations
and assignments, or whether library references are being used. Richards and Devries (2011)
described the use of learning analytics combined with student comments to provide
feedback on the usefulness of prescribed learning activities. Such a system could
correlate student comments with achievement scores for each module of
instruction. In contrast to the usual course evaluation form administered at
the end of a course, the analytics provide tracking and feedback during the course
from the first learning activity onward. The goal is to develop sufficiently engaging
activities that produce measurable gains in learning. Analytics from
tracking and performance data could also be mixed with questionnaire responses
from learners to identify learning activities or course readings that are proving
difficult.
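In the spirit of that per-module feedback loop, the sketch below picks out modules where both engagement ratings and achievement lag the course averages. The ratings, scores, and the flagging rule itself are assumptions for illustration, not the method of Richards and Devries.

```python
from statistics import mean

# Invented per-module data: average learner rating of the activity (1-5)
# and average achievement score for that module of instruction.
modules = {
    "M1": {"rating": 4.2, "score": 78},
    "M2": {"rating": 2.1, "score": 55},
    "M3": {"rating": 3.9, "score": 74},
    "M4": {"rating": 2.4, "score": 71},
}

avg_rating = mean(m["rating"] for m in modules.values())
avg_score = mean(m["score"] for m in modules.values())

# Flag modules that lag on both signals: candidates for redesign while
# the course is still running, not after the final evaluation form.
flagged = [name for name, m in modules.items()
           if m["rating"] < avg_rating and m["score"] < avg_score]
print("Review:", flagged)
```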
Learning analytics involves the observation of patterns, the generation and testing
of a hypothesis, and then the application of a useful metric to raise warning flags,
to help pace progress, or to suggest remediation or referral to a human agent. Such
interventions also
need to be evaluated to ensure that they bring about the desired effect. Sending
learners at risk off to an inappropriate tutoring service could exacerbate their
problems instead of getting them on track.
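The metric-to-intervention step can be made concrete with a small triage rule. Everything below is an assumption for illustration (the field names, thresholds, and the mapping from signal to referral); as the paragraph above warns, each rule would need evaluation before being trusted with real learners.

```python
# Hypothetical engagement signals for a student record.
def triage(student):
    """Map simple signals to a suggested action, most urgent rule first."""
    if student["days_since_login"] > 14:
        return "refer to a human advisor"      # disengaged: escalate
    if student["forum_posts"] == 0:
        return "nudge to join the discussion"  # present but silent
    if student["avg_score"] < 50:
        return "suggest tutoring"              # engaged but struggling
    return "on track"

students = [
    {"id": 1, "days_since_login": 21, "forum_posts": 3, "avg_score": 70},
    {"id": 2, "days_since_login": 2, "forum_posts": 0, "avg_score": 65},
    {"id": 3, "days_since_login": 1, "forum_posts": 5, "avg_score": 42},
]
for s in students:
    print(s["id"], triage(s))
```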
Analytics cannot of themselves cope with institutional politics or labour relations
issues. For example, a learning analytic system might just as easily identify
disengaged faculty members, and unions might fight to block such unwarranted
intrusions on the sanctity of the classroom. Indeed, discriminating good from bad
teachers might end up flooding the classrooms of the good teachers (and increasing
their marking load) while reducing the registrations (and marking load) in the
classes of the poor teachers. Thus analytics must be used judiciously and carefully to
avoid undesired effects.
Another thing analytics cannot do by themselves is improve instruction. While they
can point to areas in need of improvement and identify engaging practices,
the numbers cannot make suggestions for improvement. This requires a human
intervention – usually in the form of a focus group or by soliciting suggestions from
the learners themselves. In many cases learners are well placed to suggest effective
instructional practices or to identify useful learning resources. Such learner-centric
models are at the core of self-directed learning communities.
Conclusions
This paper has skimmed the surface of the emerging field of learning analytics. As
we start measuring “engagement” or “achievement” we need to be aware of the
assumptions behind the measures. We need to know what we are measuring, and
the inferential link between the “thing being measured” and the “concept” we want to
observe. Learning analytics is becoming a very popular way of tracking interactions,
but the field is new and we have much to learn from those who have been conducting
data-mining and other analytics in sectors such as health and finance.
References
Bakharia, A., Heathcote, E. & Dawson, S. (2009). Social Networks Adopting Pedagogical Practice:
SNAPP. Proceedings ASCILITE. Auckland.
http://www.ascilite.org.au/conferences/auckland09/procs/bakharia-poster.pdf
Beer, C., Clark, K., & Jones, D. (2010). Indicators of engagement. Proceedings ASCILITE 2010. Sydney.
http://www.ascilite.org.au/conferences/sydney10/Ascilite%20conference%20proceedings
%202010/Beer-full.pdf
Graf, S. & Kinshuk. (2008). Adaptivity and Personalization in Ubiquitous Learning Systems. In A.
Holzinger (Ed.), Proceedings of USAB 2008, LNCS 5298, 331-338.
Johnson, D.W., Johnson, R. T., & Smith, K. A. (1998). Active learning: Cooperation in the college
classroom. Edina, MN: Interaction Books.
Kuh, G. (2004). The national survey of student engagement: Conceptual framework and overview of
psychometric properties.
http://nsse.iub.edu/2004_annual_report/pdf/2004_conceptual_framework.pdf
Kuh, G., Kinzie, J., Buckley, J., Bridges, B., & Hayek, J. (2006). What matters to student success: A review
of the literature. Executive Summary. Commissioned Report.
http://nces.ed.gov/npec/pdf/Kuh_Team_ExecSumm.pdf
Krathwohl, D., Bloom, B., & Masia, B. (1964). Taxonomy of educational objectives. Handbook II:
Affective domain. New York: David McKay.
Richards, G. & Devries, I. (2011). Revisiting formative evaluation: Dynamic monitoring for the
improvement of learning activity design and delivery. Proceedings Learning Analytics and
Knowledge (LAK11). Banff.
Richardson, J. C., & Newby, T. (2006). The role of students’ cognitive engagement in online learning.
American Journal of Distance Education, 20(1), 23-37.
Ross, C. (2009). Engagement for learning: What matters to students, what motivates them and how
can institutional support services foster student engagement? ANZSSA Biennial Conference,
Brisbane. http://www.adcet.edu.au/Anzssa/View.aspx?id=7034