Example 2
Assessment is a powerful force that drives students’ learning. Students will learn what they
think will be tested and not what is in the curriculum or even what they think has been covered
in class (Ramsden, 1992). Teachers can harness and multiply the effects of good assessment
with good teaching practices in order to maximise the impact on students’ learning.
Key words: assessment for learning, assessment of learning, assessment as learning, validity,
reliability, transparency, inclusivity, authenticity and fairness.
Introduction
“Cher (sic) what will be tested?” is a common phrase that we often hear from our students.
Such preoccupation with ‘what will be tested’ underscores the importance that students place
on assessment. Biggs and Tang (2011) suggested that assessment influences what students
will learn and how they will learn. It simply drives students’ learning. Gibbs and Simpson
(2004) also found that students were most influenced by the assessment and not the teaching.
These views were similarly shared by Ramsden (1992) who asserted that students will learn
what they think will be tested and not what is in the curriculum or even what they think has
been covered in class. Hence, it is imperative for teachers to understand the great impact of
assessment on students’ learning and how it can be effectively employed to motivate students
to learn. Stefani (2004) highlighted that assessment of learning has become more important
now than ever: to equip students to become confident, independent and effective lifelong
learners who can face and succeed in a volatile, uncertain, complex and ambiguous (VUCA)
world.
This essay comprises three sections. The first section examines why assessment is
necessary and how it forms an integral component of teaching and learning. It explores the
purposes, principles and quality of assessment. The second section provides a critical
reflection on my teaching practice and how I use assessment to arouse, direct and sustain
students’ learning behaviour, in meeting the intended learning outcomes (ILOs)
and competency standards for technical and vocational education and training at Singapore
Polytechnic. The final section will conclude with my key take-away from this invaluable
course on assessment and evaluation.
Should assessment be used as ‘assessment for learning’, ‘assessment of learning’ or
‘assessment as learning’? These three facets of assessment are important and integral in
teaching and learning, and I shall examine them in turn.
The overarching purpose of assessment is to improve students’ learning and teachers’
teaching, and to meet the ILOs. Assessment for learning is a process where teachers use
assessment as an investigating tool to find out what students know and can do, and what they
do not know or cannot do. It provides teachers with the basis to continually adjust their
teaching strategies to help students bridge the gaps they might have. It is also important to let
students know how they will be assessed so that they are clear about what is expected of them.
Another essential feature of assessment for learning is providing students with frequent
constructive feedback about their learning and giving them opportunities to improve.
Consequently, assessment becomes an episode of learning too (Stefani, 2004). It also means
providing differentiated teaching and learning activities for students with varying abilities and
levels of development to cope with their studies and move forward. Assessment for learning
is therefore formative: it identifies strengths and weaknesses, and guides improvement for
both teachers and students (although it may include minor summative tests or coursework).
Assessment for learning also requires a strong, trusting relationship between students and
teachers. As Stephen Covey (1932-2012) once said, “when there is trust … there will be very
good communication and teamwork”. Hence, it is important for teachers to consistently affirm
their students with care and concern so as to provide a safe and nurturing environment in
which they can grow (and fail). Ultimately, the key objective of
assessment for learning is to enhance the ability of teachers to address learning needs
effectively by monitoring student progress and identifying areas where teaching and learning
activities need to be modified to help students achieve their potential (MOE, 2011).
Assessment of learning, on the other hand, is generally considered a summative
assessment where evidence of achievement is collected at the end of a task or programme.
The data is then analysed to judge and report how well the students have performed against
the ILOs or competency standards, and in relation to others (Earl and Katz, 2006). Hence, it is
important for teachers to ensure the assessment is valid, reliable, transparent, inclusive,
authentic and fair (Brown and Race, 2013). It is equally important to let students understand
the rationale for the assessment, the ILOs and how they will be assessed. Assessment of
learning, as a summative assessment, is often used by students themselves, parents, teachers,
school administrators and others to make decisions about a student’s future.
Finally, assessment as learning is a process where students become their own assessors. It
places students at the focus of the assessment process, in which students reflect on how they
are performing and, as a result, become more aware of their own thought processes.
Assessment as learning enhances students’ metacognitive skills and develops them to be
independent and effective lifelong learners. The role of teachers shifts from directing students
to guiding them to set their own goals and develop good learning practices. Assessment as
learning also focuses students’ attention on the task, rather than on getting the right answers,
which encourages deep-approach learning (Earl and Katz, 2006). Hence, it is important for
teachers to design meaningful tasks or activities that stimulate and encourage students to
critically reflect and monitor their own learning (Leathwood, 2005).
In summary, assessment plays a pivotal role in how students learn, their motivation to
learn and how teachers teach (Earl and Katz, 2006). Assessment serves different purposes: to
inform teachers about their teaching (assessment for learning), as evidence to demonstrate
whether students have met the ILOs (assessment of learning), and for students to be cognisant
of their own learning (assessment as learning). Earl (2003) shared two possible scenarios
of how assessment is used, in a traditional way or a re-configured way, as shown in Fig 1. In
the first pyramid, Earl (2003) suggested that the traditional approach in classroom assessment
was primarily based on assessment of learning, where considerable time and effort were spent
in imparting content to students and ensuring they achieve the ILOs or competency standards.
Assessment for learning was employed to check if students were on track to achieving the
results, while assessment as learning was rarely used. However, in recent years, the attention
has shifted towards assessment as learning as a means to develop students as critical thinkers,
and independent and lifelong learners for the 21st century. As a result, assessment for
learning has also been refined as an investigating tool for teachers to assess students’
progress, adjust teaching strategies and provide timely feedback to help students manage their
learning. This changes the way students perceive assessment, motivates them to learn more,
and creates a positive washback between assessment and learning. Assessment of learning,
meanwhile, is used only when a summative judgement is required. Hence, it is crucial for
teachers and school leaders to have a clear purpose for how assessment is used, as part of the
curriculum, to actively engage students in learning.
As Ramsden (1992) succinctly puts it, “assessment is the curriculum” as
far as the students are concerned. Students will learn what they think they will be assessed on
and not on what is in the curriculum or even on what has been covered in class. The trick then
is to embed the assessment as part of the curriculum. Biggs and Tang (2011) underscored the
importance of constructive alignment and suggested that teaching and assessment tasks must
be carefully aligned with the ILOs, with the assessment tasks constructed before the teaching
and learning activities. Hence, it is important for the ILOs to be appropriately set and defined.
In the next section, I shall evaluate the ILOs and some of the assessment designs for one of
the modules that I teach. But before I move on, I would like to expound on this issue of
fit-for-purpose and quality in assessment.
According to Brown (2004), assessment needs to be fit-for-purpose: it must be valid, reliable,
transparent, inclusive, authentic and fair. For an assessment to be valid, it must measure what
we set out to measure, for example by ensuring that it adequately covers the ILOs.
Reliability, on the other hand, looks at how well the assessment provides
consistent and stable information to allow different teachers to draw the same inferences
about a student’s learning. Transparency lets students know how they will be assessed.
Hence, teachers should explain the marking rubrics and show students samples of good
work for reference. Inclusivity provides students with opportunities to demonstrate their
abilities in different ways. Authenticity looks at how true to ‘real life’ the assessment is in
practice and verifies that the work is truly done by the student and not someone else. Fairness
deals with objectivity and honesty in measurement and reporting.
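Reliability, in particular, lends itself to quantification. As a minimal, hedged sketch (not a procedure described in this essay or used in the module), the Python snippet below computes Cohen’s kappa, one common index of agreement between two markers grading the same scripts; the teacher names and grades are hypothetical.

import collections

def cohens_kappa(rater_a, rater_b):
    # Observed agreement: proportion of scripts given the same grade.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each marker's own grade distribution.
    count_a = collections.Counter(rater_a)
    count_b = collections.Counter(rater_b)
    expected = sum(count_a[g] * count_b[g] for g in count_a) / (n * n)
    return (observed - expected) / (1 - expected)

teacher_1 = ["A", "B", "B", "C", "A", "B"]  # hypothetical grades
teacher_2 = ["A", "B", "C", "C", "A", "A"]
print(f"kappa = {cohens_kappa(teacher_1, teacher_2):.2f}")  # kappa = 0.52

A kappa near 1 indicates that different markers would draw the same inferences about a student’s work; a value near 0 indicates agreement no better than chance.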
Table 1. Lertap5 brief item stats for "AY2014_S1 MCQ"
n: 235
Min: 3.00
Median: 11.00
Mean: 10.86
Max: 19.00
s.d.: 2.76
var.: 7.60
Range: 16.00
IQRange: 4.00
Skewness: 0.02
Kurtosis: -0.05
MinPos: 0.00
MaxPos: 20.00
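To make the provenance of these figures concrete, the sketch below recomputes the same descriptive statistics from a vector of total scores using Python’s standard library; the scores list is a hypothetical placeholder rather than the actual 235 results, and skewness and kurtosis would additionally need scipy.stats.skew and scipy.stats.kurtosis.

import statistics

scores = [11, 9, 14, 3, 19, 10, 12, 11, 8, 13]  # placeholder; the real data set has n = 235

n = len(scores)
mean = statistics.mean(scores)
median = statistics.median(scores)
sd = statistics.stdev(scores)          # sample standard deviation
var = statistics.variance(scores)      # sample variance
score_range = max(scores) - min(scores)
print(f"n={n}  mean={mean:.2f}  median={median:.2f}  "
      f"s.d.={sd:.2f}  var.={var:.2f}  range={score_range:.2f}")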
Table 2 shows that question 18 was the most difficult question, with only 8% of the students
answering it correctly, while question 15 was the easiest, with 94% of the students answering
it correctly.
On closer examination, 1% of the students did not answer question 2, and some of the
better students answered question 18 wrongly. Further investigation revealed that most of the
students, including the better ones, chose the distractor with the most stringent qualifications
(c) rather than the correct answer (b), which has fewer qualifications attached. This
observation is confirmed by two z-scores having positive values.
18. To ensure that a technician remains reasonably current on the aircraft types to which they hold
authorisation, they are required to have _____.
(a) appropriate continuation or refresher training
(b) been involved in at least 6 months of actual aircraft maintenance experience in any 2 year period
(c) been involved in at least 6 months of actual aircraft maintenance experience in any 2 year period
and have appropriate continuation or refresher training
18 (c20)
option  wt.    n    p     pb(r)  b(r)   avg.   z
a       0.00   19   0.08   0.04   0.07  11.21   0.13
b       1.00   18   0.08  -0.04  -0.08  11.39   0.19
c       0.00  198   0.84  -0.07  -0.10  10.77  -0.03
(note: options a and b both have positive z-scores)
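The avg. column gives the mean total score of the students who chose each option, and the z column (on my reading of the LERTAP output) standardises that mean against the overall mean (10.86) and standard deviation (2.76) from Table 1:

\[
z = \frac{\bar{x}_{\text{option}} - \bar{x}}{s}, \qquad
z_a = \frac{11.21 - 10.86}{2.76} \approx 0.13, \qquad
z_b = \frac{11.39 - 10.86}{2.76} \approx 0.19 .
\]

Both values being positive means that options (a) and (b) each attracted students scoring slightly above the cohort average.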
Figure 2 shows a scatter plot of discrimination against difficulty. From the plot, it can be seen
that 15 out of 20 questions were answered correctly by more than 50% of the students, while
five questions were answered correctly by 40% or fewer.
The discrimination index or point-biserial correlation shows that questions 10 and 19
have high correlation, pb(r) > 0.3, while questions 18, 8 and 4 have poor correlation. By
removing questions 18, 8 and 4, the subtest reliability or alpha figures will improve. For
example, by removing question 8, the subtest reliability improves by 0.021, from 0.484 to
0.505 (see alpha figures in Appendix C1).
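As a minimal sketch of how these two statistics are obtained, assuming a 0/1-scored response matrix (the matrix below is random stand-in data, not the AY2014_S1 responses, and LERTAP’s own computation may differ in detail):

import numpy as np

# Hypothetical 0/1 response matrix: 235 students by 20 items.
X = np.random.default_rng(0).integers(0, 2, size=(235, 20)).astype(float)

def point_biserial(item, total):
    # pb(r): Pearson correlation between a 0/1 item and the test total.
    # The total here includes the item itself, which inflates pb(r) slightly.
    return np.corrcoef(item, total)[0, 1]

def cronbach_alpha(X):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = X.shape[1]
    return (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum()
                            / X.sum(axis=1).var(ddof=1))

# "Alpha if item deleted": drop question 8 (column index 7) and recompute.
print(point_biserial(X[:, 7], X.sum(axis=1)))
print(cronbach_alpha(X), cronbach_alpha(np.delete(X, 7, axis=1)))

The same alpha-if-item-deleted calculation underlies the 0.484 to 0.505 comparison quoted above.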
U-L analysis
In the U-L analysis, the distractors for all the questions performed their task; that is, every
distractor was chosen by one or more students.
In the U-L discrimination analysis, it can be seen that most of the questions have an index
of 0.1 or higher, except question 18 (0.09). According to Hopkins (1998), an index of 0.1
provides fair discrimination, while any index higher than 0.4 provides excellent
discrimination.
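The U-L discrimination index is simply the difference between the proportions of the upper and lower scoring groups who answered the item correctly. Taking question 10 from Appendix C2 (correct option c) as a worked check:

\[
D = p_U - p_L = 1.00 - 0.45 = 0.55 .
\]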
In the U-L difficulty analysis, it can be seen that most of the questions have an index of less
than 0.77, except questions 8 (0.81) and 15 (0.94). According to Mehrens and Lehmann
(1991), a three-option multiple-choice item on a maximally discriminating test should have a
difficulty value of about 0.77. By this recommendation, the multiple-choice component of the
semester test was moderately difficult.
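The corresponding U-L difficulty index is the mean of the same two proportions, so a higher value means an easier item. For question 15 (correct option a, Appendix C2):

\[
p = \frac{p_U + p_L}{2} = \frac{1.00 + 0.87}{2} \approx 0.94 ,
\]

well above the 0.77 benchmark, which is why question 15 stands out as too easy.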
In the mastery discrimination analysis, the same benchmarks apply: according to
Hopkins (1998), an index of 0.1 provides fair discrimination, while any index higher than 0.4
provides excellent discrimination.
In the mastery difficulty analysis, it can be seen that most of the questions have an index
of less than 0.77, except questions 10 (0.79) and 15 (0.94). Again, by Mehrens and
Lehmann’s (1991) recommendation of a difficulty value of about 0.77 for a three-option
multiple-choice item, the test was moderately difficult.
The index of dependability for mastery is 0.806, which is much higher than the earlier
alpha figure of 0.484.
In concluding the LERTAP analysis, it seems that the multiple-choice questions were
moderately difficult for most students, and most of the questions had fair discrimination,
pb(r) > 0.1. Four questions in particular, 8, 10, 15 and 18, require adjustment to improve their
reliability. To help students do well in the multiple-choice questions, more opportunities and
practice will be given to them. Students should also be reminded to allocate more time to the
multiple-choice questions.
In the final part of this section, I have evaluated all the assessment components for this
module against the set of criteria for assessment for learning and appended the findings in
Appendix D. The case studies and assignment are well developed and satisfy most of the
principles of quality assessment. The discussion board and blog were recently introduced
and therefore require more time to develop.
Conclusion
This course on assessment and evaluation in education has given me a greater insight into the
importance of assessment in learning and teaching. Assessment is a powerful force that drives
students’ learning and motivates them to take charge of their own learning. As educators we
must harness and multiply the effects of good assessment. But, at the same time, we must
always remember to align assessment with the desired outcomes of education and good
teaching practices in order to achieve maximum impact (Heng, 2014). As educators we have a
common purpose “to awake the natural curiosity of young minds” (Anatole France, 1844-
1924) and “to teach one to think intensively and to think critically … plus character” (Martin
Luther King Jr., 1929-1968). Assessment is a tool that we can use to excite young minds.
References
Anderson, LW, & Krathwohl, DR (eds) 2001, A taxonomy for learning, teaching and
assessing: a revision of Bloom’s Taxonomy of educational objectives, New York:
Longman.
Biggs, J, & Tang, C 2011, Teaching for Quality Learning at University, 4th edn, McGraw-Hill.
Brown, S 2004, ‘Assessment for learning’, Learning and Teaching in Higher Education, iss.
1, pp. 81-89.
Brown, S, & Race, P 2013, ‘Using effective assessment to promote learning’ in Hunt, L, &
Chalmers, D (eds), University teaching in focus: a learning-centred approach, New York:
Routledge, pp. 74-91.
Cuban, L 1990, ‘Reforming again, again, and again’, Educational Researcher, vol. 19, no. 1,
pp. 3-13.
Earl, L 2003, Assessment as learning: using classroom assessment to maximise student
learning, Thousand Oaks: Corwin.
Earl, L, & Katz, S (eds) 2006, Rethinking classroom assessment with purpose in mind:
assessment for learning, assessment as learning, assessment of learning, Canada: Manitoba
Education, Citizenship and Youth.
Gibbs, G, & Simpson, C 2004, ‘Conditions under which assessment supports students’
learning’, Learning and Teaching in Higher Education, iss. 1, pp. 3-31.
Heng, SK (Minister for Education) 2014, 40th International Association for Educational
Assessment (IAEA), media release, Grand Copthorne Waterfront Hotel, Singapore, 26
May.
Leathwood, C 2005, ‘Assessment policy and practice in higher education: purpose, standards
and equity’, Assessment & Evaluation in Higher Education, vol. 30, no. 3, pp. 307-324.
Ministry of Education (MOE) 2011, Ministry of Education position paper: assessment.
Learning Media, New Zealand.
Ramsden, P 1992, Learning to teach in higher education, London: Routledge.
Stefani, L 2004, ‘Assessment of Student Learning: promoting a scholarly approach’, Learning
& Teaching in Higher Education, iss. 1, pp. 51-66.
APPENDIX A
APPENDIX B
B1. Essay Questions for AY2013/14_S1 (original)
(b) When can one expect an occurrence of an overload situation and what can be done
to relieve an impending overload? (5 marks)
B5. There is a tendency for highly motivated and de-motivated people to exhibit a pattern
of characteristics. Describe these characteristics. (10 marks)
B2. Essay Questions for AY2013/14_S1 (revised)
B1. Reason (1997) suggests that omissions, or forgetting, accounts for 56% of maintenance
errors. Using the information processing model, explain how humans process and
retain information, and discuss the probable reasons why people are prone to
forgetting. What strategies would you offer the engineers to help them remember and
carry out their tasks safely and effectively? [10 marks]
B2. The following passage is an extract of the Aircraft Accident Report (NTSB/AAR-
89/03) into the explosive decompression of Aloha Airlines Flight 243 near Maui,
Hawaii on 28 April 1988.
There are several possibilities why the inspectors, when complying with the AD, failed
to find the detectable crack in the S-4R lap joint on N73711, even though the area
reportedly was given an eddy current inspection and two inspectors performed
independent visual inspections. A person can be motivated to do a critical task very
well; but when asked to perform that same task repeatedly, factors such as expectation
of results, boredom, task length, isolation during the inspection task, and the
environmental conditions all tend to influence performance reliability. Airline
maintenance is most often performed at night and during the early morning hours; the
time of day that has been documented to cause adverse human performance. Aloha
Airlines training records revealed that little formal training was provided in NDI
techniques and methods. The inspector who found the S-4R lap joint cracks requiring
repair stated that only on-the-job training (OJT) had been provided since he became an
inspector in August 1987; his training records show formal NDI training on September
17, 1987, when a 2-hour training session was given by a Boeing representative.
Records [also] indicate the [other] inspector who provided the initial OJT had only 2
hours of formal NDI training, during the same 2-hour training session on September
17, 1987, provided by Boeing.
From the above passage, show where the Dirty Dozen played a role in the accident and
define the meaning of these factors. Suggest appropriate safety nets to rectify the
impact of these undesirable influences and provide your reasons for them. [10 marks]
B3. (a) Hawkins (1972) proposed the SHEL Model to explain the importance and study of
human factors. Briefly describe the purpose of studying human factors and provide
examples where the L-E interface is critical in aircraft maintenance and inspection tasks.
Explain how these L-E interfaces can be enhanced in a practical and reasonable
manner. [6 marks]
(b) Explain the term ‘troubled’ employees and what you can do as a supervisor to help
them perform their tasks safely and efficiently. [4 marks]
C1. In the case study, “Careless, Stupid, Dumb and Lazy”, Johnny Workhard, a diligent,
helpful and dependable worker damaged an aircraft while driving a tow tug under an
aircraft. The manager concluded that Johnny Workhard was a careless, stupid, dumb
and lazy person and decided to fire him to make an example of him, so that others
would be more vigilant in their work.
Based on the circumstantial evidence, would you agree that Johnny Workhard is a
careless, stupid, dumb and lazy person and that he deserves to be fired? What sort of safety
culture does Low Cost Airlines exhibit? Discuss the strategies and policies that you
would adopt to motivate the employees in developing a generative-type safety culture at
Low Cost Airlines and make it the best company that people would like to work for.
Clearly state the underpinning motivation theories that you use, if any. [20 marks]
APPENDIX C
C1. Summary statistics
number of scores (n): 235
lowest score found: 3.00 (15.0%)
highest score found: 19.00 (95.0%)
median: 11.00 (55.0%)
mean (or average): 10.86 (54.3%)
standard deviation: 2.76 (13.8%)
standard deviation (as a sample): 2.76 (13.8%)
variance (sample): 7.63
number of subtest items: 20
minimum possible score: 0.00
maximum possible score: 20.00
1 (c3)
option wt. n p pb(r) b(r) avg. z
a 0.00 91 0.39 -0.37 -0.47 9.57 -0.47
b 1.00 141 0.60 0.21 0.26 11.70 0.30
c 0.00 3 0.01 -0.02 -0.07 10.33 -0.19
2 (c4)
option wt. n p pb(r) b(r) avg. z
a 0.00 47 0.20 -0.13 -0.19 10.13 -0.26
b 0.00 57 0.24 -0.34 -0.47 9.19 -0.60
c 1.00 129 0.55 0.22 0.28 11.83 0.35
other 0.00 2 0.01 0.06 0.22 12.50 0.60
3 (c5)
option wt. n p pb(r) b(r) avg. z
a 0.00 67 0.29 -0.22 -0.30 9.88 -0.35
b 1.00 128 0.54 0.05 0.07 11.45 0.21
c 0.00 40 0.17 -0.04 -0.06 10.60 -0.09
4 (c6)
option wt. n p pb(r) b(r) avg. z
a 0.00 57 0.24 -0.13 -0.17 10.25 -0.22
b 0.00 11 0.05 -0.01 -0.02 10.73 -0.05
c 1.00 167 0.71 -0.04 -0.06 11.07 0.08
5 (c7)
option wt. n p pb(r) b(r) avg. z
a 0.00 22 0.09 -0.26 -0.45 8.64 -0.81
b 1.00 163 0.69 0.17 0.22 11.45 0.22
c 0.00 50 0.21 -0.18 -0.26 9.88 -0.35
6 (c8)
option wt. n p pb(r) b(r) avg. z
a 1.00 134 0.57 0.18 0.23 11.70 0.31
b 0.00 23 0.10 -0.15 -0.27 9.57 -0.47
c 0.00 78 0.33 -0.27 -0.36 9.78 -0.39
7 (c9)
option wt. n p pb(r) b(r) avg. z
a 1.00 120 0.51 0.25 0.31 11.98 0.41
b 0.00 21 0.09 -0.08 -0.14 10.14 -0.26
c 0.00 94 0.40 -0.38 -0.48 9.59 -0.46
8 (c10)
option wt. n p pb(r) b(r) avg. z
a 0.00 64 0.27 -0.10 -0.13 10.41 -0.16
b 0.00 4 0.02 -0.12 -0.38 8.25 -0.95
c 1.00 167 0.71 -0.03 -0.04 11.09 0.09
9 (c11)
option wt. n p pb(r) b(r) avg. z
a 1.00 89 0.38 0.14 0.18 11.97 0.40
b 0.00 107 0.46 -0.24 -0.30 10.13 -0.26
c 0.00 39 0.17 -0.09 -0.13 10.31 -0.20
10 (c12)
option wt. n p pb(r) b(r) avg. z
a 0.00 22 0.09 -0.29 -0.51 8.36 -0.90
b 0.00 28 0.12 -0.33 -0.54 8.36 -0.91
c 1.00 185 0.79 0.34 0.48 11.53 0.24
11 (c13)
option wt. n p pb(r) b(r) avg. z
a 0.00 59 0.25 -0.14 -0.19 10.19 -0.24
b 1.00 143 0.61 0.11 0.14 11.48 0.23
c 0.00 33 0.14 -0.22 -0.35 9.33 -0.55
12 (c14)
option wt. n p pb(r) b(r) avg. z
a 0.00 62 0.26 -0.23 -0.31 9.81 -0.38
b 1.00 41 0.17 0.18 0.26 12.71 0.67
c 0.00 132 0.56 -0.03 -0.04 10.77 -0.03
13 (c15)
option wt. n p pb(r) b(r) avg. z
a 0.00 36 0.15 -0.15 -0.23 9.89 -0.35
b 0.00 41 0.17 -0.18 -0.26 9.80 -0.38
c 1.00 158 0.67 0.09 0.11 11.35 0.18
14 (c16)
option wt. n p pb(r) b(r) avg. z
a 1.00 149 0.63 0.04 0.05 11.30 0.16
b 0.00 30 0.13 -0.12 -0.19 10.00 -0.31
c 0.00 56 0.24 -0.15 -0.20 10.13 -0.26
15 (c17)
option wt. n p pb(r) b(r) avg. z
a 1.00 220 0.94 0.16 0.31 11.03 0.06
b 0.00 8 0.03 -0.25 -0.59 7.25 -1.31
c 0.00 7 0.03 -0.09 -0.23 9.43 -0.52
16 (c18)
option wt. n p pb(r) b(r) avg. z
a 0.00 99 0.42 -0.39 -0.49 9.61 -0.45
b 1.00 130 0.55 0.24 0.31 11.87 0.37
c 0.00 6 0.03 -0.08 -0.21 9.50 -0.49
17 (c19)
option wt. n p pb(r) b(r) avg. z
a 0.00 76 0.32 -0.19 -0.25 10.08 -0.28
b 0.00 70 0.30 -0.13 -0.17 10.31 -0.20
c 1.00 89 0.38 0.14 0.18 11.94 0.39
18 (c20)
option wt. n p pb(r) b(r) avg. z
a 0.00 19 0.08 0.04 0.07 11.21 0.13
b 1.00 18 0.08 -0.04 -0.08 11.39 0.19 two positive z-scores
c 0.00 198 0.84 -0.07 -0.10 10.77 -0.03
19 (c21)
option wt. n p pb(r) b(r) avg. z
a 0.00 68 0.29 -0.33 -0.44 9.41 -0.52
b 0.00 44 0.19 -0.21 -0.30 9.66 -0.43
c 1.00 123 0.52 0.31 0.38 12.08 0.44
20 (c22)
option wt. n p pb(r) b(r) avg. z
a 0.00 28 0.12 -0.04 -0.07 10.54 -0.12
b 0.00 150 0.64 -0.24 -0.31 10.36 -0.18
c 1.00 57 0.24 0.15 0.21 12.32 0.53
C2. LERTAP Results (Upper/Lower)
Res = a b c other U-L diff. U-L disc.
10 upper 0.00 0.00 1.00 0.00 0.72 0.55
2nd 0.04 0.06 0.89 0.00
3rd 0.09 0.04 0.87 0.00
4th 0.11 0.17 0.72 0.00
lower 0.23 0.32 0.45 0.00
11 upper 0.15 0.77 0.09 0.00 0.55 0.43
2nd 0.26 0.68 0.06 0.00
3rd 0.23 0.66 0.11 0.00
4th 0.26 0.60 0.15 0.00
lower 0.36 0.34 0.30 0.00
12 upper 0.13 0.34 0.53 0.00 0.18 0.32
2nd 0.19 0.28 0.53 0.00
3rd 0.21 0.13 0.66 0.00
4th 0.38 0.11 0.51 0.00
lower 0.40 0.02 0.57 0.00
13 upper 0.09 0.06 0.85 0.00 0.69 0.32
2nd 0.15 0.13 0.72 0.00
3rd 0.17 0.23 0.60 0.00
4th 0.11 0.23 0.66 0.00
lower 0.26 0.21 0.53 0.00
14 upper 0.79 0.09 0.13 0.00 0.63 0.32
2nd 0.66 0.11 0.23 0.00
3rd 0.66 0.11 0.23 0.00
4th 0.60 0.15 0.26 0.00
lower 0.47 0.19 0.34 0.00
15 upper 1.00 0.00 0.00 0.00 0.94 0.13
2nd 1.00 0.00 0.00 0.00
3rd 0.87 0.04 0.09 0.00
4th 0.94 0.02 0.04 0.00
lower 0.87 0.11 0.02 0.00
16 upper 0.21 0.77 0.02 0.00 0.48 0.57
2nd 0.23 0.77 0.00 0.00
3rd 0.36 0.62 0.02 0.00
4th 0.53 0.43 0.04 0.00
lower 0.77 0.19 0.04 0.00
17 upper 0.17 0.23 0.60 0.00 0.39 0.40
2nd 0.32 0.26 0.43 0.00
3rd 0.40 0.28 0.32 0.00
4th 0.26 0.38 0.36 0.00
lower 0.47 0.34 0.19 0.00
18 upper 0.04 0.09 0.87 0.00 0.05 0.06
2nd 0.13 0.09 0.79 0.00
3rd 0.17 0.06 0.77 0.00
4th 0.00 0.13 0.87 0.00
lower 0.06 0.02 0.91 0.00
19 upper 0.11 0.09 0.81 0.00 0.51 0.60
2nd 0.09 0.15 0.77 0.00
3rd 0.34 0.17 0.49 0.00
4th 0.43 0.23 0.34 0.00
lower 0.49 0.30 0.21 0.00
20 upper 0.06 0.34 0.60 0.00 0.36 0.47
2nd 0.17 0.70 0.13 0.00
3rd 0.11 0.64 0.26 0.00
4th 0.13 0.77 0.11 0.00
lower 0.13 0.74 0.13 0.00
C3. LERTAP Results (Mastery@75)
Res = a b c other U-L diff. B disc.
20 masters 0.09 0.36 0.55 0.00 0.26 0.32
others 0.11 0.66 0.23 0.00
APPENDIX D
Checklist for Assessment for Learning
Rating scale: 0 – no evidence; 1 – needs improving; 2 – fair evidence; 3 – good evidence;
4 – strong evidence.
Ratings are given for each assessment component (Case Studies, Assignment, Tests,
Discussion Board, Blog) and an Overall rating.
1. Authentic assessment.
1.1. Enhancing students’ perception of the meaning and
relevance of assessment. 4 2 3 4 4 3
1.1.1. Promote deep learning approaches. 4 2 3 4 4 3
1.1.2. Position students as active learners. 4 - 3 4 4 3
1.1.3. Promote hands-on experience of study.
1.2. Drawing and linking to the real world. 4 3 3 4 4 4
1.2.1. Linking learning to practice. 4 3 3 4 4 4
1.2.2. Addressing real world issues in the workplace. 4 3 3 4 4 4
1.2.3. Working on authentic cases.
1.3. Developing a sense of personal engagement. 4 2 3 4 4 3
1.3.1. Harnessing learners’ everyday experience. 4 2 2 4 4 3
1.3.2. Bring a ‘live’ feel to assessment. 4 2 4 4 4 3
1.3.3. Encourage students to focus/reflect on and
evaluate their learning.
2. Balancing summative and formative assessment
2.1. Developing the formative potential of summative
assessment. - - - 3 3 3
2.1.1. Enhancing learning through summative - - - 3 3 3
assessment tasks.
2.1.2. Learning from feedback associated with 2 2 - 3 3 3
summative assessment. - - - 3 3 3
2.2. Learning through marks and grades. - 2 1 3 3 2
2.2.1. Understanding marks and grades.
2.2.2. Changing marking scales. - - - - - 3
2.2.3. Formative marking. - - - - - 3
2.3. Rebalancing time, effort and focus towards formative
assessment.
2.3.1. Making a difference to the allocation of time and
effort.
2.3.2. Shifting what is deemed as valuable and
important.
3. Creating opportunities for practice and rehearsal.
3.1. Planning and structuring courses to maximise active
learning and student involvement.
3.1.1. Learning tasks for students to undertake within - - - - - 1
lectures. 4 4 4 4 4 4
3.1.2. Designing tasks for students to undertake outside 4 2 2 4 4 3
class time.
3.1.3. Designing progressively challenging tasks which
gradually build students’ capabilities. 4 4 4 4 - 4
3.2. Creating a low-stake formative environment.
3.2.1. Designing the introductory session to reinforce 4 3 1 4 - 3
the need for active participation and practice.
3.2.2. Designing formative activities in which 4 4 1 4 - 3
provisional views and mistakes are valued.
3.2.3. Helping students see the rationale for engaging in 4 2 2 4 - 3
practice. 4 2 2 4 2 3
3.3. Creating opportunities for social interaction. 4 - - 4 - 3
3.3.1. Building in carefully designed class discussions.
3.3.2. Encouraging students to form their own study
group.
3.3.3. Encouraging students to work in groups to
collaboratively produce an agreed output.
4. Designing formal feedback to improve learning.
4.1. Enhancing tutor feedback to make it work better.
4.1.1. Providing comments on work while students are 4 3 1 4 - 3
producing it. 4 3 1 - - 3
4.1.2. Setting short tasks that allow students to gain - - - 4 4 4
formative feedback.
4.1.3. Helping students learn from feedback associated
with staged summative tasks. 4 2 1 4 - 3
4.2. Deploying others to provide feedback. 3 1 1 3 - 2
4.2.1. Using peers to provide feedback. - 2 2 - - 2
4.2.2. Peer feedback to support self-regulation.
4.2.3. Using technology to enhance feedback provision. 2 2 2 2 - 2
4.3. Focusing student attention on feedback. 2 2 2 2 - 2
4.3.1. Managing students’ expectation of feedback. 2 1 1 1 - 1
4.3.2. Preparing students for feedback.
4.3.3. Enhancing engagement with feedback.
5. Design opportunities for informal feedback.
5.1. Enhancing learning through classroom dialogue.
5.1.1. Designing appropriate tasks to enhance classroom 4 - - 4 - 4
activity and peer interaction.
5.1.2. Establishing a conducive climate for dialogue. 4 3 3 3 - 3
5.1.3. Using peer discussion to help students appreciate 4 - - 4 - 4
a variety of approaches to tackling work.
5.2. Encouraging students to work collaboratively.
5.2.1. Using summative assessment to foster - - - 4 1 2
collaborative working. 4 1 - 4 - 3
5.2.2. Preparing students for group work.
5.3. Seeding activity and dialogue beyond the classroom. 4 2 - 4 - 3
5.3.1. Stimulating students to engage in peer discussion 4 2 2 4 - 3
beyond the classroom.
5.3.2. Designing authentic experiences beyond the
formal curriculum.
6. Developing students as self-assessors and effective lifelong
learners.
6.1. Inducting students into AfL cultures and communities. 4 2 3 3 3 3
6.1.1. Promoting the importance of self-review and 4 2 3 3 3 3
reflection. 4 1 1 3 - 2
6.1.2. Evaluating what they already do and can do.
6.1.3. Encourage students to regard themselves as
partners in assessment communities. 4 1 1 3 - 2
6.2. Stimulating students’ active involvement in the 4 1 1 3 - 2
assessment process. 4 2 2 3 3 3
6.2.1. Explicitly involving students as assessors.
6.2.2. Using peer review to involve students in the 2 1 - 3 3 2
assessment process.
6.2.3. Encouraging active learning in self-review.
6.3. Developing students’ assessment literacy.
6.3.1. Enhance students’ appreciation of assessment
processes and criteria.
7. Diagnostic assessment or pre-assessment
7.1. Gauge students’ prior knowledge and misconceptions. 4 - - 2 2 2
7.2. Provide a baseline for understanding how much learning 4 - - 2 2 2
has taken place after the learning activity is completed.