

Assignment 4:

Assessment & Evaluation in Education


Philip BS Quah
School of Mechanical & Aeronautical Engineering, Singapore Polytechnic

Assessment is a powerful force that drives students’ learning. Students will learn what they
think will be tested and not what is in the curriculum or even what they think has been covered
in class (Ramsden, 1992). Teachers can harness and multiply the effects of good assessment
with good teaching practices in order to maximise the impact on students’ learning.

Key words: assessment for learning, assessment of learning, assessment as learning, validity,
reliability, transparency, inclusivity, authenticity and fairness.

Introduction
“Cher (sic) what will be tested?” is a common phrase that we often hear from our students.
Such preoccupation with ‘what will be tested’ underscores the importance that students place
on assessment. Biggs and Tang (2011) suggested that assessment influences what students
will learn and how they will learn. It simply drives students’ learning. Gibbs and Simpson
(2004) also found that students were most influenced by the assessment and not the teaching.
These views were similarly shared by Ramsden (1992) who asserted that students will learn
what they think will be tested and not what is in the curriculum or even what they think has
been covered in class. Hence, it is imperative for teachers to understand the great impact of
assessment on students’ learning and how it can be effectively employed to motivate students
to learn. Stefani (2004) highlighted that assessment of learning has become more
important now than it has ever been: to equip students to become confident, independent and
effective lifelong learners who can face and succeed in a volatile, uncertain, complex and
ambiguous (VUCA) world.
This essay comprises three sections. The first section examines why assessment is
necessary and how it forms an integral component of teaching and learning. It explores the
purposes, principles and quality of assessment. The second section of the essay provides a
critical reflection on my teaching practice and on how I use assessment to arouse, direct and
sustain students’ learning behaviour, in meeting the intended learning outcomes (ILOs)
and competency standards for technical and vocational education and training at Singapore
Polytechnic. The final section concludes with my key take-aways from this invaluable
course on assessment and evaluation.

Assessment as an Integral part of Teaching and Learning


Since 1991, there has been increasing attention and vigorous discourse about assessment and
evaluation in higher education. According to Leathwood (2005), assessment is something we
all live with, whether we like it or not. It is seen and used in a myriad of ways from students
using assessment to attain good grades to teachers appraising students’ work, and it even
includes student evaluations of courses and student feedback about teachers. Assessment is
about making a judgement, and it carries different connotations for the various people it serves.
But, more importantly, what is our paradigm of assessment as educators? Do we assess in
order to teach or do we teach in order to assess? Is assessment inclusive or exclusive? Our
beliefs and values about assessment will invariably affect the way we teach and influence
students’ learning. Hence, do we see assessment as ‘assessment for learning’, ‘assessment of
learning’ or ‘assessment as learning’? These three facets of assessment are important and
integral in teaching and learning, and I shall examine them in turn.
The overarching purpose of assessment is to improve students’ learning and teachers’
teaching, and to help students meet the ILOs. Assessment for learning is a process where teachers use
assessment as an investigating tool to find out what students know and can do and what they
do not know or cannot do. It provides teachers with the basis to continually adjust their
teaching strategies to help students bridge the gaps they might have. It is also important to let
students know how they will be assessed so that they are clear of what is expected from them.
Another essential feature of assessment for learning is providing students with frequent
constructive feedback about their learning and giving them opportunities to improve.
Consequently, assessment becomes an episode of learning too (Stefani, 2004). It also means
providing differentiated teaching and learning activities for students with varying abilities and
levels of development to cope with their studies and move forward. Assessment for learning
is therefore a formative assessment to identify the strengths and weaknesses, and guide
improvement for both teachers and students (although it may include minor summative tests
or coursework). Assessment for learning also requires a strong and trusted relationship
between students and teachers. As Stephen Covey (1932-2012) once said, “when there is trust
… there will be very good communication and teamwork”. Hence, it is important for teachers
to consistently affirm their students with care and concern so as to provide a safe and
nurturing environment in which they can grow (and fail). Hence, the key objective of
assessment for learning is to enhance the ability of teachers to address learning needs
effectively by monitoring student progress and identifying areas where teaching and learning
activities need to be modified to help students achieve their potential (MOE, 2011).
Assessment of learning, on the other hand, is generally considered a summative
assessment where evidence of achievement is collected at the end of a task or programme.
The data is then analysed to judge and report how well the students have performed against
the ILOs or competency standards, and in relation to others (Earl and Katz, 2006). Hence, it is
important for teachers to ensure the assessment is valid, reliable, transparent, inclusive,
authentic and fair (Brown and Race, 2013). It is also equally important to let students
understand the rationale for the assessment, the ILOs and how they will be assessed.
Assessment of learning, as a summative assessment, is often used to decide the student’s
future by the students themselves, parents, teachers, school administrators and others.
Finally, assessment as learning is a process where students become their own assessors. It
places students at the focus of the assessment process, in which students reflect on how they
are performing and, as a result, become more aware of their own thought processes.
Assessment as learning enhances students’ metacognitive skills and develops them to be
independent and effective lifelong learners. The role of teachers shifts from directing students
to guiding them to set their own goals and develop good learning practices. Assessment as
learning also focuses students’ attention on the task, rather than on getting the right answers,
which encourages deep-approach learning (Earl and Katz, 2006). Hence, it is important for
teachers to design meaningful tasks or activities that stimulate and encourage students to
critically reflect and monitor their own learning (Leathwood, 2005).
In summary, assessment plays a pivotal role in how students learn, their motivation to
learn and how teachers teach (Earl and Katz, 2006). The use of assessment serves different
purposes: to inform teachers about their teaching (assessment for learning), as evidence to
demonstrate whether students have met the ILOs (assessment of learning) and for students to be
cognisant of their learning (assessment as learning). Earl (2003) shared two possible scenarios
of how assessment is used, in a traditional way or a re-configured way, as shown in Fig. 1. In
the first pyramid, Earl (2003) suggested that the traditional approach in classroom assessment
was primarily based on assessment of learning, where considerable time and effort were spent

in imparting content to students and ensuring they achieve the ILOs or competency standards.

[Fig. 1 Balance among assessment purposes (Earl, 2003): (a) the traditional assessment
pyramid, with assessment of learning at its base; (b) the re-configured assessment pyramid,
with assessment as learning at its base.]
Assessment for learning was employed to check if students were on track to achieving the
results, while assessment as learning was rarely used. However, in recent years, the attention
has shifted towards assessment as learning as a means to develop students as critical thinkers,
and independent and lifelong learners for the 21st century. As a result, assessment for
learning has also been refined as an investigating tool for teachers to assess students’
progress, adjust teaching strategies and provide timely feedback to help students manage their
learning. This changed the way students perceive assessment and motivated them to learn
more and created a positive washback between assessment and learning, while assessment of
learning is used only when a summative judgement is required. Hence, it is crucial for teachers
and school leaders to have a clear purpose of how assessment should be used, as part of the
curriculum, to actively engage students in learning.
As Ramsden (1992) succinctly puts it, “assessment is the curriculum” as
far as the students are concerned. Students will learn what they think they will be assessed on
and not on what is in the curriculum or even on what has been covered in class. The trick then
is to embed the assessment as part of the curriculum. Biggs and Tang (2011) underscored the
importance of constructive alignment and suggested that teaching and assessment tasks must
be carefully aligned with the ILOs, with the assessment tasks constructed before the teaching
and learning activities. Hence, it is important for the ILOs to be appropriately set and defined.
In the next section, I shall evaluate the ILOs and some of the assessment designs for one of
the modules that I teach. But before I move on, I would like to expound on this issue of
fit-for-purpose and quality in assessment.
According to Brown (2004), assessment needs to be fit for purpose, in that it must be
valid, reliable, transparent, inclusive, authentic and fair. For an assessment to be valid, it must
measure what we set out to measure, for example by ensuring that the assessment adequately
covers the ILOs. Reliability, on the other hand, looks at how well the assessment provides
consistent and stable information to allow different teachers to draw the same inferences
about a student’s learning. Transparency lets students know how they will be assessed.
Hence, teachers should explain the marking rubrics and show students some samples of good
work for reference. Inclusivity provides students with opportunities to demonstrate their
abilities in different ways. Authenticity looks at how ‘real life’ the assessment is in practice
and verifies if the work is truly done by the student and not someone else. Fairness deals with
objectivity and honesty in measurement and reporting.

Evaluation of Curriculum and Assessment


In this section, I shall evaluate the ILOs and assessment designs for one of the modules that I
am teaching. In the first part, I shall use the revised Bloom’s Taxonomy by Anderson and
Krathwohl (2001) to evaluate the original ILOs and describe the proposed changes that I have
made. In the second part, I shall evaluate a set of essay questions that was used in the
previous semester test and compare it with the set of questions that I had prepared. In the third
part, I shall use LERTAP, a classical test theory software package, to evaluate the reliability of
the multiple-choice questions and analyse the performance of the students who took them. Finally,
in the last part, I shall evaluate all the assessment components in this module against a set of
criteria designed for Assessment for Learning.
In the two-dimensional model of the revised Bloom’s Taxonomy, the verb refers to the
cognitive process involved while the object describes the knowledge that students are
expected to acquire or construct (Anderson and Krathwohl, 2001). Appendix A shows the
original and proposed ILOs. It can be seen that some of the original ILOs are quite vague and
students are only required to remember and understand factual knowledge. In the proposed
change, students are required to use their higher order thinking skills to conceptualise and
apply the procedural knowledge that they have learnt. Hence, it is important to pitch the ILOs
at the appropriate cognitive and knowledge levels to drive students towards deep learning.
Cuban (1990) perceptively cautioned that care must be taken not to blindly refit the
curriculum and undermine academic rigour. In addition, the ILOs must be clear, complete
and concise in their interpretation so that students are fully aware of the expectations required of
them.
In the second part of the evaluation, I realised that questions from past semester tests
were primarily assessing students on their ability to recall facts and encouraging rote learning
(see Appendix B1). Hence, I decided to change the structure of the questions and included
authentic and more challenging questions in the test (see Appendix B2). By doing so, I hope
to elicit and cultivate higher order thinking in students. To prepare students for this change, I
provided students with sample questions and answers to familiarise them with the new
assessment format. When the students realised that the test questions would be more difficult,
they began to pay more attention in class and asked more questions. As a result of the change
in assessment, students were putting in more effort into their studies and thinking more.
During this course, we were introduced to four different evaluation tools to assess test
items, such as multiple-choice questions. I have used multiple-choice questions as an
assessment component umpteen times before but I never had a chance to evaluate if they were
reliable and effective. Hence, I am glad to have this opportunity to evaluate my multiple-
choice questions using LERTAP. In the recent mid-semester test, 20 multiple-choice
questions were administered to 235 students and the results are shown below.
From the statistics table (n=235), the mean and standard deviation for the MCQ test are
10.86 and 2.76 respectively. The minimum and maximum scores of 3 and 19 show that there
were no out-of-range values, and the range is 16. The skewness is 0.02 (within ±1.0) and,
therefore, the frequency distribution is not significantly different from a normal distribution.

Table 1. Lertap5 brief item stats for "AY2014_S1 MCQ"
n 235
Min 3.00
Median 11.00
Mean 10.86
Max 19.00
s.d. 2.76
var. 7.60
Range 16.00
IQRange 4.00
Skewness 0.02
Kurtosis -0.05
MinPos 0.00
MaxPos 20.00
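The figures in Table 1 are standard descriptive statistics and can be cross-checked outside
Lertap. Below is a minimal Python sketch, assuming the 235 total scores have been read into a
list called `scores` (a hypothetical name, not part of Lertap's output); note that Lertap may use
the sample rather than the population form of the standard deviation.

```python
import statistics as st

def summary_stats(scores):
    """Descriptive statistics as reported in Table 1, computed from a
    list of total test scores (one value per student)."""
    n = len(scores)
    mean = st.mean(scores)
    # Central moments, used for Fisher skewness and excess kurtosis
    m2 = sum((x - mean) ** 2 for x in scores) / n
    m3 = sum((x - mean) ** 3 for x in scores) / n
    m4 = sum((x - mean) ** 4 for x in scores) / n
    return {
        "n": n,
        "min": min(scores),
        "max": max(scores),
        "median": st.median(scores),
        "mean": round(mean, 2),
        "s.d.": round(st.pstdev(scores), 2),  # population s.d.
        "range": max(scores) - min(scores),
        "skewness": round(m3 / m2 ** 1.5, 2),
        "kurtosis": round(m4 / m2 ** 2 - 3, 2),
    }
```

A skewness and kurtosis near zero, as computed here, support the reading above that the score
distribution is close to normal.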

Table 2 shows that question 18 was the most difficult question with only 8% of the students
getting it correct. It also indicates that question 15 was the easiest question with 94% of the
students getting it correct.

Table 2. Lertap5 brief item stats for "AY2014_S1 MCQ"


Res = a b c other diff. disc. ?
1 39% 60% 1% 0.60 0.21
2 20% 24% 55% 1% 0.55 0.22
3 29% 54% 17% 0.54 0.05
4 24% 5% 71% 0.71 - 0.04
5 9% 69% 21% 0.69 0.17
6 57% 10% 33% 0.57 0.18
7 51% 9% 40% 0.51 0.25
8 27% 2% 71% 0.71 - 0.03
9 38% 46% 17% 0.38 0.14
10 9% 12% 79% 0.79 0.34
11 25% 61% 14% 0.61 0.11
12 26% 17% 56% 0.17 0.18
13 15% 17% 67% 0.67 0.09
14 63% 13% 24% 0.63 0.04
15 94% 3% 3% 0.94 0.16
16 42% 55% 3% 0.55 0.24
17 32% 30% 38% 0.38 0.14
18 8% 8% 84% 0.08 - 0.04 a
19 29% 19% 52% 0.52 0.31
20 12% 64% 24% 0.24 0.15
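For reference, the diff. and disc. columns of Table 2 can be reproduced from the raw item
scores. The sketch below assumes a 0/1 matrix named `item_matrix` (rows are students,
columns are items; the name and layout are illustrative, not Lertap's). Lertap may apply a
part-whole correction to its pb(r); this is the uncorrected point-biserial.

```python
from statistics import mean, pstdev

def difficulty_and_discrimination(item_matrix):
    """For each item: difficulty = proportion of students answering correctly;
    discrimination = point-biserial correlation between the 0/1 item score
    and the student's total test score."""
    totals = [sum(row) for row in item_matrix]
    mu, sigma = mean(totals), pstdev(totals)
    results = []
    for i in range(len(item_matrix[0])):
        col = [row[i] for row in item_matrix]
        p = mean(col)                      # item difficulty (proportion correct)
        if p in (0.0, 1.0):                # no variance: pb(r) is undefined
            results.append((round(p, 2), None))
            continue
        # Mean total score of the students who answered this item correctly
        mean_correct = mean(t for t, c in zip(totals, col) if c)
        pbr = (mean_correct - mu) / sigma * (p / (1 - p)) ** 0.5
        results.append((round(p, 2), round(pbr, 2)))
    return results
```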

On closer examination, 1% of the students did not answer question 2, and some of
the better students answered question 18 wrongly. Further investigation revealed that
most of the students, including the better ones, chose the distractor with the most stringent
qualifications (c) rather than the correct answer (b), which has fewer qualifications attached.
This observation is confirmed by the two positive z-scores for options (a) and (b) shown below.

18. To ensure that a technician remains reasonably current on the aircraft types to which they hold
authorisation, they are required to have _____.
(a) appropriate continuation or refresher training
(b) been involved in at least 6 months of actual aircraft maintenance experience in any 2 year period
(c) been involved in at least 6 months of actual aircraft maintenance experience in any 2 year period
and have appropriate continuation or refresher training

18 (c20)
option wt. n p pb(r) b(r) avg. z
a 0.00 19 0.08 0.04 0.07 11.21 0.13
b 1.00 18 0.08 -0.04 -0.08 11.39 0.19 two positive z-scores
c 0.00 198 0.84 -0.07 -0.10 10.77 -0.03

Figure 2 shows a scatter plot of discrimination against difficulty. From the plot, it can be seen
that 15 out of 20 questions were correctly answered by more than 50% of the students, while
the remaining 5 questions were answered correctly by 40% or fewer of the students.
The discrimination index or point-biserial correlation shows that questions 10 and 19
have high correlation, pb(r) > 0.3, while questions 4, 8 and 18 have poor (negative) correlation.
By removing questions 4, 8 and 18, the subtest reliability or alpha figures will improve. For
example, removing question 8 will improve the subtest reliability by 0.021, increasing it
from 0.479 to 0.500 (see alpha figures in Appendix C1).

Figure 2. Lertap5 scatter plot of item discrimination against difficulty for "AY2014_S1 MCQ"
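Coefficient alpha and the 'alpha without item' column in Appendix C1 can likewise be
recomputed from the same 0/1 matrix. A minimal sketch under the same assumptions as
before (`item_matrix` remains an illustrative name):

```python
from statistics import pvariance

def cronbach_alpha(item_matrix):
    """Coefficient alpha for a students x items matrix of 0/1 scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(item_matrix[0])
    item_vars = sum(pvariance([row[i] for row in item_matrix]) for i in range(k))
    total_var = pvariance([sum(row) for row in item_matrix])
    return k / (k - 1) * (1 - item_vars / total_var)

def alpha_without_each_item(item_matrix):
    """Alpha recomputed with each item dropped in turn, as in Appendix C1.
    Items whose removal raises alpha (e.g. 4, 8 and 18 here) weaken the test."""
    k = len(item_matrix[0])
    return {
        item + 1: round(cronbach_alpha(              # keys are 1-based item numbers
            [[v for i, v in enumerate(row) if i != item] for row in item_matrix]), 3)
        for item in range(k)
    }
```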

U-L analysis
In the U-L analysis, the distractors for all the questions performed their task; that is, each
distractor was chosen by one or more students.
In the U-L discrimination analysis, it can be seen that most of the questions have an index of
0.1 or higher, except question 18 (0.06). According to Hopkins (1998), an index of 0.1
provides fair discrimination, while any index higher than 0.4 provides excellent
discrimination.
In the U-L difficulty analysis, it can be seen that most of the questions have an index of less
than 0.77, except questions 8 (0.80) and 15 (0.94). According to Mehrens and Lehmann (1991),
a three-option multiple-choice item on a maximally discriminating test should have a difficulty
value of about 0.77. Judged against this recommendation, the multiple-choice component of
the semester test was moderately difficult.
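The U-L indices in Appendix C2 can be approximated as follows: rank students by total
score, split them into groups (five groups of 47 here), and compare the top and bottom groups.
A sketch, with the grouping and tie-breaking rules as assumptions rather than Lertap's exact
algorithm:

```python
def upper_lower_stats(item_matrix, n_groups=5):
    """U-L difficulty = average of the upper- and lower-group proportions
    correct; U-L discrimination = upper proportion minus lower proportion
    (e.g. item 1: (0.85 + 0.34) / 2 = 0.60 and 0.85 - 0.34 = 0.51)."""
    ranked = sorted(item_matrix, key=sum, reverse=True)
    size = len(ranked) // n_groups            # 235 // 5 = 47 students per group
    upper, lower = ranked[:size], ranked[-size:]
    results = []
    for i in range(len(item_matrix[0])):
        p_upper = sum(row[i] for row in upper) / size
        p_lower = sum(row[i] for row in lower) / size
        results.append((round((p_upper + p_lower) / 2, 2),
                        round(p_upper - p_lower, 2)))
    return results
```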

Mastery Analysis (set at 75%)


A mastery analysis was also conducted with a proficiency rate set at 75%. The Civil Aviation
Authority of Singapore uses this value as a pass mark for candidates taking the Basic Papers,
in preparation for the qualification towards an Aircraft Maintenance Engineer Licence.
Likewise, in the mastery discrimination or ‘B discrimination’ analysis, it can be seen that
most of the questions have an index of 0.1 or higher, except questions 15 (0.07) and 18 (-0.03).
According to Hopkins (1998), an index of 0.1 provides fair discrimination, while any index
higher than 0.4 provides excellent discrimination.
In the mastery difficulty analysis, it can be seen that most of the questions have an index of
less than 0.77, except questions 10 (0.79) and 15 (0.94). Judged against Mehrens and
Lehmann’s (1991) recommended value of about 0.77, the multiple-choice test again appears
moderately difficult.
The index of dependability for mastery is 0.806, which is much higher than the earlier
alpha figure of 0.48.
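The B-discrimination figures in Appendix C3 follow the same logic with a criterion-referenced
split: students at or above the 75% cutoff are 'masters' and the rest are 'others'. A minimal
sketch, again treating the exact cutoff handling as an assumption:

```python
def mastery_stats(item_matrix, cutoff=0.75):
    """Mastery analysis at a proportional cutoff (75% here, the CAAS pass mark).
    For each item: difficulty = overall proportion correct, and
    B discrimination = masters' proportion correct minus others' proportion
    (e.g. item 18: 0.05 - 0.08 = -0.03). The index of dependability needs the
    variance components shown in Appendix C3 and is not reproduced here."""
    threshold = cutoff * len(item_matrix[0])   # 0.75 * 20 items = 15 marks
    masters = [row for row in item_matrix if sum(row) >= threshold]
    others = [row for row in item_matrix if sum(row) < threshold]
    results = []
    for i in range(len(item_matrix[0])):
        p_all = sum(row[i] for row in item_matrix) / len(item_matrix)
        p_masters = sum(row[i] for row in masters) / len(masters)
        p_others = sum(row[i] for row in others) / len(others)
        results.append((round(p_all, 2), round(p_masters - p_others, 2)))
    return results
```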
In concluding the LERTAP analysis, it seems that the multiple-choice questions were
moderately difficult for most students, and most of the questions had fair discrimination,
pb(r) > 0.1. Four questions, in particular 8, 10, 15 and 18, require adjustment to improve the
test’s reliability. To help students do well in the multiple-choice questions, more opportunities
and practice will be given to them. Students should also be reminded to allocate more time for
the multiple-choice questions.
In the final part of this section, I have evaluated all the assessment components for this
module against the set of criteria for Assessment for Learning and append the findings in
Appendix D. The case studies and assignment are well developed and satisfy most of the
principles of quality assessment. The discussion board and blog were recently introduced and,
therefore, require more time to develop.

Conclusion
This course on assessment and evaluation in education has given me a greater insight into the
importance of assessment in learning and teaching. Assessment is a powerful force that drives
students’ learning and motivates them to take charge of their own learning. As educators we
must harness and multiply the effects of good assessment. But, at the same time, we must
always remember to align assessment with the desired outcomes of education and good
teaching practices in order to achieve maximum impact (Heng, 2014). As educators we have a
common purpose “to awake the natural curiosity of young minds” (Anatole France, 1844-
1924) and “to teach one to think intensively and to think critically … plus character” (Martin
Luther King Jr., 1929-1968). Assessment is a tool that we can use to excite young minds.

References
Anderson, LW, & Krathwohl, DR (eds) 2001, A taxonomy for learning, teaching and
assessing: a revision of Bloom’s Taxonomy of educational objectives, New York:
Longman.
Biggs, J, & Tang, C 2011, Teaching for Quality Learning at University, 4th edn, McGraw Hill.
Brown, S 2004, ‘Assessment for learning’, Learning and Teaching in Higher Education, iss.
1, pp. 81-89.
Brown, S, & Race, P 2013, ‘Using effective assessment to promote learning’ in Hunt, L, &
Chalmers, D (eds), University teaching in focus: a learning-centred approach, New York:
Routledge, pp. 74-91.
Cuban, L 1990, ‘Reforming again, again, and again’, Educational Researcher, vol. 19, no. 1,
pp. 3-13.
Earl, L 2003, Assessment as learning: using classroom assessment to maximise student
learning, Thousand Oaks: Corwin.
Earl, L, & Katz, S (eds) 2006, Rethinking classroom assessment with purpose in mind:
assessment for learning, assessment as learning, assessment of learning, Canada: Manitoba
Education, Citizenship and Youth.
Gibbs, G, & Simpson, C 2004, ‘Conditions under which assessment supports students’
learning’, Learning and Teaching in Higher Education, iss. 1, pp. 3-31.
Heng, SK (Minister for Education) 2014, 40th International Association for Educational
Assessment (IAEA), media release, Grand Copthorne Waterfront Hotel, Singapore, 26
May.
Leathwood, C 2005, ‘Assessment policy and practice in higher education: purpose, standards
and equity’, Assessment & Evaluation in Higher Education, vol. 30, no. 3, pp. 307-324.
Ministry of Education (MOE) 2011, Ministry of Education position paper: assessment.
Learning Media, New Zealand.
Ramsden, P 1992, Learning to teach in higher education, London: Routledge.
Stefani, L 2004, ‘Assessment of Student Learning: promoting a scholarly approach’, Learning
& Teaching in Higher Education, iss. 1, pp. 51-66.


APPENDIX A

Table A. Comparison of Intended Learning Outcomes

Old:
1. Understand the need for Human Factors & Error Management.
1.1 Discuss the need to take human factors into account.
1.2 Discuss incidents and accidents attributable to human factors/human errors.
1.3 Discuss Murphy’s Law.
2. Understand the physical and mental human performance characteristics.
2.1 Describe performance-shaping factors.
2.2 Describe the human vision and hearing systems.
2.3 Describe the human information-processing model.
2.4 Discuss the differences between attention and perception.
2.5 Discuss the types of memory and situation awareness.
2.6 Discuss the fear of working in restricted areas and the fear of heights.
3. Understand the social working environment.
3.1 Discuss individual and group responsibility.
3.2 Discuss motivation and the lack of motivation.
3.3 Discuss the effects of peer pressure.
3.4 Discuss the effects of social and organisational culture.
3.5 Discuss the factors and importance for good team working.
3.6 Discuss how managers and supervisors can play a key role in ensuring that work is carried out safely.

Proposed change:
1. Introduction to Human Factors.
1.1 Explain the importance and need for human factors in ensuring aviation safety.
1.2 Cite and show how human factors had caused some serious incidents and catastrophic accidents to happen.
1.3 Illustrate how Murphy’s Law can be used as a defensive strategy against human errors.
2. Human Performance & Limitations.
2.1 Classify and explain the different types of performance-shaping factors.
2.2 Explain the functions and importance of vision and hearing in aircraft maintenance and identify the factors that may degrade their effectiveness.
2.3 Sketch the information-processing model and explain how each component is critical in the decision-making process.
2.4 Define acrophobia and claustrophobia, and discuss ways to identify the symptoms and manage them.
3. Social Psychology.
3.1 Describe the roles and responsibilities of individuals and groups in an aircraft maintenance organisation.
3.2 Identify and explain the four common phenomena seen in groups and suggest ways to manage them.
3.3 Discuss the different motivation theories and evaluate how they can be applied to enhance teamwork and performance.
3.4 Analyse the influence of peer pressure and suggest ways to manage it.
3.5 Examine the influence of organisational culture and explain how a safety culture can be developed and nurtured.
3.6 Explain the process of team development and discuss the characteristics of an effective team.
3.7 Highlight the crucial roles and responsibilities that managers and supervisors play in leading the team and ensuring work is carried out safely and efficiently.

APPENDIX B

B1. Essay Questions for AY2013/14_S2 (past)


B1. (a) The essence of workload management, in the field of aircraft maintenance, includes
numerous criteria. List these criteria. (5 marks)

(b) When can one expect an occurrence of an overload situation and what can be done
to relieve an impending overload? (5 marks)

B2. Individualism encourages independence. In contrast to individualism, the emphasis of a
team is on cohesiveness. State the concept of a team. (10 marks)

B3. (a) What is the aim of error management? (2 marks)


(b) State the preventive measures prescribed for error management. (8 marks)

B4. In aircraft maintenance, airplane systems may be termed simple or complex.


(a) Give a brief description of a simple system. (2 marks)
(b) List the criteria that constitute a complex system. (8 marks)

B5. There is a tendency, for highly motivated and de-motivated people, to exhibit a pattern
of characteristics. Describe these characteristics. (10 marks)

B2. Essay Questions for AY2013/14_S1 (revised)
B1. Reason (1997) suggests that omissions, or forgetting, accounts for 56% of maintenance
errors. Using the information processing model, explain how humans process and
retain information and discuss the probable reasons why people are prone to
forgetting. What strategies would you offer the engineers to help them remember and
carry out their tasks safely and effectively? [10 marks]

B2. The following passage is an extract of the Aircraft Accident Report (NTSB/AAR-
89/03) into the explosive decompression of Aloha Airlines Flight 243 near Maui,
Hawaii on 28 April 1988.

There are several possibilities why the inspectors, when complying with the AD, failed
to find the detectable crack in the S-4R lap joint on N73711, even though the area
reportedly was given an eddy current inspection and two inspectors performed
independent visual inspections. A person can be motivated to do a critical task very
well; but when asked to perform that same task repeatedly, factors such as expectation
of results, boredom, task length, isolation during the inspection task, and the
environmental conditions all tend to influence performance reliability. Airline
maintenance is most often performed at night and during the early morning hours; the
time of day that has been documented to cause adverse human performance. Aloha
Airlines training records revealed that little formal training was provided in NDI
techniques and methods. The inspector who found the S-4R lap joint cracks requiring
repair stated that only on-the-job training (OJT) had been provided since he became an
inspector in August 1987; his training records show formal NDI training on September
17, 1987, when a 2-hour training session was given by a Boeing representative.
Records [also] indicate the [other] inspector who provided the initial OJT had only 2
hours of formal NDI training, during the same 2-hour training session on September
17, 1987, provided by Boeing.

From the above passage, show where the Dirty Dozen played a role in the accident and
define the meaning of these factors. Suggest appropriate safety nets to rectify the
impact of these undesirable influences and provide your reasons for them. [10 marks]

B3. (a) Hawkins (1972) proposed the SHEL Model to explain the importance and study of
human factors. Briefly describe the purpose of studying human factors and provide
examples where the L-E interface is critical in aircraft maintenance and inspection tasks.
Explain how these L-E interfaces can be enhanced in a practical and reasonable
manner. [6 marks]

(b) Explain the term ‘troubled’ employees and what you can do as a supervisor to help
them perform their tasks safely and efficiently. [4 marks]

C1. In the case study, “Careless, Stupid, Dumb and Lazy”, Johnny Workhard, a diligent,
helpful and dependable worker damaged an aircraft while driving a tow tug under an
aircraft. The manager concluded that Johnny Workhard was a careless, stupid, dumb
and lazy person and decided to fire him to set an example for others to be more
vigilant in their work.

Based on the circumstantial evidence, would you agree that Johnny Workhard is a
careless, stupid, dumb and lazy person who deserves to be fired? What sort of safety
culture does Low Cost Airlines exhibit? Discuss the strategies and policies that you
would adopt to motivate the employees in developing a generative-type safety culture at
Low Cost Airlines and make it the best company that people would like to work for.
Clearly state the underpinning motivation theories that you use, if any. [20 marks]

APPENDIX C

C1. LERTAP Results (Full)

Lertap5 full item stats for "AY2014_S1 MCQ", created: 6/23/2014

Summary statistics
number of scores (n): 235
lowest score found: 3.00 (15.0%)
highest score found: 19.00 (95.0%)
median: 11.00 (55.0%)
mean (or average): 10.86 (54.3%)
standard deviation: 2.76 (13.8%)
standard deviation (as a sample): 2.76 (13.8%)
variance (sample): 7.63
number of subtest items: 20
minimum possible score: 0.00
maximum possible score: 20.00

reliability (coefficient alpha): 0.48


index of reliability: 0.69
standard error of measurement: 1.99 (9.9%)

item difficulty bands
.00: 18
.10: 12
.20: 20
.30: 9 17
.50: 2 3 6 7 16 19
.60: 1 11 13 14
.70: 4 5 8 10
.90: 15

item discrimination bands
.00: 3 4 8 13 14 18
.10: 5 6 9 11 12 15 17 20
.20: 1 2 7 16
.30: 10 19

alpha figures (alpha = .4792)

item  alpha without  change
1 0.453 -0.026
2 0.449 -0.030
3 0.485 0.006
4 0.502 0.022
5 0.462 -0.017
6 0.458 -0.021
7 0.443 -0.036
8 0.500 0.021
9 0.466 -0.013
10 0.430 -0.049
11 0.473 -0.006
12 0.461 -0.018
13 0.478 -0.002
14 0.488 0.009
15 0.468 -0.011
16 0.445 -0.035
17 0.468 -0.012
18 0.490 0.010
19 0.430 -0.049
20 0.465 -0.014

1 (c3)
option wt. n p pb(r) b(r) avg. z
a 0.00 91 0.39 -0.37 -0.47 9.57 -0.47
b 1.00 141 0.60 0.21 0.26 11.70 0.30
c 0.00 3 0.01 -0.02 -0.07 10.33 -0.19
2 (c4)
option wt. n p pb(r) b(r) avg. z
a 0.00 47 0.20 -0.13 -0.19 10.13 -0.26
b 0.00 57 0.24 -0.34 -0.47 9.19 -0.60
c 1.00 129 0.55 0.22 0.28 11.83 0.35
other 0.00 2 0.01 0.06 0.22 12.50 0.60
3 (c5)
option wt. n p pb(r) b(r) avg. z
a 0.00 67 0.29 -0.22 -0.30 9.88 -0.35
b 1.00 128 0.54 0.05 0.07 11.45 0.21
c 0.00 40 0.17 -0.04 -0.06 10.60 -0.09
4 (c6)
option wt. n p pb(r) b(r) avg. z
a 0.00 57 0.24 -0.13 -0.17 10.25 -0.22
b 0.00 11 0.05 -0.01 -0.02 10.73 -0.05
c 1.00 167 0.71 -0.04 -0.06 11.07 0.08
5 (c7)
option wt. n p pb(r) b(r) avg. z
a 0.00 22 0.09 -0.26 -0.45 8.64 -0.81
b 1.00 163 0.69 0.17 0.22 11.45 0.22
c 0.00 50 0.21 -0.18 -0.26 9.88 -0.35
6 (c8)
option wt. n p pb(r) b(r) avg. z
a 1.00 134 0.57 0.18 0.23 11.70 0.31
b 0.00 23 0.10 -0.15 -0.27 9.57 -0.47
c 0.00 78 0.33 -0.27 -0.36 9.78 -0.39
7 (c9)
option wt. n p pb(r) b(r) avg. z
a 1.00 120 0.51 0.25 0.31 11.98 0.41
b 0.00 21 0.09 -0.08 -0.14 10.14 -0.26
c 0.00 94 0.40 -0.38 -0.48 9.59 -0.46
8 (c10)
option wt. n p pb(r) b(r) avg. z
a 0.00 64 0.27 -0.10 -0.13 10.41 -0.16
b 0.00 4 0.02 -0.12 -0.38 8.25 -0.95
c 1.00 167 0.71 -0.03 -0.04 11.09 0.09
9 (c11)
option wt. n p pb(r) b(r) avg. z
a 1.00 89 0.38 0.14 0.18 11.97 0.40
b 0.00 107 0.46 -0.24 -0.30 10.13 -0.26
c 0.00 39 0.17 -0.09 -0.13 10.31 -0.20
10 (c12)
option wt. n p pb(r) b(r) avg. z
a 0.00 22 0.09 -0.29 -0.51 8.36 -0.90
b 0.00 28 0.12 -0.33 -0.54 8.36 -0.91
c 1.00 185 0.79 0.34 0.48 11.53 0.24

11 (c13)
option wt. n p pb(r) b(r) avg. z
a 0.00 59 0.25 -0.14 -0.19 10.19 -0.24
b 1.00 143 0.61 0.11 0.14 11.48 0.23
c 0.00 33 0.14 -0.22 -0.35 9.33 -0.55
12 (c14)
option wt. n p pb(r) b(r) avg. z
a 0.00 62 0.26 -0.23 -0.31 9.81 -0.38
b 1.00 41 0.17 0.18 0.26 12.71 0.67
c 0.00 132 0.56 -0.03 -0.04 10.77 -0.03
13 (c15)
option wt. n p pb(r) b(r) avg. z
a 0.00 36 0.15 -0.15 -0.23 9.89 -0.35
b 0.00 41 0.17 -0.18 -0.26 9.80 -0.38
c 1.00 158 0.67 0.09 0.11 11.35 0.18
14 (c16)
option wt. n p pb(r) b(r) avg. z
a 1.00 149 0.63 0.04 0.05 11.30 0.16
b 0.00 30 0.13 -0.12 -0.19 10.00 -0.31
c 0.00 56 0.24 -0.15 -0.20 10.13 -0.26
15 (c17)
option wt. n p pb(r) b(r) avg. z
a 1.00 220 0.94 0.16 0.31 11.03 0.06
b 0.00 8 0.03 -0.25 -0.59 7.25 -1.31
c 0.00 7 0.03 -0.09 -0.23 9.43 -0.52
16 (c18)
option wt. n p pb(r) b(r) avg. z
a 0.00 99 0.42 -0.39 -0.49 9.61 -0.45
b 1.00 130 0.55 0.24 0.31 11.87 0.37
c 0.00 6 0.03 -0.08 -0.21 9.50 -0.49
17 (c19)
option wt. n p pb(r) b(r) avg. z
a 0.00 76 0.32 -0.19 -0.25 10.08 -0.28
b 0.00 70 0.30 -0.13 -0.17 10.31 -0.20
c 1.00 89 0.38 0.14 0.18 11.94 0.39
18 (c20)
option wt. n p pb(r) b(r) avg. z
a 0.00 19 0.08 0.04 0.07 11.21 0.13
b 1.00 18 0.08 -0.04 -0.08 11.39 0.19 two positive z-scores
c 0.00 198 0.84 -0.07 -0.10 10.77 -0.03
19 (c21)
option wt. n p pb(r) b(r) avg. z
a 0.00 68 0.29 -0.33 -0.44 9.41 -0.52
b 0.00 44 0.19 -0.21 -0.30 9.66 -0.43
c 1.00 123 0.52 0.31 0.38 12.08 0.44
20 (c22)
option wt. n p pb(r) b(r) avg. z
a 0.00 28 0.12 -0.04 -0.07 10.54 -0.12
b 0.00 150 0.64 -0.24 -0.31 10.36 -0.18
c 1.00 57 0.24 0.15 0.21 12.32 0.53

C2. LERTAP Results (Upper/Lower)

Lertap5 U-L stats for "AY2014_S1 MCQ", created: 6/23/2014.

Summary group statistics


n avg. avg% s.d. min. mdn. max.
upper 47 14.7 74% 1.3 13 14 19
2nd 47 12.4 62% 0.5 12 12 13
3rd 47 10.7 53% 0.5 10 11 11
4th 47 9.5 47% 0.5 9 9 10
lower 47 7.0 35% 1.3 3 8 8
everyone 235 10.9 54% 2.8 3 11 19
This was an upper-lower analysis with more than two groups.

Res = a b c other U-L diff. U-L disc.


1 upper 0.13 0.85 0.02 0.00 0.60 0.51
2nd 0.26 0.74 0.00 0.00
3rd 0.38 0.62 0.00 0.00
4th 0.53 0.45 0.02 0.00
lower 0.64 0.34 0.02 0.00
2 upper 0.06 0.06 0.85 0.02 0.57 0.55
2nd 0.23 0.11 0.64 0.02
3rd 0.19 0.28 0.53 0.00
4th 0.26 0.32 0.43 0.00
lower 0.26 0.45 0.30 0.00
3 upper 0.11 0.77 0.13 0.00 0.60 0.34
2nd 0.26 0.55 0.19 0.00
3rd 0.36 0.49 0.15 0.00
4th 0.28 0.49 0.23 0.00
lower 0.43 0.43 0.15 0.00
4 upper 0.19 0.06 0.74 0.00 0.69 0.11
2nd 0.13 0.04 0.83 0.00
3rd 0.32 0.02 0.66 0.00
4th 0.30 0.02 0.68 0.00
lower 0.28 0.09 0.64 0.00
5 upper 0.02 0.83 0.15 0.00 0.63 0.40
2nd 0.00 0.83 0.17 0.00
3rd 0.13 0.72 0.15 0.00
4th 0.11 0.66 0.23 0.00
lower 0.21 0.43 0.36 0.00
6 upper 0.79 0.06 0.15 0.00 0.53 0.51
2nd 0.64 0.04 0.32 0.00
3rd 0.62 0.17 0.21 0.00
4th 0.53 0.04 0.43 0.00
lower 0.28 0.17 0.55 0.00
7 upper 0.81 0.06 0.13 0.00 0.52 0.57
2nd 0.68 0.04 0.28 0.00
3rd 0.49 0.06 0.45 0.00
4th 0.34 0.17 0.49 0.00
lower 0.23 0.11 0.66 0.00
8 upper 0.15 0.00 0.85 0.00 0.80 0.11
2nd 0.30 0.00 0.70 0.00
3rd 0.36 0.00 0.64 0.00
4th 0.32 0.06 0.62 0.00
lower 0.23 0.02 0.74 0.00
9 upper 0.62 0.26 0.13 0.00 0.40 0.43
2nd 0.40 0.47 0.13 0.00
3rd 0.38 0.38 0.23 0.00
4th 0.30 0.53 0.17 0.00
lower 0.19 0.64 0.17 0.00
10 upper 0.00 0.00 1.00 0.00 0.72 0.55
2nd 0.04 0.06 0.89 0.00
3rd 0.09 0.04 0.87 0.00
4th 0.11 0.17 0.72 0.00
lower 0.23 0.32 0.45 0.00
11 upper 0.15 0.77 0.09 0.00 0.55 0.43
2nd 0.26 0.68 0.06 0.00
3rd 0.23 0.66 0.11 0.00
4th 0.26 0.60 0.15 0.00
lower 0.36 0.34 0.30 0.00
12 upper 0.13 0.34 0.53 0.00 0.18 0.32
2nd 0.19 0.28 0.53 0.00
3rd 0.21 0.13 0.66 0.00
4th 0.38 0.11 0.51 0.00
lower 0.40 0.02 0.57 0.00
13 upper 0.09 0.06 0.85 0.00 0.69 0.32
2nd 0.15 0.13 0.72 0.00
3rd 0.17 0.23 0.60 0.00
4th 0.11 0.23 0.66 0.00
lower 0.26 0.21 0.53 0.00
14 upper 0.79 0.09 0.13 0.00 0.63 0.32
2nd 0.66 0.11 0.23 0.00
3rd 0.66 0.11 0.23 0.00
4th 0.60 0.15 0.26 0.00
lower 0.47 0.19 0.34 0.00
15 upper 1.00 0.00 0.00 0.00 0.94 0.13
2nd 1.00 0.00 0.00 0.00
3rd 0.87 0.04 0.09 0.00
4th 0.94 0.02 0.04 0.00
lower 0.87 0.11 0.02 0.00
16 upper 0.21 0.77 0.02 0.00 0.48 0.57
2nd 0.23 0.77 0.00 0.00
3rd 0.36 0.62 0.02 0.00
4th 0.53 0.43 0.04 0.00
lower 0.77 0.19 0.04 0.00
17 upper 0.17 0.23 0.60 0.00 0.39 0.40
2nd 0.32 0.26 0.43 0.00
3rd 0.40 0.28 0.32 0.00
4th 0.26 0.38 0.36 0.00
lower 0.47 0.34 0.19 0.00
18 upper 0.04 0.09 0.87 0.00 0.05 0.06
2nd 0.13 0.09 0.79 0.00
3rd 0.17 0.06 0.77 0.00
4th 0.00 0.13 0.87 0.00
lower 0.06 0.02 0.91 0.00
19 upper 0.11 0.09 0.81 0.00 0.51 0.60
2nd 0.09 0.15 0.77 0.00
3rd 0.34 0.17 0.49 0.00
4th 0.43 0.23 0.34 0.00
lower 0.49 0.30 0.21 0.00
20 upper 0.06 0.34 0.60 0.00 0.36 0.47
2nd 0.17 0.70 0.13 0.00
3rd 0.11 0.64 0.26 0.00
4th 0.13 0.77 0.11 0.00
lower 0.13 0.74 0.13 0.00

C3. LERTAP Results (Mastery@75)

Lertap5 U-L stats for "AY2014_S1 MCQ", created: 6/13/2014.

Summary group statistics


n avg. avg% s.d. min. mdn. max.
masters 22 15.7 78% 1.2 15 15 19
others 213 10.4 52% 2.4 3 10 14
everyone 235 10.9 54% 2.8 3 11 19
This was an upper-lower analysis based on a mastery cutoff percentage of 75.
Variance components
df SS MS
Persons 234 90.15 0.39
Items 19 192.20 10.12
Error 4446 884.05 0.20
Index of dependability: 0.806
Estimated error variance: 0.012
For 68% conf. intrvl. use: 0.110
Prop. consistent placings: 0.869
Prop. beyond chance: 0.234

Res = a b c other U-L diff. B disc.


1 masters 0.05 0.91 0.05 0.00 0.60 0.34
others 0.42 0.57 0.01 0.00
2 masters 0.00 0.00 1.00 0.00 0.54 0.50
others 0.22 0.27 0.50 0.01
3 masters 0.00 0.82 0.18 0.00 0.55 0.30
others 0.31 0.52 0.16 0.00
4 masters 0.09 0.05 0.86 0.00 0.70 0.18
others 0.26 0.06 0.69 0.00
5 masters 0.00 0.91 0.09 0.00 0.69 0.24
others 0.10 0.67 0.23 0.00
6 masters 0.86 0.05 0.09 0.00 0.57 0.32
others 0.54 0.10 0.36 0.00
7 masters 0.86 0.09 0.05 0.00 0.51 0.39
others 0.47 0.09 0.44 0.00
8 masters 0.09 0.00 0.91 0.00 0.71 0.22
others 0.29 0.02 0.69 0.00
9 masters 0.64 0.27 0.09 0.00 0.38 0.28
others 0.35 0.47 0.17 0.00
10 masters 0.00 0.00 1.00 0.00 0.79 0.23
others 0.10 0.13 0.77 0.00
11 masters 0.09 0.86 0.05 0.00 0.61 0.28
others 0.27 0.58 0.15 0.00
12 masters 0.09 0.36 0.55 0.00 0.17 0.21
others 0.28 0.15 0.56 0.00
13 masters 0.05 0.05 0.91 0.00 0.67 0.26
others 0.16 0.19 0.65 0.00
14 masters 0.82 0.05 0.14 0.00 0.63 0.20
others 0.62 0.14 0.25 0.00
15 masters 1.00 0.00 0.00 0.00 0.94 0.07
others 0.93 0.04 0.03 0.00
16 masters 0.14 0.86 0.00 0.00 0.55 0.34
others 0.45 0.52 0.03 0.00
17 masters 0.14 0.14 0.73 0.00 0.38 0.38
others 0.34 0.31 0.34 0.00
18 masters 0.00 0.05 0.95 0.00 0.07 - 0.03
others 0.09 0.08 0.84 0.00
19 masters 0.18 0.05 0.77 0.00 0.53 0.27
others 0.30 0.20 0.50 0.00
20 masters 0.09 0.36 0.55 0.00 0.26 0.32
others 0.11 0.66 0.23 0.00

APPENDIX D

Assessment for Learning Checklist

Rating scale: 0 – no evidence, 1 – needs improving, 2 – fair evidence, 3 – good evidence,
4 – strong evidence.
Each checklist item below is scored against the Discussion Board, Case Studies, Assignment,
Tests and Blog components, together with an Overall rating (six score columns).

Checklist for Assessment for Learning
1. Authentic assessment.
1.1. Enhancing students’ perception of the meaning and
relevance of assessment. 4 2 3 4 4 3
1.1.1. Promote deep learning approaches. 4 2 3 4 4 3
1.1.2. Position students as active learners. 4 - 3 4 4 3
1.1.3. Promote hands-on experience of study.
1.2. Drawing and linking to the real world. 4 3 3 4 4 4
1.2.1. Linking learning to practice. 4 3 3 4 4 4
1.2.2. Addressing real world issues in the workplace. 4 3 3 4 4 4
1.2.3. Working on authentic cases.
1.3. Developing a sense of personal engagement. 4 2 3 4 4 3
1.3.1. Harnessing learners’ everyday experience. 4 2 2 4 4 3
1.3.2. Bring a ‘live’ feel to assessment. 4 2 4 4 4 3
1.3.3. Encourage students to focus/reflect on and
evaluate their learning.
2. Balancing summative and formative assessment
2.1. Developing the formative potential of summative
assessment. - - - 3 3 3
2.1.1. Enhancing learning through summative - - - 3 3 3
assessment tasks.
2.1.2. Learning from feedback associated with 2 2 - 3 3 3
summative assessment. - - - 3 3 3
2.2. Learning through marks and grades. - 2 1 3 3 2
2.2.1. Understanding marks and grades.
2.2.2. Changing marking scales. - - - - - 3
2.2.3. Formative marking. - - - - - 3
2.3. Rebalancing time, effort and focus towards formative
assessment.
2.3.1. Making a difference to the allocation of time and
effort.
2.3.2. Shifting what is deemed as valuable and
important.
3. Creating opportunities for practice and rehearsal.
3.1. Planning and structuring courses to maximise active
learning and student involvement.
3.1.1. Learning tasks for students to undertake within - - - - - 1
lectures. 4 4 4 4 4 4
3.1.2. Designing tasks for students to undertake outside 4 2 2 4 4 3
class time.
3.1.3. Designing progressively challenging tasks which
gradually build students’ capabilities. 4 4 4 4 - 4
3.2. Creating a low-stake formative environment.
3.2.1. Designing the introductory session to reinforce 4 3 1 4 - 3
the need for active participation and practice.
3.2.2. Designing formative activities in which 4 4 1 4 - 3
provisional views and mistakes are valued.
3.2.3. Helping students see the rationale for engaging in 4 2 2 4 - 3
practice. 4 2 2 4 2 3

3.3. Creating opportunities for social interaction. 4 - - 4 - 3
3.3.1. Building in carefully designed class discussions.
3.3.2. Encouraging students to form their own study
group.
3.3.3. Encouraging students to work in groups to
collaboratively produce an agreed output.
4. Designing formal feedback to improve learning.
4.1. Enhancing tutor feedback to make it work better.
4.1.1. Providing comments on work while students are 4 3 1 4 - 3
producing it. 4 3 1 - - 3
4.1.2. Setting short tasks that allow students to gain - - - 4 4 4
formative feedback.
4.1.3. Helping students learn from feedback associated
with staged summative tasks. 4 2 1 4 - 3
4.2. Deploying others to provide feedback. 3 1 1 3 - 2
4.2.1. Using peers to provide feedback. - 2 2 - - 2
4.2.2. Peer feedback to support self-regulation.
4.2.3. Using technology to enhance feedback provision. 2 2 2 2 - 2
4.3. Focusing student attention on feedback. 2 2 2 2 - 2
4.3.1. Managing students’ expectation of feedback. 2 1 1 1 - 1
4.3.2. Preparing students for feedback.
4.3.3. Enhancing engagement with feedback.
5. Design opportunities for informal feedback.
5.1. Enhancing learning through classroom dialogue.
5.1.1. Designing appropriate tasks to enhance classroom 4 - - 4 - 4
activity and peer interaction.
5.1.2. Establishing a conducive climate for dialogue. 4 3 3 3 - 3
5.1.3. Using peer discussion to help students appreciate 4 - - 4 - 4
a variety of approaches to tackling work.
5.2. Encouraging students to work collaboratively.
5.2.1. Using summative assessment to foster - - - 4 1 2
collaborative working. 4 1 - 4 - 3
5.2.2. Preparing students for group work.
5.3. Seeding activity and dialogue beyond the classroom. 4 2 - 4 - 3
5.3.1. Stimulating students to engage in peer discussion 4 2 2 4 - 3
beyond the classroom.
5.3.2. Designing authentic experiences beyond the
formal curriculum.
6. Developing students as self-assessors and effective lifelong
learners.
6.1. Inducting students into AfL cultures and communities. 4 2 3 3 3 3
6.1.1. Promoting the importance of self-review and 4 2 3 3 3 3
reflection. 4 1 1 3 - 2
6.1.2. Evaluating what students already do and can do.
6.1.3. Encourage students to regard themselves as
partners in assessment communities. 4 1 1 3 - 2
6.2. Stimulating students’ active involvement in the 4 1 1 3 - 2
assessment process. 4 2 2 3 3 3
6.2.1. Explicitly involving students as assessors.
6.2.2. Using peer review to involve students in the 2 1 - 3 3 2
assessment process.
6.2.3. Encouraging active learning in self-review.
6.3. Developing students’ assessment literacy.

6.3.1. Enhance students’ appreciation of assessment
processes and criteria.
7. Diagnostic assessment or pre-assessment
7.1. Gauge students’ prior knowledge and misconceptions. 4 - - 2 2 2
7.2. Provide a baseline for understanding how much learning 4 - - 2 2 2
has taken place after the learning activity is completed.
