Rubrics For Engineering Education
Chan, CKY (2015). "Rubrics for Engineering Education", Engineering Education Enhancement and Research Asia (E3R Asia).
Introduction
Rubrics are scoring or grading tools used to measure students' performance and learning
across a set of criteria and objectives. There is no single, unified set of rubrics because
scoring rubrics vary across disciplines and courses. A rubric has three components, namely
(i) dimensions/criteria: the aspects of performance that will be assessed, (ii) descriptors:
the characteristics associated with each dimension, and (iii) scale/level of performance: a
rating scale that defines students' level of mastery within each criterion. Figures 1 and 2
presented below show that the scales and dimensions of rubrics can exchange positions.
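Because a rubric is essentially a small grid of criteria, levels, and descriptors, it can help to see those three components as a data structure, for instance when building the online rubric archives described later. Below is a minimal sketch in Python; the class design and the example criterion and descriptors are illustrative assumptions, not taken from any rubric in this article.

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    """One dimension of performance, with a descriptor for each scale level."""
    name: str
    descriptors: dict[str, str] = field(default_factory=dict)  # level -> descriptor

@dataclass
class Rubric:
    """A rubric: a set of criteria sharing one scale (levels of performance)."""
    title: str
    scale: list[str]            # ordered levels, e.g. worst to best
    criteria: list[Criterion]

# Illustrative example; the criterion and descriptors are invented.
presentation_rubric = Rubric(
    title="Oral presentation",
    scale=["Poor", "Average", "Proficient", "Excellent"],
    criteria=[
        Criterion(
            name="Presentation delivery",
            descriptors={
                "Poor": "Reads from notes; audience cannot follow the argument.",
                "Excellent": "Speaks fluently and engages the audience throughout.",
            },
        ),
    ],
)
```

Whether the scale runs along the rows or the columns when the rubric is printed is purely presentational, which is why the scales and dimensions can exchange positions as noted above.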
The use of rubrics helps teachers assess students' work objectively and effectively. Rubrics
can be used for both summative and formative purposes. They can (a) offer ways to define
expectations, especially in dealing with processes or abstract concepts, (b) provide a
common language to help teachers and students discuss the expected learning, (c)
increase the reliability of the assessment when multiple assessors are used, and (d) provide
feedback to students on various forms of assessment (Rogers, 2010).
There are two types of rubrics: "holistic rubrics" and "analytic rubrics".
Holistic rubrics do not list separate levels of performance for each criterion. Instead, a level
of performance is assigned by assessing performance across multiple criteria as a whole.
When using holistic rubrics, the assessor makes a judgment by forming an overall impression
of the performance and matching it to the description along the scale that fits best. Each
category on the scale describes performance on several performance criteria.
Analytic rubrics list separate levels of performance for each criterion so that assessors can
assess students' performance on each criterion individually. The scales of analytic rubrics
tend to focus on the important dimensions of the performance criteria. Analytic rubrics are
commonly used in engineering assessment (University of Michigan, n.d.).
Figure 2: Example of Analytic Rubrics (Accessed from Rogers, 2010)
Online rubric tools are convenient and easy to use. They are web-based applications that
allow users to develop custom rubrics, 4-point rubrics, or instant rubrics, or to create
preformatted rubrics from the templates offered on the website. With online rubrics,
lecturers can give feedback and even build up a database archiving rubrics for various
assessments (Assessment Resource Centre (HKU), 2009). Rubistar (n.d.) and Rubrics
Maker (n.d.) are two online tools that can be used to create and edit rubrics.
The process of developing rubrics for engineering courses can be laborious, and it involves
various steps. The following section explores the procedure for developing rubrics and
provides tips along the way.
1. Identify the purposes and aims of assessing the students: Determine whether the
assessment is for feedback, for certification, or for other purposes. See "Assessment in
Higher Education" (http://ar.cetl.hku.hk/assessment.htm) for more details.
2. Identify what you want to assess: Align it with the students' learning outcomes,
objectives, and learning activities.
3. Select the appropriate rubrics: Determine whether holistic rubrics or analytic rubrics
are more appropriate. The selection depends on the type of assessment used and
the specific results you want to provide for feedback in the outcome assessment
process.
4. Identify the performance criteria that your assessment will be graded against: For
example, a presentation rubric may cover introduction, knowledge and understanding,
presentation delivery, posture/eye contact, and time management.
5. Identify the type of scale to be used: Identifying an appropriate scale is essential, both
in terms of the number of levels and their type. For instance, a two-level 0-1 scale offers
too little differentiation to be useful, while a scale of 10 levels will probably frustrate
and exhaust the evaluator. Take care when including "0" on a numeric scale, as a student
who receives a "0" may feel that he or she has received a grade of zero. It may be more
useful to use scales with words such as "Excellent, Proficient, Average and Poor".
6. Describe the level of mastery: Write descriptive statement(s) for each level of
performance; the differences between levels should be as equal as possible. The best way
to do this is to write the worst and the best levels first, then fill in the levels in
between. In addition, the descriptions of the levels should be objective rather than
subjective. For instance, a descriptive statement like "Student's mathematical calculations
contain no errors" is better than "Student's mathematical calculations are good", because
the phrase "no errors" is quantifiable, whereas "are good" requires the evaluator to make a
judgment.
7. Test the rubrics: Conduct a trial of the scale on several samples with several faculty
members using the developed rubrics. To determine the inter-rater reliability of the rubrics,
use formal statistical tests, or at least draw up a rating matrix containing the ratings of
all raters and look for signs of reasonable consistency among them (a minimal sketch of one
such test appears after this list).
8. Put the rubrics into use: After the trial runs, the rubrics can be used in the formal
assessment process.
9. Revise the rubrics from time to time: Discuss with colleagues and students when revising
the rubrics; others' opinions can offer insights into how to improve them. It is therefore
wise to enlist the help of colleagues when developing rubrics for the assessment of a
programme. Rubrics function to promote shared expectations and grading practices, which
benefits both faculty members and students in the programme.
10. Options: It is sometimes useful to develop the rubrics together with the students, as
this helps the students understand the usefulness of rubrics and makes the assessment
procedures transparent.
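As mentioned in step 7, inter-rater reliability can be checked with a formal statistic. The sketch below is an illustration rather than part of the original article: it computes Cohen's kappa for two raters, using invented trial-run ratings on a four-level scale.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance, computed from each
    rater's marginal distribution of levels.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from the raters' marginal level frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    levels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[lv] / n) * (freq_b[lv] / n) for lv in levels)
    return (p_o - p_e) / (1 - p_e)

# Invented ratings from a trial run: two raters, ten reports,
# a four-level scale (1 = Poor ... 4 = Excellent).
rater_1 = [4, 3, 3, 2, 4, 1, 2, 3, 4, 2]
rater_2 = [4, 3, 2, 2, 4, 1, 2, 3, 3, 2]
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")  # -> 0.72
```

By the common Landis and Koch reading, kappa values between 0.61 and 0.80 indicate substantial agreement; a low kappa in a trial run signals that the descriptors need sharpening before the rubrics go into formal use.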
Tips on developing rubrics
1. Find and adapt existing rubrics: The chance of finding rubrics that match your programme
or course exactly is small. To save time, however, you can adapt existing rubrics, making
minor modifications so that they match your own assessment. Alternatively, ask colleagues
whether they have developed rubrics of their own, to gain insights for developing your own
set.
2. Evaluate the rubrics: To evaluate your rubrics critically, try answering the following
questions during the process: (a) Do the rubrics target the outcome(s) being assessed? (If
they do, you have developed successful rubrics.) (b) Do the rubrics address anything
extraneous? (If they do, delete those extraneous areas.) (c) Are the rubrics useful,
feasible, manageable, and practical? (If the answer is yes, you can find multiple ways to
use the rubrics, e.g. for grading assignments, peer review, and students' self-assessment.)
3. Gather reference samples from students that exemplify each point on the scale: Rubrics
become meaningful to a student or colleague when benchmarks, anchors, or exemplars are
available.
4. Be prepared to revise the rubrics at any time: As the developer of the rubrics, remember
to revise them on a regular basis.
5. Share your rubrics once you have developed good ones: Sharing your rubrics with
colleagues can enhance interaction across academic faculty members, and in return you might
receive constructive feedback on how to improve your rubrics.
6. Grade moderation: Share your rubrics with teachers conducting the same assessment in the
same course to prevent grade inflation or deflation, and thus help achieve consistency in
assessment.
Various practitioners have sought to develop their own rubrics to accommodate the particular
needs of their courses. In the following section, some case studies are presented on the
role of rubrics in engineering education.
Developing and Using Rubrics to Evaluate Subjective Engineering Laboratory and Design
Projects
A study conducted at Iowa State University's Faculty of Aerospace Engineering and
Engineering Mechanics (Kellog, Mann, & Dieterich, 2001) discusses the process of developing
and refining rubrics for engineering design and laboratory courses. Apart from the
development and refinement of the rubrics, the discussion also covers observations and
feedback from the faculty and teaching assistants using the rubrics, and results from
summative student survey data, including the changes implemented to address student
concerns.
Developing rubrics is never an easy task, because the process involves a great deal of trial
and error that tests the developer's patience. Besides development, refinement is also
crucial. A good indicator that rubrics need refinement is when the teacher feels that the
best pieces of work are not receiving the best grades. Although developing and refining
rubrics is laborious, the results from the study revealed that the faculty and teaching
assistants all appreciated the rubrics as a way to unify grading and to describe standards
for completing assignments. However, the summative survey results from students revealed a
mixed response to the rubrics. The seniors were positive, even asking for rubrics, whereas
the sophomores were less pleased with them.
The hypothesis for this varied response is that the seniors had developed a certain
familiarity with rubrics, so the direction the rubrics offered could easily be translated
into action. In addition, the seniors considered the rubrics invaluable because they
provided guidance on how to document their work. In contrast, the sophomores tended to
perceive the rubrics as a checklist for their laboratory report that was used to punish
them, and many of their responses were very performance-oriented. They felt that the rubrics
provided a guideline directing them how to do something, rather than examples of what they
should do. Some responses stated that they believed they had met the criteria in the
rubrics, yet they still received poor grades.
The following are summative observations from the faculty on the development and
implementation of the rubrics.
(1) The key factor in the success or failure of the rubrics used in the laboratory course
was how the teacher applied the rubrics and how well students were educated in their use.
The seniors needed far less guidance and discussion about the rubrics than the sophomores.
(2) Another key to success is students' experience with the material. Students who have no
experience of writing assignments such as technical reports should be offered materials like
sample reports or checklists alongside the rubrics, so that they understand what is expected
of the assignment.
(3) The faculty refined the rubrics as the semester progressed by changing the weighting of
the objectives, placing more emphasis on higher-level skills and the quality of content as
the students mastered the "mechanical aspects of reporting technical information" (Kellog et
al., 2001, p. 8).
(4) Collaboration among teachers in the development and implementation of the rubrics
appears to be important for standardizing grading. For the design courses, the teachers and
the teaching assistants discussed the rationale behind the objectives and criteria of the
rubrics, and the teaching assistants were given examples of reports evaluated using the
rubrics. For the laboratory courses, however, no such measure was adopted, and students
commented on the inconsistency of grading when different teachers used the rubrics.
(5) Once students become accustomed to the use of rubrics, they can provide invaluable
feedback for the refinement process. The students in each of the courses provided ample
feedback and opinions in the summative survey results, some of which facilitated the
refinement process. For instance, it was evident that the sophomore students needed extra
support and detail on the use of rubrics.
References:
Kellog, R. S., Mann, J. A., & Dieterich, A. (2001, June). Developing and using rubrics
to evaluate subjective engineering laboratory and design projects. Paper presented at
the 108th ASEE Annual Conference and Exposition, Albuquerque, New Mexico.
Developing Analytic and Holistic Rubrics to Assess Students' Knowledge Associated with the
Learning Outcomes of Scenario Assignments in Engineering
McMartin, McKenna, and Youssefi (2000) describe the use of a scenario assignment in teaching
non-freshman students in a Mechanical Engineering course at the University of California,
Berkeley. The scenario assignment is a qualitative performance assessment tool created to
assess students' knowledge of teamwork, engineering practice, and problem solving. Students
were given a scenario describing a "day in the life" problem faced by engineers and were
asked to describe the process or plan they would adopt as a team to find a solution to a
technical or design problem, instead of simply solving the problem presented in the scenario
by analyzing appropriate models, running simulations, and converging on a correct
recommendation.
Analytic and holistic rubrics were developed to assess students' knowledge with respect to
the learning outcomes associated with the scenario assignment. Initial findings suggest that
scoring the scenarios with the analytic rubrics helped the faculty identify students'
strengths and weaknesses quickly, and also assisted them in adapting the course to address
the areas where students needed attention. The holistic rubrics, for their part, could
easily be used to assess changes in students' learning and development over time and across
a curriculum. However, the holistic rubrics failed to provide definitive details about the
achievement of particular learning outcomes, e.g. the ability to solve open-ended problems
or the ability to work in an interdisciplinary team. The creators therefore took the
initiative of developing analytic rubrics to resolve the shortcomings of the holistic
rubrics. Figures 1 and 2 presented below show the rubrics the creators developed for the
scenario assignment.
Figure 1: An example of holistic rubrics for the scenario assignment (Accessed from
McMartin et al., 1999)
(a) Student recognises and determines when a problem is worth solving (develops
decision making criteria; justifies decisions.)
(b) Student defines (frames) problem accurately (analyses critical elements and scope
of problem, focuses on issues, sorts issues according to impact on problem.)
(d) Student devises process and work plan to solve problem (identifies critical tasks,
time needed, and resources; uses organisational and management tools; divides
work efficiently.)
(e) Student identifies, considers, and weighs options or consequences of plan and
design (identifies analytic strategy to weigh design consequences and solutions.)
(g) Student leads or follows when appropriate to the needs of the group (shares stage,
offers expertise/participation when and where appropriate.)
Figure 2: An example of analytic rubrics for the scenario assignment with a focus on
criteria (d) (Accessed from McMartin et al., 1999)
Criteria (d): Student devises process and work plan to solve problem
Score 1: Fails to identify the critical tasks and actions necessary to solve the problem;
fails to identify, or misidentifies, the time and resource requirements; does not employ
organisational or management tools to organise tasks and resources.
Score 2: Identifies few of the critical tasks and actions necessary to solve the problem;
identifies few of, or misidentifies, the time and resource requirements; employs few
organisational and management tools to organise tasks and resources.
Score 3: Identifies some of the critical tasks and actions necessary to solve the problem;
identifies some of the time and resource requirements; sometimes employs organisational and
management tools to logically and efficiently organise tasks and resources.
Score 4: Identifies all critical tasks and actions necessary to solve the problem;
identifies most time and resource requirements; always employs organisational and management
tools to logically and efficiently organise tasks and resources.
References:
McMartin, F., McKenna, A., & Youssefi, K. (1999, November). Establishing the
trustworthiness of scenario assignments as assessment tools for undergraduate
engineering education. Paper presented at the 29th ASEE/IEEE Frontiers in
Education Conference, San Juan, Puerto Rico.
McMartin, F., McKenna, A., & Youssefi, K. (2000). Scenario assignments as
assessment tools for undergraduate engineering education. IEEE Transactions on
Education, 43(2), 111-119.
The choice between holistic rubrics and analytic rubrics depends on a variety of factors,
such as the type of assessment, the learning outcomes, and the feedback the teacher wishes
to provide.
Holistic rubrics tend to be used when the teacher wants to make a quick or gross judgment.
For instance, if the assessment is a brief homework assignment, a holistic judgment (e.g. a
check or a cross) may be sufficient to review students' work quickly. Holistic rubrics are
also used when a single dimension is adequate to capture students' performance. They are
commonly applied to writing because it is not easy to differentiate clarity from
organization, or content from presentation; thus some educators believe a holistic
assessment can better capture students' ability on such tasks.
Analytic rubrics tend to be used when the teacher wants to assess each criterion separately,
especially for assignments that involve a large number of criteria. Analytic rubrics handle
such cases better because, as student performance varies more and more across criteria,
assigning an appropriate single holistic category to the performance becomes difficult. The
use of analytic rubrics may also be motivated by the need to see the relative strengths and
weaknesses of a student, the need to assess complicated skills or performance, the need for
detailed feedback to drive improvement, or the need to encourage students to self-assess
their understanding and performance.
In developing and using both analytic and holistic rubrics, it is important to understand
their advantages and disadvantages, especially when designing the rubrics, so that they
serve their intended purpose. The following section therefore explores some of the benefits
and drawbacks of each type of rubric.
Benefits of Holistic Rubrics
1. Holistic rubrics are normally written generically and can be used with many tasks.
2. The use of holistic rubrics saves time as it minimizes the number of decisions
required for the teacher to make.
3. Trained teachers have the tendency to apply them consistently, which results in more
reliable measurement.
4. It can be easily used to assess the changes in students’ learning and development
over time and across a curriculum.
Drawbacks of Holistic Rubrics
1. Holistic rubrics cannot provide the teacher and the student with specific feedback about
the strengths and weaknesses of the student's performance.
2. When a performance meets the criteria in two or more categories, selecting the single
category with the best description is difficult; this may indicate that the rubrics were
poorly developed.
3. Criteria within the rubrics cannot be differentially weighted.
Benefits of Analytic Rubrics
1. Analytic rubrics can provide the teacher and the student with specific feedback about the
strengths and weaknesses of the student's performance (unlike holistic rubrics).
2. The dimensions within analytic rubrics can be weighted to reflect their relative
importance (a minimal sketch of weighted scoring appears after this list).
3. When the same rubric categories are used repeatedly, analytic rubrics can show progress
over time in some or all dimensions.
4. More useful for grade moderation.
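To make the weighting in point 2 concrete, here is a minimal sketch (an illustration, not taken from the original article) of computing a weighted analytic score. It reuses the presentation criteria from step 4 of the development process; the weights and the student's ratings are invented.

```python
# Weighted analytic scoring: each criterion is rated on a 1-4 scale,
# multiplied by its weight, and the weighted ratings are summed.

# Invented weights reflecting relative importance; they sum to 1.0.
weights = {
    "introduction": 0.10,
    "knowledge and understanding": 0.40,
    "presentation delivery": 0.30,
    "posture/eye contact": 0.10,
    "time management": 0.10,
}

# Invented ratings for one student on a 1-4 scale.
ratings = {
    "introduction": 3,
    "knowledge and understanding": 4,
    "presentation delivery": 2,
    "posture/eye contact": 3,
    "time management": 4,
}

# Because the weights sum to 1.0, the total stays on the 1-4 scale.
total = sum(weights[c] * ratings[c] for c in weights)
print(f"Weighted analytic score: {total:.2f}")  # -> 3.20
```

Note how the heavy weight on knowledge and understanding pulls the total up despite the weak delivery rating; a holistic rubric, by contrast, yields a single unweighted judgment.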
Drawbacks of Analytic Rubrics
1. Compared to holistic rubrics, creating and using analytic rubrics is less time-efficient.
2. There is an increased possibility of disagreement among evaluators: it is harder to
achieve "intra-rater reliability" and "inter-rater reliability" on all of the dimensions of
an analytic rubric than on the single score yielded by a holistic rubric.