
Journal of Industrial Engineering and Management

JIEM, 2023 – 16(3): 521-534 – Online ISSN: 2013-0953 – Print ISSN: 2013-8423
https://doi.org/10.3926/jiem.5620

Effectiveness of the Use of Open-Ended Questions in Student Evaluation of Teaching in an Engineering Degree
Lourdes E. Aznar-Mas1*, Lorena Atarés-Huerta2, Juan A. Marin-Garcia3

1 DLA, Universitat Politècnica de València (Spain)
2 DTA, Universitat Politècnica de València (Spain)
3 ROGLE-DOE, Universitat Politècnica de València (Spain)

*Corresponding author: [email protected], [email protected], [email protected]

Received: January 2023
Accepted: October 2023

Abstract:
Purpose: The purpose of our research is to present the point of view of the members of the Board concerning the advantages, disadvantages and effectiveness of open-ended questions used as a complement to closed response questionnaires.
Design/methodology/approach: In this paper, we describe a pilot experience carried out at a Spanish public university, where a short questionnaire with open-ended questions was launched and students were invited to comment on their perception of the teaching received.
Findings: The response rate (about 35%) was relatively high compared to other online closed response questionnaires. Moreover, the students’ comments provided valuable information that prompted reflection among the members of the Board of the chosen Engineering Degree. Their reflection was evidence-based and led to initiatives and actions to improve the quality of teaching, as well as to a more extensive view of the Degree.
Practical implications: Findings reveal that the information retrieved can also be used in multiple ways, such as formative feedback or the improvement of courses and instruction.
Originality/value: Student evaluation of teaching is a powerful tool for continuous teaching improvement, but the information provided by conventional closed response questionnaires may not be sufficient.
Keywords: teaching evaluation, higher education, student satisfaction, teaching improvement, open-ended
questions

To cite this article:

Aznar-Mas, L.E., Atarés-Huerta, L., & Marin-Garcia, J.A. (2023). Effectiveness of the use of open-ended
questions in student evaluation of teaching in an engineering degree. Journal of Industrial Engineering and
Management, 16(3), 521-534. https://doi.org/10.3926/jiem.5620

1. Introduction
Student evaluation of teaching is the systematic process in which school leaders periodically critique teachers’ work
performance based on student feedback (Lejonberg, Elstad & Christophersen, 2018). Although initially considered
a highly controversial topic, teaching evaluation has long been seen as an integral part of good professional practice
in Higher Education institutions (Hounsell, 2003). While also serving personnel evaluation purposes, student
evaluation of teaching can potentially contribute to teachers’ professional development by providing useful
feedback (Lejonberg et al., 2018), as it highlights the strengths and areas for improvement of both the course and the instructor (Abdelhadi & Nurunnabi, 2019).
The purpose of teaching evaluation should determine the best suited feedback gathering methods, and the two
purposes cited above – personnel evaluation and teaching improvement – may require different types of data
(Wolfer & Johnson, 2003). Whereas an overall summary of teaching ability would be important for administrative
purposes, reflective instructors willing to improve their teaching would benefit most from rich information on their
areas of strength and weakness (Wolfer & Johnson, 2003). Smylie (2014) recommends that teaching evaluation systems be explicitly linked to developmental purposes in order to have beneficial effects.
Studies on teaching evaluation focus mainly on quantified methodology to assess teachers (Ernst, 2014). Student
evaluation of teaching surveys are inexpensive, easy to conduct, and less time consuming than other approaches
(Villanueva, Brown, Pitterson, Hurwitz & Sitomer, 2017). However, standard questions may be very general, hence
not revealing a precise opinion (Llorent-Bedmar & Cobano-Delgado-Palma, 2019). Therefore, with a view to improving instruction, it is important to maximise the information obtained from student feedback (Abdelhadi & Nurunnabi, 2019), which could be achieved by combining quantitative and qualitative methods. Under this combined strategy, the inside-out approach (with teachers and administrators on the inside, assuming they know what students want and expect from our institutions) would be complemented by thinking outside-in, listening to the students’ voice to understand their difficulties and address the problematic aspects of teaching performance (Tricker, Rangecroft & Long, 2005).
In this study, Action Research methodology has been used to describe an experience carried out at an Engineering
School at a public university, founded in 1968 in Spain. In 2016, an open response questionnaire was first launched
to gather information on the students’ perception of the teaching process.

2. Student Evaluation of Teaching: Literature Review


The use of evaluation in Higher Education to examine and improve teaching quality is increasing. To serve its purpose, evaluation requires an understanding of what is to be evaluated, how to evaluate it, what data should be collected in order to do so and, finally, how to implement teaching improvements based on what has been found
(Morgan, 2008). In some countries, governments use the information provided by students to assess the
performance of Higher Education institutions aiming to improve teaching quality, promote good practice and even
reward the best performing institutions (Shah & Nair, 2009).
There is no definite method to effectively evaluate teaching in Higher Education, because there are considerable
variations both across institutions and disciplines (Villanueva et al., 2017). Some practices frequently referenced are
peer review (usually consisting of in-class observation and review of materials), teacher self-evaluation, exit and
alumni evaluations, and student mid-course and end-of-course evaluation (Villanueva et al., 2017; Wolfer &
Johnson, 2003). It is generally recognized that multiple sources of evidence should be used in assessing teaching
effectiveness (National Research Council, 2003), in order to cover all dimensions of teaching and the course (Hill,
Ball & Schilling, 2008). Taking into consideration multiple sources of evidence (from students, peers or mentors,
and different collection methods), the strengths of each could compensate for the weaknesses of the rest, therefore
leading to a diagnosis about teaching effectiveness that is more accurate than those based on a single source (Berk,
2005). This synergistic effect is most evident when peer ratings are coupled with student evaluations, since these
practices cover the aspects of teaching that students are unable to evaluate (Berk, 2005). Whereas students are the
best source of feedback regarding the quality of student-teacher interactions, peers are most capable of commenting on content expertise, instructional design and assessment methods (Iqbal, 2013).

Student evaluation of teaching at the end of the course is the most common method used for evaluating teaching
and courses (Villanueva et al., 2017). Students’ feedback provides rich insight into teaching and course success since, being experienced learners, students are familiar with the elements that facilitate their learning process and help them reach academic success (Blair & Valdez-Noel, 2014). Therefore, the perception that students have of their courses and of their learning experience is a valuable source for evaluating teaching quality in universities
(Abdelhadi & Nurunnabi, 2019).
Student evaluation of teaching is aimed at measuring instructor effectiveness and the quality of instruction. Being in a position to judge particular aspects of teaching and the classroom (Villanueva et al., 2017), students can provide meaningful and useful information about their learning experience, which serves two main purposes: (1) administrative and personnel decisions and (2) the individual improvement of instructors and courses, which brings new value to the teaching community (Crumbley & Reichelt, 2009; Hujala, Knutas, Hynninen & Arminen, 2020).
Therefore, the evaluations that students provide both on courses and teachers can promote remarkable
improvement in Higher Education practice (Blair & Valdez-Noel, 2014).
Higher Education institutions use many and varied ways of assessing student satisfaction, such as an informal face-to-face chat between the tutor and the student, or more formal written questionnaires (Tricker et al.,
2005). Measures range from qualitative semi-structured measures to standardized exclusively quantitative measures
(Wolfer & Johnson, 2003). Although all the methods available are valuable tools for gathering information about the students’ learning experience (Gaba & Dash, 2004), closed-ended quantitative questionnaires are the most
frequent method for teaching evaluation and often the only one, as has been repeatedly reported in literature
(Wolfer & Johnson, 2003; Villanueva et al., 2017). This method is popular partly because the measurement process is easy and simple: students only have to fill in forms that require little class time (Hornstein, 2017; Villanueva et al., 2017). Data can be recorded automatically, and numerical results are extremely easy to compare among
teachers, departments and faculties (Llorent-Bedmar & Cobano-Delgado-Palma, 2019), as well as between the
lecturers and their department (Hornstein, 2017).
However, some disadvantages of this quantitative approach to student satisfaction have been reported. Despite the reliability of the method, even if most teachers at a university are rated as ‘excellent’, 50% of them will still fall below the median score, which leads to demotivation and loss of performance (Hornstein, 2017). For administrative decision making, it would be enough to know whether the instructor exceeds some maximum percentage of unsatisfactory ratings, defined by a minimum score below which teaching effectiveness would be considered insufficient.
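A minimal sketch of such a threshold rule is shown below; the rating scale, minimum score and tolerated percentage are invented for illustration and are not values used by any institution in the study.

```python
# Illustrative threshold rule for administrative use of ratings, as described
# above: an instructor is flagged only if the share of ratings below a minimum
# score exceeds a maximum tolerated percentage. All numbers are hypothetical.

ratings = [7, 8, 3, 9, 2, 8, 6, 4]   # individual student ratings on a 0-10 scale (invented)
MIN_SCORE = 5                        # below this, a rating counts as unsatisfactory
MAX_UNSATISFACTORY_SHARE = 0.30      # tolerated share of unsatisfactory ratings

unsatisfactory_share = sum(r < MIN_SCORE for r in ratings) / len(ratings)
flagged = unsatisfactory_share > MAX_UNSATISFACTORY_SHARE
print(f"Unsatisfactory share: {unsatisfactory_share:.0%} -> flagged: {flagged}")
```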
The apparent precision of numerical scores obtained from quantitative tools may mistakenly imply high precision
in the measurement (Wolfer & Johnson, 2003). Research supports the validity of student evaluations for making
rough distinctions among instructors (exceptional – adequate – unacceptable), but not for making finer distinctions.
As stated by McKeachie (1997), small numerical differences are unlikely to distinguish between competent and
incompetent teachers. As a particular example, Wolfer & Johnson (2003) found a very limited range of actual scores
for their quantitative test, which made it difficult to identify meaningful distinctions among instructors. Moreover,
the unidimensional nature of the instrument made it difficult to identify deficits in performance patterns, thus hindering both individual improvement and the planning of training. These authors concluded that the test failed
both for administrative decision making and teaching improvement purposes.
Averages of student ratings appear objective simply because they are numerical, but a single numerical measure
cannot capture all relevant aspects of an instructor’s teaching ability (Crumbley & Reichelt, 2009). Calculating the
means of categories leads to uninterpretable results (Hornstein, 2017). It has been argued that student evaluation
of teaching should not be used as a continuous rating scale, but rather as a discrete standard to be met
(McKeachie, 1996).
Additional concerns relate to the actual content of the questionnaire, which should only feature well-formulated items. This is crucial to properly measure student satisfaction with teacher performance (Moreno-Murcia,
Silveira & Belando, 2015). In the literature, some of the items found in university questionnaires have been
described as unsuitable, irrelevant and poorly thought out (González-López & López-Cámara, 2010; Llorent-Bedmar & Cobano-Delgado-Palma, 2019). Poorly formulated wording, items that combine more than one variable, or items so general that they may be irrelevant to a particular class deprive teachers of accurate feedback to improve their teaching. Moreover, students may get confused, as they may not know whether they are evaluating the course or the instructor (Villanueva et al., 2017). In some questionnaires, students are asked about aspects which do not depend, directly or indirectly, on the teacher’s task (Llorent-Bedmar & Cobano-Delgado-Palma, 2019). There is also concern that factors unrelated to teaching quality, such as class size, have an influence on the students’ evaluation (Villanueva et al., 2017).
Focusing on instructor behaviours instead of on the fundamental interaction with the students, some quantitative
tests may not provide the type of information needed for teaching improvement. A teaching strategy may be highly
valued by one group of students and not by others, which is why the focus should not be the teaching behaviour itself, but how it fits within a class (Wolfer & Johnson, 2003). The emphasis on tutors’ concerns limits the opportunity for
students to express their own ideas because they are not able to respond to questions which have not been asked
(Tricker et al., 2005). Some closed-ended questionnaire items may not cover issues that are really important for
students because they may reflect a teacher-centred or researchers’ preconceived framework (Grebennikov & Shah,
2013). In fact, these tools fail to cover important aspects of the teaching process which are not mentioned in the
predefined set of questions, which could be substantially explored from students’ reviews (Lin, Zhu, Zhang, Shi,
Guo & Niu, 2019). For this reason, teaching improvement would highly benefit from detailed students’ feedback
about what is working and not working, much more than it does from standardized evaluation instruments (Wolfer
& Johnson, 2003).
Quantitative and qualitative information should complement each other to cover a wide range of students’
perceptions on their learning experience (Grebennikov & Shah, 2013). Quantitative analysis can be used to test the
validity of qualitative insights while qualitative work can be used as preparation for quantitative work, to explore the
phenomenon in as much detail as possible (Douglas, Douglas, McClelland & Davies, 2015). Thus, open-ended comments are likely to point to reasons for quantitative results which may differ from those assumed by
researchers (Palermo, 2003). Douglas et al. (2015) gathered hand-written narratives of the learning experience of
350 students, who were asked to report on both good and bad experiences. The narratives provided a rich source
of data to help a Faculty identify what causes student satisfaction and dissatisfaction. This was compared with the
traditional quantitative method to gather student feedback on specific areas of teaching, and some new
determinants of quality were identified. Douglas, McClelland and Davies (2008) compared a qualitative information-gathering method with traditional quantitative surveys and found a synergy between the two, which is especially useful for a deep understanding of the students’ experience. Grebennikov and Shah (2013) reported on how a time series of qualitative data generated by student feedback surveys helped one university improve the student experience by examining what worked well and what needed readjustment.
Llorent-Bedmar and Cobano-Delgado-Palma (2019) critically analysed the student satisfaction surveys used at
Spanish public universities, and found that from a total of 711 items, only 29 were open-ended questions. These
authors advocate the inclusion of open-ended questions to allow students to express their opinions freely, which would enable the collection of more accurate and useful data. While many studies have been published on
quantitative questionnaires in the Spanish Higher Education context (Fernandez & Mateo, 1992; Segura-Egea,
Zarza‐Rebollo, Jiménez‐Sánchez, Cabanillas‐Balsera, Areal‐Quecuty & Martín-González, 2020; López-Gavira &
Omoteso, 2013; Gallifa & Batallé, 2010; Humanes-Humanes & Roses-Campos, 2014), less research has focused on
how to collect and process qualitative information. Rodriguez-Gómez, Ibarra-Sáiz, Gallego-Noche, Gómez-Ruiz
and Quesada-Serra (2012) used a survey tool to analyse assessment at universities, where some open-ended
questions, which were answered by a minority of the sample, helped analyse the quantitative data. Marin-Garcia &
Atarés-Huerta (2014) coded and summarized the information provided by first year university students on their
perceptions, to present a list of good and bad teaching practices. Mattos-Medina, Prados-Megías and Padua-Arcos
(2013) reported on the students’ perceptions in the specific context of the Physical Education Degree. Both in
Spanish universities and in a general international context, it seems obvious that the qualitative information
provided by students is being insufficiently exploited. Students’ comments are usually reliable and significant, and
provide rich data for reflective lecturers to realize what is actually happening in the classroom, how they are being perceived and understood by their students, and ultimately how effective their teaching is (Levin, 2000). Despite the
potential of qualitative data for teaching improvement, the use of comments collected through students’ surveys is
still insufficient, and the literature on their analysis is still limited (Grebennikov & Shah, 2013; Tricker et al., 2005).
This paper is based on a pilot experience carried out at an Engineering School to obtain qualitative information
from students. We aimed to research and analyse the point of view of the members of the Engineering School
Board when gathering information through open response questionnaires in order to implement actions for
continuous improvement in teaching quality. The analysis of the comments made by two members of the Board, gathered through individual interviews about the open-ended questions of an online questionnaire for student evaluation of teaching, revealed the advantages and disadvantages of open response questionnaires. Thus, our research question in this paper is: is it effective to include open-ended questions to improve the information retrieved from
conventional closed response questionnaires in student evaluation of teaching?

3. Methodology
The method chosen for our study was Action Research, in which researchers get involved in the experiment through a participatory process of improvement and look for solutions from the inside. To this end, we adapted a checklist from Marin-Garcia and Alfalla-Luque (2021) to ensure that the correct steps were followed.
There are several participants in this action research. Two teachers (Authors 1 and 2) were involved in the literature
review, acted as interviewers and performed the subsequent analysis of the data obtained from the interviews.
Author 3, the only author teaching at that Engineering School, is the Director of the chosen Degree and acted as the reviewer of the overall methodology, resolved discrepancies and was also interviewed (Person B). The other participants were a member of the Board responsible for innovation (Person A) and the students filling in the
open response questionnaire.
The first part of the questionnaire collected basic information about students, such as academic year, age and gender (no personal data were requested), and the second part allowed students to comment on the positive and negative aspects of each of their courses, on overlaps in course contents and on missing aspects. The questionnaire could be completed in a reasonably short time, as most students are reluctant to invest much time in additional activities that do not count towards their course grades, particularly outside class time. Unlike the conventional closed response questionnaires delivered by their Engineering School or the University, this open response questionnaire had to be filled in online after clicking on a link sent to students via email by the person leading the experiment.
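As a rough illustration of the two-part structure just described, the questionnaire could be represented as follows; the field names below are hypothetical and not those of the actual form hosted by the School.

```python
# Illustrative sketch of the two-part questionnaire described above.
# Field names are hypothetical placeholders, not the School's actual form.

questionnaire = {
    "part_1_student_profile": [          # no personal identifiers collected
        "academic_year",
        "age",
        "gender",
    ],
    "part_2_open_comments_per_course": [  # free-text answers per course
        "positive_aspects",
        "negative_aspects",
        "overlaps_with_other_courses",
        "missing_contents",
    ],
}
```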
Students were motivated to act as evaluators of the teaching received and of their instructors by completing an
open response questionnaire to express their own opinion. They were informed about the purpose of the new type
of questionnaire, availability of the outcomes, and possible benefits and effects on their Degree, not only for them
but also for future students. Students were told that if they did not want to say anything, they could simply log in and close the questionnaire without adding anything to it. This tool was hosted on the institutional website of the
Engineering School chosen for our research.
All data collected from the students’ comments through the online tool were processed and coded with a software package, Atlas.ti, following the Grounded Theory approach for efficient qualitative analysis of text, audio and video data. Several dimensions and subcategories emerged, and the theory derived from the students’ comments and perceptions of lived experiences provided valuable information for both the members of the Board of their Degree and university managers (Aznar-Mas, Atarés-Huerta & Marin-Garcia, 2021).
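As an illustration of the kind of aggregation such coding enables (the actual coding was done in Atlas.ti; the code labels, courses and excerpts below are invented and not taken from the study’s data), a minimal sketch might look like this:

```python
from collections import Counter
from typing import NamedTuple

class CodedExcerpt(NamedTuple):
    course: str    # course the comment refers to
    category: str  # code assigned during qualitative analysis
    text: str      # excerpt from the student's comment

# Invented examples standing in for excerpts coded in Atlas.ti.
excerpts = [
    CodedExcerpt("Course A", "positive/teaching_method", "Clear worked examples in class"),
    CodedExcerpt("Course A", "negative/workload", "Too many assignments in the same week"),
    CodedExcerpt("Course B", "negative/content_overlap", "Repeats material from Course A"),
]

# Tally how often each category appears, overall and per course, to surface
# the dimensions and subcategories that emerge most frequently.
overall = Counter(e.category for e in excerpts)
per_course = Counter((e.course, e.category) for e in excerpts)

for category, count in overall.most_common():
    print(f"{category}: {count}")
for (course, category), count in per_course.most_common():
    print(f"{course} / {category}: {count}")
```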
Aiming to explore the advantages, disadvantages and effectiveness of open response questionnaires, the point of
view of the members of the Board had to be analysed. To this end, Authors 1 and 2 interviewed two of them individually. Both interviews were held at the interviewees’ offices, in Spanish, with no time limit, to allow for as many comments and opinions as possible. The information collected was processed and coded, and
some excerpts were translated into English to support our findings.
The interview was based on the following questions:

1. Can you describe any problem or weakness you have discovered along the time in this Degree, not
detected via conventional closed response questionnaires of student evaluation of teaching?
2. Why did you decide to change the assessment tool for the evaluation of teaching?
3. Why was an open response questionnaire chosen for innovation in the evaluation of teaching?
4. Who proposed this initiative?
5. How did students first react towards this new type of questionnaire?
6. Can you describe the rate of students’ response to this new type of questionnaire?
7. How did teachers first react towards this new type of questionnaire?
8. Can you name advantages and disadvantages of these open response questionnaires?
9. Can you name improvement actions carried out in this degree, resulting from the information obtained
from the open response questionnaires of student evaluation of teaching?
10. Can you name future improvements to be implemented, resulting from the comments and opinions
provided by the students?
11. Is there anything you would like to improve or change in the new type of questionnaire (open response)?
12. Do you think this new type of questionnaire could be used in other Degrees, at other Higher Education
institutions and universities?

4. Results and Discussion


4.1. Quantitative Analysis from the Open Response Questionnaire
Concerning the response rate and time invested in the experience, we will distinguish between the students filling in
the questionnaire and the member of the Board who implemented this action for improvement and reported the
outcomes.
Research has found that response rates have dropped significantly as instruments for student evaluation of teaching are increasingly administered online (Ernst, 2014). The lower online response rate (about 10-15%) is largely due to students’ differing feelings of obligation in the two formats. An invitation to complete the questionnaire was sent to 374 students and, on this occasion, the response rate was about 35% (131 students). The data collected showed that the total number of words retrieved from the students’ responses was 8,163 and that the completion time per student was typically between 2 and 15 minutes. Only 8.5% of the students spent more than 15 minutes, with 57 minutes being the longest completion time. Hence, it was verified that this procedure did not reduce the response rate compared with the online format of the institutional questionnaire. Most of the comments dealt with positive and negative aspects of the teaching, whereas the responses on overlaps in contents and missing items were extremely scarce and not significant for analysis.
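The summary figures above follow from simple arithmetic. The sketch below reproduces the response-rate and word-count calculations from the reported totals; the list of completion times is only a placeholder, since the paper reports summary values (the 8.5% share above 15 minutes and the 57-minute maximum) rather than raw data.

```python
# Reproduces the reported summary figures; completion_minutes is a placeholder
# list, not the study's raw data.
invited = 374
respondents = 131
response_rate = respondents / invited                 # ≈ 0.35, i.e. about 35%
print(f"Response rate: {response_rate:.0%}")

total_words = 8163
print(f"Average words per respondent: {total_words / respondents:.0f}")  # ≈ 62

completion_minutes = [3, 5, 8, 12, 20, 57]            # placeholder values only
share_over_15 = sum(t > 15 for t in completion_minutes) / len(completion_minutes)
print(f"Share taking more than 15 minutes: {share_over_15:.1%}")  # reported value: 8.5%
```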
The time invested by the Director of the Engineering Degree to process, analyse and disseminate results to the Engineering School members was approximately 30 hours. As regards the time invested in coding and analysis, it must be noted that it was well worthwhile given the amount of information gathered.

4.2. Qualitative Analysis from the Interviews


4.2.1. Phase I: Starting Point, Implementation and Acceptance
At the university where this experience was held, an institutional Likert scale questionnaire has been used for more
than 30 years to assess student satisfaction with the teaching of all courses (Figure 1).
In the Engineering Degree of our research, some comments on incidents concerning teaching matters in some courses had arisen over time (‘We observed two or three specific issues’). Rumours and comments on specific conflicts became frequent among students and even among some teachers. The situation was difficult to resolve, as there was no clear evidence of those facts. Person B, who was in charge of the coordination of studies, had the impression that
relevant information was missing: ‘We lacked relevant information on what students perceived in the courses’, ‘We had the impression
that we did not get everything from the students’ representatives’, ‘I stopped having meetings with teachers because only a few of them agreed, and it was as if they were just trying to justify themselves’. With the institutional closed response questionnaire, problems can be detected, but it reveals neither their nature nor the information needed to act: ‘When you have detected a big
problem, what you do not know is exactly what the problem is’, ‘I had some information but it was insufficient to allow me to act’.

Figure 1. Conventional Likert questionnaire used at the Universitat Politècnica de València (Spain) to assess student satisfaction

Person B considered it necessary to verify those comments in order to resolve conflicts and act: ‘I needed to be able to verify
things but, above all, to be able to have evidences’, ‘The end-of-term questionnaire provided me with very little qualitative feedback’, ‘I
lacked details, many details’. Yet, more qualitative information was needed: ‘There were issues that I was missing’.
Meetings with students and teachers held in the past were not the solution, as students’ representatives did not
seem to express the general voice but some personal ideas: ‘It was often unclear whether the opinion they conveyed was a
personal opinion or on behalf of others’. As for teachers, problems did not seem to be clearly stated and some facts
seemed to be omitted. Therefore, actions taken afterwards were clearly ineffective.
Issues and conflicts affecting some teachers and courses had to be resolved soon, and two more reasons motivated the implementation of the open response questionnaire. Firstly, Person B was well aware that there were several teachers with excellent teaching practice: ‘I did not just want to be able to act in cases where I observed negative matters’. Person B wanted to reward those teaching professionals who were doing a really good job and with whom students were totally satisfied: ‘I wanted to be able to congratulate the teachers who were striving for good teaching’. Secondly, Person A stated that accreditation agencies needed clear information about all degrees, and evidence of all matters was required.
Quality was a priority, not only for the sake of accreditation but also for the institution and its prestige: ‘The quality
of the Degrees had to be verified’.
From this starting point, aiming to prioritize teaching quality, Person B proposed the use of an open response
questionnaire to collect qualitative information on students’ perceptions and lived experiences of the teaching
received: ‘It was Person B’s initiative to launch this kind of open response questionnaire to get a more qualitative view ’. Person B
proposed the initiative to all members of the board at the Engineering School and obtained their total support to
start: ‘Everyone told me it was a great, great idea and I boosted it’.
The questionnaire was hosted on the institutional website of the Engineering School. At first, students were only asked about positive and negative aspects of each course: ‘The first time, I only asked about positive aspects and aspects that could be improved in each course’. Later, Person A suggested including questions on overlaps in contents and on items missing from the students’ academic curriculum: ‘I suggested that, in some way, more objective items could be introduced such as gaps and overlaps’. Students were encouraged to give honest and useful feedback.
During the implementation of this new questionnaire, different attitudes emerged. Students did not trust this tool, probably because they did not believe that their responses would be taken into account, nor could they imagine members of the Board or teachers changing contents, their type of instruction or their materials. On the other hand, students who were satisfied with the teaching received might not feel the need to complete the questionnaire, considering that ‘No answer’ probably means ‘There is nothing to highlight, either positively or negatively’, that is, ‘I am satisfied’ or ‘Why do I have to log in if I’m not going to say anything?’, as they do not have anything relevant to comment on (Ravelli, 2000). These two facts could explain some of the reasons for a low response rate, and hence some information could be missed.
Moreover, not all teaching professionals accepted this new tool easily in the beginning, because they did not like
being observed and compared to others in a public way.

4.2.2. Phase II: Processing of Information, Dissemination of Outcomes, Actions Taken, Reflection on the
Experience
Once the data had been gathered and processed, Person B compared them to those of the closed response
questionnaire so as to detect any discrepancies or errors: ‘I do check that there is nothing strange in the questionnaires, that is,
that I want to congratulate someone who has obtained a 2 (low score) in the institutional questionnaire’; ‘First I verify everything’.
Reports were prepared from the data collected and sent to both teachers and students: ‘I make a report that I upload to
Sakai for the students, and for the teachers’. Members of the board, colleagues and students shared the same information:
the report of the data from the questionnaire of all courses in the Degree: ‘I send the report via email to the person
responsible of the course’. Person B acknowledged that the information obtained is not always easy to handle when
conflict must be solved: ‘There are things that, … you say: “This is going to blow out” ... and sometimes I wish I had not known
because they are not easy to handle’.
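The cross-check described above can be thought of as a simple consistency filter before acting on open comments. The sketch below is only illustrative: the courses, scores and threshold are invented, and it is not the procedure actually used in the study, where the verification was carried out by Person B.

```python
# Consistency-filter sketch: before congratulating a teacher on the basis of
# open comments, check the institutional Likert score. Courses, scores and the
# threshold are invented for illustration only.
open_feedback_positive = {"Course A": True, "Course B": False, "Course C": True}
institutional_score = {"Course A": 8.7, "Course B": 4.1, "Course C": 2.0}  # hypothetical 0-10 scale

LOW_SCORE_THRESHOLD = 5.0  # hypothetical cut-off flagging "something strange"

for course, positive in open_feedback_positive.items():
    score = institutional_score[course]
    if positive and score < LOW_SCORE_THRESHOLD:
        print(f"{course}: positive comments but low institutional score ({score}); review before acting")
```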

The information provided by open response questionnaires is a powerful tool for the continuous improvement of studies and provides rich insight into students’ learning difficulties and challenges. Students’ comments reveal which actions should be prioritized. In the interviews, both members of the Board agreed on the effectiveness of the instrument for the continuous improvement of the Degree. In this respect, some specific actions have already been taken, whereas others will be implemented in the near future. Some details about those actions were provided.
Participants in the teaching/learning process were informed about the outcomes of the open response
questionnaire and this caused some changes.
Teachers were informed about the students’ perceptions, drawn from their lived experiences, of any gaps or issues, thus providing teachers with information to analyse and address them: ‘Students are telling you that between your model and what they need to learn there is a gap; and that you should think about how to reduce it’. Some overlaps in course contents were resolved: ‘It has allowed us to detect overlaps’. Students were satisfied because their voice was heard and they were able to either congratulate or criticize their teachers’ instruction. Members of the Board received information about conflicts already resolved and others where they had to act: ‘I give them a report’. It was also relevant to have information about potential problems that could emerge in the near future and to anticipate their resolution: ‘Evidence for the Board to act appears’.
Person B showed remarkable satisfaction with this action of improvement: ‘I think this is what I have to do and,
moreover, I like doing it. I think that, basically, I do like it because it keeps me motivated’. However, it takes a long time to
process, analyse and handle: ‘It is time-consuming for me, … the long hours of meetings that I have with the students, which are
derived from this; the analysis of these things and what comes out of it’. One of the things Person B enjoys most is
congratulating teachers: ‘Congratulations, I can see you feel satisfied, I want you to be in this Degree, I do want to have teachers like
you in our Degree’.

4.2.3. Phase III: Advantages and Disadvantages of Both Types of Questionnaires


The two members of the Board shared the opinion that both types of questionnaires are necessary as they both
have advantages and complement one another: ‘Each of them has its advantages’, ‘They are complementary’. The
advantages and disadvantages of the two types of questionnaires, mentioned by the two members of the Board
who were interviewed are shown below (Figure 2).

Figure 2. Advantages and Disadvantages of Closed Response and Open Response Questionnaires,
reported by the members of the Engineering School Board who were interviewed

4.2.4. Phase IV: Future Improvements in the Questionnaires of Student Evaluation of Teaching
Both interviews provided interesting suggestions on how additional open-ended questions could be included in the future to improve the effectiveness of closed response questionnaires for student evaluation of teaching. Person A, claiming the need for complete information, proposed including a question on the opinion teachers had of their students: ‘A process like this has to evaluate the opinion of all, of both sides’, ‘Teachers are never asked about their opinion on their students’, ‘Not all information is available’. Person A also suggested including a sort of suggestion box for students to propose any type of improvement: ‘Some other questions could be added, to have a wider scope. Of course, students should also suggest areas for improvement as in a suggestion box’. As only students who have attended lectures should answer the questionnaires, Person A proposed introducing a question about student participation in order to gauge their involvement: ‘The student’s involvement in the course is also valuable information’.
Person B proposed that the open response questionnaire be completed during class time in a computer laboratory to enhance participation and increase the response rate: ‘It could be an opportunity to see what happens, to detect any change in participation’. Another proposal was to include this activity as part of the assessment tasks of some courses related to continuous improvement, in order to analyse student participation: ‘It would be great to have this experience in courses related to continuous improvement, customers’ needs, or Marketing, as another type of assessment activity’.
Some limitations of our paper should be pointed out. Practices based on gathering students’ feedback on their teaching experience are fully dependent on the students’ involvement in the evaluation process. The low response rate of online student evaluation questionnaires is one of the main limitations for the collection of representative data. It is therefore paramount that students feel their voice is heard, while also understanding the impact that their opinions could have on future initiatives for continuous improvement in their Degrees. In our research, students were thoroughly informed and motivated, and a response rate of about 35% was obtained, which is remarkably high for an online questionnaire.
Nevertheless, one could argue that a 35% response rate is not high, as there are many students who do not provide any comments. Yet, that rate depends very much on the context in which the online questionnaire is delivered. It must be noted that our university statistics reveal that institutional questionnaires for any Degree (5 closed response questions that take a few minutes to answer) are usually completed by fewer than 15% of the students. Thus, a 35% response rate to our open-ended questions, which provide a great deal of qualitative information, can be considered a good result, as the responses allow some issues and their origins to be detected and support initiatives to be taken by the members of the Engineering School Board to improve the quality of the Degree. It would be relevant to add research on how this return rate could be further increased. More studies are needed to elucidate the reasons behind discrepancies in return rates between delivery formats, on paper or online.
It would also be interesting, for further research, to contrast this experience of open response questions included in
questionnaires of student evaluation of teaching, with those implemented by other universities so as to have a
wider overview of the situation. However, the present paper describes the experience of including questions which are more specific than a simple “leave a comment” section: students must write on four topics which are relevant for the management of a university Degree, namely their positive and negative perceptions of the courses taken, as well as the overlaps in course contents and the contents missing from those courses.
The focus of this study is on how this evaluation method proved to be effective for the members of the
Engineering School Board. In future research, the perception of usefulness from the students’ perspective should
also be addressed. Perceived usefulness most likely affects students’ engagement in the evaluation process; hence, findings on this topic would likely help explain low response rates. Another reason for low response rates could
be that students who are happy and satisfied with the teaching they have received would not find it useful to fill in
the questionnaire, as they do not have anything relevant to add.
Another limitation of the study stems from the inherent subjectivity of data codification. This cannot be fully
solved, as researchers have to work with perceptions of the lived experiences of human beings. It could be mitigated if a larger group of researchers were involved in the processing and analysis of the data retrieved. However, a corpus of 8,163 words was analysed for this paper, which is a fairly manageable volume within the context of qualitative analysis. CAQDA software packages used for codification were available to make the coding process easier in the event of a mass response and to provide reliability when multiple coders are used.
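When multiple coders are involved, one common way to quantify the reliability mentioned above is an inter-coder agreement statistic such as Cohen’s kappa. The sketch below uses invented code assignments and is not part of the study’s actual analysis.

```python
# Cohen's kappa between two coders who independently assigned categories to
# the same excerpts; the labels below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["positive", "negative", "overlap", "positive", "missing"]
coder_2 = ["positive", "negative", "positive", "positive", "missing"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Inter-coder agreement (Cohen's kappa): {kappa:.2f}")
```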
The interviews were conducted in Spanish and translated into English for the present research. This could imply a slight loss of meaning in the reporting of results, but not in the analysis.

5. Conclusion
Without information on how a system is working there can be no evidence of improvement; hence, effective measurement is a prerequisite for any quality improvement process (Newall & Dale, 1991). A combination of techniques can make up for the deficiencies of conventional student questionnaires and provide a comprehensive overview of teaching at Higher Education institutions. In this respect, we have described an experience carried
out in an Engineering Degree to analyse, from the point of view of the Board, the advantages, disadvantages and
effectiveness of open response questionnaires as a complement to closed response questionnaires.
An open response questionnaire has offered a broader view and a larger amount of qualitative information which
has proved to be very valuable for teaching improvement initiatives in the Engineering Degree chosen. Although
there is still some room for readjustment, this practice has been highly appreciated by the teaching professionals
and members of the Board. Since 2016, the open response questionnaire has been delivered every year, hosted on
the institutional website, and actions have been taken based on the information retrieved. This type of open
response questionnaires could also be used in any other Higher Education institutions or universities and as
suggested by Marshall (2022) they could also allow to redefine the quantitative dimensions used in the student
assessment of teacher performance.
University managers should take all this information into account, as decisions and initiatives made after reflecting on it can have a real impact on the transformation of Higher Education institutions and universities. Although the use of open response questionnaires is time consuming, the benefits it brings should make it a paramount task for any institution deeply committed to continuous improvement in its pursuit of excellence and quality.

Declaration of Conflicting Interests


The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication
of this article.

Funding
The authors received no financial support for the research, authorship, and/or publication of this article.

References
Abdelhadi, A., & Nurunnabi, M. (2019). Engineering student evaluation of teaching quality in Saudi Arabia. The
International Journal of Engineering Education, 35(1), 262-272.
Aznar-Mas, L.E., Atarés-Huerta, L.M., & Marin-Garcia, J.A. (2021). Students have their say: factors involved in
students’ perception on their engineering degree. European Journal of Engineering Education, 46(6), 1007-1025.
https://doi.org/10.1080/03043797.2021.1977244
Berk, R.A. (2005). Survey of 12 Strategies to Measure Teaching Effectiveness. International Journal of Teaching and
Learning in Higher Education, 17(1), 48-62.
Blair, E., & Valdez-Noel, K. (2014). Improving higher education practice through student evaluation systems: is the
student voice being heard? Assessment & Evaluation in Higher Education, 39(7), 879-94.
https://doi.org/10.1080/02602938.2013.875984
Crumbley, D.L., & Reichelt, K.J. (2009). Teaching effectiveness, impression management, and dysfunctional
behavior: Student evaluation of teaching control data. Quality Assurance in Education: An International Perspective,
17(4), 377-92. https://doi.org/10.1108/09684880910992340

Douglas, J.A., Douglas, A., McClelland, R.J., & Davies, J. (2015). Understanding student satisfaction and
dissatisfaction: an interpretive study in the UK higher education context. Studies in Higher Education, 40(2), 329-49.
https://doi.org/10.1080/03075079.2013.842217
Douglas, J., McClelland, R., & Davies, J. (2008). The development of a conceptual model of student satisfaction
with their experience in higher education. Quality Assurance in Education: An International Perspective, 16(1), 19-35.
https://doi.org/10.1108/09684880810848396
Ernst, D. (2014). Expectancy Theory Outcomes and Student Evaluations of Teaching. Educational Research and
Evaluation, 20(7-8), 536-56. https://doi.org/10.1080/13803611.2014.997138
Fernández, J., & Mateo, M.A. (1992). Student Evaluation of University Teaching Quality: Analysis of a
Questionnaire for a Sample of University Students in Spain. Educational and Psychological Measurement, 52(3), 675-86.
https://doi.org/10.1177/0013164492052003017
Gaba, A.K., & Dash, N.K. (2004). Course evaluation in open and distance learning: a case study from Indira
Gandhi National University. Open learning: the Journal of Open, Distance and e-Learning, 19(2), 213-21.
https://doi.org/10.1080/0268051042000224806
Gallifa, J., & Batallé, P. (2010). Student Perceptions of Service Quality in a Multi‐campus Higher Education System
in Spain. Quality Assurance in Education: An International Perspective, 18(2), 156-70.
https://doi.org/10.1108/09684881011035367
González-López, I., & López-Cámara, A. B. (2010). Sentando las bases para la construcción de un modelo de
evaluación a las competencias docentes del profesorado universitario. Revista de Investigación Educativa, 28(2),
403-23.
Grebennikov, L., & Shah, M. (2013). Student voice: using qualitative feedback from students to enhance their
university experience. Teaching in Higher Education. Critical Perspectives, 18(6), 606-18.
https://doi.org/10.1080/13562517.2013.774353
Hill, H.C., Ball, D.L., & Schilling, S.G. (2008). Unpacking Pedagogical Content Knowledge: Conceptualizing and
Measuring Teachers’ Topic-Specific Knowledge of Students. Journal for Research in Mathematics Education, 39(4),
372-400. https://doi.org/10.5951/jresematheduc.39.4.0372
Hornstein, H.A. (2017). Student evaluations of teaching are an inadequate assessment tool for evaluating faculty
performance. Cogent Education, 4(1), 1-9. https://doi.org/10.1080/2331186X.2017.1304016
Hounsell, D. (2003). The evaluation of teaching. In Fry, H., Ketteridge, S., & Marshall, S. (Eds.) A Handbook for
Teaching and Learning in Higher Education: Enhancing Academic Practice, second edition (200-212). Routledge.
Hujala, M., Knutas, A., Hynninen, T., & Arminen, H. (2020). Improving the quality of teaching by utilising written
student feedback: A streamlined process. Computers & Education, 157, 103965.
https://doi.org/10.1016/j.compedu.2020.103965
Humanes-Humanes, M.L., & Roses-Campos, S. (2014). College students’ views about journalism education in
Spain. Comunicar. Media Education Research Journal, 22(1), 181-88. https://doi.org/10.3916/C42-2014-18
Iqbal, I. (2013). Academics’ resistance to summative peer review of teaching: questionable rewards and the
importance of student evaluations. Teaching in Higher Education. Critical Perspectives, 18(5), 557-69.
https://doi.org/10.1080/13562517.2013.764863
Lejonberg, E., Elstad, E., & Christophersen, K.A. (2018). Teaching evaluation: antecedents of teachers’ perceived
usefulness of follow-up sessions and perceived stress related to the evaluation process. Teachers and Teaching. Theory
and Practice, 24(3), 281-96. https://doi.org/10.1080/13540602.2017.1399873
Levin, B. (2000). Putting Students at the Centre in Education Reform. Journal of Educational Change, 1(2), 155-72.
https://doi.org/10.1023/A:1010024225888

Lin, Q., Zhu, Y., Zhang, S., Shi, P., Guo, Q., & Niu, Z. (2019). Lexical based automated teaching evaluation via
students’ short reviews. Computer Applications in Engineering Education, 27(1), 194-205.
https://doi.org/10.1002/cae.22068
Llorent-Bedmar, V., & Cobano-Delgado-Palma, V.C. (2019). Análisis crítico de las encuestas universitarias de
satisfacción docente. Critical analysis of university teacher satisfaction surveys. Revista de Educación, 385, 91-117.
https://doi.org/10.4438/1988-592X-RE-2019-385-418
López-Gavira, R., & Omoteso, K. (2013). Perceptions of the Usefulness of Virtual Learning Environments in
Accounting Education: A Comparative Evaluation of Undergraduate Accounting Students in Spain and England.
Accounting Education, 22(5), 445-66. https://doi.org/10.1080/09639284.2013.814476
Marin-Garcia, J.A., & Atarés-Huerta, L.M. (2014). Buenas y malas prácticas docentes según el punto de vista de los
alumnos de grado. In-Red 2014 - Jornadas de Innovación Educativa y Docencia en Red (1290-1300). Universitat Politècnica
de València.
Marin-Garcia, J.A., & Alfalla-Luque, R. (2021). Teaching experiences based on action research: a guide to publishing
in scientific journals. WPOM-Working Papers on Operations Management, 12(1), 42-50.
https://doi.org/10.4995/wpom.7243
Marshall, P. (2022). Contribution of Open-Ended Questions in Student Evaluation of Teaching. Higher Education
Research & Development, 41(6), 1992-2005. https://doi.org/10.1080/07294360.2021.1967887
Mattos-Medina, B., Prados-Megías, E., & Padua-Arcos, D. (2013). La voz del alumnado: Una investigación narrativa
acerca de lo que siente, piensa, dice y hace el alumnado de Magisterio de Educación Física en su formación inicial.
Movimiento, 19(4), 251-69. https://doi.org/10.22456/1982-8918.37816
McKeachie, W.J. (1996). Student ratings of teaching. In England, J., Hutchings, P., & McKeachie, W.J. (Eds.), The professional evaluation of teaching. ACLS Occasional Paper, 33 (1-9).
McKeachie, W.J. (1997). Student ratings: The validity of use. American Psychologist, 52, 1218-25.
https://doi.org/10.1037/0003-066X.52.11.1218
Moreno-Murcia, J.A., Silveira, Y., & Belando, N. (2015). Questionnaire evaluating teaching competencies in the
university environment. Evaluation of teaching competencies in the university. New Approaches in Educational
Research, 4(1), 54-61. https://doi.org/10.7821/naer.2015.1.106
Morgan, P. (2008). The Course Improvement Flowchart: A description of a tool and process for the evaluation of
university teaching. Journal of University Teaching and Learning Practice, 5(2), 1-14. https://doi.org/10.53761/1.5.2.2
National Research Council (2003). Improving Undergraduate Instruction in Science, Technology, Engineering, and Mathematics.
Report of a Workshop. Washington DC: The National Academies Press. https://doi.org/10.17226/10711
Newall, D., & Dale, B.G. (1991). The introduction and development of a quality improvement process: a study. The
International Journal of Production Research, 29(9), 1747-60. https://doi.org/10.1080/00207549108948046
Palermo, J. (2003). 20 Years on - Have student evaluations made a difference? In Nair, C.S., & Harris, R. (Eds.),
Proceedings of the Australian Universities Quality Forum 2003: National quality in a global context (136-140). Melbourne,
Australian Universities Quality Agency.
Ravelli, B. (2000). Anonymous Online Teaching Assessments: Preliminary Findings. Paper presented at the Annual
National Conference of the American Association for Higher Education. Charlotte, North Carolina.
Rodríguez-Gómez, G., Ibarra-Sáiz, M.S., Gallego-Noche, M.B., Gómez-Ruiz, M.Á. & Quesada-Serra, V. (2012). La
voz del estudiante en la evaluación del aprendizaje: un camino por recorrer en la universidad. Revista Electrónica de
Investigación y Evaluación Educativa - Relieve, 18(2), 1-22. https://doi.org/10.7203/relieve.18.2.1985
Segura‐Egea, J.J., Zarza‐Rebollo, A., Jiménez‐Sánchez, M.C., Cabanillas‐Balsera, D., Areal‐Quecuty, V., &
Martín-González, J. (2020). Evaluation of undergraduate Endodontic teaching in dental schools within Spain.
International Endodontic Journal, 54(3), 454-463. https://doi.org/10.1111/iej.13430

Shah, M., & Nair, C.S. (2009). Using Student Voice to Improve Student Satisfaction: Two Australian Universities, the Same Agenda. Journal of Institutional Research (South East Asia), 7(2), 43-55.
Smylie, M.A. (2014). Teacher Evaluation and the Problem of Professional Development. Mid-Western Educational
Researcher, 26(2), 97-111.
Tricker, T., Rangecroft, M., & Long, P. (2005). Bridging the gap: an alternative tool for course evaluation. Open
Learning: The Journal of Open, Distance and e-Learning, 20(2), 185-192. https://doi.org/10.1080/02680510500094249
Villanueva, K.A., Brown, S.A., Pitterson, N.P., Hurwitz, D.S., & Sitomer, A. (2017). Teaching evaluation practices in
engineering programs: Current approaches and usefulness. International Journal of Engineering Education, 33(4), 1-18.
Wolfer, T.A. & Johnson, M.M. (2003). Re-Evaluating Student Evaluation of Teaching. The Teaching Evaluation
Form. Journal of Social Work Education, 39(1), 111-121. https://doi.org/10.1080/10437797.2003.10779122

Journal of Industrial Engineering and Management, 2023 (www.jiem.org)

Article’s contents are provided under a Creative Commons Attribution-NonCommercial 4.0 International License. Readers are
allowed to copy, distribute and communicate article’s contents, provided the author’s and Journal of Industrial Engineering and
Management’s names are included. It must not be used for commercial purposes. To see the complete license contents, please
visit https://creativecommons.org/licenses/by-nc/4.0/.
