School Evaluation Policies and Systems in Korea: A Challenge of Social Validation
Juhu Kim
Ajou University, Korea
Juah Kim
Korean Educational Development Institute, Korea
Hoi K. Suen
Pennsylvania State University, U.S.A.
Abstract
For school evaluation policies and systems in Korea to be effective, they need
to have social validity. This paper reviews the historical context of school evaluation
policy development, school evaluation system design, and critical factors contributing
to the implementation of the policy from the perspective of social validity. The results
showed that the school evaluation policy has been evolving continuously. Although the
central government and local education offices have spent much time and energy on
the development of policies and systems, implementation has been slow and rocky. In
addition, the overall school evaluation policy is still facing several problems, including
the lack of utilization of achievement data, the need to further improve the quality of
evaluator judgments, and the need to continuously validate evaluation indexes. Despite
such problems, evaluation of school quality has become an important component of school
accountability, which is an aspect of education increasingly demanded by taxpayers. The
development in the last 15 years can be understood as a slow, arduous but steady process
of social validation toward a consensual school evaluation policy and system.
Introduction
Social validity as a concept refers to the extent to which the procedures, goals
and outcomes of research, of a psychological intervention, or of a social intervention
are regarded as important and acceptable to the population in general, and to
direct stakeholders in particular. It has long been considered a critical element of
assessment and intervention in special education and applied behavior analysis (e.g.,
Fawcett, 1991; Wolf, 1978) and clinical interventions (e.g., Foster & Mash, 1999); it is
considered an important aspect of validity for traditional quantitative psychometrics
(e.g., Messick, 1995), qualitative investigations (e.g., Patton, 2002), as well as mixed
method research (e.g., Dellinger & Leech, 2007; Onwuegbuzie & Johnson, 2006).
The process of social validation within the context of a single individual assessment
instrument generally involves the gathering of evidence of social congruence,
consensus, and absence of discrepancies across diverse sources of information
or stakeholders (e.g., Kramer, 2011). For systems of assessment and evaluation,
however, the process of validation may involve a lengthy process of consensus-
building across communities, beyond the relatively simple process of gathering
evidence of congruence. For large-scale and complex evaluation and assessment
systems, it may be necessary to conceptualize their social validation from a hermeneutic
or socio-cultural perspective (Moss, Girard, & Haniford, 2006).
The design and implementation of school evaluation policies in Korea can be
viewed as an illustration of the challenges and difficulties involved in the process of
social validation of such a large and complex system. In 1995, the Korean Presidential
Committee on Educational Innovations suggested that school evaluation should
be utilized for the improvement of school quality. Based upon suggestions by the
Presidential Committee, the Korean government developed a new policy aimed
not only to enhance the quality of schooling, but also to improve the quality of
accountability (J. Kim, T. Chung, J. Kim, & S. Kim, 2004). For these purposes, the
Korean government invested about 3 million dollars in the last decade to support
school evaluation activities. In coordination with the national policy, local education
offices also developed and implemented their own school evaluation policies.
With interest from the national government and support at the local level, school
evaluation activities have become widespread throughout the nation and have
contributed to the improvement of schooling.
Under the new policy, the Korean government designed a school evaluation
model focusing on interactions between external evaluators and school teachers
(J. Kim, Choi, Ryu & Lee, 1999; J. Kim, 2004). Based upon a qualitative evaluation
approach, professional evaluation teams visited schools and conducted in-depth
reviews of entire school systems through classroom observations, interviews with
teachers, school document analyses, and conversations with parents and students.
Unlike school evaluation models currently dominant in the United States and
Great Britain, both of which emphasize student achievement, the school evaluation
and the unintended consequences (T. Chung, J. Kim et al., 2004a; B. Kim, 2004). For
instance, as an example of an unintended consequence, school teachers spent much
time producing glossy, polished reports. Since the evaluation process relied heavily
on site visits, the quality of each school was based on the judgments of the site
visitors. Thus, in order to impress these external evaluators, teachers prepared
attractive reports full of graphs, diagrams, and tables. As time went on,
school teachers complained about these time-consuming tasks. The evaluation was
perceived as a one-shot summative inspection, rather than a new system that could
help improve the quality of schooling formatively (T. Chung, K. Nam, et al., 2008).
One of the more fundamental criticisms from field practitioners was about the
gap between the philosophical background behind the school evaluation policy and
schooling in reality within the context of Korean education. The philosophical belief
behind the school evaluation policy, a part of neo-liberal educational reforms (Park,
2009), was rooted in client-oriented education with school autonomy. So, it was
assumed that each school had its own plans for educational innovation, pursuing the
improvement of school quality. In addition, the results of school evaluations were
supposed to be opened to the public. However, field practitioners did not experience
decentralization and democratization through the implementation of the school
evaluation policy, although the policy environment itself presumed the liberation
of the educational system. In actuality, the central government did not cede any
authority over the school system or educational policy decision making. In
particular, administrators were hesitant to share the results of the school evaluations
with parents. As a result, it was not easy to reach a consensus about the meaning of
evaluating school quality, or about the school evaluation policy itself.
Given these circumstances, one critical question arose: “What is the primary
rationale behind the school evaluation policy in Korea?” Another question was why
the policy encountered so many barriers during implementation in the educational
system. Did the problems go beyond process-related components (e.g., the
development of evaluation indexes, the training of evaluators, the analysis of survey data)
to broader social and contextual factors? These questions are critical ones about the
validity of the school evaluation system within the socio-cultural context of Korea.
Although there were a few research studies (T. Chung, J. Kim et al., 2004a; B. Kim,
2004; S. Kim, Chung, & Kim, 2009) that investigated the quality of school
evaluation policies, they did not focus on the validity of school evaluation from wider
socio-cultural perspectives. Thus, the main purpose of this paper was to review
the school evaluation policy in Korea from the perspective of social validation. As
such, this paper also serves as part of the continuing social dialog toward a goal of
a final, valid system that will ensure the future of school quality. To understand the
components involved in this social validation process, it is necessary to examine:
1) What are the educational and historical contexts of school evaluation policy
development within the Korean society?
2) How was the school evaluation system developed and redesigned for the
improvement of school quality?
3) What are some of the critical socio-cultural and contextual factors involved
in the implementation of school evaluation policies within the educational
system of Korea?
Evaluation activities: indirect monitoring at local education offices; model development & small-scale pilot study; national project through qualitative approaches; redesign period; holistic assessment of school system.
Stakeholders: MOE; MOE; MOE + LEO; MEHRD + LEO + parents.
financial planning. A national curriculum, along with textbooks, was developed and
managed by the central government. As such, the role of school superintendents was
basically one of implementing well-designed plans by the government, rather than
creatively developing their own policies.
Under these circumstances, it was the central government that should be held
accountable for the quality of schooling. If the government wanted to examine
and evaluate the quality of schooling, they would be evaluating their own work.
Although each school was also responsible for quality, that responsibility was
relatively small compared to that of the government (H. Kim et al., 2005).
One of the reasons why school quality was not assessed has something to
do with a historical concept of schooling in Korea. Traditionally, in Korean society,
school was never treated as a social unit. As such, before 1995, the idea of quality
of school was an ambiguous construct which could not be directly evaluated.
School-based management or individual school accountability were alien concepts
unconnected to the core of the education system in Korea.
Historically, quality control for the Korean educational system was not
accomplished via the assessment of school quality but through personnel evaluation.
For instance, during the Yi dynasty (1392–1910), the quality of schools was indirectly
managed through the evaluation of local government officers (J. Kim, 2005). At
that time, the performance of local government officers was evaluated using seven
different criteria. One of the seven criteria was the encouragement of education. If
an officer reported good evidence showing an increase in student enrollment at local
education centers, the officer obtained high ratings for his personnel evaluation.
Given this historical context, the concept of school accountability was also
moot. Since the central government directly managed the entire school system,
accountability was not a concern for individual schools. School principals simply
followed policies and directives from the central government and would not
creatively manage their schools based upon their own visions. In addition, they were
not expected to report the outcomes of schooling to the public or the parents. Thus,
prior to 1995, school level evaluation as a concern had been absent from the views of
the central government, local education offices, as well as those of teachers, parents,
and administrators.
An understanding of this historical and socio-cultural background is
critical for the understanding of some of the subsequent developments. The 1995
recommendations of the presidential commission fundamentally suggested the
imposition of quintessentially western concepts of school quality, school evaluation
and accountability onto a school system for which such concepts were alien and
incongruent.
After the central government had accepted the recommendations from the
presidential committee on educational innovations, the Ministry of Education (MOE)
began in 1996 to develop a school evaluation system with assistance from KEDI.
Since KEDI did not have any prior experience in school evaluation, the first stage
of development was to review many different school evaluation models abroad. In
particular, KEDI paid attention to the school evaluation model used by the Office for
Standards in Education (OFSTED), a British national institute for school evaluation.
After finishing their theoretical review of school evaluation models, KEDI
developed a pilot evaluation model in 1999 (J. Kim, K. Choi, et al., 1999; Ryu & Kim,
1999). The main methodology behind the evaluation model was a series of qualitative
inquiries about the overall quality of schooling at each local school through
document analyses, interviews, classroom observations, and other similar methods.
A team of about 8 evaluators first reviewed various pieces of evaluation evidence
reported from an individual school. After reviewing the evidence, they collected
more direct evidence through a site visit. A final evaluation report was then prepared
by the team and sent to the school.
In coordination with the government’s efforts for the development of the school
evaluation system, LEOs also began to consider school evaluation. Since they did not
have the resources to develop sophisticated evaluation models based on any complex
theoretical framework, their approach to school evaluation tended to be simplistic.
They identified measurable school characteristics (e.g., frequency of in-service
workshops for teachers, number of instruction manuals, teacher-child ratio, number
of certificates earned by students) that they believed reflected the quality of schooling and
developed checklists for these characteristics (T. Chung, J. Kim, et al., 2004a). Using
such checklists, administrators or principals reported school scores. The scoring
system tended to be simple and non-systematic.
During the election of the superintendent of each LEO, the candidates would
propose their vision of how to implement educational policies handed down from
the MOE. The main focus of individual school evaluations at the LEO level, with
their checklists and scores, was to inspect whether the superintendent’s
procedures had been implemented as planned. The use of such simple checklists
instead of more sophisticated and comprehensive methods was at least partly
due to a lack of well-trained evaluators at the local schools. As a result of using
such superficial checklists, the impact of LEO school evaluation on each LEO was
minimal. Teachers, principals, administrators, and parents did not believe that the
LEO evaluation information reflected the actual quality of the schools. Although the
results of each school evaluation were distributed to the teachers, administrators, and
parents, these results were largely ignored. The local school evaluation information
was treated as a technical supplement to the report of the visitation team.
Through analyzing all participating schools as a system, they had the opportunity to
understand how the overall school system worked. In spite of earlier concerns from
school teachers regarding the negative impact of school evaluation, the consensus
among the participants led to a positive spirit of purpose that all involved would
work together for the reconstruction of curriculum, instructional methods, school
leadership, communication with parents, and so forth. The participants also pointed
out that more research funds and resources should be utilized for the continuous
improvement of the evaluation model itself. In addition, a specialized research
institute for school evaluation at the national level was also proposed.
However, soon after the initiation of this consensus and commitment, negative
responses towards the evaluation system began to emerge from field practitioners.
For instance, many school teachers were not comfortable with intensive interviews
and observations that took an entire week. Some evaluators began to doubt the
effectiveness and efficiency of the qualitative methodology used. In the next three
years, the number of schools participating in the evaluation project increased
from 16 to 100 (see Table 2); partly due to budgetary constraints and partly due to
complaints about the length of visits, site visitation days were reduced from 6 to 2
days per school. Although the number of trained evaluators more than doubled from
139 to 290 by 2002, it suddenly dropped to 92 evaluators in 2003. During the same
4 years, the number of evaluators per team decreased from 13 to 3. Concomitantly,
the sample of schools from which supplemental evaluation data were collected via
a survey expanded from 108 to 756 schools. Meanwhile, as more and more schools
joined the core group of site-visit participants, fewer resources were allocated for these
visits.
While the MOE, KEDI, and the participating schools implemented the site-
visitations and surveys, all LEOs continued to pursue their own improvement of
school quality by continuing to use the evaluation indexes they had developed
before 2000. The numerical indexes were developed locally and were targeted to
specific procedures and concerns of local education offices (J. Kim, T. Chung, et al.,
2004). In other words, unlike the national KEDI evaluation model, the evaluation
approach at the local level continued to focus on narrow numerical measures of local
concerns, rather than assessing quality of schooling through more comprehensive,
holistic observations. During 2000-2003, these two attempts ran in parallel with little
interaction or cross-referencing with one another.
In 2003, school administrators, the National Audit Center (NAC), and the newly
expanded Ministry of Education and Human Resources (MEHRD)1) concurrently
criticized the school evaluation system. The general consensus was that the
evaluation system was not contributing to the improvement of school quality. School
administrators complained about the burden of evaluation preparation, the quality of
evaluators, and the perceived lack of reliability or validity of evaluation information.
The NAC was critical of the poor cost-benefit ratio of the evaluation system. The
NAC also pointed out a partial redundancy of evaluations conducted by KEDI and
those conducted by local education offices. The MEHRD and local education offices
complained about the difficulty of obtaining meaningful information from school
evaluation results to guide accountability. Hence, the school evaluation project at the
national level was terminated at the end of 2003.
It can be observed at this stage that the two initial communities of stakeholders,
i.e., the MOE and LEOs, had made great strides toward consensus and congruence.
However, new stakeholders, i.e., the NAC and MEHRD, emerged with different
perspectives and expectations. This became another step in the long and complicated
process of social validation.
of the overall quality of schooling at the national level. So, the MEHRD and
administrators suggested the development of a larger system of school monitoring,
rather than evaluating each school’s specific local qualities. They also recommended
making a strong connection between school evaluation at the national level and those
of LEOs.
KEDI also reviewed many different school evaluation models developed by
LEOs but found these models to be locally unique and to be neither systematic
nor generalizable. For example, some offices used a hodgepodge of more than 100
different indexes but had no theoretical rationale for the choice of indexes.
In order to develop Korea’s unique school evaluation model, many different
evaluation approaches used in other countries were examined by KEDI. For instance,
school evaluation systems in the United Kingdom and the United States were
thoroughly reviewed (Erpenbach, 2003; Gong, Blank, & Manise, 2002; Marion &
White, 2002; OFSTED, 2003a; OFSTED, 2003b; OFSTED, 2003c; OFSTED, 2004; Potts,
2002; Sammons, Hillman, & Mortimore, 1995). The results provided a couple of insights. For
instance, since the use of student achievement data for school evaluation was not
permitted in Korea, site-visitation oriented approaches (e.g., traditional accreditation
system, qualitative interview, classroom observation) were seriously considered.
With this perspective, the OFSTED’s evaluation model was investigated in order to
combine quantitative and qualitative evaluation methods. In particular, OFSTED’s
systematic assessment approach utilizing a collection of qualitative evidence was
very helpful for the development of an integrated evaluation system combining site
visiting results and survey data.
Through the review of school evaluation systems in the US, a recent policy, No
Child Left Behind (NCLB), which emphasizes students’ academic growth, was also examined.
Because of limitations in the utilization of student achievement data in Korea, it
was difficult to gain meaningful insights from the NCLB policy for the redesign of
the Korean school evaluation system. However, the US approach of focusing on
continuous growth with specific objectives (e.g., AYP: Adequate Yearly Progress)
was carefully interpreted (Fast & Hebbler, 2004; Fast & Erpenbach, 2004). In addition,
utilization of the results of school evaluation (e.g., closing of schools) gave insights for
the reviewers to design a new method for the utilization of evaluation results.
After the review of school evaluation systems in Korea and in the world,
KEDI proposed a revised evaluation system. Unlike the previous school evaluation
model, the newly revised model emphasized the importance of communication
with the public, school outcomes, and policy implementation at the national level.
The main foci of the revised model were as follows. First, evaluation should provide
meaningful results for the improvement of the school system. In other words, as
Stufflebeam et al. (1971) pointed out, the main purpose of evaluation should not be to
prove something, but to improve education. Thus, the main goal of school evaluation
was to get valid evidence of high-quality schooling for continuous growth. Then, the
schools should show appropriate evidence of continuous growth and improvement.
Finally, communication with parents and local communities was also proposed.
Information about the quality of schooling had never been open to the public in
Korea. Because of a policy of educational equity in Korea, the MEHRD and LEOs
had assumed that the quality of schooling was about the same across all schools.
However, in reality, many research studies (J. Kim, Min, & Choi, 2009, December;
S. Kim et al., 2009; Ryu et al., 2006) had reported that there are large differences
in school quality by geographic regions (e.g., urban vs. rural schools). For the
construction of partnerships among schools, parents, and local communities, open
communication about school quality was strongly recommended.
With the new direction of school evaluation, KEDI developed the School
Evaluation Common Indexes (SECI). The uniqueness of SECI lies in its efficiency
and use of holistic judgments (T. Chung, J. Kim, et al., 2004b). The core functions of
a school system were extracted through an intensive literature review. A short list of
14 key outcomes-based evaluation questions and corresponding rating scales were
prepared. For each question, detailed elements reflecting high quality schooling were
described. Additionally, these questions were written in such a way that evaluators
could figure out what characteristics were necessary to show an appropriate level
of quality. After reviewing each school’s self-evaluation report, evaluators were to
choose an appropriate score for each scale. Very detailed scoring rubrics (1: very poor
– 5: excellent) were developed and provided. After assigning ratings based on school
reports, evaluators would conduct site visits to confirm their scores. As such, the
process and judgment system was standardized and systematically managed through
the indexes.
The SECI system was validated through a pilot study in 2005. The Kyungbook
Province Office agreed to use SECI for their school evaluation as a pilot site. Through
collaboration among the MEHRD, KEDI, and the Kyungbook Province Office, 72
schools joined the pilot school evaluation project (J. Kim, T. Chung, J. Kim, & S. Kang,
2005b). About 30 evaluators were newly recruited and trained by the KEDI research
team. At the end of 2005, the results of school evaluation were opened to the public.
Based upon the results of this new school evaluation project, the remaining 15 LEOs
also adopted the SECI and began to use it in 2006.
An important effect of the criticisms by the NAC and the MEHRD in the
previous stage was that KEDI became cognizant of the importance of involvement
of more stakeholders in the design and implementation processes. The SECI was an
attempt to integrate the perspectives of the previous MOE and those of the LEOs to
arrive at a consensual system. It was also an attempt to have a system that would
satisfy the demands of the NAC and those of the MEHRD. Finally, it was recognized
that, if this system is to be socially valid, it needs to be of value to yet another
community of stakeholders: that of parents and the general interested public.
whether the evaluation system really enhances the quality of schooling. In addition,
some people are even questioning whether school evaluation itself is appropriate for
the education system in Korea. For these reasons, not only field practitioners,
educational researchers, administrators, and parents, but also the government, do
not attach a high priority to the current school evaluation process. Thus, construction
and implementation of the school evaluation system is still on-going. It is also clear
that, even when all stakeholders are on board, it will take more negotiation of goals,
objectives, and outcomes prior to the achievement of a reasonable level of social
validity.
scoring process with evaluation index. Rather, they needed to know that numbers
(e.g., evaluation scores) do not make decisions, people do.
Different opinions arose from evaluators. Although the newly developed
evaluation model heavily relied on evaluators’ judgments, the evaluators were not
given enough time to fully conduct school evaluations. Because of the burden placed on
field teachers in preparing for school evaluation, the evaluators did not want
to take up too much of teachers’ time. So, they usually spent only half a day to one day
on site visits. These shorter visits were welcomed by teachers but the evaluators did
not have enough time to interview school teachers and students thoroughly. Thus,
the evaluators could not have sufficient information to assess the overall quality
of schooling. As a result, evaluators tended to rely on information from the paper
reports prepared by teachers. In other words, while school teachers did not want the
evaluators to stay at their schools for a long time, they also complained about the
evaluators’ judgment as being superficial due to their brief visits.
To bridge this gap, a more systematic training program for evaluators was
initiated. School evaluators were provided with two-day intensive workshops
prepared by the MEHRD and KEDI. However, since the government and LEOs
did not have enough funds for the training workshops, only selected evaluators
from each LEO joined the workshops. After the workshops, the trained evaluators
were supposed to deliver the knowledge and experiences from the workshops to
their local colleagues. This indirect train-the-trainer linkage between the training
workshops and evaluators at LEOs was not sufficient to prepare school evaluators to
perform valid assessments. Thus, the improvement of evaluators’
quality would be one of the foremost priorities to improve the current school
evaluation system.
One concern with this absolute interpretation was that administrators who were
very familiar with relative interpretations were skeptical about the value of such
absolute interpretations. They expected to see a simple and comparable interpretation
for their various decisions (e.g., research funds distribution, selection of innovative
schools, sending consultants to poor quality schools).
As an alternative solution, the education office at the Kyungbook Province used
a mixed model covering absolute and relative interpretations (J. Kim, T. Chung, et
al., 2005b). After school evaluators had collected evaluation scores using the SECI,
the office combined the scores with locally measurable quantitative data that could be
compared across schools (e.g., parent satisfaction score, rate of college entrance, rate
of gaining certificates). Additionally, they also looked at the growth rate of these
quantitative indicators in the last three years. Finally, the office reported a total score
after combining the SECI scores along with the collected quantitative data. When the
office used this model, many schools in low-income areas were reinterpreted as good
schools because their quality had improved rapidly. In contrast, some schools
that reported high rates of 4-year college entrance were re-categorized as being
at an ‘intermediate level’ of quality because their quality had not changed.
Thus, a mixed model combining SECI scores with measurable outcomes
along with growth rate is becoming a reasonable approach.
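The mixed model described above can be illustrated with a short sketch. The paper reports no weights or formulas for the Kyungbook model, so the weights, indicator names, and scales below are hypothetical assumptions about how absolute SECI ratings, current indicator levels, and three-year growth might be combined into a single total score:

```python
# Illustrative sketch only: every weight, scale, and field name here is a
# hypothetical assumption, not the Kyungbook office's actual scoring method.

def combined_score(seci_ratings, indicators, indicators_3yr_ago,
                   w_seci=0.5, w_level=0.2, w_growth=0.3):
    """Combine absolute SECI ratings with relative quantitative indicators
    and their three-year growth into a single 0-100 total score."""
    # Mean of the 14 SECI ratings (scale 1-5), rescaled to 0-100.
    seci = (sum(seci_ratings) / len(seci_ratings) - 1) / 4 * 100
    # Current level of the quantitative indicators (each assumed 0-100).
    level = sum(indicators.values()) / len(indicators)
    # Three-year growth in points, floored at 0 so decline is not rewarded.
    growth = sum(max(indicators[k] - indicators_3yr_ago[k], 0)
                 for k in indicators) / len(indicators)
    return w_seci * seci + w_level * level + w_growth * growth

ratings = [4, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4]  # one rating per SECI question

# A school in a low-income area: modest current outcomes, rapid improvement.
improving = combined_score(ratings,
                           {"parent_satisfaction": 70, "college_entrance": 55},
                           {"parent_satisfaction": 50, "college_entrance": 35})
# A school with strong current outcomes whose indicators have not changed.
static = combined_score(ratings,
                        {"parent_satisfaction": 80, "college_entrance": 90},
                        {"parent_satisfaction": 80, "college_entrance": 90})
assert improving > static  # growth shifts the ranking, as in the pilot results
```

With growth weighted alongside current levels, a rapidly improving school can outrank a school with higher but static outcomes, which mirrors the reinterpretation of low-income schools reported in the pilot.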
A very difficult obstacle for the school evaluation system is the existing teacher
management system in Korea. In particular, in the case of public schools, teachers are
rotated among schools every 4 or 5 years. Thus, it is very difficult to communicate
with teachers about the improvement of the quality of their current school with
any sense of continuity or growth. For instance, it was common to hear “well,
these school performances were done with the previous school principal” when
interviewing principals.
The teacher rotation system was designed in order to avoid unequal distribution
of high-quality teachers (E. Kim, 2009). It was also used to solve any imbalance in
teacher supply and demand, especially in rural areas. In the case of public schools,
a principal’s average tenure at one school is about 2 years. Thus, under the 3-year
evaluation cycle, maintaining consistent school quality becomes difficult.
When it comes to private schools, the situation is somewhat better, but it is
still problematic. Without rotation of personnel, it is relatively easier to
identify the people who are actually responsible for the current quality of schooling
at a particular school. However, even in this case, private schools in Korea are
also financially supported by the central government. So, except for the rotation of
personnel, other aspects of school management are still controlled by the central
government. For this reason, school principals at private schools are also not
permitted to independently control teachers.
their highly honored social status, to show evidence of the quality of their teaching
(Hwang et al., 1997).
However, in spite of these long-held traditional views, the perspectives of the
Korean people have been gradually changing. Taxpayers have begun to ask the
government to examine the accountability of the education system. They believe that
education is no longer a sacred area and should be treated like any other social
institution. So, teachers are also expected to provide reasonable evidence showing
that taxpayers’ money is being spent appropriately and meaningfully. In addition,
the school system itself should be held accountable through active communication
with the general public outside of education. To meet these demands, valid and reliable
school evaluation, along with an easy-to-understand reporting system, is expected.
1) As of January 2001, the Ministry of Education (MOE) was renamed the Ministry of Education and Human Resources
Development (MEHRD).
2) As of 2008, the Ministry of Education and Human Resources Development was renamed the Ministry of
Education, Science, and Technology.
References
Chung, T., Kim, J., & Kim, J. (2004a). A study on the development of comprehensive school
evaluation model. Seoul: Korean Educational Development Institute.
Chung, T., Kim, J., & Kim, J. (2004b). Comprehensive indexes for school evaluation. Seoul:
Korean Educational Development Institute.
Chung, T., Kim, J., & Kim, J. (2005). A report on the results of tailored school evaluation
for primary and secondary schools at ChungChungnamdo province. Seoul: Korean
Educational Development Institute.
Chung, T., Kim, J., Lee, T., & Kim, J. (2005). Making a linkage between school evaluation
and supervision. Seoul: Korean Educational Development Institute.
Chung, T., Nam, K., & Kim, J. (2008). Annual report of comprehensive school evaluation.
Seoul: Korean Educational Development Institute.
Dellinger, A. B., & Leech, N. L. (2007). Toward a unified validation framework in
mixed methods research. Journal of Mixed Methods Research, 1(4), 309-332.
Erpenbach, W. (2003). Statewide educational accountability under NCLB. Washington,
DC: The Council of Chief State School Officers.
Fast, E. F., & Erpenbach, W. J. (2004). Revisiting statewide educational accountability
under NCLB. Washington, DC: The Council of Chief State School Officers.
Fast, E. F., & Hebbler, S. (2004). A framework for examining validity in state
accountability systems, Washington, DC: The Council of Chief State School
Officers.
Fawcett, S. B. (1991). Social validity: A note on methodology. Journal of Applied
Behavior Analysis, 24, 235-239.
Foster, S. L., & Mash, E. J. (1999). Assessing social validity in clinical treatment
research issues and procedures. Journal of Consulting and Clinical Psychology, 67,
308-319.
Gale, T. (2001). Critical policy sociology: Historiography, archaeology and genealogy
as methods of policy analysis. Journal of Education Policy, 16(5), 379-393.
Gong, B., Blank, R. K., & Manise, J. G. (2002). Designing school accountability systems.
Washington, DC: The Council of Chief State School Officers.
Hwang, J., Yoon, J., Kang, Y., Oh, S., Kim, K., & Kim, C. (1997). Development of school
evaluation procedure and criteria for primary and secondary schools. Seoul: Seoul
National University Education Research Center.
Jeong, S., Yoo, K., Kim, M., Kim, J., & Kim, B. (2003). Strategic plans for development of
school evaluation. Seoul: Korean Educational Development Institute.
Kang, Y., Kang, S., Kim, S., & Ryu, H. (2007). High school equalization policy: The reality
and myth. Seoul: Korean Educational Development Institute.
Kim, B. (2004). An analytic review of national-level school evaluation. Korean Journal
of Korean Education, 31(2), 219-244.
Kim, E. (2009). Teacher policy: procurement and disposition of qualitative teachers. Seoul:
Korean Educational Development Institute.
Kim, H., Park, J., Yang, S., Kim, E., Jang, S., Lee, T., et al. (2005). A study on the
innovative educational administration system for the construction of school-based
management. Seoul: Korean Educational Development Institute.
Kim, J. (2004). A qualitative school evaluation model for the improvement of
schooling through mutual information exchange. Korean Journal of Educational
Research, 39(1), 217-248.
Kim, J. (2005). Is educational innovation possible through school evaluation?
Educational Development, 32(2), 92-98.
Kim, J., Choi, K., Ryu, B., & Lee, J. (1999). Development of school evaluation model for
vocational high schools. Seoul: Korean Educational Development Institute.
Kim, J., Chung, T., Kim, J., & Kim, S. (2004). Investigation of school evaluation
systems at municipal and provincial offices of education. Korean Journal of
Education Evaluation, 17(2), 237-257.
Kim, J., Chung, T., Kim, J., & Kang, S. (2005a). Annual report of comprehensive school
evaluation. Seoul: Korean Educational Development Institute.
Kim, J., Chung, T., Kim, J., & Kang, S. (2005b). School evaluation report of vocational high
schools at Kyoungbuk province. Seoul: Korean Educational Development Institute.
Kim, J., Min, I., & Choi, P. (2009, December). Analysis of K-SAT achievement gap
by geographical regions in college scholastic aptitude test. Paper presented at the
symposium on the analysis of college scholastic aptitude test and achievement
test results at national level, Seoul, Korea.
Kim, S., Chung, T., & Kim, J. (2009). Analysis of the current status and outcomes of school
evaluation. Seoul: Korean Educational Development Institute.
Kim, Y., Kim, H., Yoon, J., Kim, J., & Huh, S. (2000). Field investigation of the current
issues on education. Seoul: Korean Educational Development Institute.
Kramer, J. M. (2011). Using mixed methods to establish the social validity of a
self-report assessment: An illustration using the Child Occupational Self-assessment
(COSA). Journal of Mixed Methods Research, 5(1), 52-76.
Lee, I., Kim, Y., & Lee, H. (1999). Developing a national evaluation framework for the
primary and secondary schools. Seoul: Korean Educational Development Institute.
Marion, S., & White, C. (2002). Making valid and reliable decisions in determining adequate
yearly progress. Washington, DC: The Council of Chief State School Officers.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences
from persons’ responses and performances as scientific inquiry into score
meaning. American Psychologist, 50, 741-749.
Moss, P. A., Girard, B. J., & Haniford, L. C. (2006). Validity in educational assessment.
Review of Research in Education, 30, 109-162.
OFSTED. (2003a). Framework for inspecting schools. London: Office for Standards in
Education. HMI 1525.
OFSTED. (2003b). Handbook for inspecting nursery and primary schools. London: Office
for Standards in Education. HMI 1359.