To cite this article: Donald C. Orlich (1989) Evaluating Staff Development, The Clearing House: A Journal of Educational Strategies, Issues and Ideas, 62:8, 370-374, DOI: 10.1080/00098655.1989.10114097

...personnel must be designated to collect the data for each of the four areas. Finally, a series of standards must be developed by which the evaluation itself is judged either meaningful or useless. The CIPP Model is rather complex and requires well-trained evaluators.

The CSE Model

Marvin C. Alkin (1970) suggested the more eclectic CSE Evaluation Model. (CSE refers to UCLA's Center for the Study of Evaluation.) The basic principle behind the CSE Model is that evaluation is an ongoing process that helps decision makers to select among alternatives in a more informed way. (This element is found in all models in some form.) Yet it should be noted that to select among alternatives means that viable alternatives are in fact available. Evaluation must be viewed, argues Alkin, as a means by which directions can be changed, programs modified or eliminated, and personnel reshuffled as need be. In most cases, staff developers do not use evaluation to specify alternative directions.

Alkin identified five decision areas and their concomitant evaluation requirements.

Decisions                                   Evaluations
1. Selections of objectives or problems     Needs assessment
2. Programs to meet objectives              Plans
3. Program operations                       Implementation
4. Program improvement                      Progress
5. Program certification                    Outcomes

Alkin stresses that each of the five paired areas requires the collection of information, an evaluation of that information or data, and, finally, a decision based on the quantifiable information. In all steps, the evaluator must realize that the judgments are based on a probability of success.

When one judges a staff development program, the first two pairs would be most critical, that is, objectives/needs assessments and programs/plans. However, the judging of instructional components of the program would rely chiefly on the last three pairs: operations/implementation, improvement/progress, and certification/outcomes.
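
The five pairings can be kept straight with a simple organizer. The short Python sketch below is only an illustration (the structure and function names are this sketch's own, not Alkin's); it records each decision area with its required evaluation and notes which pairs bear on overall program judgments versus instructional ones.

# Illustrative organizer for Alkin's five decision/evaluation pairs.
# The grouping follows the text: the first two pairs inform judgments about
# the program as a whole, the last three inform judgments about instruction.
CSE_PAIRS = {
    "selections of objectives or problems": "needs assessment",
    "programs to meet objectives": "plans",
    "program operations": "implementation",
    "program improvement": "progress",
    "program certification": "outcomes",
}

PROGRAM_LEVEL = ("selections of objectives or problems", "programs to meet objectives")
INSTRUCTIONAL_LEVEL = ("program operations", "program improvement", "program certification")

def evaluation_required(decision: str) -> str:
    """Return the evaluation activity that should inform the given decision area."""
    return CSE_PAIRS[decision]

# Example: list the evaluations needed to judge the instructional components.
for decision in INSTRUCTIONAL_LEVEL:
    print(decision, "->", evaluation_required(decision))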

The CSE Model requires continuous interaction among all elements and providers of a staff development program. Feedback from all participants in an inservice project is critical in the CSE Model. The feedback is used by project directors who want to capitalize on their own human resources.

Formative-Summative Evaluation

The basic objectives of any evaluation system are to determine (1) the extent to which the project objectives are being achieved and (2) the impact of the project on the participants. To accomplish these evaluation objectives, staff developers may choose to use two additional evaluation methodologies: formative and summative. Michael Scriven (1967) and others have suggested these modes. Let us examine the components.

Formative evaluation is designed to provide ongoing feedback as quickly as possible. Formative instruments are specifically designed to monitor the activities or components of a program as they take place, in order to determine where problems are emerging. Formative evaluation allows problems to be speedily identified and rectified.

Only a few selected items need to be checked in the course of a formative evaluation. These would all be based on the stated learning objectives for the project. The important point is to collect feedback while enough time remains to make corrections.

Summative evaluation is conducted as the final assessment of a project (or part of a project). Summative evaluations may take several forms, as long as they are consistent with the prescribed objectives of the program. Summative data can be tabulated into absolute responses and then given as a percentage for each item. Comparisons between participants may be made on summative data (but not on the formative measures). Recall that formative evaluations are designed to give feedback, whereas summative evaluation is for grading. These evaluations are placed at logical points in the project, such as at the ends of units, learning activities, or program elements. Most important, a single summative evaluation is inadequate. The summative sets are arranged in profiles to illustrate the sum of evaluation activities.
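
To make the tabulation concrete: a staff developer who keeps the raw responses on a computer could convert them to counts and percentages with a few lines of code. The Python sketch below is only an illustration; the item names and response categories are invented for the example.

from collections import Counter

def tabulate_summative(responses):
    """Convert raw summative responses into counts and percentages per item.

    responses: a list of dicts, one per participant, mapping an item to the
    response category that participant chose.
    Returns {item: {category: (count, percent_of_respondents)}}.
    """
    by_item = {}
    for answer_sheet in responses:
        for item, category in answer_sheet.items():
            by_item.setdefault(item, []).append(category)

    summary = {}
    for item, categories in by_item.items():
        counts = Counter(categories)
        total = len(categories)
        summary[item] = {c: (n, round(100.0 * n / total, 1)) for c, n in counts.items()}
    return summary

# Hypothetical example: three participants rating two items on a summative form.
data = [
    {"objectives met": "adequate", "materials useful": "very adequate"},
    {"objectives met": "adequate", "materials useful": "inadequate"},
    {"objectives met": "very adequate", "materials useful": "adequate"},
]
for item, breakdown in tabulate_summative(data).items():
    print(item, breakdown)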

Temporary Systems Approach

In any social system, permanent structures, such as institutions or organizations, endure beyond the lifetimes of their members. Humans expect their institutions to exist forever. Churches, the military, colleges, governments, families, and the schools are all permanent features of society. Thus, we live in a society marked by permanence and the knowledge that some things simply do not change very much.

Matthew B. Miles (1964) made these observations and then asked the question, How do permanent organizations change? One apparent solution was through the establishment of temporary systems within the permanent ones. Temporary systems, noted Miles, operate for only short durations, have well-established goals, and are expected to end after short periods of time. Workshops, conferences, clinics, seminars, and training sessions, in which a small number of people meet for a defined period of time to achieve a specified set of goals or objectives, operate as temporary systems.

While involved in a temporary system, participants temporarily drop most of their usual roles and responsibilities (from their permanent systems) and concentrate on a few short-term objectives. The participants know that they will be in the temporary system for only a brief period. During the temporary system phase of a training project, participants are free to try out new ideas, practice a new technique without the usual penalties for mistakes, and work in a generally supportive and noncompetitive climate removed from back-home interruptions.

To install a temporary system into the evaluation paradigm of any inservice education project, staff developers must consider five rather simple phases: (1) planning or preparing for the project; (2) organizing for the project's start-up; (3) operating the project; (4) closing the system, that is, preparing the participants for their customary roles; and (5) implementing the strategies learned in the project.

The evaluation of staff development activities is time-consuming and requires a commitment to using data that are generated. However, evaluations of all the activities of any inservice project should be carried out to ensure that the following take place:

Staff reactions and perceptions are obtained.
Adjustments are made as needed.
Successful activities are identified for future use.
Outstanding presenters are identified and used again.
Success or failure of a project can be determined early.
Long- or short-range profiles are compiled.
Participants learn that their evaluations have an impact on staff development.

When using a temporary systems model, data must be systematically collected from all participants. For example, if a workshop leader is unsure about the format and emphasis workshop participants want, he or she would query them, using a questionnaire such as that shown in figure 1. Decisions about the presentations could then be adjusted before the workshop begins. Feedback is an essential part of this model (figure 1).

FIGURE 1
Project Desires

Directions: Please place an X on the line between the paired words to indicate your desires for this workshop.

I desire that this workshop be
1. More information oriented - - - More practice oriented
2. Conducted by lecture as the primary format - - - Conducted with hands-on experiences
3. Oriented toward student problems - - - Oriented toward teacher problems
4. Theoretically oriented - - - Experientially oriented
5. Balanced: lectures, demonstrations, and activities - - - Information-giving only
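
If the figure 1 marks are entered into a computer, they can be summarized in a few lines before the workshop begins. The Python sketch below assumes each X is coded as a position from 1 (nearest the left-hand wording) to 4 (nearest the right-hand wording); that coding, and the sample responses, are this sketch's own assumptions, not part of the instrument.

def summarize_desires(marks):
    """Average the coded X positions for each paired item on the questionnaire.

    marks: dict mapping item number -> list of coded positions, one per
    participant, where 1 = left-hand wording and 4 = right-hand wording.
    """
    return {item: sum(positions) / len(positions) for item, positions in marks.items()}

# Hypothetical responses from three participants on items 1 and 2.
marks = {1: [3, 4, 4], 2: [4, 3, 4]}
for item, mean in summarize_desires(marks).items():
    leaning = "right-hand wording" if mean > 2.5 else "left-hand wording"  # 2.5 is the midpoint of the 1-4 coding
    print(f"item {item}: mean position {mean:.1f}, leaning toward the {leaning}")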

Figure 2 shows how feedback can be obtained on four elements of a typical inservice session. A final, or summative, evaluation could be conducted using the model in figure 3.

FIGURE 2
Feedback (Use after full-day session)

Directions: Please place an X on the line that best describes your response.

Very adequate / Adequate / Inadequate / Very inadequate
Very informative / Informative / Uninformative / Very uninformative
Very relevant / Relevant / Irrelevant / Very irrelevant

FIGURE 3
Perceptions

You are having a variety of experiences during this project and, of course, these experiences affect what you learn. These experiences and your consequent learning will help the program director improve the project.

For each item below, circle the number showing how well you think the management tasks have been done by the director and the staff.

Low 0 1 2 3 4 5 6 7 8 9 High
Frequencies

1. Project goals were not specified clearly.            0 1 2 3 4 5 6 7 8 9    Project goals were specified clearly.
2. The climate of this project was poor.                0 1 2 3 4 5 6 7 8 9    The climate of this project was very good.
3. The wrong people came to this project.               0 1 2 3 4 5 6 7 8 9    The right people came to this project.
4. The overall design of this project was ineffective.  0 1 2 3 4 5 6 7 8 9    The overall design of this project was quite effective.
5. This project did not get off to a good start.        0 1 2 3 4 5 6 7 8 9    This project did get off to a very good start.
6. This project will have no influence on how I teach.  0 1 2 3 4 5 6 7 8 9    This project will strongly influence how I teach.
7. Staff resources were poorly used in this project.    0 1 2 3 4 5 6 7 8 9    Staff resources were well used in this project.

Note: See Orlich and Hannaford (1986) for a 14-year longitudinal study using this model. The elements of goals, climate, design, and influence tend to be critical for success.

A school district staff developer could compile the data obtained from the various instruments and prepare profiles of various inservice efforts. Decision-making evaluation models are used to find out how well the project was perceived, in order to make rational decisions in the future. Trends that are observed can be maintained, if desirable, or changed if the trends are considered undesirable.
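
One way such a profile might be compiled, assuming the figure 3 responses are stored as 0-9 ratings per item for each session, is to average each item across participants and set the sessions side by side. The Python sketch below is an illustration only; the session names and ratings are invented.

def session_profile(ratings):
    """Average the 0-9 ratings for each item within one session.

    ratings: a list of per-participant dicts mapping item number -> rating.
    """
    totals, counts = {}, {}
    for participant in ratings:
        for item, value in participant.items():
            totals[item] = totals.get(item, 0) + value
            counts[item] = counts.get(item, 0) + 1
    return {item: totals[item] / counts[item] for item in totals}

def project_profile(sessions):
    """Build a profile of item means for every session, keyed by session name."""
    return {name: session_profile(ratings) for name, ratings in sessions.items()}

# Hypothetical data: two sessions, two participants each, on items 1 (goals) and 2 (climate).
sessions = {
    "fall workshop": [{1: 7, 2: 6}, {1: 8, 2: 5}],
    "spring workshop": [{1: 9, 2: 8}, {1: 8, 2: 9}],
}
for name, profile in project_profile(sessions).items():
    print(name, {item: round(mean, 1) for item, mean in profile.items()})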

The author has been using temporary systems management on all inservice projects that he has directed since 1972. There is no doubt that this method has improved the operation of the projects. In 1986, Marion Hannaford and the author reported on a 14-year longitudinal study that used the temporary systems approach. Of the traits listed in figure 3, four tended to be critical for success: (1) clear goals, (2) positive climate, (3) overall design, and (4) influence on instruction. These findings may not be surprising, for they represent elements critical for successful staff development programs.

Conclusion

There are many reasons to evaluate staff development programs and inservice education projects. Depending on the project objectives, one could use group tests, student achievement tests, classroom observations, attitude scales, or anecdotal records as mechanisms for determining the success of an inservice project. When using decision-making evaluation models, staff developers can make critical adjustments in any training program so that every participant masters the content and, more important, uses the skills or knowledge as was intended.

Bruce Joyce and Beverly Showers (1988) provided a practical and realistic rationale for evaluating inservice programs. They concluded that if we truly intend to increase student learning through staff development programs, serious evaluation of those programs will be necessary (p. 127). No question about it, systematic evaluation of inservice projects is essential.

REFERENCES

Alkin, M. C. 1970. Products for improving educational evaluation. Evaluation Comment, 3(1): 1-4.
House, E. R. 1978. Assumptions underlying evaluation models. Educational Researcher, 7(3): 4-12.
Joyce, B., and B. Showers. 1988. Student achievement through staff development. New York: Longman.
Miles, M. B. 1964. On temporary systems. In Innovations in education, edited by Matthew B. Miles. New York: Teachers College Press.
Orlich, D. C., and M. E. Hannaford. 1986. Longitudinal evaluation strategy for staff development. Paper presented at the National Staff Development Council Annual Conference, Atlanta, Georgia, December 15.
Scriven, M. 1967. The methodology of evaluation. AERA Monograph Series on Curriculum Evaluation, No. 1: 39-83.
Stufflebeam, D. L. 1971. The relevance of the CIPP evaluation model for educational accountability. Journal of Research and Development in Education, 5(1): 19-21.
Tuckman, B. W. 1985. Evaluating instructional programs, 2d ed. Boston: Allyn and Bacon.