Data-Driven Decision Making and Dewey's Science of Education

Education and Culture, Volume 37, Number 1, 2021, pp. 41–59. Published by Purdue University Press. https://muse.jhu.edu/article/836232

Natalie Schelling and Lance E. Mason


Indiana University Kokomo

Abstract
This paper considers elementary teachers’ perceptions of data-driven
decision making (DDDM) through the lens of Dewey’s science of
education. Dewey argues that educational science can be used to
improve teaching and learning, but it must be flexible, attentive
to multiple dimensions of growth, and grounded in real classroom
experiences with teachers at the helm. Results indicate that teachers
hold a view of classroom assessment consistent with Dewey's vision,
but often feel inhibited by the constraints of standardized assessments.
Keywords: DDDM, science, assessment, standardization

Introduction
This paper uses results from a qualitative study to consider how data can be used
for elementary teachers’ instructional planning in a manner consistent with
Dewey’s vision for a science of education. Dewey1 argues that science can be used
to make human endeavors, including education, more intelligent and socially
useful. This is ostensibly consistent with the current conception of data-driven
decision making (DDDM), the systematic use of student data to inform decisions in
educational settings.2 However, Dewey3 also cautions against the oversimplification
of using physical science as the model for human understanding, stating, “When
we introduce a like simplification into social and moral subjects we eliminate
the distinctively human factors.” Dewey4 argues that a “science of education” is
particularly complex and multifaceted, as many past and present factors influence
whatever conclusions can be derived (19). He aims at a human science that is flexible
and responsive, and that recognizes both linear and nonlinear dimensions of educational,
social, and moral growth.
The contemporary American education system has veered far from Dewey’s
vision. Today’s prevailing educational science emphasizes teachers’ data use as
evidence of their instructional effectiveness, but is often based upon the kind
of reductive scientific models that Dewey warned against. School districts and
teachers are forced to comply with these testing regimes through the high-stakes
consequences of the No Child Left Behind legislation (NCLB) and its successor, the
Every Student Succeeds Act (ESSA). The complex human factors acknowledged by
Dewey are ignored, as performance is boiled down to a series of externally defined,
quantifiable criteria in the form of test scores. It is clear that Dewey would critique
the prevailing use of data in schools as simplistic, overly focused on the
individual, and detrimental to the myriad dimensions of student growth. DDDM
arguably holds the potential for a more holistic approach, and the discussion
section will consider this model in light of Dewey's vision.
The qualitative study explored elementary teachers’ perceptions of DDDM.
Participants were asked to describe (1) their attitudes related to DDDM; (2) what
individuals or groups impacted their use of DDDM; and (3) their autonomy related
to conducting DDDM. This paper will draw from these qualitative findings to argue
that Dewey’s science of education would acknowledge the potential usefulness of
educational data when helping teachers and administrators better understand both
the strengths and needs of their students. Ultimately, it will be argued that such
a move would require reconsidering the purposes of education with a shift away
from narrow intellectual goals toward a focus on the whole child, with assessment
and DDDM being repositioned as less punitive and dictated more by teachers
themselves, rather than external stakeholders. As public discourse has begun to
turn against a narrow focus on standardized assessment, teachers and teacher
organizations should more forcefully advocate for a broader vision of education,
with more teacher autonomy, that considers the social and emotional dimensions
of children in addition to the intellectual.

Background
Data-Driven Decision Making
Definition and Purpose
The emphasis on data-driven decision making (DDDM) is largely the result of
NCLB legislation in the United States,5 which emphasized measurable outcomes of
education, prioritizing the use of standardized tests and their resulting data to inform
educational decisions. Following NCLB, many individual states adopted systems of
standardized assessment for students and of evaluations for teachers that emphasize
using assessment data to plan instruction. In many states, the results of standardized
assessments and evaluations of teacher effectiveness impact the funding allocated to
schools and districts. Indiana adopted the Indiana Statewide Testing for Educational
Progress (ISTEP) standardized assessment system and the RISE evaluation of teacher
effectiveness, which includes the use of DDDM within the classroom.6

DDDM, defined as “the systematic collection, analysis, examination, and
interpretation of data to inform practice and policy in educational settings,”7 is a
global term describing the use of data to inform educational decision making at
all levels of schooling. Within the classroom, DDDM is used to make decisions
about instruction, which can include guiding intervention, creating student groups,
prioritizing instructional time, and evaluating the effectiveness of instruction.8
Indiana’s RISE evaluation requires that teachers “use prior assessment data to
formulate achievement goals, unit plans, and lesson plans and incorporate
differentiated instructional strategies planning to reach every student.”9 Regardless
of the specific purposes of DDDM, the underlying theory claims that teachers
should continuously utilize student data to plan effective instruction.10
Proponents of DDDM encourage teachers to view the relationship between
instruction, assessment, and DDDM as a cyclical process.11 For example, the
Northwestern University Assessment/Accreditation Council12 proposes an eight-
step cycle: (1) defining learning objectives; (2) selecting or designing learning
activities and assessments with predetermined criteria; (3) engaging students in
learning activities and assessment; (4) collecting student data from instruction
and assessment; (5) systematically analyzing student data; (6) evaluating data with
respect to learning objectives and their criteria to identify areas in which students
need further improvement; (7) making instructional decisions about how to address
these areas of need; (8) redefining existing or creating new learning objectives.
Viewed in this cycle, DDDM maps onto steps 5 through 7. Data collected through
systematic assessment of students are used to identify students' needs, which can then
be addressed through targeted instructional strategies. In this way, DDDM allows
teachers to directly respond to students’ needs using reflexive instruction.13 This
model of assessment and DDDM operates on the assumption that teachers have
autonomy in the design and implementation of their instruction and assessment.
However, evidence indicates that this is not always the case.
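
To make the cycle concrete, the sketch below models a single pass through the eight steps in Python. It is purely illustrative and not drawn from the study or the Northwestern framework; the objective names, simulated scores, and mastery cutoff are hypothetical stand-ins for a teacher's own objectives, assessments, and predetermined criteria.

```python
import random

def run_cycle(objectives, students, cutoff=0.70):
    """One illustrative pass through the eight-step cycle; DDDM spans steps 5-7."""
    # Steps 1-2: learning objectives and an aligned assessment with predetermined
    # criteria are taken as given (here, a single hypothetical mastery cutoff).
    # Steps 3-4: engage students and collect data (scores are simulated here).
    data = {s: {obj: random.random() for obj in objectives} for s in students}

    # Step 5: systematically analyze the data (per-objective class averages).
    averages = {obj: sum(data[s][obj] for s in students) / len(students)
                for obj in objectives}

    # Step 6: evaluate against the criteria to identify areas needing improvement.
    needs = [obj for obj in objectives if averages[obj] < cutoff]

    # Step 7: make instructional decisions about how to address those areas.
    decisions = {obj: "reteach and regroup" for obj in needs}

    # Step 8: redefine existing or create new learning objectives for the next pass.
    next_objectives = objectives + [f"{obj} (revised)" for obj in needs]
    return decisions, next_objectives

decisions, next_objectives = run_cycle(
    ["comparing fractions", "multiplication facts"], ["A", "B", "C"])
```

The loop structure is the point proponents stress: the outputs of step 8 feed back into step 1, so the process is continuous rather than a one-time judgment.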
Systems of teacher evaluation, including RISE, are often ambiguous about the
type of assessment data that teachers are expected to use to plan instruction.14 As
such, emphasis often varies between schools, and even teachers, on the use
of standardized assessment data versus classroom assessment data, the latter resulting
from assessments selected or designed by teachers themselves. Classroom assessment
is driven by teachers’ own learning objectives and instruction as it relates to their
students. As such, it is often a more valid measurement of student learning than
standardized assessment. By contrast, standardized assessment is unable to capture
the nuances of individual students’ capabilities, and often does not fully reflect
teachers’ instruction. Relying on standardized assessment to conduct DDDM is also
contradictory to the continuous and cyclical nature of the process, as standardized
assessment occurs, at most, on an annual basis.

Impact on Student Achievement
Previous research supports policymakers’ claims that DDDM has a positive impact
on student achievement. Students in classrooms that utilize DDDM show marked
improvement on standardized assessments, whether they are state assessments15 or
benchmark assessments used as predictive measures of state assessments.16 These
gains in standardized test scores are seen across grade levels, but are particularly
pronounced among low-achieving or racial minority students, who are typically
disadvantaged by standardized assessments.17 Attempts to investigate the impacts
of DDDM on students have been limited to analyses of standardized assessment
results as indicators of student achievement.
This emphasis on standardized assessment not only contradicts the intended
purpose of DDDM as a continual process, but is also not reflective of teachers’
realities. In a survey of teachers about their use of assessment data, Hoover and
Abrams18 found that the majority of teachers used student assessment data to plan
instruction (e.g., change instructional pace, group students, differentiate, etc.). In
addition, the majority of teachers responded that this data was often the result of
classroom assessments they created themselves. Teachers reported that they utilized
standardized assessment data much less frequently. Further research investigating
teachers’ use of classroom assessment to conduct DDDM is needed. Research that
explores the potential uses of DDDM to impact outcomes beyond achievement,
such as social and emotional factors of students, would also enhance the field.

The Theory of Planned Behavior


The current study draws upon data collected from a larger mixed methods study
that utilized the Theory of Planned Behavior (TPB)19 as a theoretical framework
to investigate elementary teachers’ DDDM practices. TPB hypothesizes that
individuals’ intent to engage in a behavior, which leads to actual engagement,
can be predicted by their attitudes about the behavior, perceived social norms
related to the behavior, and perceived behavioral control. Attitudes are comprised
of instrumental attitudes, individuals’ thoughts about the behavior, and experiential
attitudes, individuals’ feelings about the behavior. Social norms include both
injunctive norms, individuals’ beliefs about who expects them to engage in the
behavior, and descriptive norms, individuals’ beliefs about other individuals or
groups that engage in the behavior. Perceived behavioral control is the combination
of individuals’ beliefs about their autonomy in engaging in the behavior and their
perceived capacity to engage in the behavior. The current study utilizes qualitative
findings related to elementary teachers’ instrumental attitudes, injunctive norms,
and autonomy related to DDDM. These findings will be reconsidered in the discussion
section in light of Dewey’s conception of educational science.
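
In the TPB literature, these constructs are often summarized schematically as a weighted combination predicting behavioral intention. As a sketch only (the linear form is a simplification, and the weights are estimated empirically for each behavior and population):

$$BI \approx w_1 A + w_2 SN + w_3 PBC$$

where $BI$ is behavioral intention (here, the intent to conduct DDDM), $A$ is attitude toward the behavior, $SN$ is the social norm, $PBC$ is perceived behavioral control, and $w_1$, $w_2$, $w_3$ are empirically determined weights.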

Theoretical Lens
Dewey was an advocate for the application of science in social affairs, including
education. To Dewey, the term science conveys the use of systematic inquiry to
control relevant matters more intelligently. Formal education, for Dewey, is an
applied domain and thus the methods of social inquiry are not distinct from other
social affairs, although they need to be considered in light of their unique contexts.
In this vision, Dewey’s educational science has three primary components: (1) It
needs to be epistemologically flexible so as not to encourage rigidity; (2) it should
be methodologically open-ended so as to consider the complexity of educational
endeavors; and (3) it should be grounded in actual classroom experiences. The
following section will take up these points in turn.
On epistemological flexibility, Dewey20 warns against fostering a positivistic
educational science that mimics the logic and techniques of the physical sciences. In
contrast to hard science, educational science is necessarily more complex because it
involves more factors. Measuring the progress of students with social and emotional
needs that vary based on individual dispositions, past experiences, and familial
support is infinitely more complex than calculating the growth of farm crops or
other nonhuman applications. Because of the multitude of variables to consider,
Dewey21 argues that one should not expect research to yield universal rules of adoption, nor
results to have immediate implications that can be put into practice. Such rigidity
works against the spirit of scientific inquiry:
When, in education, the psychologist or observer and experimentalist in any
field reduces his findings to a rule which is to be uniformly adopted, then, only, is
there a result which is objectionable and destructive of the free play of education
as an art.22
Rather than yielding ironclad rules of application, results should suggest new
avenues for action by the teacher, or novel questions to explore by researchers in
an ongoing process. The goal is to sharpen the lenses of the researcher in asking
further questions and of the teacher in developing more effective practices by
illuminating some of the complexities of the educational process.
As such, epistemological flexibility must be matched by a similar malleability
in research methodology. Dewey’s warning to not merely mimic the hard sciences
and his desire to keep the process continuous and open-ended require research
to be qualitative in addition to quantitative. Dewey states that data do more than
merely allow us to draw conclusions; they also
furnish data for further inquiries and conclusions. Hence the need that they
should not be too rapidly mechanized into a standard fixed form. There
must be flexible room for change or else scientific arrest will come from a
too rigid fixation of the molds in which data are cast.23

For Dewey, conclusions from research must remain tentative to some degree, even if
some results are conclusive enough to be actionable. Within an endeavor as complex
as education, involving psychological, social, and environmental variables, new
information may always be discovered that alters what might be the best course of
action for a teacher in a given situation. Thus, research methods must allow data
to be revisited and reconsidered in light of new evidence.
A crucial requirement for educational research, in Dewey's analysis, is that it be
grounded in practice and close to actual classroom experience. In fact, Dewey
argues that a lack of direct engagement is a key problem of the social sciences that
inform education, stating: “Much of the barrenness and loose speculation in the
human sciences is directly due to remoteness from the material that would
stimulate, direct and test thought."24 In response, Dewey calls for vital connections
between researchers and the persons and places of research.
Dewey sees practice as the beginning and end of educational science. Rather
than forming questions and problems remotely, Dewey advocates that “educational
practices provide the data, the subject-matter, which form the problems of
inquiry."25 In this conception, research is positioned not as an intellectual endeavor to
be performed from without, but as a practical response to classroom problems
involving students and teachers. Dewey explains:
As far as schools are concerned, it is certain that the problems which require
scientific treatment arise in actual relationships with students. Consequently,
it is impossible to see how there can be an adequate flow of subject-matter
to set and control the problem investigators deal with, unless there is active
participation on the part of those directly engaged in teaching.26

Achieving such a science requires a close relationship between researchers and
practicing teachers, with researchers taking a responsive role in constructing
questions grounded in the teacher's concerns. Dewey also states that the final arbiter of
successful conclusions should not be the researcher, but the classroom teacher, as
"practice alone can test, verify, modify, and develop the conclusions of these
investigations."27 Such a teacher may not have found a recipe for improved practice, but
rather will have improved insights and sharpened attunement to issues pertaining
to the education of their students.
In stark contrast to Dewey's vision, U.S. schools today are dominated by
standardization regimes due to the passage of NCLB legislation and its successor,
the ESSA. These policies tie federal school funding to annual high-stakes standardized
testing starting in third grade. They claim the mantle of science, though this
vision holds little in common with Dewey's conception. Rather
than beginning with problems in practice, they make sweeping assumptions about
the acquisition of disparate bits of information as the purpose of learning. In this
conception, education is reduced to the absorption of information and learning
problems are narrowly construed as helping students more efficiently acquire
information. Against Dewey’s call to not mimic the physical sciences, many of
the statistical measures employed by education reformers are pulled directly from
manufacturing and industry, where they were used to measure commodities such as corn yields.28
Such postures eliminate human complexity from the education process, reducing
it to mere quantifiable data in the form of test scores.
If educators and researchers hope to push back against standardization, it
would be useful to reclaim a more fruitful use of science from the vulgar
appropriation advanced by standardization advocates. DDDM is one such
possibility. In the following study, teachers were asked about their
experiences related to DDDM. The discussion section will consider their responses
in light of Dewey’s vision for a science of education. It will consider how closely
the current vision of DDDM matches Dewey's, according to teachers' responses, and
how schools could move toward this vision if teachers were better able to employ
DDDM in ways that they found more valuable.

Method
Participants
Given the influence of state policy on DDDM and teachers’ practices, the current
study focused on one state, Indiana. As a result of NCLB, Indiana developed a state
assessment system, ISTEP, and a teacher evaluation system, RISE, which
emphasizes assessment practice and DDDM.29 A total of nine elementary school teachers
participated in three focus groups. The teachers were purposefully sampled from
elementary schools in Indiana to represent various (1) grade levels; (2) teaching
experience; (3) school SES; (4) district locations (see Table 1). The percentage of
students receiving free or reduced lunch was used as an indicator of school SES.30
Geographical locations were identified using the National Center for Education
Statistics classifications31 (see Table 1 for classification descriptions).

Procedures
Convenience sampling was used to identify participants for the first focus group.
A university practicum coordinator made recommendations for other districts to
represent different locations and school SES. After the university practicum
coordinator helped make initial contact with the district, convenience sampling was
used to identify participants. Focus groups were conducted at times and locations
convenient for the participating teachers.

Table 1. Characteristics of Teachers Participating in Focus Groups

Group | Pseudonym                   | Grade | Years Teaching | Percent F/R Lunch | District Classification*
1     | Leslie (Reading specialist) | 3–5   | 4              | 60                | Town: Distant
1     | Annie                       | 5     | 16             | 49                | Town: Distant
1     | Laura                       | 4     | 7              | 60                | Town: Distant
1     | Angela                      | K     | 7              | 51                | Town: Distant
1     | Emily                       | K     | 4              | 51                | Town: Distant
2     | Heather                     | 3     | 5              | 32                | City: Small
2     | Stephanie                   | K     | 12             | 17                | Suburb: Large
3     | Faith                       | 4     | 2              | 59                | Town: Distant
3     | Katie                       | 1     | 4              | 78                | City: Small

*Town: Distant—Territory inside an urban cluster that is more than 10 miles and less than
or equal to 35 miles from an urbanized area; City: Small—Territory inside an urbanized
area and inside a principal city with population less than 100,000; Suburb: Large—Territory
outside a principal city and inside an urbanized area with population of 250,000 or more

Measures
A structured interview protocol was used to guide the focus group conversation.
Interview questions were created based on the guidelines suggested by Fishbein and
Ajzen32 to address constructs within TPB. The current study primarily analyzed
responses to four questions:
1. What do you think about conducting DDDM?
2. What individuals or groups approve/disapprove of you conducting DDDM?
3. Do people who are important to you think you should conduct DDDM?
4. Is conducting DDDM something you chose to do?

Question 1 addressed participants’ instrumental attitudes. Questions 2 and 3
addressed participants’ injunctive social norms. Question 4 addressed participants’
perceived autonomy.33 The focus group facilitator followed up on these questions
with elaboration prompts and clarification questions.

Analysis
Focus groups were audio recorded, then transcribed using the software Temi.
Researchers used deductive coding to categorize responses according to the research
constructs (i.e., attitudes, social norms, and autonomy). After data analysis, member checks
were conducted.34
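
As a loose illustration of the deductive direction of this analysis (and nothing more: the study's coding was performed through the researchers' judgment, and the codebook and indicator terms below are invented for the example), deductive coding assigns excerpts to predefined construct categories rather than letting codes emerge from the data:

```python
# Hypothetical sketch of deductive coding: transcript excerpts are assigned to
# predefined TPB construct codes. Real qualitative coding relies on researcher
# judgment; this keyword matching is only a stand-in that shows the direction
# of the process (constructs fixed first, data categorized second).
CODEBOOK = {
    "instrumental attitude": ["helpful", "useful", "important", "feedback"],
    "injunctive norm": ["principal", "administrator", "expect"],
    "autonomy": ["chose", "choose", "required"],
}

def code_excerpt(excerpt):
    """Return each construct whose (hypothetical) indicator terms appear."""
    text = excerpt.lower()
    return [construct for construct, terms in CODEBOOK.items()
            if any(term in text for term in terms)]

# Example: an excerpt mentioning feedback is categorized as instrumental attitude.
print(code_excerpt("It gives you good feedback if you're doing your job or not."))
```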

Results
Attitudes
Teachers were asked about their instrumental attitudes, that is, their thoughts about DDDM.
In general, teachers expressed positive instrumental attitudes. They thought that
conducting DDDM was helpful and important to give their instruction further
direction and target interventions for struggling students. Teachers discussed using
DDDM as a basis to define attainable goals for students' differing needs. In
particular, teachers thought that DDDM was helpful when assessment data reaffirmed
their practice or showed students’ growth. Noting mounting pressure to “justify”
or “prove” the effectiveness of their practices to stakeholders such as administrators
and parents, teachers thought it was beneficial to have tangible data that supported
the efficacy of their teaching. Katie reflected positively on the benefits she perceived
by using DDDM in her own classroom,
If you wanted to say, [a student might] have a learning disability, you want
test data and interventions that have data will back that up. It gives you
good feedback if you’re doing your job or not. It helps with the grouping.
It helps you get a real pulse on how the class is doing collectively and indi-
vidually. It gives you support and it gives something that’s not subjective,
so you can look, not that it’s a comparison contest, but my team, we look
at each other’s data. Like in our meetings we’ll literally say this person,
their kids score in the nineties, like sometimes they’ll get like a hundred
percent and they’ll be like, what did you do? And then it’ll be a very helpful
conversation for us as teachers. So there’s benefits professionally and then
there’s benefits for the kids for individualized instruction and then there’s
benefits for the parents too.

Like Katie, most teachers discussed benefits not only for their own practice, which
served their students, but also for helping stakeholders understand what and how
they were teaching, particularly given the high stakes associated with teaching
evaluations.

However, teachers made a key distinction between DDDM based on
classroom assessments and DDDM based on standardized assessments. Teachers
felt that DDDM was especially helpful when the data addressed their own
questions about students. Therefore, teachers emphasized the utility of using self-created
assessments as the foundation for DDDM. These assessments tended to be more
aligned with teachers' own goals and more reflective of their students. By comparison,
teachers did not think that standardized assessment data was as helpful or
informative for guiding their practice. In fact, many teachers discussed specific drawbacks
of relying on standardized assessment, both for themselves and their students,
particularly when considering the time spent taking standardized assessments. In
contrast to her perception of the benefits of using DDDM based on classroom
assessment, Katie shared her experience with standardized assessment,
All the testing, that’s like my biggest [complaint] and for me they’re so little,
like I have seven-year-olds stressed out over [standardized assessments]. . . .
And I mean the problem with data-driven decision making is you best
hope that the test represents what you taught and you better hope that the
kid tries their hardest. So there’s the negatives of “Is it reliable, is it really
representing how they are doing or is it representing how well they take a
test?” It’s good to be objective and not be subjective, but it’s also a tricky
balance because you’re dealing with people, little people, and the data try
to be hard and fast and you can’t always be hard and fast. . . . Last year I had
a class, they had terrible home lives and they brought them into the
classroom. Well, then you take my data and say that only like 60 percent of the
kids pass that test and your first inclination is, “Oh what a terrible teacher
or oh these kids are not getting it.” And it’s like, well they probably could
get it. . . . The human element is totally out when you’re viewing the data.

Across the focus groups, teachers discussed the limitations of standardized
assessment. Teachers reflected on the inability of standardized assessment to capture the
holistic view of the student, which is more attainable with classroom assessment.
Teachers emphasized that standardized assessment, and the analysis of the data,
occurred within the context of the already busy classroom, where they were teaching
while also attending to students' other needs.

Social Norms
Participating teachers were asked to identify individuals who influenced their
DDDM practices, particularly who expected them to conduct DDDM. In each focus
group, participants indicated that principals and district administrators greatly
emphasized the use of DDDM. This was especially important as administrators,
particularly principals, played a large role in the implementation of the RISE
evaluation. Teachers also acknowledged that administrators, attending to the stakes
associated with standardized assessment data, were under pressure themselves.
In general, teachers noticed that their administrators' conceptions of DDDM,
particularly the management of the process and implications of results, greatly
impacted their experience.
Supportive principals provided a low-stakes space for teachers to collaborate
in interpreting and implementing data in ways that enhanced instruction for
students. These principals took a team approach, with DDDM being a shared
responsibility. Teachers who perceived their principals as being supportive of their practice
and viewing student data as an opportunity for improvement felt more confident
in their own practices. In addition, teachers whose administration supported their
autonomy in the collection and use of assessment data in the classroom reaped the
benefits of DDDM by using data to adapt instruction to aid struggling students.
These teachers felt their instruction was not solely driven by standardized testing.
Further, these teachers did not mention stress related to the RISE evaluation, due
to the holistic and realistic conception of data held by their principals.
Some teachers discussed administrators who were overly focused on the
results of standardized assessment and often used data to shame teachers,
sometimes sharing results with the whole faculty. These teachers discussed feeling undue
pressure to perform, driven by principals’ emphasis on the results of assessment.
Katie reflected on these experiences,
[Our students’ results are] up on the screen and there’s your name and you’re
either red, green, or yellow and you’re like, “Oh boy, there are no secrets, no
secrets are being had here.” . . . I think the cumbersome part is however your
school or curriculum interprets it and how much emphasis is put on the data
being good. If you just let the data be what it is, that’s helpful. You know, if I
have a third of my class failing, that’s good to know and be aware of. . . . So
in terms of usefulness, good. But in terms of how it plays out sometimes I
would say not helpful in that way. . . . If the data could just be the data with
no implications, that would be much better. The problem is there’s stress
and shame and anxiety all attached to bad data and praise and reward and
money attached to some good data. So if data could just be data, I wouldn’t
have a problem with it. . . . But it’s when you’re going into a meeting and you
know that there’s a good chance that you’re about to be shamed or asked what
happened, “Hey, so 60 percent, what happened there?” And it’s like we had
snow days, our kids aren’t eating, you know, any number of things could’ve
happened. So if there wasn’t this need to justify bad data, that would be good.

This pressure, both on teachers and their students, created a negative culture of
assessment. Teachers with principals who focused on results, rather than progress,
were more hesitant to discuss data with colleagues and generally less confident
in their DDDM practices. In addition, the high-stakes teaching evaluation
process, which emphasizes DDDM, caused teachers anxiety to the detriment of their
instructional effectiveness.

One teacher described the effects of this pressure on students, “I had two
kids burst into tears during the last year and I had one kid puke during ISTEP
last year because he was just that nervous and it’s unnecessary because these are
good, strong, solid students." This teacher further explained that her principal,
in his pursuit of high assessment scores, often overlooked critical aspects of
students' experiences. For example, she noticed that he would frequently ask about
test scores before asking how her students were doing in general, which she found
to be unsympathetic. Further, teachers, such as Faith, felt that this pressure and
emphasis on test results created an unhealthy learning environment for students,
Our principal is on a kick right now of our kids need to have their
[multiplication] facts memorized. . . . ISTEP will be so much easier. So we are
really, really pushing to have our math facts memorized right now and that's
all he is focused on. . . . I mean, we do these times tests every day and he
wants us to have that extrinsic motivation up there for kids to see and for
kids to attain, that’s what they strive for. . . . And we’ve stressed scores and
grades so much, that’s what’s frustrating to me is that, you know, a B is not
bad, but I have kids who they’ll get a test back and has a B on it and they’re
upset by it. . . . Kids don’t really learn for the sake of learning anymore.
It’s all about learning to see what grade I got, what award I can get, what
praise I can get. And it’s not learning for the sake of, oh, that was actually
really interesting. . . . And I mean, how do you stop that?

Faith continued to explain that students would compete over test scores, rather
than setting their own personal goals. In these situations, DDDM is not achieving
its intended aims of allowing data to inform the creation of more effective learning
for students. Instead, a focus on results, which some principals treat as the sole
metric of success while ignoring the other needs of students and teachers, is detrimental
to student learning.

Autonomy
As with their attitudes, teachers' perceptions of their autonomy in DDDM differed
for standardized and classroom assessments. In general, teachers felt that they had
some degree of autonomy. There was an expectation from administrators and state-
level stakeholders to conduct assessment, but teachers felt that, at least for classroom
assessment, they were able to design assessments as needed and interpret the data
as it was useful to them, attending to the particular context of their class. Teachers
agreed that, regardless of external expectations, they would still implement their
own assessments to gauge the progress of their students. However, the emphasis
on standardized assessment in DDDM, as reflected in teaching evaluations, led the
teachers to feel that they had less autonomy. In reflecting on the implementation
and interpretation of standardized assessment, teachers agreed that they mostly
complied with the expectations of their administrators and state guidelines,

Laura: “You choose how much you’re going to do. It’s expected that you do some
type of assessment. But you choose how much you’re going to do day to
day or week to week.”
Angela: “I wanna know what I’m supposed to do and I want to do it the right way. I
think a lot of teachers have that kind of personality. They just do it because
they want to do it.”
Laura: “You chose to do it because that’s what’s expected. And you’re always going
to choose to do what’s expected. We’re rule followers.”
Leslie: “Which is why we’re still doing all of this testing, because maybe [if] a few
less of us were rule followers we would’ve done something about it by now.
It would look different.”
Teachers’ sense of autonomy reflected their attitudes about each type of assessment
and their associated stakes. DDDM based on classroom assessment, because they
found it useful and informative, was a practice they chose to use. In contrast, DDDM
based on standardized assessment, which they felt often did not appropriately reflect
their students or instruction, was conducted mostly because it was required and
driven by high stakes.

Discussion
Consistent with Dewey’s philosophy, teachers recognized that educational research,
specifically using student data, was useful and important in the instructional design
process. On Dewey’s points of epistemological and methodological flexibility, teach-
ers often identified student data from classroom assessment as assisting in a broader
and continuous diagnostic evaluation of students and their needs, rather than as
a providing an immediate or static prescription for practice. Teachers benefited
from analyzing data in a collaborative, deliberative process with other teachers and
professionals in which the data were but one component of a bigger picture in the
process. A collaborative approach also allowed teachers to gain multiple
perspectives on students' progress, perhaps from others who had taught the students. Here,
consistent with Dewey’s vision, points of data were revisited and reconsidered in
light of the particular experiences of the teachers with a given student, in addition
to considering a range of other metrics, both formal and informal, at the teachers’
disposal. Further, teachers in schools where conversations about data were
collaborative and low-stakes felt more comfortable having authentic and meaningful
discussions, which ultimately best served the needs of students.
Also consistent with Dewey’s assertion that research be grounded in con-
cerns from actual classrooms, teachers especially benefited from instances in which
they collected and interpreted their own assessment data. Teachers believed that
assessments selected and implemented by the teacher were representative of their
instruction. This allowed them to interpret and use data to inform instruction
in meaningful ways, specifically in the context of their class. This teacher-driven
approach allowed teachers flexibility in how they adapted their instruction in
response to student data. This afforded them the opportunity to conduct DDDM
that was reflective of both the content of instruction and their own pedagogy.
Generally, when teachers were given autonomy in what data they collected and
how they put it into action, they felt more ownership over the process and their
instruction.
Teachers further believed that DDDM grounded in classroom contexts
had a particularly positive impact on students. Teachers were able to
contextualize this data based on their knowledge of their students' experiences
and needs. For example, teachers, who are familiar with their own students, were
able to consider the implications of data in light of factors specific to the student,
such as learning disabilities, challenges at home, or school absence. Data based on
classroom assessment provided teachers with a dynamic, nuanced understanding
of students’ learning and further needs. Teachers also discussed conducting DDDM
on a regular and continuous basis within their own classrooms, giving them the
opportunity to reflect not only on students’ current learning but also their growth.
As such, instructional changes could also be made in ways that were reasonable for
and reflexive of individual students’ needs. This is consistent with Dewey’s vision
that emphasizes the unique contextual aspects of teaching and learning. From
this perspective, data interpretation can better serve holistic ends if employed and
interpreted by those closest to the matters at hand.
However, teachers felt that the implementation of DDDM often missed
its intended aims. In particular, teachers felt that when DDDM was based on
standardized assessment, which teachers recognized can only provide a narrow
and static evaluation of students’ capacities, the benefits often did not outweigh
negative outcomes, particularly when the focus was on achieving high scores at
the expense of students’ learning and general well-being. Teachers emphasized
that focusing on a single standardized assessment (i.e., the annual state assessment)
caused undue pressure for both themselves and the students. Students with anxiety
about these assessments were less able to learn effectively and required additional
attention from teachers. In addition, teachers noted that when students were aware
of their achievement data, it often promoted competitiveness among students and
an emphasis on extrinsic motivation, which harmed the learning environment.
Drawing upon Dewey, one can see that standardized assessments assume a universal rule of
adoption in the form of absorption of content knowledge, and assume a deficiency
on the part of teachers or students if passing scores are not attained, assumptions
that were salient both to teachers and their students.

Teachers were in alignment with Dewey in recognizing that standardized
assessments often did not adequately represent their students or instruction. In
particular, teachers emphasized that standardized assessments overlook many
factors they knew affected their students (i.e., difficult home lives, missed school, poor
test-taking skills, etc.). Standardized assessment, which occurs less frequently than
classroom assessment, was also unable to fully capture nuances of student growth.
Thus, simply reviewing this data did not represent the whole student. Further,
teachers were skeptical about the degree to which standardized assessment
represented the content taught and pedagogy used in their classrooms. Teachers
therefore struggled to view standardized assessment data as actionable for the purposes of
DDDM and were hesitant to rely on standardized data. Unfortunately,
they often felt as though they were forced to do so by administrators and
policymakers, often at the expense of their own autonomy in the classroom. This
supports Dewey’s assertion that educational decisions must begin and end with
practicing teachers in their classrooms, and a move toward more meaningful use
of educational data must yield more autonomy to classroom teachers.
Results also indicate that principals who were overly focused on standardized
achievement numbers, themselves under pressure from state-level stakeholders, often
created an environment in which teachers felt pressured to attend only to results,
sometimes at the expense of their own pedagogy or students' needs. These
teachers were less likely to participate in the deliberative sharing of information with
colleagues that yielded the most fruitful use of DDDM, particularly if there was
an attitude of shame surrounding subpar data. This limited their ability to
consider students in more holistic ways consistent with Dewey's educational science.
Thus, it appears that the imposition of high-stakes standardized assessment negatively
affects not only instruction, but also teachers' assessment and evaluation of
their students.

Conclusion
The conclusions of this study suggest that DDDM, when used in a manner consistent
with Dewey’s science of education, can assist classroom learning. The results of the
qualitative study identify a great deal of alignment between Dewey’s vision and the
needs of classroom teachers and their students. Like Dewey, teachers recognize
the need to contextualize data in light of the situational complexities before them,
and to use the information diagnostically, rather than punitively. The study shows
teachers using data in a deliberative manner with other professionals as part of a
broader picture of student growth.
Perhaps most evident is the alignment between Dewey and the teacher
respondents’ focus on such work being grounded in actual classrooms. Teachers
were strong proponents of using classroom assessment in ways that were useful
in determining the strengths and needs of their students. Like Dewey, they were
skeptical of standardized assessments that reduced student data to numbers on a
single measure. This is perhaps unsurprising, as teachers are the ones closest to
students and thus should most directly recognize the strengths and weaknesses of
their students, as well as the shortcomings of standardized assessments that often
fail to adequately capture those dimensions. As noted, teachers and their students
are also the ones to bear the brunt of the dehumanizing consequences of
standardization, which first simplifies the results of learning to mere numbers and
then, through punitive measures, leaches the human elements from the learning
process. From Dewey's perspective, the stress to teachers and students
identified in this study is detrimental to the broader goals of social and emotional
growth that are more difficult to measure. This stress was felt by both students
and teachers, particularly when principals were strident about
enforcing standardization.
It may behoove professional teacher organizations and other teacher
advocacy groups to find ways to make the case for a more holistic use of assessment
consistent with Dewey’s science of education, although the broader matter may
come down to public trust in teachers. If the public does not have adequate trust
in teachers as grounded researchers who can intelligently respond to the needs of
their students, then it is easier for the reductive standardization models to persist
despite their harm to teachers and students. Part of any rhetorical case, then, must
also include improving the public reputation of teachers as independent
professionals with the capability to intelligently and accurately measure the progress and
growth of their students.

Notes
1 John Dewey, Democracy and Education (New York: Free Press, 1916); John Dewey,
The Sources of a Science of Education (Reprint, New York: Liveright, 2013).
2 Ellen B. Mandinach, “A Perfect Time for Data Use: Using Data-Driven Decision Making
to Inform Practice,” Educational Psychologist 47 (2012): 71–85.
3 John Dewey, The Later Works, 1925–1953, Volume 3: 1929, edited by Jo Ann Boydston
(Carbondale, IL: Southern Illinois University, 1984), 173.
4 Dewey, The Sources of a Science of Education, 19.
5 Viki M. Young and Debbie H. Kim, "Using Assessments for Instructional Improvement:
A Literature Review," Education Policy Analysis Archives 18, no. 19 (2010).
6 Indiana Department of Education, “RISE Evaluation Model: Evaluator and Principal
Handbook Version 3.0,” Indiana Department of Education, January 2020.
7 Mandinach, “A Perfect Time for Data Use.”
8 Laura Hamilton, Richard Halverson, Ellen Mandinach, Jonathan A. Supovitz, Jeffrey C.
Wayman, and What Works Clearinghouse, "Using Student Achievement Data to Support
Instructional Decision Making," IES Practice Guide, NCEE 2009-4067, National Center
for Education Evaluation and Regional Assistance, 2009; Indiana Department of
Education, "RISE Evaluation Model: Evaluator and Principal Handbook Version 3.0,"
Indiana Department of Education, January 2020.
9 Indiana Department of Education, “RISE Evaluation Model,” 3.
10 Hamilton et al., "Using Student Achievement Data"; Indiana Department of Education,
"RISE Evaluation Model"; Mandinach, "A Perfect Time for Data Use."
11 Hamilton et al., "Using Student Achievement Data"; Indiana Department of Education,
"RISE Evaluation Model."
12 Northwestern University Searle Center for Advancing Learning and Teaching,
"Assessment Process," Assessment of Student Learning, 2016.
13 Hamilton et al., “Using Student Achievement Data.”
14 Indiana Department of Education, “RISE Evaluation Model.”
15 Jane Armstrong and Katy Anthes, "How Data Can Help," American School Board Journal
188, no. 11 (2001): 38–41.
16 Deven Carlson, Geoffrey D. Borman, and Michelle Robinson, “A Multistate District-Level
Cluster Randomized Trial of the Impact of Data-Driven Reform on Reading and
Mathematics Achievement," Educational Evaluation and Policy Analysis 33, no. 3 (2011): 378–98.
17 Armstrong and Anthes, “How Data Can Help”; Carlson et al., “A Multistate District-Level
Cluster Randomized Trial.”
18 Nancy R. Hoover and Lisa M. Abrams, "Teachers' Instructional Use of Summative Student
Assessment Data," Applied Measurement in Education 26, no. 3 (July 2013): 219–31.
19 Martin Fishbein and Icek Ajzen, Predicting and Changing Behavior (New York:
Psychology Press, 2015).
20 Dewey, The Sources of a Science of Education, 26.
21 Ibid., 22–23.
22 Ibid., 14.
23 Ibid., 46.
24 Ibid., 41.
25 Ibid., 33, emphasis in original.
26 Ibid., 48.
27 Ibid., 34.
28 See Diane Ravitch, Reign of Error: The Hoax of the Privatization Movement and the
Danger to America's Public Schools (New York: Alfred A. Knopf, 2013).
29 Indiana Department of Education, “RISE Evaluation Model.”
30 Indiana Department of Education, "Indiana Department of Education School &
Community Nutrition Programs."

31 Office of Management and Budget, "Standards for Defining Metropolitan and
Micropolitan Statistical Areas; Notice (Federal Register Vol. 65)."
32 Fishbein and Ajzen, Predicting and Changing Behavior.
33 Ibid.
34 John W. Creswell, Research Design: Qualitative, Quantitative, and Mixed Method
Approaches, 4th ed. (Thousand Oaks, CA: Sage Publications, 2014).

Bibliography
Armstrong, Jane, and Katy Anthes. “How Data Can Help.” American School Board Journal
188, no. 11 (2001): 38–41.
Carlson, Deven, Geoffrey D. Borman, and Michelle Robinson. “A Multistate District-Level
Cluster Randomized Trial of the Impact of Data-Driven Reform on Reading and
Mathematics Achievement.” Educational Evaluation and Policy Analysis 33, no. 3
(2011): 378–98.
Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Method Approaches,
4th ed. Thousand Oaks, CA: Sage Publications, 2014.
Dewey, John. Democracy and Education. New York: Free Press, 1916.
Dewey, John. The Later Works, 1925–1953, Volume 3: 1929. Edited by Jo Ann Boydston.
Carbondale, IL: Southern Illinois University, 1984.
Dewey, John. The Sources of a Science of Education. Reprint. New York: Liveright, 2013.
Fishbein, Martin, and Icek Ajzen. Predicting and Changing Behavior. New York: Psychology
Press, 2015.
Hamilton, Laura, Richard Halverson, Ellen Mandinach, Jonathan A. Supovitz, Jeffrey C.
Wayman, and What Works Clearinghouse. “Using Student Achievement Data to Support
Instructional Decision Making.” IES Practice Guide. NCEE 2009-4067. National Center
for Education Evaluation and Regional Assistance, 2009.
Hoover, Nancy R., and Lisa M. Abrams. "Teachers' Instructional Use of Summative Student
Assessment Data." Applied Measurement in Education 26, no. 3 (July 2013): 219–31.
doi:10.1080/08957347.2013.793187.
Indiana Department of Education. “Indiana Department of Education School & Community
Nutrition Programs." Retrieved in 2017 from
https://www.doe.in.gov/nutrition/school-nutrition-programs.
Indiana Department of Education. “RISE Evaluation Model: Evaluator and Principal
Handbook Version 3.0." Indiana Department of Education, January 2020.
https://www.doe.in.gov/sites/default/files/evaluations/rise-handbook-30.pdf.
Mandinach, Ellen B. “A Perfect Time for Data Use: Using Data-Driven Decision Making to
Inform Practice." Educational Psychologist 47 (2012): 71–85.
https://doi.org/10.1080/00461520.2012.667064.

Northwestern University Searle Center for Advancing Learning and Teaching. “Assessment
Process." Assessment of Student Learning, 2016.
http://www.northwestern.edu/searle/assessment-of-student-learning/assessment-process/index.html.
Office of Management and Budget. “Standards for Defining Metropolitan and Micropolitan
Statistical Areas; Notice (Federal Register Vol. 65)." Retrieved in 2000 from
https://nces.ed.gov/pubs2007/ruraled/exhibit_a.asp.
Ravitch, Diane. Reign of Error: The Hoax of the Privatization Movement and the Danger to
America’s Public Schools. New York: Alfred A. Knopf, 2013.
Young, Viki M., and Debbie H. Kim. “Using Assessments for Instructional Improvement: A
Literature Review.” Education Policy Analysis Archives 18, no. 19 (2010).

Natalie Schelling is an assistant professor of educational psychology at Indiana
University Kokomo, where she teaches courses in classroom assessment and
educational psychology. Her research investigates factors that influence teachers'
assessment practice and data-driven decision making.

Lance E. Mason is an associate professor of education at Indiana University
Kokomo. His research examines the intersections of media, politics, and democracy
in education, and has appeared in journals such as Curriculum Inquiry, Theory and
Research in Social Education, Dewey Studies, Democracy & Education, and Social
Education. He is currently coeditor of the CITE journal Social Studies.
