
This article was downloaded by: [Stanford University Libraries]

On: 19 July 2012, At: 04:23


Publisher: Routledge

Journal of the Learning Sciences
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/hlns20

Design Research in Education: Yes, but is it Methodological?
Anthony (Eamonn) Kelly
Version of record first published: 17 Nov 2009

To cite this article: Anthony (Eamonn) Kelly (2004): Design Research in Education: Yes, but is it Methodological?, Journal of the Learning Sciences, 13:1, 115-128

To link to this article: http://dx.doi.org/10.1207/s15327809jls1301_6

THE JOURNAL OF THE LEARNING SCIENCES, 13(1), 115–128
Copyright © 2004, Lawrence Erlbaum Associates, Inc.

Design Research in Education: Yes, but is it Methodological?

Anthony (Eamonn) Kelly
Graduate School of Education
George Mason University

It is difficult to write a commentary on emerging methods. One wishes to avoid the temptation of the negative critic (Dewey, 1980):

… the critic, guided by personal predilection or more often by partisan conventionalism, will take some one procedure as his criterion of judgment and condemn all deviations from it as departures from art [read: research] itself. He then misses the point of all art, the unity of form and matter, and misses it because he lacks adequate sympathy, in his natural and acquired one-sidedness, with the immense variety of interactions between the live creature and his world. (p. 313)

Equally, one does not want to demonize those who are thoughtful about the use
of methods such as randomized trials (see Mosteller & Boruch, 2002), techniques
against which design study methods are wrongly pitted.
I hope I make clear that I see design study methods as a protean expression of
research “impressionists” whose art will illuminate and enrich the “studio” research art we revered until the fertile contributions of Collins, Brown, Hawkins,
and Pea in the early 1990s. The next task is to establish the logos of design research
so that we can argue, methodologically, for the scientific warrants for its claims.
Thus, although it is important for the health of a set of methods that profusion pre-
cede pruning (Kelly, 2003), it is time for the pruning to begin.
This collection of articles provides evidence for the vigor of design studies in
education. Each of the articles provides a substantial contribution to prior efforts
(e.g., Kelly, 2003), and to articles that have so far primarily appeared in edited
books (e.g., Collins, 1999). Other attempts at creating design-based formative research methods exist (Reigeluth, 1989; English & Reigeluth, 1996; Reigeluth & Frick, 1999). A form of design research in education has been active in Europe for some time, called “developmental research” (e.g., van den Akker, Branch, Gustafson, Nieveen, & Plomp, 1999).

Correspondence and requests for reprints should be sent to Anthony (Eamonn) Kelly, Professor, Graduate School of Education, George Mason University, 4085 University Drive, Fairfax, VA 22030. E-mail: [email protected]
Collins, Joseph, and Bielaczyc (this issue) focus on the later work of Brown
(a co-originator of the concept of design experiments in education) and current
research to illustrate how insights from design experiments can guide a series of
studies. The article by diSessa and Cobb labels an explanatory construct an “ontological innovation” and shows how it can support and propel a number of related studies. The article by Fishman et al. pushes not only design study research, but educational research, generally, to consider the outer reaches of the
model of design and research stages described in Bannan-Ritland (2003); viz.,
diffusion, adoption, and adaptation of innovative practices. If the focus on the
“ultimate fate” of adoption of the intervention/artifact by others is absent during a study or across a series of studies, then local, but not broader, impact is likely to be demonstrated.
Below, I begin with a brief discussion on the classification of design studies and
then highlight the methodological challenges that need to be addressed if we are to
develop design studies from a loose set of methods into a rigorous methodology.

CLASSIFICATION AND SCOPE OF DESIGN STUDIES

Engineering metaphors for design studies are proving useful in science and in
mathematics education (diSessa & Cobb, this issue), in the development and sustenance of communities of learners and passion curricula (Collins et al., this issue),
and in cognitively informed technology interventions and their survival in the real
world of school systems (Fishman et al., this issue).
One way to classify design studies is by characterizing their “outcomes” or “products.” Some design studies wish to inform us about a model of practice, others about learning, still others about the design and use of a new piece of software or “learning environment.” As Simon (1969) noted, design is not design without some form of designed artifact—even if the goal of the artifact(s) is to advance a different outcome such as new theory. Let me quickly add that the concept of an “artifact” need not be “concrete,” such as a computer program; it might describe aspects of “activity structures, institutions, scaffolds and curricula” (Design-Based Research Collective, 2003, p. 6). Nevertheless, design remains a transitive verb. In my opinion, design studies should produce an artifact that outlasts the study and can be adopted, adapted, and used by others (e.g., either researchers or teachers); otherwise, the fact that the study used an iterative process simply characterizes the procedures that were followed.

Artifacts
Process as Artifact: Teaching Experiments
Some educational researchers work directly with teachers and the teaching process; in fact, the process of engagement between the teacher and student (with the researcher sometimes playing the role of teacher) can be claimed to be the object of study (for a history of this method and its roots in Russian teaching experiments, see Steffe and Thompson, 2000). Note that the word “experiment” is used advisedly in teaching experiment methods. For example, Steffe and Thompson (2000) and Confrey and Lachance (2000) refer explicitly to the testing, respectively, of hypotheses and conjectures, though the methodological implication of these words deserves study (see following paragraphs).
The methods of the teaching experiment may be subdivided into variants such
as “multi-tiered teaching experiments” (Lesh & Kelly, 2000); “transformative
teaching experiments” (Confrey & Lachance, 2000); and “collaborative teaching
experiments” (Cobb, 2000). I value the adjective “teaching” in the label and prefer
it to the generic label “design experiment.”
A further reason to retain the adjective “teaching” is that these variants of design studies are centrally concerned with curricular subject matter — in the above cases, with the teaching and learning of mathematics. For example, one of the
touchstones for judging progress in a teaching experiment on the psychological
concept of learning transfer was students’ constructions of mathematical concepts
(e.g., Lobato & Siebert, 2002; Lobato & Thanheiser, 2002).

Software as Artifact
At a different pole are studies that have a focus on the design of a model of
learning or teaching as reified in a software artifact. The design of such artifacts
usually involves engineering a broader “learning environment.” Space limitations
preclude more than a listing of a small number of studies that may fall under this
umbrella (e.g., the Progress Portfolio, Loh, Radinsky, Russell, Gomez, Reiser, & Edelson, 1998; WorldWatcher, Edelson, Gordin, & Pea, 1999; LiteracyAccess Online, Bannan-Ritland, 2003; GenScope/BioLogica, Hickey, Kindfield, Horwitz, & Christie, 1999; SimCalc, Roschelle, Kaput, & Stroup, 2000; and the Jasper Series, Cognition and Technology Group at Vanderbilt, 1997).
The point here is not that these studies produce software per se. Rather, it is that research questions about learning (of students, teachers, and researchers) are reified, explored, and tested through the design and use of the software/learning environment. Indeed, some claim that some student cognitions may not emerge without a designed environment (e.g., Roschelle et al., 2000).
Centrally, the existence of a software artifact or conceptual framework (e.g., from a teaching experiment) allows for its use and testing elsewhere. Its later adoption elsewhere is not a trivial matter, of course, and this important problem has not yet been the focus of attention of design studies (Fishman et al., this issue).

DESIGN STUDIES: A SET OF METHODS OR A METHODOLOGY?

Proponents of design studies claim that these studies can elicit a generative framework, advance our understanding of disciplinary knowledge, serve as an incubator for new research techniques, advance our skill in designing learning environments, or lead to better instrumentation – all features that may prove useful in the same or future studies.
If these goals are going to be more fully realized, however, design studies must
develop from a loose set of methods to a methodology. A method is a procedure, a
process, a set of steps to follow. Design studies to date have been described primarily using a set of process descriptors (e.g., these studies are interventionist, iterative, process focused, collaborative, multileveled, utility oriented, and theory driven; see Shavelson, Phillips, Towne, & Feuer, 2003). Unless this set of procedures is undergirded by a conceptual structure that forms the basis for the warrants for their claims, design study methods do not constitute a methodology and will contribute only haphazardly to an aggregative science of learning. Please note that I am not suggesting for a moment that design researchers are unaware of any of the following issues. Rather, I am suggesting that design studies are a new form of research action that requires a rethinking of these standards. A mature methodology will have certain characteristics, including, but not limited to, the following:

Argumentative Grammar
An argumentative grammar is the logic that guides the use of a method and that
supports reasoning about its data. It supplies the logos (reason, rationale) in the
methodology (method + logos)1 and is the basis for the warrant for the claims that
arise. Thus, for randomized field trials, the logos draws from the structure of the arguments provided by R. A. Fisher concerning small sample analysis, statistical reasoning, and randomization procedures (see Bennett, 1990).
The argumentative grammar of randomized field trials can be described separately from its instantiation in any given study so that the logic of a proposed study and its later claims can be criticized. Thus, many reviewers reject studies not on the choice of method (procedure), but on their violation of the underlying logos that one expects to see with that choice of method. Indeed, one may argue that the elegance of the argumentative grammar provided by Fisher (including, as he does, a homely example about drinking tea) is attractive to policymakers and reassures them that randomized field trials provide a “scientific” method in education.

1The same root appears in the names of many sciences (e.g., bio-logos, psyche-logos, socio-logos).
What, therefore, is the logos of design studies in education? What is the gram-
mar that cuts across the series of studies as they occur in different fields? Where is
the “separable” structure that justifies collecting certain data and not other data and
under what conditions? What guides the reasoning with these data to make a plau-
sible argument? Until we can be clear about their argumentative grammar, design
study methods lack a basis for warrant for their claims. Please note that a simple assertion that design studies use “grounded theory” or “thick description” with a
mere citation to some complex source does not constitute an acceptable basis for
according design studies the status of a methodology. Indeed, in my opinion, these
bibliographic sources inadequately address the data-analytic methods and problems of design studies.

Contributions to Problems of Demarcation or Meaningfulness
Problem of Demarcation
This section considers the character of the claims that can be derived from design study methods (see Runkel & McGrath, 1972; Kelly & Sloane, 2003). In order to take their place as a methodology, design studies must clarify how their results contribute primarily to the problem of demarcation, to the problem of meaningfulness, or to both. The problem of demarcation describes the application of argumentative techniques to differentiate scientific claims from those of pseudoscience, or from metaphysical claims (Popper, 1965). Scientific arguments backed by empirical methods can, for example, debunk astrology, yet refine astronomy (see also National Academy of Sciences, 1998).
The question for design study researchers is, “Can this new set of methods establish boundaries (demarcations) between sound and unsound claims about learning and teaching?” If design studies cannot develop in ways similar to randomized field trials, survey methods, or rigorous ethnography (to name some mature methodologies), then their contribution to the problem of demarcation may be limited. I read the following list of descriptors of design studies by Collins (1999) to suggest that these methods are currently unequal to the task. According to Collins (1999), design studies:

1. Conduct research in messy non-laboratory settings.
2. Conduct research involving many dependent variables.
3. Characterize the situation as opposed to controlling variables.
4. Use flexible design revision instead of staying with fixed procedures.
5. Value social interaction over social isolation in learning.
6. Generate profiles rather than test hypotheses.
7. Value participants’ input for design and analysis rather than relying solely on the judgments of the researcher.

When one foregoes experimental controls, how can one generalize to other settings, regardless of how rich the local descriptions are? When multiple dependent variables are used, how does one tease complex interactions apart, and are causal attributions plausible? While allowing for flexible design revision may make sense in engineering product design, what are the consequences when this method is used in educational settings where the “product” is something more ephemeral, called learning, and the status of the claims is intended to be general? Although it is attractive to consider social interaction during a design experiment, is it not also important to foreground individual cognition and so link to other areas of science (e.g., McCandliss, Kalchman, & Bryant, 2003)?
Although it is attractive to develop profiles of the learner along a number of dimensions, what is the basis for selecting these profiles and not others that might appear equally plausible, since the data are so noisy? Should the testing of hypotheses be eliminated entirely, or what does “hypothesis testing” mean in such a fluid milieu? Finally, although much can be said in favor of coparticipant involvement during research, what impact does the involvement of many voices have on the inferential character of any findings that result from the study? Whose voice takes precedence in scientific claims, and why?

Generalizations Over Actors


This section on generalizations draws heavily on Runkel and McGrath (1972). Since many design studies focus on just a few subjects (often students), there is a problem of generalization of the descriptions to a larger N (e.g., other students in the same class who are not the focus of study, students in nonintervention classes, students in other schools or settings, and so on). The method thus lacks the sampling and descriptive power of a survey, making claims about “actors” (in this case, students) weak.
The generalization-to-actors problem is confounded by the fact that the sample of cognitive behavior of students is in response to a targeted perturbation (i.e., as part of a “design experiment”), which weakens generalization to cognitions in unstudied classmates, cognitions in the larger student body, and to the normal cognitions of children outside of school settings. In other words, the sampling problem over actors is compounded.

Generalizations Over Behaviors


In design studies, the notion of experimental control is often consciously violated so as to serve local emerging needs. Thus, since there is little attempt at instituting either structural or statistical controls in design studies (e.g., the use of some form of experimental design or covariation analyses), the warrant for making causal claims about behavior (in the face of untested alternative hypotheses) is weak (Shavelson et al., 2003).2

Generalizations Over Contexts


It might be argued that claims about context are legitimate for design studies, but notice, again, that these contextual descriptions are not about “normal life in classrooms”; rather, the “context” is a unique, engineered, acted-upon designed environment. In other words, the findings that might come from a more standard ethnography, in which intervention and iteration are not central, do not typically apply. As such, Barab, Thomas, Dodge, Carteaux, and Tuzun (in press) introduced the term “design ethnography.” Whereas the basic ethnographer describes and interprets a particular culture, and the applied or critical ethnographer (potentially) critiques and changes the local culture, the design ethnographer goes one step further, “actually reifying the social commitments and emergent critique into an artifact that can be used by people in contexts beyond those in which the initial ethnographic work was carried out.” It remains the shared responsibility of the design research community in education to help explicitly spell out the methodological strengths and weaknesses of the emerging design ethnography methods.

Problem of Meaningfulness
If design study methods are not yet developed sufficiently to construct generalizations that contribute to the problem of demarcation, does that mean that they have no value or can provide no generalizable statements? On the contrary, important scientific discoveries have not typically come about as a result of a scientist slavishly following the “scientific method” (Holton, 1998). Indeed, Medawar (1991) argued:

[T]he scientific article is a fraud in the sense that it does give a totally misleading narrative of the processes of thought that go into the making of scientific discoveries. The inductive format of the scientific article should be discarded. The discussion which in the traditional scientific article goes last should surely come at the beginning. The scientific facts and scientific acts should follow the discussion, and scientists should not be ashamed to admit, as many of them apparently are ashamed to admit, that hypotheses appear in their minds along uncharted byways of thought; that they are imaginative and inspirational in character; that they are indeed adventures of the mind. What, after all, is the good of scientists reproaching others for their neglect of, or indifference to, the scientific style of thinking they set such store by, if their own writings show that they themselves have no clear understanding of it? (p. 233)

2It should be noted that the criticism made by Shavelson et al. (2003) was weakened by its attack on the narrative methods preferred by Bruner, which do not describe adequately the data-analytic techniques used by, for example, Cobb and his colleagues (e.g., McClain, 2002; McClain & Cobb, 2001).

In other words, design studies, particularly to the extent that they are hypothesis and framework generating, may be viewed as contributing to model formulation (rather than to model estimation or model validation; see Sloane & Gorard, 2003). Model and hypothesis generation is a crucial part of conducting a worthwhile scientific investigation. It does not represent some “pre-scientific” messing around that should be accorded little status. For example, Russell Hulse (Nobel Laureate in Physics, 1993) noted in a talk at the National Science Foundation (Hulse, 2003) that an infinite number of hypotheses may be tested. This machination of testing hypotheses (i.e., the mechanical application of the “scientific method”) does not, by itself, create useful knowledge. It is important, rather, to test powerful (i.e., construct-advancing) hypotheses, which often emerge from studies directed primarily at the problem of meaningfulness.
In this sense, the recent National Research Council report (NRC, 2002) is clear about the function of good questions (hypotheses) in advancing science. But, apart from a footnote noting that good observations sometimes come “out of the blue,” there is little discussion or direction on how a researcher might generate one (p. 57). There is a suggestion in this report that good questions can be gleaned from a literature review, but this exhortation begs the question of how the good questions in the literature emerge. The report does not explain how one might go beyond any literature’s constructs. Indeed, the models guiding a given scientific literature (which are always tentative) may be inhibitors to progress. Müller, corecipient of the Nobel Prize for the discovery of high-temperature superconductors, decided after a thorough, but fruitless, literature review: “I just don’t talk to the theoreticians. They just held me back” (cited in Holton, 1998, p. xxiv).
Note that I am, of course, not advocating dispensing with literature reviews; rather, I am agreeing with the authors in this special issue that the constructs that animate any competent literature review often emerge in design contexts and that the character of these studies may promote the identification and growth of new ideas and constructs.
In this way, design studies can contribute to the problem of meaningfulness, a
problem that precedes and is foundational for later contributions to addressing the
problem of demarcation. Indeed, how can one establish demarcations for claims if
the scientific concepts under consideration are sparse or unclear?

Generalization of conceptual frameworks or articulation strategies. The contribution of design studies to generalization, as I see it, is primarily in their contribution to the thinking of other researchers who must conduct research in newly explored areas. Because the conceptual frameworks in design studies grow authentically and are grounded in the lived experience of teachers and students, they often appeal to researchers who work in similar contexts. Thus, a mathematics education researcher may find the concept of socio-mathematical norms attractive because it may guide observation, suggest variables to study, or help make sense of data. Later studies, using different methodologies, may extend generalizations across actors, behaviors, and contexts, and extend the work into the purview of studies of diffusion of innovations (Rogers, 1995).

Use of Technical Language

If the discussions about design studies remain within a small community, then
problems with terminology are not that important. However, when the work of a
research community becomes the focus of attention of a larger community of re-
searchers, then the use of terms that are transparent or at least understandable by
the larger community is highly desirable.
One problematic word, for me, is “theory.” According to the National Academy of Sciences (NAS, 1998), “In science, a [theory is] a well-substantiated explanation of some aspect of the natural world that can incorporate facts, laws, inferences, and tested hypotheses” (p. 5). Scientists use the word, guardedly, to describe a hard-won consensus statement or model. In the case of the teaching of evolution (the topic of NAS, 1998), for example, the authors point out the difficulty in communicating with the public when the concept of “theory” has been debased in the vernacular to mean mere opinion.
The word “theory” surfaced in various forms in the writings of many authors in the Educational Researcher (Kelly, 2003) special issue, and reappears in this special issue. In fact, the article by diSessa and Cobb devotes space to explicating this issue. I find their discussion most important. Because my concern here is with the reading of educational research studies by those outside of educational research (particularly from the hard sciences), I prefer to use the words “theory” and “theoretical” very sparingly and instead to use working words (as follows) so that the comprehensibility of the piece to those in other traditions is (hopefully) increased. I adopt this perspective from the experience of having run many educational research panels at NSF that involved scientists from outside of education.
Because design studies are in their (relative) infancy, and we still rely almost exclusively on text to convey meaning, I advocate the use of working words – words that have a task to perform and that have a clear intension, such as “hypothesis,” “conjecture,” “observation,” “model,” “framework,” “explanation,” “belief,” “rationale,” “logical or argumentative structure,” “perspective,” “analysis,” or “synthesis.” Note, for example, the work Confrey and Lachance (2000) get out of the word “conjecture.”
Definition is particularly useful since it reminds us to apply concepts such as extension and intension. Thus, design studies should not only provide examples (which contribute to extension), but also point to the inherent characteristics that cut across or distinguish exemplars (which contribute to intension). Thus, the conscious use of the connotative/denotative aspects of the word definition advises descriptive studies to avoid falling into Stokes’ (1997) Peterson’s quadrant,3 in which “interesting” or “fascinating” protocols are presented for their own sake (akin to distinctive markings on birds, which contributes solely to extension). Instead, by paying more attention to intension, the work can begin to grow into one of the more productive quadrants, for example, Pasteur’s quadrant (see Martinez, Whang, & Hamilton, 2003).

Restrictions on Researcher Bias


Researcher bias is inevitable. It is not a moral flaw; it is an expression of human judgment (e.g., Gigerenzer, Todd, & ABC Research Group, 1999; Goldstein & Hogarth, 1997). Nevertheless, mature methodologies have guidelines for minimizing its occurrence. To illustrate how all-encompassing bias can be, consider the following statement from a prominent scientist who was convinced of the obviousness of an explanatory framework that proved to be nonviable: “Nothing of all this can be considered as hypothesis or private opinion; on the contrary, they are truths which, in order to be made clear, only require attention and the observation of facts” (Lamarck, 1809, defending Lamarckism, cited in NAS, 1998, p. 96).
For example, what operational caveats exist to guide the sensible use of design study methods? How does the researcher select episodes for analysis? How does he or she decide what aspects of these are meaningful? What about contrary instances? What about outliers? What is not being said about the miles of videotape left unwatched and student “artifacts” unread? What does triangulation mean in this context? And what is the function of inter-rater reliability?

Further, whose view of the “environment” should guide claims: the observer’s or the organism’s? A leading proponent of the need to take not only researchers’ but actors’ (e.g., students’) constructions into account for advancing design studies is Lobato (2003). Her reconceptualization of the transfer-of-learning literature can guide the design and analysis of measures within a design study. Further, it advises against an uncritical acceptance of externally imposed standardized tests as the sine qua non of measures of “scientific” research in education (see also Confrey et al., 2001).

Stage-appropriate Use of Methods


I discussed, previously, the need to fit claims into the proper warrant category (i.e., claims about actors, behaviors, or contexts). A related issue arises for design studies: stage-appropriate use of research methods. If we consider an entire program of research as a design act (Bannan-Ritland, 2003), then different studies may be seen as belonging in stages within a larger trajectory that animates the program or portfolio of work. Thus, research methods and data-analytic techniques appropriate for claims about an emerging framework may be inappropriate for claims that are being subjected to a stringent test, or for research claims about diffusion of innovations (see Zaritsky, Kelly, Flowers, Rogers, & O’Neill, 2003). What is being suggested here is that part of the methodological work for design studies is to clarify in which stage and for what purposes methods are appropriate and inappropriate.

3A label I like despite Stokes’s protest (p. 75).
Downloaded by [Stanford University Libraries] at 04:23 19 July 2012

Balancing Contingent With Necessary Claims


Consider the example of airplane design in diSessa and Cobb (this issue). It is true that there is much freedom in the design of a plane (it could be a Boeing 777 or a Fokker). What moves aeronautics from craft know-how, to engineering knowledge, to the level of a science is the investigation, discovery and modeling of the coupling between the contingent (the parts of the design that are to some extent arbitrary) and the necessary (e.g., the proper application of Bernoulli's principle to the design of the wing—or similar airfoil). Note that "beneath" Bernoulli's principle lie further necessary principles, including fluid mechanics, gravity, and so on.

I believe that we must support a similar trajectory for design research in education. Design studies occur in real-world settings. They remain open to the vicissitudes of the actors, their behaviors, and the complexity of the current context. In other words, the raw material of design studies is inescapably contingent: one action begets the next, and so on, often in unpredictable (and perhaps unrepeatable) ways. The researcher responds to this aleatory ebullience and tries to express its essence parsimoniously.
Until we can begin to discover and model the necessary components of learning (e.g., based in cognitive neuroscience; see Dehaene, 1997; OECD, 2002), design study output is prone to be dismissed as mere "narrative"4 (e.g., Shavelson et al., 2003), its data deemed merely "anecdotal." Our descriptions cannot, over the long haul, remain at the level of contingency. When we move toward identifying what is necessary in the situation, the methodology becomes more mature, scientifically. It is in the connection of the contingent to the necessary, in my opinion, that the real work of theory building occurs.
As one strategy, other researchers can be recruited to explore the necessary aspects of the frameworks and findings of design studies. Three illuminating examples are provided in McCandliss et al. (2003). As a second strategy, design study methods may be reconstrued so that they are able to "hand off" hypotheses, models, intuitions, and so forth, to researchers who want to develop other aspects of model building (Sloane & Gorard, 2003), or who work at later stages of the research-as-design process (Bannan-Ritland, 2003). Ultimately, if this set of methods is to advance, its practitioners must collaborate with other methodologists in the social sciences and build cross-disciplinary bridges to the behavioral, cognitive, and social sciences.

4Design study researchers need not be overly defensive about uses of "narrative." The history and philosophy of science demonstrate that prospective scientific practice is shot through with narrative, and the scientific imagination is, according to Holton (1998), fundamentally thematic.

Producing Usable Knowledge


Educational researchers choose to answer to a variety of policymakers and stakeholders. Not only are their scientific claims difficult to substantiate (in no small part due to the limitations of randomized trials, surveys, and ethnographic methods in real settings), but they must also "wrap" their principled insights in some curricular form so that it can fit into the real work of teachers and learners.
Further, educational researchers act within and perturb ongoing systems, which
greatly complexifies the scientific problem. In one case, even when a randomized field
trial was anticipated, school-level resources contributed to the difficulties faced in
bringing a systems-thinking curriculum to fruition and worked against an assessment
of its effectiveness (Cline & Mandinach, 2000; see also, Mandinach & Cline, 2000).
The laboratory scientist’s “error variance” is the educational researcher’s reality.
The existence of a research team in a classroom is always a perturbation. We can see this observation borne out in a multi-level sense in Fishman et al. (this issue). Further, the impact of these perturbations at the school system level can be detrimental even to the continuance of a research endeavor (e.g., Confrey, Bell, & Carrejo, 2001).
What this implies, methodologically, is that design study researchers must consider criteria for success that go beyond satisfying the criteria for "scientific" claims. Ultimately, design research may be seen as a practical endeavor. Can educational research produce knowledge artifacts that provide solutions for teachers and learners that are efficient, workable, and economical, and that do not entail a significant "switching cost" from current practice? We must include later (and unsupported) adoption and adaptation of our research products in the set of evaluative criteria.
Methodologically, we will have to study and borrow from other disciplines (e.g.,
Rogers, 1995; Zaritsky et al., 2003).

REFERENCES

Bannan-Ritland, B. (2003). The role of design in research: The Integrative Learning Design Frame-
work. Educational Researcher, 32, 21–24.
Barab, S. A., Thomas, M., Dodge, T., Carteaux, R., & Tuzun, H. (in press). Making learning fun: Quest Atlantis, a game without guns. Educational Technology Research & Development.
Bennett, J. H. (Ed.), (1990). Statistical methods, experimental design, and scientific inference: R. A.
Fisher. Oxford, UK: Oxford University Press.
Cognition and Technology Group at Vanderbilt (1997). The Jasper project: Lessons in curriculum, instruc-
tion, assessment, and professional development. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Cline, H. F., & Mandinach, E. B. (2000) The corruption of a research design: A case study of a curricu-
lum innovation project. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathe-
matics and science education (pp. 169–189). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Cobb, P. (2000). Conducting teaching experiments in collaboration with teachers. In A. E. Kelly & R.
A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 307–333).
Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Collins, A. (1999). The changing infrastructure of education research. In E. C. Lagemann & L. S. Shulman (Eds.), Issues in education research: Problems and possibilities. San Francisco: Jossey-Bass.
Confrey, J., Bell, K., & Carrejo, D. (2001). Systemic crossfire: What implementation research reveals about urban reform in mathematics. Paper presented at the annual meeting of the American Educational Research Association, Seattle, WA.
Confrey, J., & Lachance, A. (2000). Transformative teaching experiments through conjecture-driven
research design. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics
and science education (pp. 231–266). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Dehaene, S. (1997). The number sense: How the mind creates mathematics. New York: Oxford University Press.
Design-based Research Collective. (2003). Design-based research: An emerging paradigm for educa-
tional inquiry. Educational Researcher, 32, 5–8.
Dewey, J. (1980). Art as experience. New York: Pedigree.
Edelson, D. C., Gordin, D. N., & Pea, R. D. (1999). Addressing the challenges of inquiry-based learn-
ing through technology and curriculum design. Journal of the Learning Sciences, 8, 391–450.
English, R. E., & Reigeluth, C. M. (1996). Formative research on sequencing instruction with the elab-
oration theory. Educational Technology Research & Development, 44(1), 23–42.
Gigerenzer, G., Todd, P. M., & ABC Research Group, (1999). Simple heuristics that make us smart.
New York: Oxford University Press.
Goldstein, W. M., & Hogarth, R. M. (Eds.), (1997). Research on judgment and decision making: Cur-
rents, connections, and controversies. Cambridge, England: Cambridge University Press.
Hickey, D. T., Kindfield, A. C. H., Horwitz, P., & Christie, M., (1999). Advancing educational theory
by enhancing practice in a technology supported genetics learning environment. Journal of Educa-
tion, 181(2), 25–55.
Holton, G. (1998). The scientific imagination. Cambridge, MA: Harvard University Press.
Hulse, R. (2003). Scientific discovery. Presentation at the National Science Foundation, Arlington, VA.
Kelly, A. E. (2003). Research as design. Theme issue: The role of design in educational research. Edu-
cational Researcher, 32, 3–4.
Kelly, A. E., & Sloane, F. C. (2003). Educational research and the problems of practice. Irish Educa-
tional Studies, 22, 29–40.
Lesh, R. A., & Kelly, A. E. (2000). Multi-tiered teaching experiments. In A. E. Kelly & R. A. Lesh
(Eds.), Handbook of research design in mathematics and science education (pp. 197–230). Mahwah,
NJ: Lawrence Erlbaum Associates, Inc.
Lobato, J. (2003). How design experiments can inform a rethinking of transfer and vice versa. Educa-
tional Researcher, 32(1), 17–20.
Lobato, J., & Siebert, D. (2002). Quantitative reasoning in a reconceived view of transfer. The Journal of Mathematical Behavior, 21(1), 87–116.
Lobato, J., & Thanheiser, E. (2002). Developing understanding of ratio as measure as a foundation for
slope. In B. Litwiller (Ed.), Making sense of fractions, ratios, and proportions: 2002 Yearbook (pp.
162–175). Reston, VA: NCTM.
Loh, B., Radinsky, J., Russell, E., Gomez, L. M., Reiser, B. J., & Edelson, D. C. (1998). The progress portfolio: Designing reflective tools for a classroom context. In Proceedings of CHI '98 (pp. 627–634). Reading, MA: Addison-Wesley.
Mandinach, E. B., & Cline, H. F. (2000). It won't happen soon: Practical, curricular, and methodological problems in implementing technology-based constructivist approaches in classrooms. In S. P. Lajoie (Ed.), Computers as cognitive tools: No more walls (Vol. II, pp. 377–395). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Martinez, M. E., Whang, K., & Hamilton, E. (2003). ROLE and the controversy of theory versus practice. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.
McCandliss, B. D., Kalchman, M., & Bryant, P. (2003). Design experiments and laboratory approaches to learning: Steps toward collaborative exchange. Educational Researcher, 32(1), 14–16.
McClain, K. (2002). Teacher's and student's understanding: The role of tools and inscriptions in supporting effective communication. Journal of the Learning Sciences, 11, 217–249.
McClain, K., & Cobb, P. (2001). The development of sociomathematical norms in one first-grade class-
room. Journal for Research in Mathematics Education, 32, 234–266.
Medawar, P. B. (1991). Is the scientific paper a fraud? In D. Pyke (Ed.), The threat and the glory: Re-
flections on science and scientists (pp. 228–233). Oxford, England: Oxford University Press.
Mosteller, F., & Boruch, R. (Eds.), (2002). Evidence matters: Randomized trials in education research.
Washington, DC: Brookings.
National Academy of Sciences. (1998). Teaching about evolution and the nature of science. Washing-
ton, DC: National Academy Press.
National Research Council. (2002). Scientific research in education. Committee on Scientific Principles for Education Research, R. J. Shavelson & L. Towne (Eds.). Washington, DC: National Academy Press.
OECD. (2002). Understanding the brain: Towards a new learning science. Paris: OECD.
Popper, K. R. (1965). Conjectures and refutations: The growth of scientific knowledge. New York:
Harper and Row.
Reigeluth, C. M. (1989). Educational technology at the crossroads: New mind sets and new directions.
Educational Technology Research & Development, 37 (1), 67–80. (Formative research is discussed
on pp. 71–73.)
Reigeluth, C. M., & Frick, T. W. (1999). Formative research: A methodology for improving design theories. In C. M. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory (Vol. II). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Rogers, E. M. (1995). Diffusion of innovations. New York: Free Press.
Roschelle, J., Kaput, J., & Stroup, W. (2000). SimCalc: Accelerating students’ engagement with the
mathematics of change. In M. Jacobson & R. Kozma (Eds.), Educational technology and mathemat-
ics and science for the 21st century (pp. 47–75). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Runkel, P., & McGrath, J. (1972). Research on human behavior: A systematic guide to method. New
York: Holt, Rinehart, & Winston.
Shavelson, R. J., Phillips, D. C., Towne, L., & Feuer, M. J. (2003). On the science of education design
studies. Educational Researcher, 32, 25–28.
Simon, H. A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.
Sloane, F. C., & Gorard, S. (2003). Exploring modeling aspects of design experiments. Educational Re-
searcher, 32(1), 29–31.
Steffe, L. P., & Thompson, P. W. (2000). Teaching experiment methodology: Underlying principles and essential elements. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 267–307). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Stokes, D. E. (1997). Pasteur’s quadrant: Basic science and technological innovation. Washington,
DC: Brookings Institution Press.
van den Akker, J. J. H., Branch, R. M., Gustafson, K., Nieveen, N., & Plomp, T. (Eds.), (1999). Design approaches and tools in education and training. Dordrecht, Netherlands: Kluwer. See, e.g., http://www.geocities.com/ResearchTriangle/8788/DR.html.
Zaritsky, R., Kelly, A. E., Flowers, W., Rogers, E., & O'Neill, P. (2003). Clinical design sciences: A view from sister design efforts. Educational Researcher, 32(1), 32–34.
