To cite this article: Anthony (Eamonn) Kelly (2004). Design Research in Education: Yes, but is it Methodological? Journal of the Learning Sciences, 13(1), 115–128.
THE JOURNAL OF THE LEARNING SCIENCES, 13(1), 115–128
Copyright © 2004, Lawrence Erlbaum Associates, Inc.
Correspondence and requests for reprints should be sent to Anthony (Eamonn) Kelly, Professor, Graduate School of Education, George Mason University, 4085 University Drive, Fairfax, VA 22030. E-mail: [email protected]
Equally, one does not want to demonize those who are thoughtful about the use
of methods such as randomized trials (see Mosteller & Boruch, 2002), techniques
against which design study methods are wrongly pitted.
I hope I make clear that I see design study methods as a protean expression of
research “impressionists” whose art will illuminate and enrich the “studio” re-
search art we revered until the fertile contributions of Collins, Brown, Hawkins,
and Pea in the early 1990s. The next task is to establish the logos of design research
so that we can argue, methodologically, for the scientific warrants for its claims.
Thus, although it is important for the health of a set of methods that profusion pre-
cede pruning (Kelly, 2003), it is time for the pruning to begin.
This collection of articles provides evidence for the vigor of design studies in
education. Each of the articles provides a substantial contribution to prior efforts
(e.g., Kelly, 2003), and to articles that have so far primarily appeared in edited
books (e.g., Collins, 1999). Other attempts at creating design-based formative research methods exist (Reigeluth, 1989; English & Reigeluth, 1996; Reigeluth & Frick, 1999). A form of design research in education has been active in Europe for
some time, called “developmental research” (e.g., van den Akker, Branch,
Gustafson, Nieveen, & Plomp, 1999).
Collins, Joseph, and Bielaczyc (this issue) focus on the later work of Brown
(a co-originator of the concept of design experiments in education) and current
research to illustrate how insights from design experiments can guide a series of
studies. The article by diSessa and Cobb labels an explanatory construct an “on-
tological innovation” and shows how it can support and propel a number of related studies. The article by Fishman et al. pushes not only design study re-
search, but educational research, generally, to consider the outer reaches of the
model of design and research stages described in Bannan-Ritland (2003); viz.,
diffusion, adoption, and adaptation of innovative practices. If the focus on the
“ultimate fate” of adoption by others of the intervention/artifact is absent during
a study or across a series of studies, then local, but not broader, impact is likely
to be demonstrated.
Below, I begin with a brief discussion on the classification of design studies and
then highlight the methodological challenges that need to be addressed if we are to
develop design studies from a loose set of methods into a rigorous methodology.
Engineering metaphors for design studies are proving useful in science and in
mathematics education (diSessa & Cobb, this issue), in the development and suste-
nance of communities of learners and passion curricula (Collins et al., this issue),
and in cognitively informed technology interventions and their survival in the real
world of school systems (Fishman et al., this issue).
One way to classify design studies is by characterizing their “outcomes” or
“products.” Some design studies wish to inform us about a model of practice, oth-
ers about learning, still others about the design and use of a new piece of software
or “learning environments.” As Simon (1969) noted, design is not design without
some form of designed artifact—even if the goal of the artifact(s) is to advance a
different outcome such as new theory. Let me quickly add that the concept of an
“artifact” need not be “concrete,” such as a computer program; it might describe as-
pects of “activity structures, institutions, scaffolds and curricula” (Design-Based
Research Collective, 2003, p. 6). Nevertheless, design remains a transitive verb. In
my opinion, design studies should produce an artifact that outlasts the study and
can be adopted, adapted, and used by others (e.g., either researchers or teachers);
otherwise, the fact that the study used an iterative process simply characterizes the
procedures that were followed.
Artifacts
Process as Artifact: Teaching Experiments
Some educational researchers work directly with teachers and the teaching pro-
cess; in fact, the process of engagement between the teacher and student (with the
researcher sometimes playing the role of teacher) can be claimed to be the object of
study (for a history of this method and its roots in Russian teaching experiments,
see Steffe and Thompson, 2000). Note that the word “experiment” is used advis-
edly in teaching experiment methods. For example, Steffe and Thompson (2000)
and Confrey and Lachance (2000) refer explicitly to the testing, respectively, of
hypotheses and conjectures, though the methodological implication of these words
deserves study (see following paragraphs).
The methods of the teaching experiment may be subdivided into variants such
as “multi-tiered teaching experiments” (Lesh & Kelly, 2000); “transformative
teaching experiments” (Confrey & Lachance, 2000); and “collaborative teaching
experiments” (Cobb, 2000). I value the adjective “teaching” in the label and prefer
it to the generic label “design experiment.”
A further reason to retain the adjective “teaching” is that these variants of de-
sign studies are centrally concerned with curricular subject matter — in the above
cases with the teaching and learning of mathematics. For example, one of the
touchstones for judging progress in a teaching experiment on the psychological
concept of learning transfer was students’ constructions of mathematical concepts
(e.g., Lobato & Siebert, 2002; Lobato & Thanheiser, 2002).
Software as Artifact
At a different pole are studies that have a focus on the design of a model of
learning or teaching as reified in a software artifact. The design of such artifacts
usually involves engineering a broader “learning environment.” Space limitations
preclude more than a listing of a small number of studies that may fall under this
umbrella (e.g., the Progress Portfolio, Loh, Radinsky, Russell, Gomez, Reiser &
Edelson, 1998; Worldwatcher, Edelson, Gordin & Pea, 1999; LiteracyAccess On-
line, Bannan-Ritland, 2003; Genscope/Biologica, Hickey, Kindfield, Horwitz, &
Christie, 1999; SimCalc, Roschelle, Kaput & Stroup, 2000; and Jasper Series,
Cognition and Technology Group at Vanderbilt, 1997).
The point here is not that these studies produce software per se. Rather, it is that the exploration of research questions about learning (of students, teachers, and researchers) is reified, explored, and tested by the design and use of the soft-
ware/learning environment. Indeed, some claim that some student cognitions may
not emerge without a designed environment (e.g., Roschelle et al. 2000).
Centrally, the existence of a software artifact or conceptual framework (e.g.,
from a teaching experiment) allows for its use and testing elsewhere. Its later adoption elsewhere is not a trivial matter, of course, and this important problem has not
yet been the focus of attention of design studies (Fishman et al., this issue).
Proponents of design studies claim that these studies can elicit a generative frame-
work, advance our understanding of disciplinary knowledge, serve as an incubator
for new research techniques, advance our skill in designing learning environments
or lead to better instrumentation – all features that may prove useful in the same or
future studies.
If these goals are going to be more fully realized, however, design studies must
develop from a loose set of methods to a methodology. A method is a procedure, a
process, a set of steps to follow. Design studies to date have been described primar-
ily using a set of process descriptors (e.g., these studies are interventionist, itera-
tive, process focused, collaborative, multileveled, utility oriented, and theory
driven; see Shavelson, Phillips, Towne, & Feuer, 2003). Unless this set of proce-
dures is under-girded by a conceptual structure that forms the basis for the war-
rants for their claims, design study methods do not constitute a methodology and
will contribute only haphazardly to an aggregative science of learning. Please note
that I am not suggesting for a moment that design researchers are unaware of any
of the following issues. Rather, I am suggesting that design studies are a new form
of research action that requires a rethinking of these standards. A mature method-
ology will have certain characteristics, including, but not limited to the following:
Argumentative Grammar
An argumentative grammar is the logic that guides the use of a method and that
supports reasoning about its data. It supplies the logos (reason, rationale) in the
methodology (method + logos)¹ and is the basis for the warrant for the claims that
arise. Thus, for randomized field trials, the logos draws from the structure of the ar-
guments provided by R. A. Fisher concerning small sample analysis, statistical
reasoning, and randomization procedures (see Bennett, 1990).
The argumentative grammar of randomized field trials can be described sepa-
rately from its instantiation in any given study so that the logic of a proposed study
and its later claims can be criticized. Thus, many reviewers reject studies not on the
choice of method (procedure), but on their violation of the underlying logos that
one expects to see with that choice of method. Indeed, one may argue that the ele-
¹The same root appears in the names of many sciences (e.g., bio-logos, psyche-logos, socio-logos).
assertion that design studies use “grounded theory” or “thick description” with a
mere citation to some complex source does not constitute an acceptable basis for
according design studies the status of a methodology. Indeed, in my opinion, these
bibliographic sources inadequately address the data-analytic methods and prob-
lems of design studies.
When one forgoes experimental controls, how can one generalize to other settings, however rich the local descriptions? When multiple dependent variables are used, how does one tease complex interactions apart, and are causal attributions plausible? While allowing for flexible design revision may make sense in engineering product design, what are its consequences for scientific inference? Without substituting either structural or statistical controls in design studies (e.g., the use of some form of experimental design or covariation analyses), the warrant for making causal claims about behavior (in the face of untested alternative hypotheses) is weak (Shavelson et al., 2003).²
Problem of Meaningfulness
If design study methods are not yet developed sufficiently to construct general-
izations that contribute to the problem of demarcation, does that mean that they
have no value or can provide no generalizable statements? On the contrary, impor-
tant scientific discoveries have not typically come about as a result of a scientist
slavishly following the “scientific method” (Holton, 1998). Indeed, Medawar
(1991) argued:
[T]he scientific article is a fraud in the sense that it does give a totally misleading nar-
rative of the processes of thought that go into the making of scientific discoveries.
The inductive format of the scientific article should be discarded. The discussion
which in the traditional scientific article goes last should surely come at the begin-
ning. The scientific facts and scientific acts should follow the discussion, and scien-
tists should not be ashamed to admit, as many of them apparently are ashamed to ad-
mit, that hypotheses appear in their minds along uncharted byways of thought; that
they are imaginative and inspirational in character; that they are indeed adventures of
the mind. What, after all, is the good of scientists reproaching others for their neglect of, or indifference to, the scientific style of thinking they set such store by, if their own writings show that they themselves have no clear understanding of it? (p. 233)

²It should be noted that the criticism made by Shavelson et al. (2003) was weakened by its attack on the narrative methods preferred by Bruner, which do not adequately describe the data-analytic techniques used by, for example, Cobb and his colleagues (e.g., McClain, 2002; McClain & Cobb, 2001).
In other words, design studies, particularly to the extent that they are hypothesis
and framework generating, may be viewed as contributing to model formulation
(rather than to model estimation or model validation; see Sloane & Gorard, 2003).
Model and hypothesis generation is a crucial part of conducting a worthwhile sci-
entific investigation. It does not represent some “pre-scientific” messing around
that should be accorded little status. For example, Russell Hulse (Nobel Laureate
in physics, 1993) noted at a talk at the National Science Foundation (Hulse, 2003) that an infinite number of hypotheses may be tested. This machination of testing
hypotheses (i.e., the mechanical application of the “scientific method”) does not,
by itself, create useful knowledge. It is important, rather, to test powerful (i.e., con-
struct-advancing) hypotheses, which often emerge from studies directed primarily
at the problem of meaningfulness.
In this sense, the recent National Research Council report (NRC, 2002) is clear
about the function of good questions (hypotheses) in advancing science. But, apart
from a footnote that, sometimes, good observations come “out of the blue,” there is
little discussion or direction on how a researcher might generate one (p. 57). There
is a suggestion in this report that good questions can be gleaned from a literature
review, but this exhortation begs the question of how the good questions in the lit-
erature emerge. The report does not explain how one might go beyond any litera-
ture’s constructs. Indeed, the models guiding a given scientific literature (which
are always tentative) may be inhibitors to progress. Müller, corecipient of the No-
bel Prize for the discovery of high-temperature superconductors, decided after a
thorough, but fruitless, literature review: “I just don’t talk to the theoreticians.
They just held me back” (cited in Holton, 1998, p. xxiv).
Note that I am, of course, not advocating dispensing with literature reviews;
rather, agreeing with the authors in this special issue that the constructs that ani-
mate any competent literature review often emerge in design contexts and that the
character of these studies may promote the identification and growth of new ideas
and constructs.
In this way, design studies can contribute to the problem of meaningfulness, a
problem that precedes and is foundational for later contributions to addressing the
problem of demarcation. Indeed, how can one establish demarcations for claims if
the scientific concepts under consideration are sparse or unclear?
studies grow authentically and are grounded in the lived experience of teachers and
students, they often appeal to researchers who work in similar contexts. Thus, a
mathematics education researcher may find the concept of socio-mathematical norms attractive because it may guide observation, suggest variables to study, or help make sense of data. Later studies, using different methodologies, may ex-
tend generalizations across actors, behaviors, and context, and extend the work
into the purview of studies of diffusion of innovations (Rogers, 1995).
If the discussions about design studies remain within a small community, then
problems with terminology are not that important. However, when the work of a
research community becomes the focus of attention of a larger community of re-
searchers, then the use of terms that are transparent or at least understandable by
the larger community is highly desirable.
One problematic word, for me, is “theory.” According to the National Academy
of Sciences (NAS, 1998), “In science, a [theory is] a well-substantiated explana-
tion of some aspect of the natural world that can incorporate facts, laws, infer-
ences, and tested hypotheses” (p. 5). Scientists use the word, guardedly, to describe
a hard-won consensus statement or model. In the case of the teaching of evolution
(the topic of NAS, 1998), for example, the authors point out the difficulty in com-
municating with the public when the concept of “theory” has been debased in the
vernacular to mean mere opinion.
The word “theory” surfaced in various forms in the writings of many authors in
the Educational Researcher (Kelly, 2003) special issue, and reappears in this spe-
cial issue. In fact, the article by diSessa and Cobb devotes space to explicating this
issue. I find their discussion most important. Because my concern here is with the
reading of educational research studies by those outside of educational research
(particularly from the hard sciences), I prefer to use the words “theory” and “theoreti-
cal” very sparingly and instead to use working words (as follows) so that the com-
prehensibility of the piece to those in other traditions is (hopefully) increased. I
adopt this perspective from the experience of having run many educational re-
search panels at NSF that involved scientists from outside of education.
Because design studies are in their (relative) infancy, and we still rely almost
exclusively on text to convey meaning, I advocate the use of working words –
words that have a task to perform and that have a clear intension, such as “hy-
pothesis,” “conjecture,” “observation,” “model,” “framework,” “explanation,”
“belief,” “rationale,” “logical or argumentative structure,” “perspective,” “analysis,” or “synthesis.” Note, for example, the work Confrey and Lachance (2000)
get out of the word “conjecture.”
Definition is particularly useful since it reminds us to apply concepts such as ex-
tension and intension. Thus, design studies should not only provide examples (which contribute to extension), but also point to the inherent characteristics that cut across or distinguish exemplars (which contribute to intension). Thus, the conscious use of the connotative/denotative aspects of the word definition advises descriptive studies to avoid falling into Stokes’ (1997) Peterson’s quadrant,³ in
which “interesting” or “fascinating” protocols are presented for their own sake
(akin to distinctive markings on birds, which contribute solely to extension). In-
stead, by paying more attention to intension, the work can begin to grow into one
of the more productive quadrants, for example, Pasteur’s quadrant (see Martinez,
Whang, & Hamilton, 2003).
⁴Design study researchers need not be overly defensive about uses of “narrative.” The history and
philosophy of science demonstrates that prospective scientific practice is shot through with narrative
and the scientific imagination is, according to Holton (1998), fundamentally thematic.
and so forth, to researchers who want to develop other aspects of model building
(Sloane & Gorard, 2003), or who work at later stages of the research-as-design
process (Bannan-Ritland, 2003). Ultimately, if this set of methods is to advance, its practitioners must collaborate with other methodologists in the social sciences and build cross-disciplinary initiatives with the behavioral, cognitive, and social sciences.
part due to the limitations of randomized trials, survey and ethnographic methods
in real settings), but they must “wrap” their principled insights in some curricular
form so it can fit into the real work of teachers and learners.
Further, educational researchers act within and perturb ongoing systems, which
greatly complicates the scientific problem. In one case, even when a randomized field
trial was anticipated, school-level resources contributed to the difficulties faced in
bringing a systems-thinking curriculum to fruition and worked against an assessment
of its effectiveness (Cline & Mandinach, 2000; see also, Mandinach & Cline, 2000).
The laboratory scientist’s “error variance” is the educational researcher’s reality.
The existence of a research team in a classroom is always a perturbation. We can
see this observation borne out in a multi-level sense in Fishman et al. (this issue). Fur-
ther, the impact of these perturbations at the school system level can be detrimental
even to the continuance of a research endeavor (e.g., Confrey, Bell & Carrejo, 2001).
What this implies, methodologically, is that design study researchers must con-
sider criteria for success that exceed satisfying the criteria for “scientific” claims.
Ultimately, design research may be seen as a practical endeavor. Can educational
research produce knowledge artifacts that provide solutions for teachers and learn-
ers that are efficient, workable, and economical and do not entail a significant
“switching cost” from current practice? We must consider later (and unsupported)
adoption and adaptation of our research products in the set of evaluative criteria.
Methodologically, we will have to study and borrow from other disciplines (e.g.,
Rogers, 1995; Zaritsky et al., 2003).
REFERENCES
Bannan-Ritland, B. (2003). The role of design in research: The Integrative Learning Design Frame-
work. Educational Researcher, 32, 21–24.
Barab, S. A., Thomas, M., Dodge, T., Carteaux, R., & Tuzun, H. (in press). Making learning fun: Quest
Atlantis, a game without guns. Educational Technology Research & Development.
Bennett, J. H. (Ed.), (1990). Statistical methods, experimental design, and scientific inference: R. A.
Fisher. Oxford, UK: Oxford University Press.
Cognition and Technology Group at Vanderbilt (1997). The Jasper project: Lessons in curriculum, instruc-
tion, assessment, and professional development. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Cline, H. F., & Mandinach, E. B. (2000). The corruption of a research design: A case study of a curricu-
lum innovation project. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathe-
matics and science education (pp. 169–189). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Cobb, P. (2000). Conducting teaching experiments in collaboration with teachers. In A. E. Kelly & R.
A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 307–333).
Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Collins, A. (1999). The changing infrastructure of education research. In E. C. Lageman & L. S. Shulman
(Eds.), Issues in education research: Problems and possibilities. San Francisco: Jossey-Bass.
Confrey, J., Bell, K., & Carrejo, D. (2001). Systemic crossfire: What implementation research reveals
about urban reform in mathematics. Paper presented at the annual meeting of the American Educational Research Association, Seattle, WA.
Confrey, J., & Lachance, A. (2000). Transformative teaching experiments through conjecture-driven
research design. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics
and science education (pp. 231–266). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Dehaene, S. (1997). The number sense: How the mind creates mathematics. New York: Oxford University Press.
Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educa-
tional inquiry. Educational Researcher, 32, 5–8.
Dewey, J. (1980). Art as experience. New York: Pedigree.
Edelson, D. C., Gordin, D. N., & Pea, R. D. (1999). Addressing the challenges of inquiry-based learn-
ing through technology and curriculum design. Journal of the Learning Sciences, 8, 391–450.
English, R. E., & Reigeluth, C. M. (1996). Formative research on sequencing instruction with the elab-
oration theory. Educational Technology Research & Development, 44(1), 23–42.
Gigerenzer, G., Todd, P. M., & ABC Research Group, (1999). Simple heuristics that make us smart.
New York: Oxford University Press.
Goldstein, W. M., & Hogarth, R. M. (Eds.), (1997). Research on judgment and decision making: Cur-
rents, connections, and controversies. Cambridge, England: Cambridge University Press.
Hickey, D. T., Kindfield, A. C. H., Horwitz, P., & Christie, M., (1999). Advancing educational theory
by enhancing practice in a technology supported genetics learning environment. Journal of Educa-
tion, 181(2), 25–55.
Holton, G. (1998). The scientific imagination. Cambridge, MA: Harvard University Press.
Hulse, R. (2003). Scientific discovery. Presentation at the National Science Foundation, Arlington, VA.
Kelly, A. E. (2003). Research as design. Theme issue: The role of design in educational research. Edu-
cational Researcher, 32, 3–4.
Kelly, A. E., & Sloane, F. C. (2003). Educational research and the problems of practice. Irish Educa-
tional Studies, 22, 29–40.
Lesh, R. A., & Kelly, A. E. (2000). Multi-tiered teaching experiments. In A. E. Kelly & R. A. Lesh
(Eds.), Handbook of research design in mathematics and science education (pp. 197–230). Mahwah,
NJ: Lawrence Erlbaum Associates, Inc.
Lobato, J. (2003). How design experiments can inform a rethinking of transfer and vice versa. Educa-
tional Researcher, 32(1), 17–20.
Lobato, J., & Siebert, D. (2002). Quantitative reasoning in a reconceived view of transfer. The Journal
of Mathematical Thinking, 21(1), 87–116.
Lobato, J., & Thanheiser, E. (2002). Developing understanding of ratio as measure as a foundation for
slope. In B. Litwiller (Ed.), Making sense of fractions, ratios, and proportions: 2002 Yearbook (pp.
162–175). Reston, VA: NCTM.
Loh, B., Radinsky, J., Russell, E., Gomez, L. M., Reiser, B. J., & Edelson, D. C. (1998). The progress
portfolio: Designing reflective tools for a classroom context. In Proceedings of CHI ’98 (pp.
627–634). Reading, MA: Addison-Wesley.
Mandinach, E. B., & Cline, H. F. (2000). It won’t happen soon: Practical, curricular, and methodological problems in implementing technology-based constructivist approaches in classrooms. In S. P. Lajoie (Ed.), Computers as cognitive tools: No more walls (Vol. II, pp. 377–395). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Martinez, M. E., Whang, K., & Hamilton, E. (2003). ROLE and the controversy of theory versus prac-
tice. Paper presented at the annual meeting of the American Educational Research Association,
Chicago.
McCandliss, B. D., Kalchman, M., & Bryant, P. (2003). Design experiments and laboratory approaches
to learning: Steps toward collaborative exchange. Educational Researcher, 32(1), 14–16.
McClain, K. (2002). Teacher’s and student’s understanding: The role of tools and inscriptions in support-
ing effective communication. Journal of the Learning Sciences, 11, 217–249.
McClain, K., & Cobb, P. (2001). The development of sociomathematical norms in one first-grade class-
room. Journal for Research in Mathematics Education, 32, 234–266.
Medawar, P. B. (1991). Is the scientific paper a fraud? In D. Pyke (Ed.), The threat and the glory: Re-
flections on science and scientists (pp. 228–233). Oxford, England: Oxford University Press.
Mosteller, F., & Boruch, R. (Eds.), (2002). Evidence matters: Randomized trials in education research.
Washington, DC: Brookings.
National Academy of Sciences. (1998). Teaching about evolution and the nature of science. Washing-
ton, DC: National Academy Press.
National Research Council. (2002). Scientific research in education (R. J. Shavelson & L. Towne, Eds.; Committee on Scientific Principles for Education Research). Washington, DC: National Academy Press.
OECD. (2002). Understanding the brain: Toward a new learning science. Paris: OECD.
Popper, K. R. (1965). Conjectures and refutations: The growth of scientific knowledge. New York:
Harper and Row.
Reigeluth, C. M. (1989). Educational technology at the crossroads: New mind sets and new directions.
Educational Technology Research & Development, 37 (1), 67–80. (Formative research is discussed
on pp. 71–73.)
Reigeluth, C. M., & Frick, T. W. (1999). Formative research: A methodology for improving design the-
ories. In C. M. Reigeluth (Ed.), Instructional-design theories and models: A new paradigm of instructional theory (Vol. II). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.
Rogers, E. M. (1995). Diffusion of innovations. New York: Free Press.
Roschelle, J., Kaput, J., & Stroup, W. (2000). SimCalc: Accelerating students’ engagement with the
mathematics of change. In M. Jacobson & R. Kozma (Eds.), Educational technology and mathemat-
ics and science for the 21st century (pp. 47–75). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Runkel, P., & McGrath, J. (1972). Research on human behavior: A systematic guide to method. New
York: Holt, Rinehart, & Winston.
Shavelson, R. J., Phillips, D. C., Towne, L., & Feuer, M. J. (2003). On the science of education design
studies. Educational Researcher, 32, 25–28.
Simon, H. A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.
Sloane, F. C., & Gorard, S. (2003). Exploring modeling aspects of design experiments. Educational Re-
searcher, 32(1), 29–31.
Steffe, L. P., & Thompson, P. W. (2000). Teaching experiment methodology: Underlying principles and
essential elements. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics
and science education (pp. 267–307). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Stokes, D. E. (1997). Pasteur’s quadrant: Basic science and technological innovation. Washington,
DC: Brookings Institution Press.
van den Akker, J. J. H., Branch, R. M., Gustafson, K., Nieveen, N., & Plomp, T. (Eds.), (1999). Design
approaches and tools in education and training. Dordrecht, Netherlands: Kluwer. See, e.g., http://www.geocities.com/ResearchTriangle/8788/DR.html
Zaritsky, R., Kelly, A. E., Flowers, W., Rogers, E., & O’Neill, P. (2003). Clinical design sciences: A
view from sister design efforts. Educational Researcher, 32(1), 32–34.