WP088

COPYRIGHT © 2004 BY
Center for Integrated Facility Engineering
STANFORD UNIVERSITY
INTRODUCTION
Design Team Performance
Integrated Concurrent Engineering (ICE) uses a singularly rapid combination of expert designers; advanced modeling, visualization, and analysis tools; social processes; and a specialized design facility to create preliminary designs for complex systems. When compared with a traditional parallel engineering method, successful ICE users reduce project schedule by several orders of magnitude while substantially improving design cost and maintaining quality standards. Today's pioneers of ICE are in the aerospace and automotive industries, where several closely related methods are termed "ICE", "Extreme Collaboration", "Concurrent Design Engineering", or "Radical Collocation" [Mark, MEP, Olsens]. Whereas traditional engineering superficially resembles a government bureaucracy, ICE performs the same work in an environment more akin to NASA's Shuttle Mission Control operations.
Our research is based primarily on the most experienced ICE team at NASA, the Jet Propulsion
Laboratory (JPL) Advanced Project Development Team, conventionally known as Team-X.[1] Team-X
completes early-phase design projects in less than one-tenth the time of the previous process at JPL, and
for less than one third of the variable cost. Although there is continuing effort to improve the quality of
the Team-X designs and the generality of their method, the Team-X product is good enough that outside
investigators choose to purchase Team-X services about fifty times a year. The team is in heavy
demand in the competitive market for mission design services, and its successful plans have brought
hundreds of millions of dollars in business to JPL and its suppliers [Sercel 1998].
An Illustrative Metaphor
We find that an automotive metaphor conveys our intuition that, in spite of superficial differences, ICE differs mechanistically from standard design principally in that it operates more rapidly. We conceive of ICE as analogous to the operation of a high-performance race car: ICE engages the same considerations as a standard design team, but, as with the race car, many elements of the total system are customized for high performance. The race car has a specialized engine, transmission, and tires, and even a specialized racetrack. Analogously, ICE requires expert selection and preparation of the participants, the organization, the enabling modeling and visualization methods, and the design process the participants follow. For the race car, any bump in the road, hardly noticeable at twenty miles per hour, can be disastrous at two hundred; therefore, before a race, the track must be cleared and leveled. Analogously, the Team-X "pre-session" structures the tasks and chooses the participants and the variables of interest for the project at hand. Finally, once the race starts, the driver responds principally by reflex, in accordance with training and experience, because there is little time for deliberation. An ICE team likewise must work quickly, making its design decisions rapidly, conclusively, and well.

[1] With thanks, but without explicit description, we leverage observations from similar practices at the Tactical Planning Center at Sea-Land Service Inc., and at Stanford's Real-Time Venture Design Laboratory, Gravity Probe B Mission Control, and Center for Integrated Facility Engineering.

Observation, Theory, and Simulation of Integrated Concurrent Engineering Chachere, Kunz and Levitt
Our intuition is that the race car and the ICE team are structurally identical to the standard car and design team: the fundamental forces and operations in play are the same in both cases, and the specialized, enabling adaptations of a generic design produce the radically different performance in both cases. Thus, while operating at high speed (low latency), we are still looking at a car (or a multi-disciplinary design project), and we can understand it by understanding the behavior of the fundamental mechanisms.
This “Systems” perspective suggests that an ICE implementation that lacks a single critical
aspect may result in unimproved performance, or even project failure. In our analogy, an otherwise
optimized race car with an ordinary engine cannot generate enough power to compete, and placing an
ordinary driver behind the wheel would be catastrophic. Furthermore, factors that are irrelevant under
some conditions may become important in others, and offer a key to understanding phenomena as
seemingly unprecedented as ICE. Wind resistance, for example, is of no consequence at low speeds, but
it motivates streamlining at high speeds. A truly novel enhancement, wings, converts the once
detrimental wind resistance into beneficial lift, and revolutionizes transportation.
Our Methodology
We offer three orthogonal and complementary research elements: observations of a radically accelerated
project at JPL, formal yet intuitive theories that have face validity and offer a straightforward
comparison with established social science theories, and simulation results that show the combined
implications of foundational micro-theories on a project scale. Our claims are based on simultaneously
validating theories by comparing them with observations, verifying theories’ consistent
operationalization in a simulation model, and calibrating the results’ implications against our initial and
new observations. Our work is therefore explicitly grounded by consistencies among reality, intuition,
and formalism.
Observation
We visited JPL’s Team-X and ethnographically observed three design sessions of a sample project. In
several hours of on-site interviews, we collected quantitative and qualitative details about the
participating organization, process, and culture. Finally, after coding and analyzing this information, we
followed up with an online survey covering the amount of time each participant spent in direct work,
communication, and rework each week. We describe the ICE practice in detail, and propose information
response latency as a fundamental, observable process performance measure.
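The response latency measure we propose can be operationalized directly from timestamped request and reply events. The sketch below is a minimal illustration under our own assumptions (the field layout and timestamps are hypothetical, not Team-X's actual instrumentation):

```python
from statistics import median

def response_latency_minutes(events):
    """Median minutes between each information request and its reply.

    `events` is a list of (request_time, reply_time) pairs, in minutes from
    session start; unanswered requests (reply_time is None) are skipped.
    """
    latencies = [reply - req for req, reply in events if reply is not None]
    return median(latencies) if latencies else None

# Illustrative data: an ICE session answers most requests within minutes.
ice_session = [(5, 7), (12, 14), (30, 31), (40, None)]
print(response_latency_minutes(ice_session))  # -> 2
```

The same function applied to a traditional project, where replies arrive days later, would yield latencies orders of magnitude larger, which is what makes latency a useful comparative measure.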
Theory
Our observations, interviews, and survey ground a set of factors that enable radical project acceleration.
We explain ten fundamental mechanisms that work together to keep response latency at a minimum,
and, thereby, allow projects to execute at a very high speed. Although we leverage existing literature
extensively, the work also draws on behaviors and relationships observed in practice.
Simulation
We apply three computational project models to describe and predict the performance of an ICE team.
We retrospectively calibrated the Organizational Consultant (OrgCon), Virtual Design Team (VDT),
and Interaction Value Analysis (IVA) models to our observations at Team-X, and found that they are able to accurately depict the observed ICE phenomena. We conclude with
analysis using a detailed VDT model that supports our enabling factor theories.
We provided none of the information in this diagnosis as input, and yet OrgCon returned specific
and strikingly accurate descriptions of JPL design sessions’ tools, people and process. Theory, model,
and professional practice validate one another, for example, in that the system’s “Frequent informal
meetings and temporary task forces” prediction accurately describes both Galbraith’s theoretical
recommendations [1973] and the observed Team-X sidebars (we explore the importance of this and
other features, such as “Ambiguity”, “Richness of the media”, and “Team spirit” later in this paper).
The result lends confidence in the OrgCon model’s applicability, and demonstrates that elements of
ICE’s success can be predicted by existing literature.
By predicting no misfits for the ICE approach, OrgCon raises the exciting possibility that ICE is
a new, distinct and effective organizational form. Many organizational researchers (notably Mintzberg [], and Burton and Obel themselves) hypothesize that only a handful (typically five or six) of perfectly adapted archetypal organizational styles exist. OrgCon is not single-handedly equipped to assess such a claim,
but it does provide a degree of confidence, complementary to empirical claims, that ICE is both effective
and sustainable. Because OrgCon does not offer positive, clear and compelling evidence of ICE’s
effectiveness, however, we cannot conclusively determine whether an important gap in theory, observed
practice, or model is present. We therefore turn to a selection of prominent and more operationally
explicit theories to assess in detail the extent to which social science theory encompasses ICE behavior.
Direct Work
We view engineering projects as consisting of many interrelated design decisions. The institutional
branch of organizational theory indicates that people make decisions and select procedures using a sense
of personal identity and appropriateness [March 1994, Scott ibid., Powell and DiMaggio]. ICE decision
support technologies, engineering culture and public decision making processes strongly encourage
Exception Handling
Organizational actors are not generally aware of all the nuances of an organization’s strategic intent and
goals. Similarly, workers will sometimes find that their technical expertise is insufficient to finalize a
work element. The VDT system models perceived technical inadequacy and ignorance of organizational
preferences as exceptions (potential errors) that management must contemplate and, perhaps, order
reworked. In the model, they emerge probabilistically during work, with a frequency based on task

[2] Prospect theory [Kahneman and Tversky] observes that people respond to decisions' contexts, even when they do not impact the traditional decision basis elements. Because we do not calculate the engineers' specific choices, we safely address this framing principally as part of the decision rule.
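The exception-handling micro-behavior described above can be sketched computationally. The probability model and names below are our illustrative assumptions, not VDT's actual internals:

```python
import random

def simulate_exceptions(work_items, error_prob, rework_fraction, rng):
    """Illustrative micro-behavior: each unit of work raises an exception
    (a potential error) with probability `error_prob`; management orders a
    fraction of the exceptions reworked, which adds to the total workload."""
    exceptions = sum(1 for _ in range(work_items) if rng.random() < error_prob)
    reworked = round(exceptions * rework_fraction)
    return {"exceptions": exceptions,
            "rework_items": reworked,
            "total_work": work_items + reworked}

# Compare a well-matched actor (low error probability) with a mismatched one.
rng = random.Random(42)
print(simulate_exceptions(1000, 0.05, 0.5, rng))
print(simulate_exceptions(1000, 0.30, 0.5, rng))
```

Even this toy version shows how hidden exception rates inflate total work volume, which is the mechanism VDT uses to tie actor-task fit to project outcomes.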
Information Exchange
Some decisions require information that does not simply reside among management, but that a previous
or parallel work task creates during the project. These facts may impact the range of available design
alternatives (as with design configuration interdependence), or they may influence the predicted results
for a given choice. Accordingly, VDT actors request information from others who are engaged in
interdependent work (at a rate that is based on actor skill, prior team experience, and task uncertainty).
In this situation, the simulator routes a virtual information request and possible reply between the actors.
Because this process supplies actors with data produced in other activities, and this data influences the
range and significance of design options, we view the VDT communications model as capturing a
micro-organizational adaptation to gaps in the belief and alternatives components of the general rational
framework’s decision basis.
When an actor performs a task that has a very large number of interdependencies, the time spent
in communications may actually exceed the amount of direct work activity. If the workload becomes
unmanageable, quality may degrade significantly, not just for the principal task, but also for others who
rely upon the activity’s output.
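The request-and-reply routing described above can be sketched as follows. The rate formula, threshold, and station names are our illustrative assumptions rather than VDT's actual parameterization:

```python
def request_rate(skill, team_experience, uncertainty):
    """Illustrative rate at which an actor requests information from
    interdependent peers: higher task uncertainty raises it, while skill
    and prior team experience lower it (result clamped to [0, 1])."""
    rate = uncertainty * (1 - 0.5 * skill) * (1 - 0.5 * team_experience)
    return max(0.0, min(1.0, rate))

def route_requests(dependencies, rates):
    """Route one virtual request per interdependent pair whose sender's
    rate exceeds a fixed threshold; returns the routed message list."""
    return [(a, b) for a, b in dependencies if rates[a] >= 0.25]

# Two hypothetical stations working on interdependent tasks.
rates = {"Power": request_rate(0.8, 0.5, 0.9),
         "Thermal": request_rate(0.3, 0.2, 0.9)}
msgs = route_requests([("Power", "Thermal"), ("Thermal", "Power")], rates)
print(msgs)  # both pairs exchange information under high uncertainty
```

The point of the sketch is the coupling: as interdependencies multiply, the routed message list grows, and communications can come to dominate direct work, exactly as the paragraph above describes.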
…times, and rework and information exchange networks. Simulating the model showed the predicted results of all the actors working, exchanging information, and handling exceptions.

[Figure 3: Comparison of work volumes on a sample JPL Team-X project from various sources. The columns represent (right to left) reported, predicted, surveyed, simplified/simulated, and retrospectively simulated data. At a high level, there is agreement among them.]

Figure 3 compares the average number of work hours per station, by type of work activity and according to a number of sources. Prior to project start, all Team-X participants requested a time budget to do their work, which we averaged as the rightmost Work Authorization Memo (WAM) value. The Survey column reports the data participants provided after project completion. We retrospectively calibrated the VDT simulation to predict the total work volume for each project task as well as the direct work, coordination, rework, and time wasted waiting for exception management. The averages of these values appear in the leftmost, Baseline VDT column.
Figure 3 illustrates that using input data collected at JPL (including work volumes reported
retrospectively by Team-X) we were able to calibrate VDT to produce emergent behavior that matches
an actual Team-X project. This suggests that at an aggregate level, a properly calibrated VDT model
can retrospectively predict the volume and distribution of work for this type of project. Because the
simulation is rooted in information processing theory, this result coarsely cross-validates information processing theory, the ICE observations, and the VDT computational model.
Product Risk
The product of a Team-X ICE project is a set of complementary design choices that form the basis of a
mission. We use the term product risk to describe the likelihood that design choices are fundamentally
invalid or inconsistent. Product risk is important because it may lead to an improper decision over
whether to proceed with a mission, or to a mission that is needlessly costly, risky, or extended in
schedule.
In this paper, we do not consider the cost, quality, or schedule of planned missions, but we do
use project behavior to predict the likely accuracy and completeness of the team’s own analysis of these
factors. Team-X requires appropriate stations as well as an effective collaborative process to correctly
estimate the mission’s programmatic risk, costs and schedule. Benjamin and Pate-Cornell highlight the
need for probabilistic risk analysis in this project setting [2004], and Team-X’s new Risk Station
testifies to its perceived importance at JPL [JPL Risk Station Paper]. Our analysis of product risk is distinct from, and complementary to, these efforts.
Our analysis highlights the impact of organizational risk factors on process quality because they
are estimated to contribute to 50-75% of major modern catastrophes [MEP]. For descriptions of over a hundred organizational risk factors, and related literature reviews, see Ciavarelli [] or Cooke and Gorman []. Important factors that VDT does not evaluate include conformity, which decreases the likelihood that individuals will contradict peers' public, erroneous statements [Festinger?]; "groupthink", which reduces the likelihood of thorough, critical evaluation of alternatives in a group setting [Janis]; and the "risky shift" phenomenon, which leads groups to select choices that are riskier than those any participant would individually choose [Bem]. Each of these organizational factors acts principally to reduce the quality of the selected design.
VDT does calculate several measures that bear on risk to the product design. Overloaded or
unqualified actors tend to ignore exceptions and information exchange requests, which contributes to
three product risk metrics. Project risk measures the rate of rework or design iteration that is ordered in
response to interdependencies among functionally related tasks. When a simulation shows high project
risk, this indicates a propensity for failures in the “System of systems” that involve more than one
station. Functional risk measures the rate of rework (or design iteration) that is ordered for individual
tasks. High functional risk at a particular station indicates that the station’s design is likely to be
independently faulty. Finally, communications risk is the fraction of information exchange requests that stations do not take time to complete. High communications risk indicates that interrelated tasks are not always
sharing information appropriately, which tends to reduce integrated design quality. We can predict
overall design quality using VDT by viewing these metrics at an aggregate project level, or we may drill
down to characterize the product in detail. For example, elevated project risk at the Power station
indicates that other subsystems have not redesigned according to its needs, and a high communications
risk at the Cost station suggests that the estimates do not include relevant design details.
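The three product-risk metrics described above are, in effect, rates tabulated over simulated events. A sketch of how one might compute them from a simulation trace (the event counts and field names are our assumptions, not VDT's output format):

```python
def product_risk_metrics(trace):
    """Compute the three product-risk rates for each station.

    `trace` maps each station to counts of: its tasks, rework orders
    triggered by interdependencies with other stations, rework orders on
    its own tasks, information requests received, and requests answered.
    """
    metrics = {}
    for station, t in trace.items():
        metrics[station] = {
            # rework ordered in response to cross-station interdependencies
            "project_risk": t["cross_rework"] / t["tasks"],
            # rework ordered on the station's own tasks
            "functional_risk": t["own_rework"] / t["tasks"],
            # fraction of information requests left unanswered
            "communications_risk": 1 - t["answered"] / t["requests"],
        }
    return metrics

# Hypothetical counts for a single station.
trace = {"Power": {"tasks": 20, "cross_rework": 6, "own_rework": 2,
                   "requests": 10, "answered": 9}}
print(product_risk_metrics(trace))
```

Aggregating these rates across stations gives the project-level quality prediction; drilling into a single station, as in the Power and Cost examples above, localizes the weakness.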
Organization Risk
By organization risk, we refer to the likelihood and consequences of events that degrade the operating
effectiveness of the design team (Team-X) itself. VDT measures several important pressures on the
Process Risk
We consider three measures of process risk that anticipate the perceived efficiency of the design study
project. These are the cost, schedule, and structural stability of the simulated design project. Our VDT
model uses the total work volume among all engineers and supervisors to represent the cost of an ICE
design project. Figure 3 shows that our calibrated Team-X model produces a similar cost structure to
that reported in surveys (with the exception of meetings, which VDT does not schedule contingent on project performance). Although VDT calculates detailed schedules, including average start and
finish times for each station’s task, we compare alternative cases using the total project schedule, or
time between execution of the first and last work items. For structural stability, we use the same
technique as described under organization risk.
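The cost and schedule measures just described reduce to simple aggregates over the simulated work items. A minimal sketch, with field names assumed for illustration:

```python
def process_risk_measures(work_items):
    """Cost = total work volume across all actors; schedule = time between
    execution of the first and last simulated work items."""
    cost = sum(w["hours"] for w in work_items)
    schedule = (max(w["finish"] for w in work_items)
                - min(w["start"] for w in work_items))
    return {"cost_hours": cost, "schedule_hours": schedule}

# Two overlapping work items from a hypothetical simulated project.
items = [{"start": 0, "finish": 6, "hours": 6},
         {"start": 2, "finish": 9, "hours": 5}]
print(process_risk_measures(items))  # -> {'cost_hours': 11, 'schedule_hours': 9}
```

Note that because the items overlap in time, the schedule (9 hours) is shorter than the cost (11 hours), which is precisely the parallelism that ICE exploits.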
Knowledge Distribution
Persistent dynamics of change in the distribution of technical knowledge produce an important deviation
between the traditional, hierarchical information processing theories and modern, multidisciplinary
collaborative engineering behavior. As projects become more technically complex and dynamic, we
find that actors of superior knowledge or technical skill come to handle organizational deficiencies in
work procedures and alternative sets.
VDT is calibrated with a broad range of academics’ and professionals’ project study experiences,
and it has made some strikingly accurate predictions of project performance [ Lockheed, others?].
Because our VDT model is calibrated with the theory and experience of these traditional, hierarchical
projects, it offers predictions like those of an expert in “traditional” project planning. These predictions
are based on the assumption that workers route exceptions only through an authoritative management
hierarchy, and that information exchange only transpires between actors engaged in interdependent tasks
(or through manually scheduled meetings). Our Team-X model using the current, standard version of
REMARKS ON METHODOLOGY
In recent years, the computational modeling of organizations has enjoyed a popular resurgence among
researchers seeking to better understand new and established theories [March 01, and Burton 01]. By
grounding a computational model explicitly in a theoretical framework, researchers can explore complex
ramifications of a theory (or set of theories) that extend qualitatively beyond the reach of human
intuition. In addition, our team has used models to quantitatively predict the effects of theoretical and
practical changes in a baseline model. Following the tradition of mathematical proof, when a model of a theory produces a recognizable pattern of results, we interpret this pattern and make a new claim. In a perfect world, if the new hypothesis is shown to be false, the model's theoretical premises are disproved (a "proof by contradiction").
At this time, however, model-based theory generation is new to domains as complex as project design. In this paper, we apply the technique in its most common modern form: as an engineering method that relies in part on intuition and external observation to validate its claims. Therefore, we
accompany our model analysis with intuitive descriptions as well as observational data.
“In their anxiety to be scientific, students of psychology have often imitated the latest forms of
sciences with a long history, while ignoring the steps these sciences took when they were young”
-Psychologist Solomon Asch
The recent expansions of particularly compatible social science theories and analytic techniques
are creating an exciting time for computational organizational modelers [March, and Burton, in Lomi
and Larsen 2001]. Properly applied, the methodology facilitates practical organizational design just as
effectively as it strengthens scholarly results [Kunz et al 1998]. Our work illustrates the power of
computational organizational models to both extend and lend specificity to qualitative theory,
ethnography, and survey research.
In planning a project or adapting one midstream, sometimes alternatives may be introduced
directly to the organization. At other times, it may be more economical to test these interventions first in
a computational model. Schedule tracking systems such as Primavera are the most frequently consulted
quantitative project models, but they are not the most sophisticated. When testing interventions in the
Virtual Design Team (VDT) simulator, for example, planners can compare project participants’
predicted backlog, coordination effectiveness, schedule risk, and other results between many alternative
cases [Kunz et al 1998, ACM; Jin et al 1995, Levitt 1996, Levitt et al, Management Science]. In this
way, modelers can plan joint adaptations to organizations, processes, and culture that will meet a
NEXT STEPS
Evolutionary organizational theorists would predict that if ICE were viable, it would already be widespread. The systems perspective we present in the introduction suggests that this apparent conflict may result from a careful balance of factors that are not ordinarily available in combination. For example, moving to a flat hierarchy or to task parallelism alone might be disastrous in a traditional organization, even though the two are complementary in ICE.
We are designing a computational experiment to investigate this issue by calculating the impacts
of each enabling factor from Table 1. We hope that this analysis will more clearly illuminate the interactions between enabling factors and answer the following questions:
1. Can a single calibration of the VDT engine simultaneously demonstrate ordinary teams’
performance and that of Team-X?
2. Are intermediate states, in which some, but not all enabling factors are satisfied, better or worse
than traditional practice?
3. How precipitously does performance drop off when enabling factors are reduced in strength?
4. Is there a sequence of interventions that leads from traditional conceptual design to ICE behavior
without reducing performance at any step?
5. How do organizational risk properties change under these conditions?
6. Is there a tightrope of high performance between tradition and ICE that involves simultaneous,
gradual improvements in the enabling factors?
7. Do certain intuitive compound factors, such as collocation, match theoretical predictions, and do
they complement one another as interventions? [ Tse Tse Wong and Richard Burton, CMOT]
ACKNOWLEDGMENTS
The research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute
of Technology, under a contract with the National Aeronautics and Space Administration. We are
grateful to NASA ARC's Engineering for Complex Systems Program for supporting the work under
Grant Number NCC21388, and for providing valuable feedback. We are especially indebted to the Jet
Propulsion Laboratory’s Team-X, including Rebecca Wheeler, Robert Oberto, Ted Sweetser, and Jason
Andringa.
We also thank the Kozmetsky Research Fellowship Program and the Stanford Media-X Center
for graciously funding this continuing research.
We further appreciate collaboration with Ingrid Erickson and Pamela Hines; the Stanford ReVeL research team (including Ben Shaw, Cliff Nass and Syed Shariq); the Center for Integrated Facility Engineering (John Kunz and Martin Fischer); the Center for Design Research (Ade Mabogunje and Larry Leifer); and the Virtual Design Team Research Group.
BIBLIOGRAPHY
Asch, S.E. 1987 (original work published 1952) “Social Psychology”. New York: Oxford University
Press
Bem, D., M. Wallach, and N. Kogan 1965 “Group Decision Under Risk of Aversive Consequences”
Journal of Personality and Social Psychology, 1(5), 453-460
Benjamin, J. and Paté-Cornell, M. Elizabeth 2004 “Risk Chair for Concurrent Design Engineering:
Satellite Swarm Illustration” Journal of Spacecraft and Rockets Vol. 41 No. 1 January-February
2004
Burton, R. and Obel, B. 2004 “Strategic Organizational Diagnosis and Design: Developing Theory for
Application 3rd Edition”. Boston: Kluwer Academic Publishers.
Carley, K. 1996 “Validating Computational Models” Working paper prepared at Carnegie Mellon
University
Chachere, J., Kunz, J., and Levitt, R. 2004, “Can You Accelerate Your Project Using Extreme
Collaboration? A Model Based Analysis” 2004 International Symposium on Collaborative
Technologies and Systems; Also available as Center for Integrated Facility Engineering Technical
Report T152, Stanford University, Stanford, CA
Ciavarelli, A. 2003 “Organizational Risk Assessment” Unpublished manuscript prepared at the Naval
Postgraduate School
Cooke, N., J. Gorman, and H. Pedersen 2002 “Toward a Model of Organizational Risk: Critical
Factors at the Team Level” Unpublished manuscript prepared at New Mexico State University and
Arizona State University
Covi, L. M., Olson, J. S., Rocco, E., Miller, W. J., Allie, P. 1998 “A Room of Your Own: What Do
We Learn about Support of Teamwork from Assessing Teams in Dedicated Project Rooms?”
Cooperative Buildings : Integrating Information, Organization, and Architecture. Proceedings of
First International Workshop, CoBuild '98, Darmstadt, Germany, February 25-26, Norbert A.
Streitz, Shin’ichi Konomi, Heinz-Jürgen Burkhardt (eds.). Berlin ; New York : Springer: 53-65.
Smith, D. B. 1997 “Reengineering Space Projects”, Paris, France, March 3-5, 1997.
Smith, D., and Koenig, L. 1998 “Modeling and Project Development”, European Space Research Centre,
Noordwijk, The Netherlands, November 3, 1998.
Eisenhardt, K. 1989 “Building Theories from Case Study Research”, Academy of Management
Review Vol. 14 No. 4 532-550