Manual Book 2
Social Sciences
Multi-disciplinary Teamwork
Organizational Behavior
Leadership
Body of Knowledge
Problem definition
System boundaries
Objectives hierarchy
Concept of operations
Originating requirements
Concurrent engineering
System life cycle phases
Integration/Qualification
Architectures
Functional / Logical
Physical / Operational
Interface
Trades
Concept Level
Risk Management
Key Performance Parameters
Defined interfaces
Solution architecture
Product breakdown structure
Possess a CONOPS & a System CONTEXT
Have at least ONE mission event timeline defined
Unprecedented Systems
Conceptual (at best, may not even be conceived)
Need can be expressed
Must develop CONOPS, define mission(s), & Context
Describe FUNCTIONS, INTERFACES, and TESTING
[Figure: system life cycle phases; legible labels include Concept, Definition, Production, Deployment, Operation, Integration, and Disposal]
System Engineering Knowledge
The diagram is divided into five sections, each describing how systems knowledge is
treated in the SEBoK.
The Systems Fundamentals Knowledge Area considers the question "What is a System?"
It explores the wide range of system definitions and considers open systems, system
types, groupings of systems, complexity, and emergence. All of these ideas are
particularly relevant to engineered systems and to the groupings of such systems
associated with the systems approach applied to engineered systems (i.e. product
system, service system, enterprise system and system of systems).
The Systems Approach Applied to Engineered Systems Knowledge Area defines a
structured approach to problem/opportunity discovery, exploration, and resolution,
that can be applied to all engineered systems. The approach is based on systems
thinking and utilizes appropriate elements of system approaches and
representations. This KA provides principles that map directly to SE practice.
The Systems Science Knowledge Area presents some influential movements in systems
science, including the chronological development of systems knowledge and
underlying
theories behind some of the approaches taken in applying systems science to real
problems.
The Systems Thinking Knowledge Area describes key concepts, principles and patterns
shared across systems research and practice.
The Representing Systems with Models Knowledge Area considers the key role that
abstract models play in both the development of system theories and the application
of systems approaches.
Systems thinking is a fundamental paradigm describing a way of looking at the
world. People who think and act in a systems way are essential to the success of
both the research and practice of system disciplines. In particular, individuals
who have an awareness of and/or active involvements in both research and practice
of system disciplines are needed to help integrate these closely related
activities.
The knowledge presented in this part of the SEBoK has been organized into these
areas to facilitate understanding; the intention is to present a rounded picture of
research and practice based on system knowledge. These knowledge areas should be
seen together as a "system of ideas" for connecting research, understanding, and
practice, based on system knowledge which underpins a wide range of scientific,
management, and engineering disciplines and applies to all types of domains.
The Vee model adopts the INCOSE Systems Engineering Handbook (INCOSE 2012) definition of life cycle stages and their purposes or activities, as shown in Figure 2 below.
The INCOSE Systems Engineering Handbook 3.2.2 contains a more detailed version of
the Vee diagram which incorporates life cycle activities into the more generic Vee
model.
A similar diagram, developed at the U.S. Defense Acquisition University (DAU), can
be seen in Figure 3 below.
Figure 3. The Vee Activity Diagram (Prosnik 2010). Released by the
Defense Acquisition University (DAU)/U.S. Department of Defense (DoD).
Figure 5 shows the generic life cycle stages for a variety of stakeholders, from a
standards organization (ISO/IEC) to commercial and government organizations.
Although these stages differ in detail, they all have a similar sequential format
that emphasizes the core activities as noted in Figure 2 (definition, production,
and utilization/retirement).
Figure 5. Comparisons of Life Cycle Models (Forsberg, Mooz, and Cotterman 2005).
Reprinted with
permission of John Wiley & Sons. All other rights are reserved by the copyright
owner.
It is important to note that many of the activities throughout the life cycle are
iterated. This is an example of recursion.
The term stage refers to the different states of a system during its life cycle;
some stages may overlap in time, such as the utilization stage and the support
stage. The term "stage" is used in ISO/IEC/IEEE 15288.
The term phase refers to the different steps of the program that support and manage
the life of the system; the phases usually do not overlap. The term "phase" is used in many well-established models as an equivalent to the term "stage."
Program management employs phases, milestones, and decision gates which are used to
assess the evolution of a system through its various stages. The stages contain the
activities performed to achieve goals and serve to control and manage the sequence
of stages and the transitions between each stage. For each project, it is essential
to define and publish the terms and related definitions used on respective projects
to minimize confusion.
A typical program is composed of the following phases:
The pre-study phase, which identifies potential opportunities to address user needs
with new solutions that make business sense.
The feasibility phase consists of studying the feasibility of alternative concepts
to reach a second decision gate before initiating the execution stage. During the
feasibility phase, stakeholders' requirements and system requirements are
identified, viable solutions are identified and studied, and virtual prototypes can be implemented. During this phase, the decision to move forward is
based on:
whether a concept is feasible and is considered able to counter an identified
threat or exploit an opportunity;
whether a concept is sufficiently mature to warrant continued development of a new
product or line of products; and
whether to approve a proposal generated in response to a request for proposal.
The execution phase includes activities related to four stages of the system life
cycle: development, production, utilization, and support. Typically, there are two
decision gates and two milestones associated with execution activities. The first
milestone provides the opportunity for management to review the plans for execution
before giving the go-ahead. The second milestone provides the opportunity to review
progress before the decision is made to initiate production. The decision gates
during execution can be used to determine whether to produce the developed Solution
and whether to improve it or retire it.
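The phase sequence and its decision gates can be sketched as a small data model. This is a minimal illustration, not an official gate list: the phase names follow the text, while the gate questions are paraphrases introduced here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Phase:
    name: str
    exit_gate: str  # the decision answered at this phase's gate

# Phase names follow the text above; the gate questions paraphrase it
# and are illustrative assumptions, not a standard.
PROGRAM = [
    Phase("pre-study", "Does addressing the user need make business sense?"),
    Phase("feasibility", "Is a concept feasible and mature enough to proceed?"),
    Phase("execution", "Produce, improve, or retire the developed solution?"),
]

def next_phase(current: str) -> Optional[str]:
    """Phases do not overlap: a phase completes at its gate before the next starts."""
    names = [p.name for p in PROGRAM]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None
```

Modeling gates explicitly makes the non-overlap rule for phases checkable, in contrast to stages, which the text notes may overlap in time.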
These program management views apply not only to the Solution, but also to its
elements and structure.
New projects typically begin with an exploratory research phase which generally
includes the activities of concept definition, specifically the topics of business
or mission analysis and the understanding of stakeholder needs and requirements.
These mature as the project goes from the exploratory stage to the concept stage to
the development stage.
The production phase includes the activities of system definition and system realization, as well as the development of the system requirements and architecture through verification and validation.
The utilization phase includes the activities of system deployment and system
operation.
The support phase includes the activities of system maintenance, logistics, and
product and service life management, which may include activities such as service
life extension or capability updates, upgrades, and modernization.
The retirement phase includes the activities of disposal and retirement, though in
some models, activities such as service life extension or capability updates,
upgrades, and modernization are grouped into the "retirement" phase.
Additional information on each of these stages can be found in the sections below
(see links to additional Part 3 articles above for further detail). It is important
to note that these life cycle stages, and the activities in each stage, are
supported by a set of systems engineering management processes.
Exploratory Research Stage
User requirements analysis and agreement is part of the exploratory research stage
and is critical to the development of successful systems. Without proper
understanding of the user needs, any system runs the risk of being built to solve
the wrong problems. The first step in the exploratory research phase is to define
the user (and stakeholder) requirements and constraints. A key part of this process
is to establish the feasibility of meeting the user requirements, including
technology readiness assessment. As with many SE activities this is often done
iteratively, and stakeholder needs and requirements are revisited as new
information becomes available.
A recent study by the National Research Council (National Research Council 2008)
focused on reducing the development time for US Air Force projects. The report
notes that, "simply stated, systems engineering is the translation of a user's needs into a definition of a system and its architecture through an iterative process that results in an effective system design." The iterative involvement with
stakeholders is critical to the project success.
Except for the first and last decision gates of a project, the gates are performed
simultaneously. See Figure 6 below.
Concept Stage
During the concept stage, alternate concepts are created to determine the best
approach to meet stakeholder needs. By envisioning alternatives and creating
models, including appropriate prototypes, stakeholder needs will be clarified and
the driving issues highlighted. This may lead to an incremental or evolutionary
approach to system development. Several different concepts may be explored in
parallel.
Development Stage
The selected concept(s) identified in the concept stage are elaborated in detail
down to the lowest level to produce the solution that meets the stakeholder
requirements. Throughout this stage, it is vital to continue with user involvement
through in-process validation (the upward arrow on the Vee models). On hardware,
this is done with frequent program reviews and a customer resident
representative(s) (if appropriate). In agile development, the practice is to have
the customer representative integrated into the development team.
Production Stage
The production stage is where the SoI is built or manufactured. Product
modifications may be required to resolve production problems, to reduce production
costs, or to enhance product or SoI capabilities. Any of these modifications may
influence system requirements and may require system re-qualification, re-
verification, or re-validation. All such changes require SE assessment before
changes are approved.
Utilization Stage
A significant aspect of product life cycle management is the provisioning of
supporting systems which are vital in sustaining operation of the product. While
the supplied product or service may be seen as the narrow system-of-interest (NSOI)
for an acquirer, the acquirer also must incorporate the supporting systems into a
wider system-of-interest (WSOI). These supporting systems should be seen as system
assets that, when needed, are activated in response to a situation that has emerged
in respect to the operation of the NSOI. The collective name for the set of
supporting systems is the integrated logistics support (ILS) system.
It is vital to have a holistic view when defining, producing, and operating system
products and services. In Figure 7, the relationship between system design and
development and the ILS requirements is portrayed.
The system requirements review (SRR) is planned to verify and validate the set of
system requirements before starting the detailed design activities.
The preliminary design review (PDR) is planned to verify and validate the set of
system requirements, the design artifacts, and justification elements at the end of
the first engineering loop (also known as the "design-to" gate).
The critical design review (CDR) is planned to verify and validate the set of
system requirements, the design artifacts, and justification elements at the end of
the last engineering loop (the �build-to� and �code-to� designs are released after
this review).
The integration, verification, and validation reviews are planned as the components
are assembled into higher level subsystems and elements. A sequence of reviews is
held to ensure that everything integrates properly and that there is objective
evidence that all requirements have been met. There should also be an in-process validation that the system, as it is evolving, will meet the stakeholders' requirements (see Figure 7).
The final validation review is carried out at the end of the integration phase.
Other management related reviews can be planned and conducted in order to control
the correct progress of work, based on the type of system and the associated risks.
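The ordering constraint among these reviews can be expressed directly. The review names come from the text; the mapping of released artifacts to reviews is a simplifying sketch of what the text describes, not a formal definition.

```python
# The review sequence described above, in order. Names are from the text;
# the released-artifact mapping below is a simplifying assumption.
REVIEW_SEQUENCE = ["SRR", "PDR", "CDR", "IV&V reviews", "final validation review"]

RELEASED_AFTER = {
    "PDR": "design-to baseline",
    "CDR": "build-to / code-to designs",
}

def reviews_before(review: str) -> list:
    """Reviews that must already have been held before the given one."""
    return REVIEW_SEQUENCE[:REVIEW_SEQUENCE.index(review)]
```

For example, `reviews_before("CDR")` yields the SRR and PDR, reflecting that the "build-to" designs are only released once the earlier gates have been passed.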
The systems engineer leads the team in developing strategies to meet the
requirements. Formulation of these strategies will be an iterative process that
leverages trade-off studies, risk analyses, and verification considerations, as
process inputs. Risk management is one of the activities where the systems engineer
can make the most significant contributions by increasing patient and user safety.
Users can have many different points of contact with a device, and without the
holistic approach of a systems engineer, it becomes difficult to mitigate risk
through all interfaces. Systems engineers formulate strategies to minimize not only
safety risks, but also technical and programmatic risks while at the same time
maximizing performance, reliability, extensibility, and profitability. A systems
engineer coordinates interdisciplinary design activities to reduce safety and
programmatic risk profiles while maximizing product and project performance.
Step Three: Design Synthesis
During Design Synthesis, the systems engineer leads the team through a systematic
process of quantitatively evaluating each of the proposed design solutions against a
set of prioritized metrics. This can help the team formulate questions or
uncover problems that were not initially obvious. When Design Synthesis is well-
executed, it helps reduce the risk, cost, and time of product development. An
experienced Systems Engineer is able to distill informal input from key
stakeholders into actionable guidance and combine this information with formal
design input requirements to formulate a more accurate picture of the design intent
for the product and business goals of the enterprise.
Step Four: Systems Analysis and Control
Systems Analysis and Control activities enable the systems engineer to measure
progress, evaluate and select alternatives, and document decisions made during
development. Systems engineers help teams prioritize decisions by guiding them
through trade-off matrices, which rank many options against a range of pre-defined
criteria. Systems engineers look at a wide range of metrics, such as cost,
technical qualifications, and interfacing parameters in order to help the team make
decisions that will lead to the most successful project. A Systems engineer can
also provide assistance with modeling and simulation tasks.
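A trade-off matrix of the kind described can be reduced to a weighted-sum calculation. The sketch below is purely illustrative: the criteria, weights, and ratings are invented numbers, not data from any real trade study.

```python
def tradeoff_scores(options, weights):
    """Weighted trade-off matrix: score = sum(weight * rating) per option.

    `options` maps option name -> {criterion: rating}; `weights` maps each
    pre-defined criterion -> its relative importance (weights sum to 1).
    """
    return {
        name: sum(weights[c] * r for c, r in ratings.items())
        for name, ratings in options.items()
    }

# Illustrative criteria drawn from the text (cost, technical, interfaces);
# all numbers are invented for the example.
weights = {"cost": 0.5, "technical": 0.3, "interfaces": 0.2}
options = {
    "Concept A": {"cost": 7, "technical": 9, "interfaces": 6},
    "Concept B": {"cost": 9, "technical": 6, "interfaces": 8},
}
scores = tradeoff_scores(options, weights)   # A: 7.4, B: 7.9
best = max(scores, key=scores.get)           # "Concept B"
```

Real trade studies add sensitivity analysis on the weights, but even this minimal form makes the team's ranking criteria explicit and auditable.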
Step Five: Verification
Verification is the process of evaluating the finished design using traceable and objective methods for design confirmation. The goal of verification is to make
sure the design outputs satisfy the design inputs. The systems engineer coordinates
the efforts of the verification team to ensure that feedback from Quality
Engineering gets incorporated into the final product. An experienced Systems
Engineer knows how to leverage different verification methods to streamline the
verification process, providing maximum flexibility in addressing the inevitable
changes that occur during the development process. By properly
compartmentalizing design and verification activities, the systems engineer can
minimize the extent of retesting resulting from regression analyses.
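The traceability goal of verification, that every design input is confirmed by a design output, can be checked mechanically. The helper below is a hedged sketch; the requirement IDs and verification methods are invented placeholders.

```python
def untraced_inputs(design_inputs, verifications):
    """Return design inputs that lack a passing, traceable verification.

    `design_inputs` is a list of requirement IDs; `verifications` maps a
    requirement ID -> (method, passed). IDs and methods here are illustrative.
    """
    return [req for req in design_inputs
            if req not in verifications or not verifications[req][1]]

inputs = ["REQ-001", "REQ-002", "REQ-003"]
results = {
    "REQ-001": ("test", True),
    "REQ-002": ("analysis", True),
    # REQ-003 has no verification record yet
}
gaps = untraced_inputs(inputs, results)  # ["REQ-003"]
```

A gap list like this is the kind of objective evidence the verification team reviews before declaring that the design outputs satisfy the design inputs.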
Need for Framework
We need a framework because systems engineering has failed to fulfill 50 years of promises of providing solutions to the complex problems facing society. Wymore (1994) pointed out that it was necessary for systems engineering to become an engineering discipline if it was to fulfill its promises and thereby survive. Nothing has changed in that respect since then. Wymore (1994) also stated that "systems engineering is the intellectual, academic, and professional discipline, the principal concern of which is to ensure that all requirements for bioware/hardware/software systems are satisfied throughout the lifecycles of the systems." This statement defines systems engineering as a discipline, not as a process. The currently accepted processes of systems engineering are only implementations of systems engineering.
Elements of a discipline
Consider the elements that make up a discipline. One view was provided by Kline (1995, page 3), who states that "a discipline possesses a specific area of study, a literature, and a working community of paid scholars and/or paid practitioners".
Systems engineering has a working community of paid scholars and paid
practitioners. However, the area of study seems to be different in each academic
institution but with various degrees of commonality. This situation can be
explained by the recognition that (1) systems engineering has only been in
existence since the middle of the 20th century (Johnson, 1997; Jackson and Keys,
1984; Hall, 1962), and (2) as an emerging discipline, systems engineering is
displaying the same characteristics as did other now established disciplines in
their formative years. Thus, systems engineering may be considered as being in a
similar situation to the state of chemistry before the development of the periodic
table of the elements, or similar to the state of electrical engineering before the
development of Ohm's Law. This is why various academic institutions focus on different areas of study but with some
degree of commonality in the systems development life cycle. Nevertheless, to be
recognized as a discipline, the degree of overlap of the various areas of study in
the different institutions needs to be much, much greater.
Elements relevant to research in a discipline
According to Checkland and Holwell (1998), research into a discipline needs the following three items:
An Area of Concern (A), which might be a particular problem in a discipline (area of study), a real-world problem situation, or a system of interest.
A particular linked Framework of Ideas (F) in which the knowledge about the area of concern is expressed. It includes current theories, bodies of knowledge, heuristics, etc., as documented in the literature as well as tacit knowledge.
ARCHITECTURAL FRAMEWORKS, MODELS, AND VIEWS
Figure 1.
MITRE SE Roles & Expectations: MITRE systems engineers (SE) are expected to assist
in or lead efforts to define an architecture, based on a set of requirements
captured during the concept development and requirements engineering phases of the
systems engineering life cycle. The architecture definition activity usually
produces operational, system, and technical views. This architecture becomes the
foundation for developers and integrators to create design and implementation
architectures and views. To effectively communicate and guide the ensuing system
development activities, the MITRE SE should have a sound understanding of
architecture frameworks and their use, and the circumstances under which each
available framework might be used. They also must be able to convey the appropriate
framework that applies to the various decisions and phases of the program.
Getting Started
Because systems are inherently multidimensional and have numerous stakeholders with different concerns, their descriptions are multidimensional as well. Architecture frameworks enable
the creation of system views that are directly relevant to stakeholders' concerns.
Often, multiple models and non-model artifacts are generated to capture and track
the concerns of all stakeholders.
MITRE SEs should be actively involved in determining key architecture artifacts and
content, and guiding the development of the architecture and its depictions at the
appropriate levels of abstraction or detail. MITRE SEs should take a lead role in
standardizing the architecture modeling approach. They should provide a "reference
implementation" of the needed models and views with the goals of:
(1) setting the standards for construction and content of the models, and (2)
ensuring that the model and view elements clearly trace to the concepts and
requirements from which they are derived.
While many MITRE SEs have probably heard of the Department of Defense Architecture
Framework (DoDAF), there are other frameworks that should be considered. As shown
in Figure 3, an SE working at an enterprise level should also be versed in the
Federal Enterprise Architecture Framework (FEAF). To prevent duplicate efforts in
describing a system using multiple frameworks, establish overlapping description
requirements and ensure that they are understood among the SEs generating those
artifacts. The SEG article on Approaches to Architecture Development provides
details of the frameworks.
Frameworks
Figure 3. Applying
Best Practices and Lessons Learned
A program may elect to not use architectural models and views, or elect to create
only those views dictated by policy or regulation. The resources and time required
to create architecture views may be seen as not providing a commensurate return on
investment in systems engineering or program execution. Consider these cultural
impediments. Guide your actions with the view that architecture is a tool that
enables and is integral to systems engineering. The following are best practices
and lessons learned for making architectures work in your program.
Purpose is paramount. Determine the purpose for the architecting effort, views, and
models needed. Plan the architecting steps to generate the views and models to meet
the purpose only. Ultimately models and views should help each stakeholder reason
about the structure and behavior of the system or part of the system they represent
so they can conclude that their objectives will be met. Frameworks help by
establishing minimum guidelines for each stakeholder's interest. However,
stakeholders can have other concerns, so use the framework requirements as a starting point for discussion to help uncover as many concerns as possible.
Know the relationships. Models and views that relate to each other should be
consistent, concordant, and developed with reuse in mind. It is good practice to
identify the data or information that each view shares, and manage it centrally to
help create the different views. Refer to the SEG Architectural Patterns article
for guidance on patterns and their use/reuse.
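The advice to manage shared view data centrally can be illustrated with a single element dictionary from which several views are derived, so the views cannot drift apart. The element names and attributes below are invented for the example.

```python
# Sketch of "manage shared data centrally": one element store, several views
# derived from it so they stay consistent. All names here are illustrative.
ELEMENTS = {
    "Ground Station": {"kind": "node", "owner": "Ops"},
    "Satellite": {"kind": "node", "owner": "Space Segment"},
    "TT&C Link": {"kind": "interface", "connects": ("Ground Station", "Satellite")},
}

def operational_view():
    """Nodes only, as an operational stakeholder might want to see them."""
    return sorted(n for n, e in ELEMENTS.items() if e["kind"] == "node")

def interface_view():
    """Interfaces with their endpoints, derived from the same central data."""
    return {n: e["connects"] for n, e in ELEMENTS.items() if e["kind"] == "interface"}
```

Because both views are computed from `ELEMENTS`, renaming an element in one place updates every view, which is the concordance property the guidance above is after.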
Be the early bird. Inject the idea of architectures early in the process.
Continuously influence your project to use models and views throughout execution.
The earlier the better.
Which way is right and how do I get there from here? Architectures can be used to
help assess today's alternatives and different evolutionary paths to the future.
Views of architecture alternatives can be used to help judge the strengths and
weaknesses of different approaches. Views of "as is" and "to be" architectures help
stakeholders understand potential migration paths and transitions.
Try before you buy. Architectures (or parts of them) can sometimes be "tried out"
during live exercises. This can either confirm an architectural approach for
application to real-world situations or be the basis for refinement that better
aligns the architecture with operational reality. Architectures also can be used as
a basis for identifying prototyping and experimentation activities to reduce
technical risk and engagements with operational users to better illuminate their
needs and operational concepts.
Keep it simple. Avoid diagrams that are complicated and non-intuitive, such as node
connectivity diagrams with many nodes and edges, especially in the early phases of
a program. This can be a deterrent for the uninitiated. Start with the operational
concepts, so your architecture efforts flow from information that users and many
other stakeholders already understand.
Determining the right models and views. Once the frameworks have been chosen, the
models and views will need to be determined. It is not unusual to have to refer to
several sets of guidance, each calling for a different set of views and models to
be generated.
But it looked so pretty in the window. Lay out the requirements for your architectures: what decisions they support, what they will help stakeholders reason about, and how they will do so. A simple spreadsheet can be used for this purpose.
This should happen early and often throughout the system's life cycle to ensure
that the architecture is used. Figure 4 provides an example of a worksheet that
was used to gather architecture requirements for a major aircraft program.
How do I create the right views? Selecting the right modeling approach to develop
accurate and consistent representations that can be used across program boundaries
is a critical systems engineering activity. Some of the questions to answer are:
Bringing dolls to life. If your program is developing models for large systems
supporting missions and businesses with time-sensitive needs, insight into system
behavior is crucial. Seriously consider using executable models to gain it. Today,
many architecture tools support the development of executable models easily and at
reasonable cost. Mission-Level Modeling (MLM) and Model Driven or Architecture-
Based/Centric Engineering are two modeling approaches that incorporate executable
modeling. They are worth investigating to support reasoning about technology
impacts to mission performance and internal system behavior, respectively.
How much architecture is enough? The most difficult conundrum when deciding to
launch an architecture effort is determining the level of detail needed and when to
stop producing/updating artifacts. Architecture models and views must be easily
changeable. There is an investment associated with having a "living" architecture
that contains current information, and differing levels of abstraction and views to
satisfy all stakeholders. Actively discuss this sufficiency issue with stakeholders
so that the architecture effort is "right-sized." Refer to the Architecture
Specification for CANES [2].
Penny wise, pound-foolish. Generating architecture models and views takes effort, and it can seem easier not to do it. Before jumping on the "architecture is costly and has minimal utility" bandwagon, consider the following:
If the answer to one or more of these questions is "yes," then consider concise,
accurate, concordant, and consistent models of your system.
UNIT 1
Formulation of issues with a case study, Value system design, Functional analysis,
Business Process Reengineering, Quality function deployment, System synthesis,
Approaches for generation of alternatives.
The systems engineering process is the heart of systems engineering management. Its
purpose is to provide a structured but flexible process that transforms
requirements into specifications, architectures, and configuration baselines.
The discipline of this process provides the control and traceability to develop solutions that meet customer needs. The systems engineering process may be repeated
one or more times during any phase of the development process.
Figure 1-1. Three Activities of Systems Engineering Management
Life cycle integration is necessary to ensure that the design solution is viable
throughout the life of the system. It includes the planning associated with product
and process development, as well as the integration of multiple functional
concerns into the design and engineering process. In this manner, product cycle-
times can be reduced, and the need for redesign and rework substantially reduced.
DEVELOPMENT PHASING
The systems engineering process is applied to each level of system development, one
level at a time, to produce these descriptions commonly called configuration
baselines. This results in a series of configuration baselines, one at each
development level. These baselines become more detailed with each level.
In the Department of Defense (DoD), the configuration baselines are called the functional baseline for the system-level description, the allocated baseline for the subsystem/component performance descriptions, and the product baseline for the subsystem/component detail descriptions. Figure 1-2 shows the basic relationships between the baselines. The triangles represent baseline control decision points, and are usually referred to as technical reviews or audits.
Significant development at any given level in the system hierarchy should not occur until the configuration baselines at the higher levels are considered complete, stable, and controlled. Reviews and audits are used to ensure that the baselines
are ready for the next level of development. As will be shown in the next chapter,
this review and audit process also provides the necessary assessment of system
maturity, which supports the DoD Milestone decision process.
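The rule that a level waits on the baselines above it can be expressed as a small check. The three baseline names are from the text; the helper itself is an illustrative sketch, not a DoD-defined procedure.

```python
# The three DoD configuration baselines named in the text, ordered by
# development level (highest first).
BASELINES = [
    ("system", "functional baseline"),
    ("subsystem/component performance", "allocated baseline"),
    ("subsystem/component detail", "product baseline"),
]

def may_develop(level_index: int, approved: set) -> bool:
    """Development at a level may proceed only when every higher-level
    baseline has passed its review/audit and been approved."""
    return all(name in approved for _, name in BASELINES[:level_index])
```

So subsystem performance work (`level_index=1`) can start once the functional baseline is approved, but detail design (`level_index=2`) must also wait for the allocated baseline.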
Transform needs and requirements into a set of system product and process descriptions (adding value and more detail with each level of development),
PROCESS OUTPUT
The Functional Architecture identifies and structures the allocated functional and performance requirements. The Physical Architecture depicts the system product by showing how it is broken down into subsystems and components. The
System Architecture identifies all the products (including enabling products) that
are necessary to support the system and, by implication, the processes necessary
for development, production/construction, deployment, operations, support,
disposal, training, and verification.
Design-level IPT members are chosen to meet the team objectives and generally have
distinctive competence in:
Life cycle functions are the characteristic actions associated with the system life
cycle. As illustrated by Figure 1-4, they are development, production and
construction, deployment (fielding), operation, support, disposal, training, and
verification. These activities cover the "cradle to grave" life cycle process and
are associated with major functional groups that provide essential support to the
life cycle process. These key life
cycle functions are commonly referred to as the eight primary functions of systems
engineering.
The customers of the systems engineer perform the life-cycle functions. The system user's needs are emphasized because their needs generate the requirement for the
system, but it must be remembered that all of the life-cycle functional areas
generate requirements for the systems engineering process once the user has
established the basic need. Those that perform the primary functions also provide
life-cycle representation in design-level integrated teams.
Development includes the activities required to evolve the system from customer
needs to product or process solutions.
Operation is the user function and includes activities necessary to satisfy defined
operational objectives and tasks in peacetime and wartime environments.
Support includes the activities necessary to provide operations support, maintenance, logistics, and material management.
Training includes the activities necessary to achieve and maintain the knowledge
and skill levels necessary to efficiently and effectively perform operations and
support functions.
Systems engineering ensures that the correct technical tasks get done during
development through planning, tracking, and coordinating. Responsibilities of
systems engineers include:
Proper focus and structure for system and major sub-system level design IPTs.
GUIDANCE
System engineering is applied during all acquisition and support phases for
large- and small-scale systems, new developments or product improvements, and
single and multiple procurements. The process must be tailored for different needs
and/or requirements. Tailoring considerations include system size and complexity,
level of system definition detail, scenarios and missions, constraints and
requirements, technology base, major risk factors, and organizational best
practices and strengths.
For example, systems engineering of software should follow the basic systems
engineering approach as presented in this book. However, it must be tailored to
accommodate the software development environment, and the unique progress
List of Examples from the SE Literature
The following examples are included:
Successful Business Transformation within a Russian Information Technology
Company
Federal Aviation Administration Next Generation Air Transportation System
How Lack of Information Sharing Jeopardized the NASA/ESA Cassini/Huygens
Mission to Saturn
Hubble Space Telescope Case Study
Global Positioning System Case Study
Global Positioning System Case Study II
Medical Radiation Case Study
FBI Virtual Case File System Case Study
MSTI Case Study
Next Generation Medical Infusion Pump Case Study
Design for Maintainability
Complex Adaptive Operating System Case Study
Complex Adaptive Project Management System Case Study
Complex Adaptive Taxi Service Scheduler Case Study
Submarine Warfare Federated Tactical Systems Case Study
Northwest Hydro System
Systems engineering (SE) case studies can be characterized in terms of at least two
relevant parameters: their degree of complexity and their engineering difficulty.
Although a so-called quad chart is likely an oversimplification, a 2 x 2 array can
be used to make a first-order characterization, as shown in Figure 1.
The x-axis depicts complicated, the simplest form of complexity, at the low-end on
the left, and complex, representing the range of all higher forms of complexity on
the right. The y-axis suggests how difficult it might be to engineer (or
re-engineer) the system to be improved, using Conventional (classical or
traditional) SE, at the low end on the bottom, and Complex SE, representing all
more sophisticated forms of SE, on the top. This upper range is intended to cover
system of
systems (SoS) engineering (SoSE), enterprise
Case studies have been used for decades in medicine, law, and business to help
students learn fundamentals and to help practitioners improve their practice. A
Matrix of Implementation Examples is used to show the alignment of systems
engineering case
studies to specific areas of the SEBoK. This matrix is intended to provide linkages
between each implementation example to the discussion of the systems engineering
principles illustrated. The selection of case studies covers a variety of sources,
domains, and geographic locations. Both effective and ineffective uses of systems
engineering principles are illustrated.
The number of publicly available systems engineering case studies is growing. Case
studies that highlight the aerospace domain are more prevalent, but there is a
growing number of examples beyond this domain.
The United States Air Force Center for Systems Engineering (AF CSE) has developed a
set of case studies "to facilitate learning by emphasizing the long-term
consequences of the systems engineering/programmatic decisions on cost, schedule,
and operational effectiveness." (USAF Center for Systems Engineering 2011) The AF
CSE is using these cases to enhance SE curriculum. The cases are structured using
the Friedman-Sage framework (Friedman and Sage 2003; Friedman and Sage 2004, 84-
96), which decomposes a case into contractor, government, and shared
responsibilities in the following nine concept areas:
This framework forms the basis of the case study analysis carried out by the AF
CSE. Two of these case studies are highlighted in this SEBoK section, the Hubble
Space Telescope Case Study and the Global Positioning System Case Study.
The United States National Aeronautics and Space Administration (NASA) has a
catalog of more than fifty NASA-related case studies (NASA 2011). These case
studies include insights about both program management and systems engineering.
Varying in the level of detail, topics addressed, and source organization, these
case studies are used to enhance
learning at workshops, training, retreats, and conferences. The use of case studies
is viewed as important by NASA since "organizational learning takes place when
knowledge is shared in usable ways among organizational members. Knowledge is most
usable when it is contextual" (NASA 2011). Case study teaching is a method for
sharing contextual knowledge to enable reapplication of lessons learned. The MSTI
Case Study is from this catalog.
Value of System Design
Systems design is an interdisciplinary engineering activity that enables the
realization of successful systems. Systems design is the process of defining the
architecture, product design, modules, interfaces, and data for a system to
satisfy specified requirements.
Systems design could be seen as the application of systems theory to product
development. There is some overlap with the disciplines of systems analysis,
systems architecture and systems engineering.
A system may be defined as an integrated set of components that accomplish a defined
objective. The process of systems design includes defining software and hardware
architecture, components, modules, interfaces, and data to enable a system to
satisfy a set of well-specified operational requirements.
In general, systems design, systems engineering, and systems design engineering all
refer to the same intellectual process of being able to define and model complex
interactions among many components that comprise a system, and being able to
implement the system with proper and effective use of available resources. Systems
design focuses on defining customer needs and required functionality early in the
development cycle, documenting requirements, then proceeding with design synthesis
and system validation while considering the overall problem consisting of:
Operations
Performance
Test and integration
Manufacturing
Cost and schedule
Deployment
Training and support
Maintenance
Disposal
Systems design integrates all of the engineering disciplines and specialty groups
into a team effort forming a structured development process that proceeds from
concept to production to operation. Systems design considerations include both the
business and technical requirements of customers with the goal of providing a
quality product that meets the user needs. Successful systems design is dependent
upon project management, that is, being able to control costs, develop timelines,
procure resources, and manage risks.
Information systems design is a related discipline of applied computer systems,
which also incorporates both software and hardware, and often includes networking
and telecommunications, usually in the context of a business or other enterprise.
The general principles of systems design engineering may be applied to information
systems design. In addition, information systems design focuses on data-centric
themes such as subjects, objects, and programs.
If the broader topic of product development "blends the perspective of marketing,
design, and manufacturing into a single approach to product development," then
design is the act of taking the marketing information and creating the design of
the product to be manufactured. Systems design is therefore the process of defining
and developing systems to satisfy specified requirements of the user.
The basic study of system design is the understanding of component parts and their
subsequent interaction with one another.[4]
Until the 1990s, systems design had a crucial and respected role in the data
processing industry. In the 1990s, standardization of hardware and software
resulted in the ability to build modular systems. The increasing
importance of software running on generic platforms has enhanced the discipline of
software engineering.
Architectural design
The physical design relates to the actual input and output processes of the system.
This is explained in terms of how data is input into a system, how it is
verified/authenticated, how it is processed, and how it is displayed. In physical
design, the following requirements about the system are decided.
Input requirements,
Output requirements,
Storage requirements,
Processing requirements,
System control and backup or recovery.
Put another way, the physical portion of system design can generally be broken down
into three sub-tasks:
a product database structure, a data processor, and a control processor. The
hardware/software (H/S) personnel specification is developed for the proposed system.
Functional Analysis and Allocation
The Functional Analysis and Allocation bridges the gap between the high-level set
of system requirements and constraints (from the Requirements Analysis) and the
detailed set required
modes are proposed and evaluated to determine which provides the best fit to the
parent requirements and the best balance between conflicting ones. The initial
decomposition is the starting point for the development of the functional
architecture and the allocation of requirements to the lower functional levels.
Adjustments to the decomposition strategy may be necessary as details are
developed.
Allocation: All requirements of the top-level functions must be allocated to the
lower-level functions. Traceability is an on-going record of the pedigree of
requirements imposed on system and subsystem elements. Because requirements are
derived or apportioned among several functions, they must be traceable across
functional boundaries to parent and child requirements. Traceability allows the
System Engineer to ascertain rapidly what effects any proposed changes in
requirements may have on related requirements at any system level. The allocated
requirements must be defined in measurable terms, contain applicable go/no go
criteria, and be in sufficient detail to be used as design criteria in the
subsequent Synthesis activity.
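The traceability record described above can be sketched minimally in code. This is an illustrative model, not from the text; the requirement IDs and parent-child links are hypothetical.

```python
# Minimal requirement-traceability sketch (IDs are hypothetical).
# Each requirement records its parent; walking the child links shows which
# lower-level requirements a proposed change could affect.
from collections import defaultdict

parent_of = {           # child requirement : parent requirement
    "SYS-1.1":  "SYS-1",
    "SYS-1.2":  "SYS-1",
    "SUB-1.1a": "SYS-1.1",
}

children_of = defaultdict(list)
for child, parent in parent_of.items():
    children_of[parent].append(child)

def affected_by_change(req):
    """All descendant requirements a change to `req` may impact."""
    stack, affected = [req], []
    while stack:
        for child in children_of[stack.pop()]:
            affected.append(child)
            stack.append(child)
    return sorted(affected)

print(affected_by_change("SYS-1"))  # ['SUB-1.1a', 'SYS-1.1', 'SYS-1.2']
```

A query like this is what lets the System Engineer "ascertain rapidly" which related requirements a proposed change would touch.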
The four (4) steps that comprise the SE Process are:
Synthesis is the process whereby the Functional Architectures and their associated
requirements are translated into physical architectures and one or more physical
sets of hardware, software, and personnel solutions. It is the output end of the
Design Loop. As the designs are formulated, their characteristics are compared to
the original requirements, developed at the beginning of the process, to verify the
fit. The output of this activity is a set of analysis-verified specifications
which describe a balanced, integrated system meeting the requirements, and a
database that documents the process and rationale used to establish these
specifications.
The first step of Synthesis is to group the functions into physical architectures.
This high-level structure is used to
define system concepts and products and processes, which can be used to implement
the concepts. Growing out of these efforts are the internal and external
interfaces. As concepts are developed they are fed back in the Design Loop to
ascertain that functional requirements have been satisfied. The mature concepts,
and product and process solutions are verified against the original system
requirements before they are released as the Systems Engineering Process product
output. Detailed descriptions of the activities of Synthesis are provided below.
Physical architecture is a traditional term. Despite the name, it includes software
elements as well as hardware elements. Among the characteristics of the physical
architecture (the primary output of Design Synthesis) are the following: [2]
The correlation with functional analysis requires that each physical or software
component meets at least one (or part of one) functional requirement, though any
component can meet more than one requirement,
The architecture is justified by trade studies and effectiveness analyses,
A product Work Breakdown Structure (WBS) is developed from the physical
architecture,
Metrics are developed to track progress among Key Performance Parameters (KPP), and
All supporting information is documented in a database.
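The first characteristic above, that each physical or software component must meet at least one functional requirement (and every requirement must be allocated), can be checked mechanically. A minimal sketch, with hypothetical component and requirement names:

```python
# Sketch of the correlation between functional analysis and the physical
# architecture: every component must meet at least one functional
# requirement, and every requirement must be allocated to some component.
# Component and requirement names are hypothetical.
allocation = {           # physical/software component -> requirements met
    "antenna":  ["REQ-RF-1"],
    "receiver": ["REQ-RF-1", "REQ-PWR-2"],  # a component may meet several
    "software": ["REQ-CTRL-3"],
}
requirements = {"REQ-RF-1", "REQ-PWR-2", "REQ-CTRL-3"}

unused_components = [c for c, reqs in allocation.items() if not reqs]
allocated = {r for reqs in allocation.values() for r in reqs}
unallocated_requirements = requirements - allocated

assert not unused_components, "each component must meet at least one requirement"
assert not unallocated_requirements, "every requirement must be allocated"
print("correlation check passed")
```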
Business Process Reengineering (BPR): Definition, Steps, and Examples
Your company is making great progress. You're meeting goals easily, but the way you
meet goals is where the problem is. Business processes play an important role in
driving goals, but they are not as efficient as you'd like them to be.
Making changes to the process gets more and more difficult as your business grows
because of habits and investments in old methods. But in reality, you cannot
improve processes without making changes. Processes have to be reengineered
carefully, since experiments and mistakes bring in a lot of confusion.
What is business process re-engineering (BPR)?
Business process re-engineering is the radical redesign of business processes to
achieve dramatic improvements in critical aspects like quality, output, cost,
service, and speed. BPR aims to cut enterprise costs and process redundancies at a
very large scale.
Is business process reengineering (BPR) the same as business process improvement (BPI)?
On the surface, BPR sounds a lot like business process improvement (BPI). However,
there are fundamental differences that distinguish the two. BPI might be about
tweaking a few rules here and there. But reengineering is an unconstrained approach
to look beyond the defined boundaries and bring in seismic changes.
While BPI is an incremental setup that focuses on tinkering with the existing
processes to improve them, BPR looks at the broader picture. BPI doesn't go against
the grain. It identifies the process bottlenecks and recommends changes in specific
functionalities. The process framework principally remains the same when BPI is in
play. BPR, on the other hand, rejects the existing rules and often takes an
unconventional route to redo processes from a high-level management perspective.
BPI is like upgrading the exhaust system on your project car; BPR is about
rethinking the entire way the exhaust is handled.
Five steps of business process reengineering (BPR)
To keep business process reengineering fair, transparent, and efficient,
stakeholders need to get a better understanding of the key steps involved in it.
Although the process can differ from one organization to another, these steps
listed below succinctly summarize the process:
A real-life example of BPR
Many companies like Ford Motors, GTE, and Bell Atlantic tried out BPR during the
1990s to reshuffle their operations. The reengineering process they adopted made a
substantial difference to them, dramatically cutting down their expenses and making
them more effective against increasing competition.
The story
An American telecom company had several departments to address customer support
regarding technical snags, billing, new connection requests, service termination,
and so on. Every time a customer had an issue, they were required to call the
respective department to get their complaint resolved. The company was doling out
millions of dollars to ensure customer satisfaction, but smaller companies with
minimal resources were threatening its business.
The telecom giant reviewed the situation and concluded that it needed drastic
measures to simplify things: a one-stop solution for all customer queries. It
decided to merge the various departments into one, let go of employees to minimize
multiple handoffs and form a nerve center of customer support to handle all issues.
A few months later, they set up a customer care center in Atlanta and started
training their repair clerks as "frontend technical experts" to do the new,
comprehensive job. The company equipped the team with new software that allowed the
support team to instantly access the customer database and handle almost all kinds
of requests.
Now, if a customer called with a billing query, they could also have that erratic
dial tone fixed or have a new service request confirmed without having to call
another number. While they were still on the phone, they could also use the
push-button phone menu to connect directly with another department to make a query
or input feedback about the call quality.
The redefined customer-contact process enabled the company to achieve new goals.
The problem with BPR is that the larger you are, the more expensive it is to
implement. A startup, five months after launch, might undergo a pivot including
business process reengineering that only has minimal costs to execute.
However, once an organization grows, completely reengineering its processes becomes
harder and more expensive. Yet large organizations are also the ones forced to
change by competition and unexpected marketplace shifts.
But more than being industry-specific, the call for BPR is always based on what an
organization is aiming for. BPR is effective when companies need to break the mold
and turn the tables in order to accomplish ambitious goals. For such measures,
adopting any other process management options will only be rearranging the deck
chairs on the Titanic.
Quality Function Deployment (QFD) is a process and set of tools used to effectively
define customer requirements and convert them into detailed engineering
specifications and plans to produce the products that fulfill those requirements.
QFD is used to translate customer requirements (or VOC) into measurable design
targets and drive them from the assembly level down through the sub-assembly,
component and production process levels. QFD methodology provides a defined set of
matrices utilized to facilitate this progression.
QFD was first developed in Japan by Yoji Akao in the late 1960s while working for
Mitsubishi's shipyard. It was later adopted by other companies including Toyota and
its supply chain. In the early 1980s, QFD was introduced in the United States
mainly by the big three automotive companies and a few electronics manufacturers.
Acceptance and growth of QFD in the US was initially rather slow, but the method
has since gained popularity and is currently used in manufacturing, healthcare,
and service organizations.
The House of Quality is an effective tool used to translate the customer wants and
needs into product or service design characteristics utilizing a relationship
matrix. It is usually the first matrix used in the QFD process. The House of
Quality demonstrates the relationship between the customer wants or "Whats" and the
design parameters or "Hows". The matrix is data intensive and allows the team to
capture a large amount of information in one place. The matrix earned the name
"House of Quality" because its structure resembles that of a house. A
cross-functional team possessing thorough knowledge of the product, the Voice of
the Customer, and the company's capabilities should complete the matrix. The different
sections of the matrix and a brief description of each are listed below:
"Whats": This is usually the first section to be completed. This column is where
the VOC, or the wants and needs, of the customer are listed.
Importance Factor: The team should rate each of the functions based on their level
of importance to the customer. In many cases, a scale of 1 to 5 is used with 5
representing the highest level of importance.
"Hows" or Ceiling: Contains the design features and technical requirements
the product will need to align with the VOC.
Body or Main Room: Within the main body or room of the house of quality, the "Hows"
are ranked according to how strongly they correlate with, or fulfill, each of the
"Whats". The ranking system used is a set of symbols indicating a strong, moderate,
or weak correlation. A blank box represents no correlation with, or influence on,
meeting the "What", or customer requirement. Each of the symbols represents a
numerical value of 0, 1, 3 or 9.
Roof: This matrix is used to indicate how the design requirements interact with
each other. The interrelationships are ratings that range from a strong positive
interaction (++) to a strong negative interaction (--), with a blank box indicating
no interrelationship.
Competitor Comparison: This section visualizes a comparison of the competitor's
product in regards to fulfilling the "Whats". In many cases, a scale of 1 to 5 is
used for the ranking, with 5 representing the highest level of customer
satisfaction. This section should be completed using direct feedback from customer
surveys or other means of data collection.
Relative Importance: This section contains the results of calculating the total of
the sums of each column when multiplied by the importance factor. The numerical
values are represented as discrete numbers or percentages of the total. The data is
useful for ranking each of the "Hows" and determining where to allocate the most
resources.
Lower Level / Foundation: This section lists more specific target values for
technical specifications relating to the "Hows" used to satisfy the VOC.
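The Relative Importance section described above (column totals of the 0/1/3/9 relationship values weighted by the 1-5 importance factors) can be sketched as follows; all matrix values are made up for illustration:

```python
# House of Quality "Relative Importance" sketch: each column total is the
# sum of the 0/1/3/9 relationship values weighted by the 1-5 customer
# importance factor. All values here are illustrative.
importance = [5, 3, 4]        # one importance factor per "What"
relationships = [             # rows = "Whats", columns = "Hows"
    [9, 3, 0],                # strong = 9, moderate = 3, weak = 1, blank = 0
    [1, 9, 3],
    [0, 3, 9],
]

totals = [
    sum(importance[i] * row[j] for i, row in enumerate(relationships))
    for j in range(len(relationships[0]))
]
percent = [round(100 * t / sum(totals), 1) for t in totals]
print(totals)   # [48, 54, 45]
print(percent)  # [32.7, 36.7, 30.6]
```

Here the second "How" scores highest, so under this (invented) data it would be the strongest candidate for design resources.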
Upon completion of the House of Quality, the technical requirements derived from
the VOC can then be deployed to the appropriate teams within the organization and
populated into the Level 2 QFDs for more detailed analysis. This is the first step
in driving the VOC throughout the product or process design process.
Level 2 QFD
The Level 2 QFD matrix is used during the Design Development Phase. Using the
Level 2 QFD, the team can discover which of the assemblies, systems, sub-systems
and components have the most impact on meeting the product design requirements and
identify key design characteristics. The information produced from performing a
Level 2 QFD is often used as a direct input to the Design Failure Mode and Effects
Analysis (DFMEA) process. Level 2 QFDs may be developed at the following levels:
System Level: The technical specifications and functional requirements or "Hows"
identified and prioritized within the House of Quality become the "Whats" for the
system level QFD. They are then evaluated according to which of the systems
or assemblies they impact. Any systems deemed critical would then progress to a
sub-system QFD.
Sub-system Level: The requirements cascaded down from the system level are
redefined to align with how the sub-system contributes to the system meeting its
functional requirements. This information then becomes the "Whats" for the QFD,
and the components and other possible "Hows" are listed and ranked to determine the
critical components. The components deemed critical would then require progression
to a component level QFD.
Component Level: The component level QFD is extremely helpful in identifying the
key and critical characteristics or features that can be detailed on the drawings.
The key or critical characteristics then flow down into the Level 3 QFD activities
for use in designing the process. For purchased components, this information is
valuable for communicating key and critical characteristics to suppliers during
sourcing negotiations and as an input to the Production Part Approval Process
(PPAP) submission.
Level 3 QFD
The Level 3 QFD is used during the Process Development Phase where we examine
which of the processes or process steps have any correlation to meeting the
component or part specifications. In the Level 3 QFD matrix, the "Whats" are the
component part technical specifications and the "Hows" are the manufacturing
processes or process steps involved in producing the part. The matrix highlights
which of the processes or process steps have the most impact on meeting the part
specifications. This information allows the production and quality teams to focus
on the Critical to Quality (CTQ) processes, which flow down into the Level 4 QFD
for further examination.
Level 4 QFD
The Level 4 QFD is not utilized as often as the previous three. Within the Level 4
QFD matrix, the team should list all the critical processes or process
characteristics in the "Whats" column on the left and then determine the "Hows" for
assuring quality parts are produced and list them across the top of the matrix.
Through ranking of the interactions of the "Whats" and the "Hows", the team can
determine which controls could be most useful and develop quality targets for each.
This information may also be used for creating Work Instructions, Inspection Sheets
or as an input to Control Plans.
The purpose of Quality Function Deployment is not to replace an organization's
existing design process but rather to support and improve it. QFD methodology is a
systematic, proven means of embedding the Voice of the Customer into both the
design and production process. QFD is a method of ensuring
customer requirements are accurately translated into relevant technical
specifications from product definition to product design, process development and
implementation. The fact is that every business, organization and industry has
customers. Meeting the customer's needs is critical to success. Implementing QFD
methodology can enable you to drive the voice of your customers throughout your
processes to increase your ability to satisfy or even excite your customers.
System synthesis
Synthesis is one of the key automation techniques for improving productivity and
developing efficient implementations from a design specification. Synthesis refers
to the creation of a detailed model or blueprint of the design from an abstract
specification, typically a software model of the design. Synthesis takes different
forms during different stages of the design process. In hardware system design,
several synthesis steps automate various parts of the design process. For instance,
physical synthesis automates the placement of transistors and the routing of
interconnects from a gate level description, which itself has been created by logic
synthesis from a register transfer level (RTL) model. The same principle of
translation from higher level model to a lower level model applies to system
synthesis.
Roles:
Required tasks:
Architectural analysis
Architectural design
Artifacts:
When you perform system architectural analysis, you merge realized use cases into
an integrated architecture analysis project. This task is often based on a trade study
pertinent to the system you intend to design. During architectural analysis, use
cases are not mapped to functional interfaces. Instead, you take a black box
approach to examine functional entities and determine reuse of those entities.
After you examine functional entities, you can allocate the logical architecture
into a physical architecture.
You can use a white box activity diagram to allocate use cases to a physical or
logical architecture. Typically, this diagram is derived from a black box activity
diagram. The white box activity diagram is partitioned into swim lanes, which show
the hierarchical structure of the architecture. Then you can move system-level
operations into an appropriate swim lane.
Subsystem white box scenarios allow you to decompose system blocks to
the lowest level of functional allocation. At that level, you specify the
operations to be implemented in both the hardware and software of your system. You
can derive subsystem logical interfaces from white box sequence diagrams. The
interfaces belong to the blocks at the lowest level of your system.
Then, you can define subsystem behavior, also known as leaf block behavior, for each lowest level
of decomposition in your system. This type of derived behavior is the physical
implementation of decomposed subsystems and is shown in a state chart diagram.
Model execution of leaf block behavior is performed on both the leaf block behavior
itself, as well as the interaction between each of the decomposed subsystems.
After understanding the situation thoroughly and realizing the need for action, a
manager may find the problem solving approach useful to devise action programmes.
The problem solving approach involves problem definition and identification of
decision area, generating decision making alternatives, and specifying criteria for
selection, assessing alternatives and the optimal selection, and developing an
action plan for implementation, including a contingency plan.
Problem definition is one of the most crucial steps in the problem solving
approach. A wrong definition of the problem would not only fail to resolve the
issues involved but could also lead to more complicated problems. The following
steps have been found to be useful in defining problems:
Step 1
Step 2
Step 3
Step 4
List all concerns (symptoms), particularly from the point of view of the decision-
maker in the situation (i.e., the answer to 'Who?' and 'What?' of the situational
analysis).
Diagnose (from the answers to 'How?' and 'Why?') the concerns in order to establish
real causes.
Establish decision (problem) areas, and prioritize them in order of importance.
Evaluate - if appropriate decisions are taken in these areas - whether the overall
situation would improve particularly from the decision-maker's point of view.
Generating alternatives
The skills which could help in discovering alternatives would be holistic and
logical thinking to comprehend the situation, as well as creative skills in
generating the options which fit the situation. Knowledge of both the internal and
external environments of the organization and the subject matter pertinent to the
problem (human relations, how scientists can be motivated, etc.) would also help in
arriving at better alternatives.
Specifying criteria
The skills needed for improving the ability to specify criteria are basically two:
holistic skills, for identifying broader aims, goals, objectives and purposes in a
situation, and
logical reasoning, for deducing the specific criteria and their prioritization from
such higher-order considerations.
The chosen alternative should be:
consistent with the requirements of the situation, bearing in mind the uncertainty
involved,
implementable, and
convincing to others involved.
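One simple way to operationalize specifying criteria and assessing alternatives is a weighted scoring table; the criteria names, weights, and scores below are purely illustrative assumptions, not from the text.

```python
# Illustrative weighted-scoring sketch for assessing alternatives against
# prioritized criteria (criteria names, weights, and scores are made up).
criteria_weights = {"consistency": 0.5, "implementability": 0.3, "convincingness": 0.2}

scores = {  # 1 (poor) to 5 (excellent) for each alternative on each criterion
    "alternative A": {"consistency": 4, "implementability": 3, "convincingness": 5},
    "alternative B": {"consistency": 5, "implementability": 2, "convincingness": 3},
}

def weighted_score(alternative):
    """Sum of criterion scores weighted by criterion importance."""
    return sum(w * scores[alternative][c] for c, w in criteria_weights.items())

best = max(scores, key=weighted_score)
print(best)  # alternative A
```

In a real study the weights would come from the prioritization step above, and the scores from evidence gathered during the assessment of each alternative.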
The skills needed for improving this phase would thus be the ability to analyse
logically, the ability to infer implications based on incomplete information and
uncertainty, and the skill to convince others about the decision taken so as to
obtain approval or help in proper implementation, or both.
Once the alternatives are developed, an action plan has to be developed. This is
essentially the implementation phase. In this phase, the decision-maker needs to
decide who would do what, where, when, how, etc. The process of arriving at these
decisions is just like the steps involved in the problem solving approach, except
that the chosen alternative becomes an input to this step. This phase would require
coordination skills to properly organize a variety of resources (human, material
and fiscal) and develop a time-phased programme for implementation.
For a variety of reasons, the original decision (chosen alternative) may not work
well and the decision-maker may have to be ready with a contingency plan. This
implies devising feedback mechanisms allowing monitoring of the status of the
situation, including results of the action plan. It also implies anticipating the
most likely points of failure and devising appropriate contingency plans to handle
the possible failures.
The additional skills required in this step would be those of devising control and
feedback mechanisms.
UNIT III
ANALYSIS OF ALTERNATIVES- I
Cross-impact analysis, Structural modeling tools, System Dynamics models with case
studies, Economic models: present value analysis (NPV), Benefits and costs over
time, ROI, IRR; Work and Cost breakdown structure,
1.2 What is an AoA?
As defined in the A5R Guidebook, the AoA is an analytical comparison of the
operational effectiveness, suitability, risk, and life cycle cost of alternatives
under consideration to satisfy validated capability needs (usually stipulated in an
approved ICD). Other definitions of an AoA can be found in various official
documents. The following are examples from DoDI 5000.02 and the Defense Acquisition
Guidebook:
• The AoA assesses potential materiel solutions that could satisfy validated
capability requirement(s) documented in the Initial Capabilities Document, and
supports a decision on the most cost effective solution to meeting the validated
capability requirement(s). In developing feasible alternatives, the AoA will
identify a wide range of solutions that have a reasonable likelihood of providing
the needed capability.
• An AoA is an analytical comparison of the operational effectiveness, suitability,
and life cycle cost (or total ownership cost, if applicable) of alternatives that
satisfy established capability needs.
Though the definitions vary slightly, they all generally describe the AoA as a
study that is used to assess alternatives that have the potential to address
capability needs or requirements that are documented in a validated or approved
capability requirements document. The information provided in an AoA helps decision
makers select courses of action to satisfy an operational capability need.
1.3 What is the Purpose of the AoA?
According to the A5R Guidebook, the purpose of the AoA is to help decision-makers
understand the tradespace for new materiel solutions to satisfy an operational
capability need, while providing the analytic basis for performance attributes
documented in follow-on JCIDS documents. The AoA provides decision-quality analysis
and results to inform the Milestone Decision Authority (MDA) and other stakeholders
at the next milestone or decision point. In short, the AoA must provide compelling
evidence of the capabilities and military worth of the alternatives. The results
should enable decision makers to discuss the appropriate cost, schedule,
performance, and risk tradeoffs and assess the operational capabilities and
affordability of the alternatives assessed in the study. The AoA results help
decision makers shape and scope the courses of action for new materiel solutions to
satisfy operational capability needs and the Request for Proposal (RFP) for the
next acquisition phase. Furthermore, AoAs provide the foundation for the
development of documents required later in the acquisition cycle such as the
Acquisition Strategy, Test and Evaluation Master Plan (TEMP), and Systems
Engineering Plan (SEP).
The AoA should also provide recommended changes, as needed, to validated capability
requirements that appear unachievable or undesirable from a cost, schedule,
performance, or risk point of view. It is important to note that the AoA provides
the analytic basis for performance parameters documented in the appropriate
requirements documents (e.g., AF Form 1067, Joint DOTmLPF-P Change Request (DCR),
AF-only DCR, Draft Capability Development Document (CDD), Final CDD, or Capability
Production Document (CPD)).
1.4 When is the AoA Conducted?
As noted earlier, the AoA is an important element of both the capability
development and acquisition processes. As presented in the A5R Guidebook, Figure 1-1
highlights where the AoA is conducted in these processes. The capability development
phases are shown across the top of the figure, while the lower right of the figure
illustrates the acquisition phases, decision points, and milestones. In accordance
with the Weapon Systems Acquisition Reform Act (WSARA) of 2009, DoDI 5000.02, and
the A5R Guidebook, for all ACAT initiatives, the AoA is typically conducted during
the Materiel Solution Analysis (MSA) phase. Follow-on AoAs, however, may be
conducted later during the Technology Maturation & Risk Reduction and the
Engineering & Manufacturing Development phases.
Cross-Impact Analysis
Cross-impact analysis, also known as cross-impact matrix or cross-impact balance
analysis, is a method used in systems thinking and scenario planning to explore the
potential interactions between different factors or variables in a complex system.
It is a tool for assessing the interdependencies and feedback loops between
different elements within a system.
The basic idea behind cross-impact analysis is to analyze how changes in one
variable can affect other variables in the system and vice versa. By understanding
these interconnections, it becomes possible to identify potential consequences,
unintended effects, and critical relationships within the system.
Here's how cross-impact analysis typically works:
Identify factors: The first step is to identify the relevant factors or variables
that influence the system being studied. These factors can be social, economic,
technological, environmental, political, or any other relevant aspect of the
system.
Construct a cross-impact matrix: A cross-impact matrix is created by systematically
evaluating how each factor influences or impacts the others. The matrix is usually
filled with qualitative judgments or expert opinions regarding the strength and
direction of the impact. The interactions are usually rated using a scale (e.g.,
strong positive, weak positive, neutral, weak negative, strong negative).
Analyze the matrix: Once the cross-impact matrix is completed, it can be analyzed
to identify the most critical relationships and potential feedback loops within the
system. Some variables may have significant impacts on many other variables, making
them central to the system's behavior.
Scenario development: Cross-impact analysis can be used to develop scenarios that
explore different future states of the system. By combining the interactions
identified in the matrix with different initial conditions or assumptions, multiple
scenarios can be constructed to understand the range of possible outcomes.
Policy implications: The analysis helps decision-makers understand the implications
of different policies or actions on the system. By identifying critical
relationships, decision-makers can focus on leveraging positive interactions and
mitigating negative ones.
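The matrix-construction and analysis steps above can be sketched in a few lines. The factors, the ratings, and the driving/dependence scoring used here are invented for illustration; a real study would fill the matrix from expert judgment.

```python
# Illustrative cross-impact matrix for four hypothetical factors.
# Ratings: +2 strong positive, +1 weak positive, 0 neutral,
#          -1 weak negative, -2 strong negative (an assumed scale).

factors = ["Technology", "Economy", "Policy", "Environment"]

# impact[i][j] = judged impact of factor i on factor j (diagonal unused).
impact = [
    [0,  2,  1, -1],
    [1,  0,  2,  0],
    [2,  1,  0,  1],
    [0, -1,  1,  0],
]

n = len(factors)
# Row sums of absolute ratings ("driving power"): how strongly a factor
# influences the rest. Column sums ("dependence"): how strongly it is influenced.
driving = [sum(abs(v) for j, v in enumerate(row) if j != i)
           for i, row in enumerate(impact)]
dependence = [sum(abs(impact[k][i]) for k in range(n) if k != i)
              for i in range(n)]

for name, d, p in zip(factors, driving, dependence):
    print(f"{name:12s} driving={d} dependence={p}")
```

Factors with high driving power and low dependence are candidates for the "central" variables the analysis looks for; plotting driving against dependence is a common way to present the result.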
Cross-impact analysis is a valuable tool for understanding complex systems and
making informed decisions in scenarios where various factors interact in intricate
ways. It is commonly used in fields such as strategic planning, environmental
assessment, technology foresight, and risk analysis. However, it is important to
note that the accuracy and reliability of the analysis depend heavily on the
quality of data and expert judgment used to construct the cross-impact matrix.
4.1 Introduction
In today's competitive business situations characterized by globalization, short
product life cycles, open systems architecture, and diverse customer preferences,
many managerial innovations such as just-in-time inventory management, total quality
management, Six Sigma quality, customer-supplier partnership, business process
reengineering, and supply chain integration have been developed. Value improvement
of services based on value engineering and the systems approach (Miles, 1984) is
also considered a method of managerial innovation. It is indispensable for
corporations to expedite the value improvement of services and provide fine products
satisfying the required function at reasonable costs.
This chapter provides a performance measurement system (PMS) for the value
improvement of services, which is considered an ill-defined problem with
uncertainty
(Terano, 1985). To recognize a phenomenon as a problem and then solve it, it will
be necessary
to grasp the essence (real substance) of the problem. In particular, the value
improvement problems discussed in this chapter can be defined as complicated,
ill-defined problems, since uncertainty in the views and experiences of decision
makers, called "fuzziness," is present.
Building the method involves the following processes: (a) selecting measures and
building a system recognition process for management problems, and (b) providing
the
performance measurement system for the value improvement of services based on the
system
recognition process. We call (a) and (b) the PMS design process, also considered a
core decision-making process, because in the design process, strategy and vision are
interpreted, articulated, and translated into a set of qualitative and/or
quantitative measures under the "means to purpose" relationship.
We propose in this chapter a system recognition process that is based on system
definition,
system analysis, and system synthesis to clarify the essence of the ill-defined
problem.
Further, we propose and examine a PMS based on the system recognition process
as a value improvement method for services, in which the system recognition process
reflects the views of decision makers and enables one to compute the value indices
for
the resources. In the proposed system, we apply the fuzzy structural modeling for
building
the structural model of PMS. We introduce the fuzzy Choquet integral to obtain the
total value index for services by drawing an inference for individual linkages
between the
scores of PMS, logically and analytically. In consequence, the system we suggest
provides
decision makers with a mechanism to incorporate subjective understanding or insight
about the evaluation process, and also offers a flexible support for changes in the
business
environment or organizational structure.
A practical example is illustrated to show how the system works, and its
effectiveness
is examined.
4.2 System recognition process
Management systems are considered to cover large-scale, complicated problems.
However, for a decision maker, it is difficult to know where to start solving ill-
defined
problems involving uncertainty.
In general, problems are classified broadly into two categories. One is a problem
with preferable conditions, the so-called well-defined problem (structured or
programmable), which has an appropriate algorithm to solve it. The other is a
problem with non-preferable conditions, the so-called ill-defined problem
(unstructured or nonprogrammable), for which no complete algorithm may exist, or
only a partial one. Problems involving human decision making, or large-scale
problems of a complicated nature, fall into this case. Uncertainties such as
fuzziness (ambiguity in decision making) and randomness (uncertainty about the
probability of an event) therefore characterize the ill-defined problem.
In this chapter, the definition of management problems is extended to
semistructured
and/or unstructured decision-making problems (Simon, 1977; Anthony, 1965; Gorry and
Morton, 1971; Sprague and Carlson, 1982). It is extremely important to consider the
way to
recognize the essence of an "object" when necessary to solve some problems in the fields
fields
of social science, cultural science, natural science, etc.
This section will give a systems approach to the problem to find a preliminary way
to
propose the PMS for value improvement of services. In this approach, the three
steps taken
in natural recognition pointed out by Taketani (1968) are generally applied to the
process of
recognition development. These steps (phenomenal, substantial, and essential) are
the necessary processes of system recognition that one must go through to recognize
the object.
With the definitions and the concept of systems thinking, a conceptual diagram of
system recognition can be described as in Figure 4.1. The conceptual diagram of
system recognition will play an important role to the practical design and
development of the value improvement system for services. Phase 1, phase 2, and
phase 3 in Figure 4.1 correspond to the respective three steps of natural
recognition described above. At the phenomenal stage (phase 1), we assume that
there exists a management system as an object; for example, suppose a management
problem concerning
management strategy, human resource, etc., and then extract the characteristics of
the problem. Then, in the substantial stage, we may recognize the characteristics
of the problem as available information, which are extracted at the previous step,
and we perform systems analysis to clarify the elements, objective, constraints,
goal, plan, policy, principle, etc., concerning the problem. Next, the objective of
the problem is optimized subject to constraints arising from the viewpoint of
systems synthesis so that the optimal management system can be obtained. The result
of the optimization process, as feedback information, may be returned to phase 1 if
necessary, comparing with the phenomena at stage 1. The decision maker examines
whether the result will meet the management system he conceives in his mind (mental
model). If the result meets the management system conceived in the phenomenal
stage, it becomes the optimal management system and proceeds to the essential stage
(phase 3). The essential stage is considered a step to recognize the basic laws
(Rules) and principles residing in the object. Otherwise, going back to the
substantial stage becomes necessary, and the procedure is continued until the
optimal system is obtained.
The importance degrees of the service functions are computed using ratios between
the functions, as follows:
Let F be a matrix determined by paired comparisons among the functions.
Assume that the reflexive law is not satisfied in F, and that only the elements
f_{i,i+1} (i = 1, 2, ..., n-1) of the matrix are given as evaluation values, where
0 ≤ f_{i,i+1} ≤ 1 and f_{i+1,i} satisfies the relation f_{i+1,i} = 1 - f_{i,i+1}
(i = 1, 2, ..., n-1).
Then, the weight vector E (= {E_i, i = 1, 2, ..., n}) of the functions
(F_i, i = 1, 2, ..., n) can be found.
We apply the formulas mentioned above to find the weights of functions used. Then,
the matrix is constituted with paired comparisons by decision makers (specialists)
who
take part in the value improvement of services in the corporation. Figure 4.5 shows
stages
B and C of the PMS.
(1) Importance degree of functions composing customer satisfaction
Suppose, in this chapter, that the functions composing customer satisfaction are
extracted and regulated as a set as follows:

F = {F_i, i = 1, 2, ..., 6}
  = {Employee's behavior, Management of a store,
     Providing customers with information, Response to customers,
     Exchange of information, Delivery service}
Improvement of customer satisfaction becomes a main purpose of corporate
management, and F_i (i = 1, 2, ..., 6) are respectively defined as the functions to
achieve customer satisfaction.
Then, for example, let each cell of the matrix be intuitively and empirically
filled
in a paired comparison manner whose values are given by the ratio method by taking
into consideration the knowledge and/or experiences of the decision makers
(specialists):
Also, assume that as an evaluation standard to apply paired comparison, we
specify five different degrees of grade based on the membership functions.
Not important: [0.0, 0.2)
Not so important: [0.2, 0.4)
Important: [0.4, 0.6)
Very important: [0.6, 0.8)
Critically important: [0.8, 1.0)
For instance, if Fi is believed to be critically more important than Fj, the
decision
makers may make an entry of 0.9 in Fij. Each value is empirically given by the
decision makers (or specialists) who have their experiences and knowledge, with
the know-how for value improvement. As a result, the values yielded by the ratio
method are recognized as weights for the functions.
Thus, the weight vector E of the functions (F_i, i = 1, 2, ..., 6) is obtained as
follows:
E = {0.046, 0.012, 0.017, 0.040, 0.027, 0.007}
Further, E can be normalized so that its components sum to 1:
E = {0.31, 0.08, 0.11, 0.27, 0.18, 0.05}
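The normalization step can be reproduced directly. This snippet rescales the raw ratio-method weights so they sum to one, recovering the standardized vector given above:

```python
# Raw ratio-method weights from the example, rescaled to sum to 1.
E_raw = [0.046, 0.012, 0.017, 0.040, 0.027, 0.007]

total = sum(E_raw)                        # 0.149
E = [round(w / total, 2) for w in E_raw]  # normalize and round to 2 decimals

print(E)   # [0.31, 0.08, 0.11, 0.27, 0.18, 0.05]
```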
a. Importance degrees of constituent elements of "employee's behavior (F1)"
i. As is clear from the structural model of customer satisfaction shown earlier
in Figure 4.3, F1 consists of the subfunctions F_{1i} (i = 1, 2, 3).
ii. Here, we compute the importance degrees of {F_{1i}, i = 1, 2, 3} by the ratio
method, in the same way as the weight of F1 was obtained.
b. Importance degrees of subfunctions of "employee's behavior (F1)"
Therefore, the value index, which is based on the importance degree and the cost of
each resource used to provide services, is obtained.
30.4 Summary
Engineering economics plays an increasingly important role in investment analysis.
It is
basically a decision-making process. In engineering economics, one learns to solve
engineering
problems involving costs and benefits. Interest can be thought of as rent for using
someone's money; the interest rate is a measure of the cost of this use. Interest
rates are of two types: simple and compound. Under simple interest, only the
principal earns interest. Simple interest is rarely used in today's financial
marketplace. Under compounding,
the interest earned during a period augments the principal; thus, interest earns
interest.
Compounding of interest is beneficial to the lender. Owing to its capacity to earn
interest,
money has time value. The time value of money is important in making decisions
pertaining
to engineering projects. Among various tools and techniques available to solve
engineering
economic problems, NPV analysis and IRR analysis are widely used in industry.
They are very simple but effective tools for investment analysis.
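The NPV and IRR computations named above can be sketched in a few lines. The cash-flow stream is hypothetical, and the IRR is found by a simple bisection search for the rate at which NPV crosses zero:

```python
# Minimal NPV and IRR sketch for a hypothetical cash-flow stream.

def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs now, cashflows[t] at year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """IRR by bisection: for a conventional stream (one sign change),
    NPV decreases in the rate, so we narrow in on the zero crossing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-1000, 400, 400, 400]   # assumed project: invest 1000, recover 400/yr
print(round(npv(0.08, flows), 2))   # NPV at 8% ≈ 30.84, so accept at that rate
print(round(irr(flows), 4))         # IRR ≈ 0.097 (about 9.7%)
```

A project is attractive when its NPV at the required rate of return is positive, or equivalently when its IRR exceeds that rate.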
RELIABILITY
in their study: functions 3.2 and 3.4 correspond to the basic maneuvering task,
function 3.3
corresponds to the reconnaissance task, and function 3.5 matches up with the
landing task.
If we assume that functions 3.2 to 3.5 are functionally independent, then the set
of functions
constitutes a simple series system. Thus, total system reliability was estimated by
taking the
mathematical product of the three logistic regression models, meaning that we had
an expression
for total system reliability that was a function of the personnel and training
domains.
A good plan for choosing a source of I-SPY Predator pilots, particularly from a
system
sustainability perspective, is to seek a solution that most effectively utilizes
personnel
given total system reliability and training resource constraints. In such a
situation, the
quality of feasible solutions might then be judged in terms of maximizing total
system
reliability for the personnel and training costs expended. This approach was
adopted to
answer the I-SPY selection and training question. A non-linear program was
formulated
to determine the optimal solution in terms of cost-effectiveness, the latter
expressed as the
ratio of system reliability to total personnel and training costs. The feasible
solution space
was constrained by a lower limit (i.e., minimum acceptable value) on total system
reliability
and an upper limit (i.e., maximum acceptable value) on training time.
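The series-system computation described above reduces to a product of reliabilities. The three values below are hypothetical stand-ins for the logistic-regression estimates in the study:

```python
# Series-system sketch: if the functions are independent and all must
# succeed, total reliability is the product of the individual reliabilities.
# The three values are hypothetical, not taken from the study.
from math import prod

r_maneuver = 0.95   # functions 3.2/3.4: basic maneuvering task
r_recon    = 0.90   # function 3.3: reconnaissance task
r_landing  = 0.92   # function 3.5: landing task

r_system = prod([r_maneuver, r_recon, r_landing])
print(round(r_system, 4))   # 0.7866
```

Note that a series system is never more reliable than its weakest element, which is why the optimization in the study constrains total system reliability from below.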
Availability: When searching for additional information, sources that are more
easily
accessed or brought to mind will be considered first, even when other sources are
more diagnostic.
Reliability: The reliability of information sources is hard to integrate into the
decision making
process. Differences in reliability are often ignored or discounted.
Stochastic networks and Markov models are fundamental concepts in the field of
probability theory, applied mathematics, and various fields such as computer
science, operations research, telecommunications, and more. Let's explore each of
these concepts:
Stochastic Networks: Stochastic networks deal with systems that involve a certain
degree of randomness or uncertainty. These systems may include multiple
interconnected components or nodes, where the behavior of each node is subject to
probabilistic influences. Examples of stochastic networks can be found in various
real-world scenarios, such as computer networks, communication systems,
transportation networks, and manufacturing processes.
Key features of stochastic networks:
Randomness: The behavior of the network components or the interactions between them
is subject to probabilistic or random effects.
Queuing Theory: Stochastic networks often involve queuing systems, where entities
(e.g., customers, data packets) wait in lines or queues before being processed by
network components.
Performance Analysis: Stochastic networks are analyzed to understand their
performance characteristics, such as throughput, delay, and resource utilization,
under various operating conditions.
There is no such thing as the best model for a given phenomenon. The pragmatic
criterion of usefulness often allows the existence of two or more models for the
same event, each serving a distinct purpose. Consider light. The wave model, in
which light is viewed as a continuous flow, is entirely adequate for designing
eyeglass and telescope lenses. In contrast, for understanding the impact of light
on the retina of the eye, the photon model, which views light as tiny discrete
bundles of energy, is preferred. Neither model supersedes the other; both are
relevant and useful.
The word "stochastic" derives from a Greek word meaning "to aim, to guess," and
means "random" or "chance." The antonym is "sure," "deterministic," or "certain."
A deterministic model predicts a single outcome from a given set of circumstances.
A stochastic model predicts a set of possible outcomes weighted by their
likelihoods, or probabilities. A coin flipped into the air will surely return to
earth somewhere. Whether it lands heads or tails is random. For a "fair" coin we
consider these alternatives equally likely and assign to each the probability 1/2.
However, phenomena are not in and of themselves inherently stochastic or
deterministic. Rather, to model a phenomenon as stochastic or deterministic is the
choice of the observer. The choice depends on the observer's purpose; the criterion
for judging the choice is usefulness. Most often the proper choice is quite clear,
but controversial situations do arise. If the coin, once fallen, is quickly covered
by a book so that the outcome "heads" or "tails" remains unknown, two participants
may still usefully employ probability concepts to evaluate what is a fair bet
between them; that is, they may usefully view the coin as random, even though most
people would consider the outcome now to be fixed or deterministic. As a less
mundane example of the converse situation, changes in the level of a large
population are often usefully modeled deterministically, in spite of the general
agreement among observers that many chance events contribute to their fluctuations.
Scientific modeling has three components: (i) a natural phenomenon under study,
(ii) a logical system for deducing implications about the phenomenon, and (iii) a
connection linking the elements of the natural system under study to the logical
system used to model it. If we think of these three components in terms of the
great-circle air route problem, the natural system is the earth with airports at
Los Angeles and New York; the logical system is the mathematical subject of
spherical geometry; and the two are connected by viewing the airports in the
physical system as points in the logical system. The modern approach to stochastic
modeling is in a similar spirit.
Nature does not dictate a unique definition of "probability," in the same way that
there is no nature-imposed definition of "point" in geometry. "Probability" and
"point" are terms in pure mathematics, defined only through the properties invested
in them by their respective sets of axioms. (See Section 2.8 for a review of
axiomatic probability theory.) There are, however, three general principles that
are often useful in relating or connecting the abstract elements of mathematical
probability theory to a real or natural phenomenon that is to be modeled. These are
(i) the principle of equally likely outcomes, (ii) the principle of long run
relative frequency, and (iii) the principle of odds making or subjective
probabilities. Historically, these three concepts arose out of largely unsuccessful
attempts to define probability in terms of physical experiences. Today, they are
relevant as guidelines for the assignment of probability values in a model, and for
the interpretation of the conclusions of a model in terms of the phenomenon under
study.
We illustrate the distinctions between these principles with a long experiment. We
will pretend that we are part of a group of people who decide to toss a coin and
observe the event that the coin will fall heads up. This event is denoted by H, and
the event of tails, by T. Initially, everyone in the group agrees that Pr{H} = 1/2.
When asked why, people give two reasons: upon checking the coin construction, they
believe that the two possible outcomes, heads and tails, are equally likely; and,
extrapolating from past experience, they also believe that if the coin is tossed
many times, the fraction of times that heads is observed will be close to one-half.
The equally likely interpretation of probability surfaced in the works of Laplace
in 1812, where the attempt was made to define the probability of an event A as the
ratio of the total number of ways that A could occur to the total number of
possible outcomes of the experiment. The equally likely approach is often used
today to assign probabilities that reflect some notion of a total lack of knowledge
about the outcome of a chance phenomenon. The principle requires judicious
application if it is to be useful, however. In our coin tossing experiment, for
instance, merely introducing the possibility that the coin could land on its edge
(E) instantly results in Pr{H} = Pr{T} = Pr{E} = 1/3.
The next principle, the long run relative frequency interpretation of probability,
is a basic building block in modern stochastic modeling, made precise and justified
within the axiomatic structure by the law of large numbers. This law asserts that
the relative fraction of times in which an event occurs in a sequence of
independent similar experiments approaches, in the limit, the probability of the
occurrence of the event on any single trial. The principle is not relevant in all
situations, however. When the surgeon tells a patient that he has an 80-20 chance
of survival, the surgeon means, most likely, that 80 percent of similar patients
facing similar surgery will survive it. The patient at hand is not concerned with
the long run, but in vivid contrast, is vitally concerned only in the outcome of
his, the next, trial.
Returning to the group experiment, we will suppose next that the coin is flipped
into the air and, upon landing, is quickly covered so that no one can see the
outcome. What is Pr{H} now? Several in the group argue that the outcome of the coin
is no longer random, that Pr{H} is either 0 or 1, and that although we don't know
which it is, probability theory does not apply. Others articulate a different view,
that the distinction between "random" and "lack of knowledge" is fuzzy, at best,
and that a person with a sufficiently large computer and sufficient information
about such factors as the energy, velocity, and direction used in tossing the coin
could have predicted the outcome, heads or tails, with certainty before the toss.
Therefore, even before the coin was flipped, the problem was a lack of knowledge
and not some inherent randomness in the experiment.
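The long run relative frequency principle can be illustrated with a short simulation: as the number of fair-coin tosses grows, the observed fraction of heads settles near 1/2.

```python
# Illustrating the long-run relative frequency principle: the fraction
# of heads in repeated fair-coin tosses approaches 1/2.
import random

random.seed(1)          # fixed seed so the run is reproducible
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))   # simulate n tosses
fraction = heads / n

print(fraction)         # close to 0.5 for large n
```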
In a related approach, several people in the group are willing to bet with each
other, at even odds, on the outcome of the toss. That is, they are willing to use
the calculus of probability to determine what is a fair bet, without considering
whether the event under study is random or not. The usefulness criterion for
judging a model has appeared.
While the rest of the mob were debating "random" versus "lack of knowledge," one
member, Karen, looked at the coin. Her probability for heads is now different from
that of everyone else. Keeping the coin covered, she announces the outcome "Tails,"
whereupon everyone mentally assigns the value Pr{H} = 0. But then her companion,
Mary, speaks up and says that Karen has a history of prevarication.
The last scenario explains why there are horse races; different people assign
different probabilities to the same event. For this reason, probabilities used in
odds making are often called subjective probabilities. Then, odds making forms the
third principle for assigning probability values in models and for interpreting
them in the real world.
The modern approach to stochastic modeling is to divorce the definition of
probability from any particular type of application. Probability theory is an
axiomatic structure (see Section 2.8), a part of pure mathematics. Its use in
modeling stochastic phenomena is part of the broader realm of science and parallels
the use of other branches of mathematics in modeling deterministic phenomena. To be
useful, a stochastic model must reflect all those aspects of the phenomenon under
study that are relevant to the question at hand. In addition, the model must be
amenable to calculation and must allow the deduction of important predictions or
implications about the phenomenon.
Stochastic Processes
A stochastic process is a family of random variables X_t, where t is a parameter
running over a suitable index set T. (Where convenient, we will write X(t) instead
of X_t.) In a common situation, the index t corresponds to discrete units of time,
and the index set is T = {0, 1, 2, ...}. In this case, X_t might represent the
outcomes at successive tosses of a coin, repeated responses of a subject in a
learning experiment, or successive observations of some characteristics of a
certain population. Stochastic processes for which T = [0, ∞) are particularly
important in applications. Here t often represents time, but different situations
also frequently arise. For example, t may represent distance from an arbitrary
origin, and X_t may count the number of defects in the interval (0, t] along a
thread, or the number of cars in the interval (0, t] along a highway.
Stochastic processes are distinguished by their state space, or the range of
possible values for the random variables X_t, by their index set T, and by the
dependence relations among the random variables X_t. The most widely used classes
of stochastic processes are systematically and thoroughly presented for study in
the following chapters, along with the mathematical techniques for calculation and
analysis that are most useful with these processes. The use of these processes as
models is taught by example. Sample applications from many and diverse areas of
interest are an integral part of the exposition.
Markov Models: Markov models are mathematical models used to describe systems that
exhibit a specific probabilistic property known as the Markov property or
memorylessness. This property states that the future behavior of the system depends
only on its current state and not on its past states. Markov models are widely used
in the analysis of systems that undergo transitions between a finite set of states.
Key features of Markov models:
State Transitions: The system moves from one state to another according to
probabilistic transition probabilities, which are often represented using a
transition matrix.
Markov Chains: A fundamental type of Markov model is the Markov chain, which is a
sequence of states with probabilistic transitions.
Applications: Markov models are used in a wide range of applications, including
reliability analysis, queueing systems, machine learning (e.g., Hidden Markov
Models), finance (e.g., Markov Chain Monte Carlo methods), and more.
Combining stochastic networks and Markov models allows for the analysis and
modeling of complex systems with randomness and state transitions. These concepts
provide valuable insights into system behavior, performance, and optimization,
making them essential tools in various fields where uncertainty and dynamic
behavior are prevalent.
Figure 47.13 Comparison of triage policies. (From Giachetti, R., Queueing theory
chapter in Design
of Enterprise Systems: Theory, Architecture, and Methods. CRC Press, Boca Raton,
FL, 2010.)
Figure 47.14 Queuing network. (From Giachetti, R., Queueing theory chapter in
Design of Enterprise
Systems: Theory, Architecture, and Methods. CRC Press, Boca Raton, FL, 2010.)
classes that arrive to the queuing network at the first node with different arrival
rates. Each
customer class follows a different route through the network. The customers have
different service times at each node they visit, which depend on the customer class
they belong to.
Finally, after being served, the customers depart from the queuing network (Figure
47.14).
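For a single node of such a network, the standard M/M/1 results give the steady-state performance measures in closed form; the arrival and service rates below are illustrative assumptions, not values from Figure 47.14:

```python
def mm1_metrics(lam, mu):
    """Steady-state measures of an M/M/1 queue with Poisson arrivals
    (rate lam) and exponential service (rate mu); requires lam < mu."""
    if lam >= mu:
        raise ValueError("queue is unstable unless lam < mu")
    rho = lam / mu               # server utilization
    L = rho / (1 - rho)          # mean number of customers in the system
    Lq = rho ** 2 / (1 - rho)    # mean number waiting in the queue
    W = 1 / (mu - lam)           # mean time in the system
    Wq = rho / (mu - lam)        # mean waiting time in the queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# e.g., 4 arrivals per hour served at 5 per hour: rho = 0.8, L = 4, W = 1 h
m = mm1_metrics(lam=4.0, mu=5.0)
print(m)
```

Networks of such nodes, as in the figure, are typically analyzed node by node (e.g., via Jackson-network decomposition) or by simulation when customer classes interact.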
y = a + bx (31.1)
where
x = independent variable
y = dependent variable
x̄ = average of the independent variable
ȳ = average of the dependent variable
The least-squares expressions used to estimate a and b in Equation 31.1 are

b = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)²
a = ȳ − bx̄

where the correlation coefficient r is

r = Σ(xᵢ − x̄)(yᵢ − ȳ) / √[Σ(xᵢ − x̄)² Σ(yᵢ − ȳ)²]
A positive value of r indicates that the independent and the dependent variables
increase together. When r is negative, one variable decreases as the other
increases.
If there is no relationship between these variables, r will be zero.
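As a small sketch, the standard least-squares estimates of a and b in Equation 31.1 and the correlation coefficient r can be computed directly (the data points below are made up for illustration):

```python
import math

def fit_line(xs, ys):
    """Least-squares estimates of a and b in y = a + b*x, plus the
    correlation coefficient r."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    b = sxy / sxx
    a = ybar - b * xbar
    r = sxy / math.sqrt(sxx * syy)
    return a, b, r

# Perfectly linear data should recover the line exactly, with r = 1.
a, b, r = fit_line([1, 2, 3, 4], [3, 5, 7, 9])   # data lie on y = 1 + 2x
print(a, b, r)
```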
The accuracy of the exponential costing method depends largely on the similarity
between the two projects and the accuracy of the cost-exponent factor. Generally,
the error ranges from ±10% to ±30% of the actual cost.
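A sketch of the exponential costing rule itself, assuming the common form C2 = C1 (S2/S1)^x; the exponent 0.6 used below is the widely quoted "six-tenths rule" and is an assumption for illustration, not a value given in the text:

```python
def exponential_cost(known_cost, known_size, new_size, exponent=0.6):
    """Estimate the cost of a new project from a similar known project:
    C2 = C1 * (S2 / S1) ** x, where x is the cost-exponent factor."""
    return known_cost * (new_size / known_size) ** exponent

# Doubling plant capacity with x = 0.6 raises cost by roughly 52%,
# far less than the doubling a purely linear scale-up would predict.
est = exponential_cost(known_cost=1_000_000, known_size=100, new_size=200)
print(round(est))
```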
31.2.3.5 Learning curves
In repetitive operations involving direct labor, the average time to produce an
item or provide a service is typically found to decrease over time as workers learn
their tasks better. As a result, cumulative average and unit times required to
complete a task will drop considerably as output increases.
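The standard log-linear learning-curve model expresses this drop: with learning rate s, each doubling of cumulative output multiplies the unit time by s, so the time for unit n is T_n = T_1 * n^b with b = log(s)/log(2). The 80% rate and times below are illustrative assumptions:

```python
import math

def unit_time(t_first, n, learning_rate=0.80):
    """Time to produce unit n under the log-linear learning curve:
    each doubling of cumulative output multiplies unit time by
    learning_rate (b is a negative exponent for rates below 1)."""
    b = math.log(learning_rate) / math.log(2)
    return t_first * n ** b

# With an 80% curve and a 100-hour first unit: unit 2 takes 80 h,
# unit 4 takes 64 h, unit 8 takes 51.2 h.
times = [unit_time(100.0, n) for n in (1, 2, 4, 8)]
print([round(t, 1) for t in times])
```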
Let
DECISION ASSESSMENT
41.1 Introduction
Maintenance is a combination of all technical, administrative, and managerial
actions
during the life cycle of an item intended to keep it in or restore it to a state in
which it can
perform the required function (Komonen, 2002) (see Figure 41.1). Traditionally,
maintenance has been perceived as an expense account, with performance measures
developed to track direct costs or surrogates such as the headcount of tradesmen
and the total duration of forced outages during a specified period. Fortunately,
this perception is changing (Tsang, 1998; Kumar and Liyanage, 2001, 2002a;
Kutucuoglu et al., 2001b). In the 21st
century, plant maintenance has evolved as a major area in the business environment
and
is viewed as a value-adding function instead of a "bottomless pit of expenses" (Kaplan
(Kaplan
and Norton, 1992). The role of plant maintenance in the success of business is
crucial in
view of the increased international competition, market globalization, and the
demand for
profitability and performance by the stakeholders in business (Labib et al., 1998;
Liyanage
and Kumar, 2001b; Al-Najjar and Alsyouf, 2004). Today, maintenance is acknowledged
as
a major contributor to the performance and profitability of business organizations
(Arts
et al., 1998; Tsang et al., 1999; Oke, 2005). Maintenance managers therefore
explore every
opportunity to improve on profitability and performance and achieve cost savings
for
the organization (Al-Najjar and Alsyouf, 2004). A major concern has been the issue
of
what organizational structure ought to be adopted for the maintenance system:
should
it be centralized or decentralized? Such a policy should offer significant savings
as well
(HajShirmohammadi and Wedley, 2004).
The maintenance organization is confronted with a wide range of challenges that
include quality improvement, reduced lead times, setup time and cost reductions,
capacity expansion, managing complex technology and innovation, improving the
reliability of systems, and related environmental issues (Kaplan and Norton, 1992; Dwight,
1994,
1999; De Groote, 1995; Cooke and Paulsen, 1997; Duffua and Raouff, 1997; Chan et
al.,
2001). However, trends suggest that many maintenance organizations are adopting
total
productive maintenance, which is aimed at the total participation of plant
personnel in
maintenance decisions and cost savings (Nakajima, 1988, 1989; HajShirmohammadi and
Wedley, 2004). The challenges of intense international competition and market
globalization have placed enormous pressure on maintenance systems to improve
efficiency and reduce operational costs (Hemu, 2000). These challenges have forced
maintenance managers to adopt tools, methods, and concepts that could stimulate
performance growth and minimize errors, and to utilize resources effectively toward
making the organization a "world-class manufacturing" or a "high-performance manufacturing"
plant.
Maintenance information is an essential resource for setting and meeting
maintenance management objectives and plays a vital role within and outside the
maintenance organization. The need for adequate maintenance information is
motivated by the following four factors: (1) an increasing amount of information is
available, and data and information are required on hand and accessible in real
time for decision-making (Labib, 2004); (2) data lifetime is diminishing as a
result of shop-floor realities (Labib, 2004); (3) the way data is being accessed
has changed (Labib, 2004); and (4) it helps in building knowledge and in measuring
the overall performance of the organization. The computerized maintenance
management system (CMMS) is now a central component of
many
companies� maintenance departments, and it offers support on a variety of levels in
the
organizational hierarchy (Labib, 2004). Indeed, a CMMS is a means of achieving
world-class maintenance, as it offers a platform for decision analysis and thereby
acts as a guide
to management (Labib, 1998; Fernandez et al., 2003).
Consequently, maintenance information systems must contain modules that can
provide management with value-added information necessary for decision support
and decision-making. Computerized maintenance management systems are computer-based
software programs used to control work activities and resources, as well as to
monitor and report work execution. They are tools for data capture and data
analysis. However, they should also offer the capability to provide maintenance
management with a facility for decision analysis (Bamber
(Bamber
et al., 2003).
many firms to reassess how they organize and manage their resources (Blanchard,
1997).
This is particularly important where information exchange is vital. Examples of
such systems are multinationals, which have manufacturing plants scattered all over
the world, where the exchange of resources takes place.
With the users connected to the Internet, information exchange is possible since
plants can communicate with one another through high-speed data transmission paths.
With the emergence of the Internet and web data management, significant
improvements have been made by the maintenance organization in terms of updating
information on equipment management, equipment manufacturer directory services, and
security systems in maintenance data. The web data system has tremendously assisted
the maintenance organization in sourcing the highly skilled manpower needed for
maintenance management, training and retraining the maintenance workforce, locating
equipment specialists and manufacturers, and exchanging maintenance professionals
across the globe.
41.2 The importance of maintenance
Industrial maintenance has two essential objectives: (1) a high availability of
production equipment and (2) low maintenance costs (Komonen, 2002). However, a
strong factor militating against the achievement of these objectives is the nature
and intensity of equipment failures in plants. System failure can lead to costly
stoppages of an organization's operation, which may result in low utilization of
human, material, and equipment resources. Thus, maintenance is not only important
for these reasons, but its successful implementation also leads to maximum capacity
utilization, improved product quality, customer satisfaction, and adequate
equipment life span, among other benefits. Equipment does not have to finally break
down before maintenance is carried out. Implementing a good maintenance policy
prevents system failures and leads to high productivity (Vineyard et al., 2000).
41.3 Maintenance categories
In practice, failure of equipment could be partial or total. Even with the current
sophistication of equipment automation, failure is still a common phenomenon that
warrants serious consideration of standard maintenance practices. Nonetheless, the
basic categories of maintenance necessary for the control of equipment failures are
traditionally divided into three main groups: (1) preventive maintenance (PM)
(condition monitoring, condition-based actions, and scheduled maintenance); (2)
corrective/breakdown maintenance (BM); and (3) improvement maintenance (Komonen,
2002). Breakdown maintenance is repair work carried out on equipment only when a
failure has occurred. Preventive maintenance is carried out to keep equipment in
good working state. It is deployed before failure occurs, thereby reducing the
likelihood of failure. It prevents breakdown through repeated inspection and
servicing.
each objective function with a suitable weight and then by adding them together.
One
well-known approach in this category is goal programming, which requires the
decision
maker to set goals for each desired objective. A preferred solution is then defined
as the
one that minimizes deviations from the set goals.
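A minimal goal-programming sketch (the goals, weights, and alternatives below are hypothetical) scores each alternative by its weighted deviation from the set goals and prefers the one with the smallest total deviation:

```python
def weighted_deviation(outcomes, goals, weights):
    """Sum of weighted absolute deviations of an alternative's
    outcomes from the decision maker's goals."""
    return sum(w * abs(o - g) for o, g, w in zip(outcomes, goals, weights))

goals   = [90, 10]       # e.g., target quality score 90, target cost 10
weights = [1.0, 2.0]     # cost deviations weighted twice as heavily
alternatives = {
    "A": [85, 12],       # quality 85, cost 12
    "B": [92, 15],
    "C": [88, 11],
}

# The preferred solution minimizes total deviation from the goals.
best = min(alternatives,
           key=lambda k: weighted_deviation(alternatives[k], goals, weights))
print(best)
```

Full goal-programming formulations split each deviation into over- and under-achievement variables and solve a linear program; the enumeration above is a simplified stand-in for a small, discrete set of alternatives.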
35.1.3 Group decision making
Group decision making has gained prominence owing to the complexity of modern-day
decisions, which involve complex social, economic, technological, political, and
many other critical domains. Oftentimes, a group of experts needs to make a
decision that represents the individual opinions and yet is mutually agreeable.
Such group decisions usually involve multiple criteria accompanied by multiple
attributes. Clearly, the complexity of MCDM encourages group decision making as a
way to combine interdisciplinary skills and improve management of the decision
process. The theory and practice of multiple-objective and multiple-attribute
decision making for a single decision maker has been studied extensively in the
past 30 years. However, extending this methodology to group decision making is not
so simple. This is due to the complexity introduced by the conflicting views of the
decision makers and their varying significance or weight in the decision process.
Moreover, the problem of group decision making is complicated by several additional
factors. Usually, one expects such a decision model to follow a precise
mathematical model. Such a model can enforce consistency and precision in the
decision generated. Human decision makers, however, are quite reluctant to follow a
decision generated by a formal model unless they are confident of the model
assumptions and methods. Oftentimes, the input to such a decision model cannot be
precisely quantified, conflicting with the perceived accuracy of the model.
Intuitively, the act of optimizing the group decision, as a mathematical model
would do, is contradictory to the concept of consensus and group agreement.
The benefits from group decision making, however, are quite numerous, justifying
the
additional efforts required. Some of these benefits are as follows:
1. Better learning. Groups are better than individuals at understanding problems.
2. Accountability. People are held accountable for decisions in which they
participate.
3. Fact screening. Groups are better than individuals at catching errors.
4. More knowledge. A group has more information (knowledge) than any one member.
Groups can combine this knowledge to create new knowledge. More and more creative
alternatives for problem solving can be generated, and better solutions can be
derived (e.g., by group stimulation).
5. Synergy. The problem-solving process may generate better synergy and
communication among the parties involved.
6. Creativity. Working in a group may stimulate the creativity of the participants.
7. Commitment. Many times, group members have their egos embedded in the decision,
and so they will be more committed to the solution.
8. Risk propensity is balanced. Groups moderate high-risk takers and encourage
conservatives.
Generally, there are three basic approaches toward group decision making (Hwang
and Lin, 1987):
1. Game theory. This approach implies a conflict or competition between the
decision
makers.
2. Social choice theory. This approach represents voting mechanisms that allow the
majority to express a choice.
3. Group decision using expert judgment. This approach deals with integrating the
preferences of several experts into a coherent and just group position.
35.1.3.1 Game theory
Game theory can be defined as the study of mathematical models of conflict and
cooperation between intelligent and rational decision makers (Myerson, 1991).
Modern game theory gained prominence after the publication of Von Neumann's work in
1928 and in 1944 (Von Neumann and Morgenstern, 1944). Game theory became an
important field during World War II and the ensuing Cold War, culminating with the
famous Nash equilibrium. The objective of games as a decision tool is to maximize
some utility function for all decision makers under uncertainty. Because this
technique does not explicitly accommodate multiple criteria for the selection of
alternatives, it will not be considered in this review.
35.1.3.2 Social choice theory
The social choice theory deals with MCDM since this methodology considers votes of
many individuals as the instrument for choosing a preferred candidate or
alternative. The
candidates can exhibit many characteristics such as honesty, wisdom, and experience
as
the criteria evaluated. The complexity of this seemingly simple problem of voting
can be
illustrated by the following example: a committee of nine people needs to select an
office
holder from three candidates, a, b, and c. The votes that rank the candidates are
as follows:
Three votes have the order a, b, c.
Three votes agree on the order b, c, a.
Two votes have the preference of c, b, a.
One vote prefers the order c, a, b.
After quickly observing the results, one can realize that each candidate received
three
votes as the preferred option, resulting in an inconclusive choice.
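Tallying the first-place preferences from these nine ballots confirms the three-way tie:

```python
from collections import Counter

# The nine ballots from the example, each listing candidates best-to-worst.
ballots = (
    [("a", "b", "c")] * 3 +
    [("b", "c", "a")] * 3 +
    [("c", "b", "a")] * 2 +
    [("c", "a", "b")] * 1
)

# Plurality count: look only at each voter's top choice.
first_place = Counter(ballot[0] for ballot in ballots)
print(dict(first_place))  # every candidate receives exactly three first-place votes
```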
The theory of social choice has been studied extensively, with notable results such
as Arrow's impossibility theorem (Arrow, 1963; Arrow and Raynaud, 1986). This type
of decision making is based on the ranking of choices by the individual voters,
whereas the scores that each decision maker gives to each criterion of each
alternative are not considered explicitly. Therefore, this methodology is less
suitable for MCDM, in which each criterion of each alternative is carefully weighed
by the decision makers.
Many decision situations are complex and poorly understood. No one person has all
the
information to make all decisions accurately. As a result, crucial decisions are
made by a
group of people. Some organizations use outside consultants with appropriate
expertise
to make recommendations for important decisions. Other organizations set up their
own
internal consulting groups without having to go outside the organization. Decisions
can
be made through linear responsibility, in which case one person makes the final
decision
based on inputs from other people. Decisions can also be made through shared
responsibility, in which case a group of people share the responsibility for making
joint decisions.
The major advantages of group decision-making are:
1. Ability to share experience, knowledge, and resources. Many heads are better
than one. A
group will possess greater collective ability to solve a given decision problem.
2. Increased credibility. Decisions made by a group of people often carry more
weight in
an organization.
3. Improved morale. Personnel morale can be positively influenced because many
people
have the opportunity to participate in the decision-making process.
4. Better rationalization. The opportunity to observe other people�s views can lead
to an
improvement in an individual�s reasoning process.
Some disadvantages of group decision-making are:
1. Difficulty in arriving at a decision. Individuals may have conflicting
objectives.
2. Reluctance of some individuals in implementing the decisions.
3. Potential for conflicts among members of the decision group.
4. Loss of productive employee time.
33.11.1 Brainstorming
Brainstorming is a way of generating many new ideas. In brainstorming, the decision
group comes together to discuss alternate ways of solving a problem. The members of
the brainstorming group may be from different departments, may have different
backgrounds and training, and may not even know one another. The diversity of the
participants helps create a stimulating environment for generating different ideas
from different viewpoints. The technique encourages free outward expression of new
ideas no matter how far-fetched the ideas might appear. No criticism of any new
idea is permitted during the brainstorming session. A major concern in brainstorming is that
extroverts may
take control of the discussions. For this reason, an experienced and respected
individual
should manage the brainstorming discussions. The group leader establishes the
procedure
for proposing ideas, keeps the discussions in line with the group�s mission,
discourages
disruptive statements, and encourages the participation of all members.
After the group runs out of ideas, open discussions are held to weed out the
unsuitable
ones. It is expected that even the rejected ideas may stimulate the generation of
other ideas,
which may eventually lead to other favored ideas. Guidelines for improving
brainstorming sessions are as follows:
� Focus on a specific problem.
� Keep ideas relevant to the intended decision.
� Be receptive to all new ideas.
� Evaluate the ideas on a relative basis after exhausting new ideas.
� Maintain an atmosphere conducive to cooperative discussions.
� Maintain a record of the ideas generated.
33.11.2 Delphi method
The traditional approach to group decision-making is to obtain the opinion of
experienced
participants through open discussions. An attempt is made to reach a consensus
among
the participants. However, open group discussions are often biased because of the
influence or subtle intimidation of dominant individuals. Even when the threat of a
dominant individual is not present, opinions may still be swayed by group pressure.
This is called the "bandwagon effect" of group decision-making.
The Delphi method, developed in 1964, attempts to overcome these difficulties
by
requiring individuals to present their opinions anonymously through an
intermediary.
The method differs from the other interactive group methods because it eliminates
face-to-face confrontations. It was originally developed for forecasting
applications, but it has
been modified in various ways for application to different types of decision-
making. The
method can be quite useful for project management decisions. It is particularly
effective
when decisions must be based on a broad set of factors. The Delphi method is
normally
implemented as follows:
1. Problem definition. A decision problem that is considered significant is
identified and
clearly described.
2. Group selection. An appropriate group of experts or experienced individuals is
formed
to address the particular decision problem. Both internal and external experts may
be involved in the Delphi process. A leading individual is appointed to serve as
the
administrator of the decision process. The group may operate through the mail or
gather together in a room. In either case, all opinions are expressed anonymously
on
paper. If the group meets in the same room, care should be taken to provide enough
room so that each member does not have the feeling that someone may accidentally
or deliberately observe their responses.
3. Initial opinion poll. The technique is initiated by describing the problem to be
addressed
in unambiguous terms. The group members are requested to submit a list of major
areas of concern in their specialty areas as they relate to the decision problem.
4. Questionnaire design and distribution. Questionnaires are prepared to address
the
areas of concern related to the decision problem. The written responses to the
questionnaires are collected and organized by the administrator. The administrator
aggregates the responses in a statistical format. For example, the average, mode,
and median of the responses may be computed. This analysis is distributed to the
decision group. Each member can then see how his or her responses compare with the
anonymous views of the other members.
available data.
6. It is often difficult to get all members of a decision group together at the
same time.
Despite the noted disadvantages, group decision-making definitely has many
advantages that may nullify its shortcomings. The advantages presented earlier will
have varying levels of effect from one organization to another. The Triple C
principle presented in Chapter 2 may also be used to improve the success of
decision teams. Teamwork can be enhanced in group decision-making by adhering to
the following guidelines:
1. Get a willing group of people together.
2. Set an achievable goal for the group.
3. Determine the limitations of the group.
4. Develop a set of guiding rules for the group.
5. Create an atmosphere conducive to group synergism.
6. Identify the questions to be addressed in advance.
7. Plan to address only one topic per meeting.
For major decisions and long-term group activities, arrange for team training,
which
allows the group to learn the decision rules and responsibilities together. The
steps for the
nominal group technique are:
1. Silently generate ideas, in writing.
2. Record ideas without discussion.
3. Conduct group discussion for clarification of meaning, not argument.
4. Vote to establish the priority or rank of each item.
5. Discuss vote.
6. Cast final vote.
33.11.4 Interviews, surveys, and questionnaires
Interviews, surveys, and questionnaires are important information-gathering
techniques.
They also foster cooperative working relationships. They encourage direct
participation and
inputs into project decision-making processes. They provide an opportunity for
employees
at the lower levels of an organization to contribute ideas and inputs for decision-
making. The
greater the number of people involved in the interviews, surveys, and
questionnaires, the
more valid the final decision. The following guidelines are useful for conducting
interviews,
surveys, and questionnaires to collect data and information for project decisions:
1. Collect and organize background information and supporting documents on the
items to be covered by the interview, survey, or questionnaire.
2. Outline the items to be covered and list the major questions to be asked.
3. Use a suitable medium of interaction and communication: telephone, fax,
electronic
mail, face-to-face, observation, meeting venue, poster, or memo.
4. Tell the respondent the purpose of the interview, survey, or questionnaire, and
indicate how long it will take.
5. Use open-ended questions that stimulate ideas from the respondents.
6. Minimize the use of yes or no types of questions.
7. Encourage expressive statements that indicate the respondent�s views.
8. Use the who, what, where, when, why, and how approach to elicit specific
information.
9. Thank the respondents for their participation.
10. Let the respondents know the outcome of the exercise.
33.11.5 Multivote
Multivoting is a series of votes used to arrive at a group decision. It can be used
to assign
priorities to a list of items. It can be used at team meetings after a
brainstorming session
has generated a long list of items. Multivoting helps reduce such long lists to a
few items,
usually three to five. The steps for multivoting are:
1. Take a first vote. Each person votes as many times as desired, but only once per
item.
2. Circle the items receiving a relatively higher number of votes (i.e., majority
vote) than
the other items.
3. Take a second vote. Each person votes for a number of items equal to one half
the
total number of items circled in step 2. Only one vote per item is permitted.
4. Repeat steps 2 and 3 until the list is reduced to three to five items, depending
on the
needs of the group. It is not recommended to multivote down to only one item.
5. Perform further analysis of the items selected in step 4, if needed.
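The reduction loop in steps 1 through 4 can be sketched as follows; the vote tallies are hypothetical, and "a relatively higher number of votes" is simplified here to "above the average count":

```python
def reduce_list(vote_counts, target=5):
    """Repeatedly keep only items whose vote count exceeds the current
    mean, a simple stand-in for 'circle the items with relatively more
    votes', until the list holds `target` items or fewer."""
    items = dict(vote_counts)
    while len(items) > target:
        mean = sum(items.values()) / len(items)
        kept = {k: v for k, v in items.items() if v > mean}
        if not kept or len(kept) == len(items):
            break  # no further separation is possible
        items = kept
    return sorted(items, key=items.get, reverse=True)

# Hypothetical first-vote tallies from a brainstormed list of ideas.
tallies = {"idea1": 9, "idea2": 2, "idea3": 7, "idea4": 1,
           "idea5": 6, "idea6": 3, "idea7": 8, "idea8": 2}
shortlist = reduce_list(tallies, target=4)
print(shortlist)
```

In practice, each round involves fresh votes by the group rather than reusing the first tallies; the loop above only illustrates the narrowing mechanism.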