Manual Book 2


Why Systems Engineering?

Ex. 1: Air bags, a safety device that appeared in automobiles in the early 1990s, became the cause of death for a noticeable number of individuals. There were severe flaws in the design, the testing, and the deployment conditions envisaged.
Ex. 2: Ariane 5, the launch vehicle developed by the European Space Agency, was first launched on June 4, 1996, carrying four satellites. At 37 seconds into flight, Ariane 5 veered off course and disintegrated shortly thereafter. A major flaw in the communication interface resulted in this catastrophe.
Ex. 3: The Space Shuttle Columbia disintegrated on February 1, 2003, killing its crew members.
System Engineering
Systems Engineering is a top-down, life-cycle approach to the design, development,
and deployment of large-scale systems, processes, or operations to meet the
effective needs of users and stakeholders in a cost-effective, high-quality way.
An organised and systematic way of design
Considers all the factors involved in the design
Integrates all the disciplines and specialty groups into a team effort
Ensures that the business and customer needs of all stakeholders are addressed and that the resulting system meets the user needs
The process of devising a system, component, or process to meet desired needs. It
is a decision-making process (often iterative) in which the basic sciences,
mathematics, and engineering sciences are applied to convert resources optimally to
meet a stated objective.
– Accreditation Board for Engineering and Technology

Then, What is a System?


A group of interacting, interrelated, or interdependent elements forming a complex
whole.
A functionally related group of elements; for example, the human body regarded as a functional physiological unit.
An organism as a whole, especially with regard to its vital processes or
functions.
A group of physiologically or anatomically complementary organs or parts: the
nervous system; the skeletal system.
A group of interacting mechanical or electrical components.
A network of related computer software, hardware, and data transmission devices.

A system is commonly defined to be a collection of hardware, software, people, facilities, and procedures organised to accomplish some common objectives.
Definition of a System
(NASA Systems Engineering Handbook)
A system is a set of interrelated components which interact with one another in
an organized fashion toward a common purpose.
System components may be quite diverse
Persons and Organizations
Software and Data
Equipment and Hardware
Facilities and Materials
Services and Techniques
Systems Engineering is a robust approach to the design, creation,
and operation of systems.
Systems Engineering consists of
Identification and quantification of system goals
Creation of alternative system design concepts
Performance of design trades
Selection and implementation of the best design (balanced and robust)
Verification that the design is actually built and properly integrated
in accordance with specifications
Assessment of how well the system meets the goals
What is Systems Engineering?
Systems Engineering is a top-down, life-cycle approach to the
design, development, and deployment of large-scale systems, processes, or
operations
to meet the effective needs of users and stakeholders in a cost-effective, high-quality way.
Systems Engineering typically involves an interdisciplinary approach and means to
enable the realization of successful systems.
It focuses on defining customer needs and required functionality early in the
development cycle, documenting requirements, then proceeding with design synthesis
and system validation while considering the complete problem.

Role of Systems Engineering in Product Development


Building Blocks of Systems Engineering
Math & Physical Sciences
Qualitative modeling
Quantitative modeling
Physical modeling
Theory of Constraints
Physical Laws
Management Sciences
Economics
Organizational Design
Business Decision Analysis
Operations Research

Social Sciences
Multi-disciplinary Teamwork
Organizational Behavior
Leadership
Body of Knowledge
Problem definition
System boundaries
Objectives hierarchy
Concept of operations
Originating requirements
Concurrent engineering
System life cycle phases
Integration/Qualification
Architectures
Functional / Logical
Physical / Operational
Interface
Trades
Concept Level
Risk Management
Key Performance Parameters

Achieving balance between inherent conflicts


System functionality and performance
Development cost and recurring cost
Development schedule (Time to Market)
Development risk (Probability of Success)
Business viability and success
System Optimization
Subsystems often suboptimal to achieve best balance at system level

Ultimate system purpose must prevail against conflicting considerations
Long-term considerations (e.g., disposal) may drive technical
decisions
Customer Interface
Often must act as "honest broker"
Carries burden of educating customer on hard choices
Must think ahead to the next customer and next application
Must "challenge" all requirements
Systems Engineering Heritage
Water distribution systems in Mesopotamia 4000 BC
Irrigation systems in Egypt 3300 BC
Urban systems such as Athens, Greece 400 BC
Roman highway systems 300 BC
Water transportation systems like Erie Canal 1800s
Telephone systems 1877
Electrical power distribution systems 1880
Modern Origins of the Systems Approach
British multi-disciplined team was formed (1937) to analyze Air Defense System
Bell Labs supported Nike development (1939-1945)
SAGE Air defense system defined and managed by MIT (1951-1980)

ATLAS Intercontinental Ballistic Missile Program managed by systems contractor, Ramo-Wooldridge Corp (1954-1964)
Spread of the Systems Approach
Early Proponents
Research and Development Corporation (RAND)
Robert McNamara (Secretary of Defense)
Jay Forrester (Modeling Urban Systems at MIT)
Growth in systems engineering citations (Engineering Index)
Nil in 1964
One Page in 1966
Eight Pages in 1969
Nine Universities Offered Systems Engineering Programs in 1964
Teaching SE Included in 1971 DoD Acquisition Reforms
Study group chaired by David Packard, co-founder of Hewlett Packard
Recommended formal training for Department of Defense (DoD) program managers
Defense Systems Management College (DSMC) Established in 1971
DSMC charged with teaching program managers how to direct complex projects
Systems Engineering a core curriculum course
Two Basic Types of Systems
Precedented Systems
Extant, pre-existing, built, defined

Defined interfaces
Solution architecture
Product breakdown structure
Possess a CONOPS & a System CONTEXT
Have at least ONE mission event time line defined
Unprecedented Systems
Conceptual (at best, may not even be conceived)
Need can be expressed
Must develop CONOPS, define mission(s), & Context
Describe FUNCTIONS, INTERFACES, and TESTING
[Figure: generic system life cycle stages, including concept, definition, technology development, production, deployment, operation, integration, and disposal]
System Engineering Knowledge

The diagram is divided into five sections, each describing how systems knowledge is
treated in the SEBoK.

The Systems Fundamentals Knowledge Area considers the question "What is a System?"
It explores the wide range of system definitions and considers open systems, system
types, groupings of systems, complexity, and emergence. All of these ideas are
particularly relevant to engineered systems and to the groupings of such systems
associated with the systems approach applied to engineered systems (i.e. product
system, service system, enterprise system and system of systems).
The Systems Approach Applied to Engineered Systems Knowledge Area defines a
structured approach to problem/opportunity discovery, exploration, and resolution,
that can be applied to all engineered systems. The approach is based on systems
thinking and utilizes appropriate elements of system approaches and
representations. This KA provides principles that map directly to SE practice.
The Systems Science Knowledge Area presents some influential movements in systems
science, including the chronological development of systems knowledge and
underlying
theories behind some of the approaches taken in applying systems science to real
problems.
The Systems Thinking Knowledge Area describes key concepts, principles and patterns
shared across systems research and practice.
The Representing Systems with Models Knowledge Area considers the key role that
abstract models play in both the development of system theories and the application
of systems approaches.
Systems thinking is a fundamental paradigm describing a way of looking at the
world. People who think and act in a systems way are essential to the success of
both the research and practice of system disciplines. In particular, individuals
who have an awareness of and/or active involvements in both research and practice
of system disciplines are needed to help integrate these closely related
activities.
The knowledge presented in this part of the SEBoK has been organized into these
areas to facilitate understanding; the intention is to present a rounded picture of
research and practice based on system knowledge. These knowledge areas should be
seen together as a "system of ideas" for connecting research, understanding, and
practice, based on system knowledge which underpins a wide range of scientific,
management, and engineering disciplines and applies to all types of domains.

System Engineering Life Cycle


There are a large number of life cycle process models. These models fall into three
major categories:
primarily pre-specified and sequential processes;
primarily evolutionary and concurrent processes (e.g., the rational unified process
and various forms of the Vee and spiral models); and
primarily interpersonal and unconstrained processes (e.g., agile development,
Scrum, extreme programming (XP), the dynamic system development method, and
innovation-based processes).
This section focuses on the Vee Model as the primary example of pre-specified
and sequential processes. It is important to note that the Vee model, and
variations of the Vee model, all address the same basic set of systems engineering
(SE) activities.
The key difference between these models is the way in which they group and
represent the aforementioned SE activities.

A Primarily Pre-specified and Sequential Process Model: The Vee Model


The sequential version of the Vee Model is shown in Figure 1. Its core involves a
sequential progression of plans, specifications, and products that are baselined
and put under configuration management. The vertical, two-headed arrow enables
projects to perform concurrent opportunity and risk analyses, as well as continuous
in-process validation.
The Vee Model encompasses the first three life cycle stages listed in the "Generic
Life Cycle Stages" table of the INCOSE Systems Engineering Handbook: exploratory
research, concept, and development (INCOSE 2012).

The Vee Model endorses the INCOSE Systems Engineering Handbook (INCOSE 2012)
definition of life cycle stages and their purposes or activities, as shown in
Figure 2 below.

The INCOSE Systems Engineering Handbook 3.2.2 contains a more detailed version of
the Vee diagram which incorporates life cycle activities into the more generic Vee
model.
A similar diagram, developed at the U.S. Defense Acquisition University (DAU), can
be seen in Figure 3 below.
Figure 3. The Vee Activity Diagram (Prosnik 2010). Released by the
Defense Acquisition University (DAU)/U.S. Department of Defense (DoD).

Application of the Vee Model


Lawson (Lawson 2010) elaborates on the activities in each life cycle stage and
notes that it is useful to consider the structure of a generic life cycle stage
model for any type of system-of-interest (SoI) as portrayed in Figure 4. This (T)
model indicates that one or more definition stages precede a production stage(s)
where the implementation (acquisition, provisioning, or development) of two or more
system elements has been accomplished.

Figure 5 shows the generic life cycle stages for a variety of stakeholders, from a
standards organization (ISO/IEC) to commercial and government organizations.
Although these stages differ in detail, they all have a similar sequential format
that emphasizes the core activities as noted in Figure 2 (definition, production,
and utilization/retirement).
Figure 5. Comparisons of Life Cycle Models (Forsberg, Mooz, and Cotterman 2005). Reprinted with permission of John Wiley & Sons. All other rights are reserved by the copyright owner.

It is important to note that many of the activities throughout the life cycle are
iterated. This is an example of recursion.

Fundamentals of Life Cycle Stages and Program Management Phase

The term stage refers to the different states of a system during its life cycle; some stages may overlap in time, such as the utilization stage and the support stage. The term "stage" is used in ISO/IEC/IEEE 15288.

The term phase refers to the different steps of the program that support and manage the life of the system; the phases usually do not overlap. The term "phase" is used in many well-established models as an equivalent to the term "stage."
Program management employs phases, milestones, and decision gates which are used to
assess the evolution of a system through its various stages. The stages contain the
activities performed to achieve goals and serve to control and manage the sequence
of stages and the transitions between each stage. For each project, it is essential
to define and publish the terms and related definitions used on respective projects
to minimize confusion.
A typical program is composed of the following phases:

The pre-study phase, which identifies potential opportunities to address user needs
with new solutions that make business sense.
The feasibility phase consists of studying the feasibility of alternative concepts
to reach a second decision gate before initiating the execution stage. During the
feasibility phase, stakeholders' requirements and system requirements are
identified, viable solutions are identified and studied, and virtual prototypes can
be implemented. During this phase, the decision to move forward is
based on:
whether a concept is feasible and is considered able to counter an identified
threat or exploit an opportunity;
whether a concept is sufficiently mature to warrant continued development of a new
product or line of products; and
whether to approve a proposal generated in response to a request for proposal.

The execution phase includes activities related to four stages of the system life
cycle: development, production, utilization, and support. Typically, there are two
decision gates and two milestones associated with execution activities. The first
milestone provides the opportunity for management to review the plans for execution
before giving the go-ahead. The second milestone provides the opportunity to review
progress before the decision is made to initiate production. The decision gates
during execution can be used to determine whether to produce the developed Solution
and whether to improve it or retire it.
These program management views apply not only to the Solution, but also to its
elements and structure.

Life Cycle Stages


Variations of the Vee model deal with the same general stages of a life cycle:

New projects typically begin with an exploratory research phase which generally
includes the activities of concept definition, specifically the topics of business
or mission analysis and the understanding of stakeholder needs and requirements.
These mature as the project goes from the exploratory stage to the concept stage to
the development stage.
The production phase includes the activities of system definition and system
realization, as well as the development of the system requirements and architecture
through verification and validation.
The utilization phase includes the activities of system deployment and system
operation.
The support phase includes the activities of system maintenance, logistics, and
product and service life management, which may include activities such as service
life extension or capability updates, upgrades, and modernization.
The retirement phase includes the activities of disposal and retirement, though in
some models, activities such as service life extension or capability updates,
upgrades, and modernization are grouped into the "retirement" phase.
Additional information on each of these stages can be found in the sections below
(see links to additional Part 3 articles above for further detail). It is important
to note that these life cycle stages, and the activities in each stage, are
supported by a set of systems engineering management processes.
Exploratory Research Stage
User requirements analysis and agreement is part of the exploratory research stage
and is critical to the development of successful systems. Without proper
understanding of the user needs, any system runs the risk of being built to solve
the wrong problems. The first step in the exploratory research phase is to define
the user (and stakeholder) requirements and constraints. A key part of this process
is to establish the feasibility of meeting the user requirements, including
technology readiness assessment. As with many SE activities this is often done
iteratively, and stakeholder needs and requirements are revisited as new
information becomes available.
A recent study by the National Research Council (National Research Council 2008)
focused on reducing the development time for US Air Force projects. The report
notes that, "simply stated, systems engineering is the translation of a user's needs into a definition of a system and its architecture through an iterative process that results in an effective system design." The iterative involvement with stakeholders is critical to project success.
Except for the first and last decision gates of a project, the gates are performed
simultaneously. See Figure 6 below.
Concept Stage
During the concept stage, alternate concepts are created to determine the best
approach to meet stakeholder needs. By envisioning alternatives and creating
models, including appropriate prototypes, stakeholder needs will be clarified and
the driving issues highlighted. This may lead to an incremental or evolutionary
approach to system development. Several different concepts may be explored in
parallel.
Development Stage
The selected concept(s) identified in the concept stage are elaborated in detail
down to the lowest level to produce the solution that meets the stakeholder
requirements. Throughout this stage, it is vital to continue with user involvement
through in-process validation (the upward arrow on the Vee models). On hardware,
this is done with frequent program reviews and a customer resident
representative(s) (if appropriate). In agile development, the practice is to have
the customer representative integrated into the development team.
Production Stage
The production stage is where the SoI is built or manufactured. Product
modifications may be required to resolve production problems, to reduce production
costs, or to enhance product or SoI capabilities. Any of these modifications may
influence system requirements and may require system re-qualification, re-
verification, or re-validation. All such changes require SE assessment before
changes are approved.
Utilization Stage
A significant aspect of product life cycle management is the provisioning of
supporting systems which are vital in sustaining operation of the product. While
the supplied product or service may be seen as the narrow system-of-interest (NSOI)
for an acquirer, the acquirer also must incorporate the supporting systems into a
wider system-of-interest (WSOI). These supporting systems should be seen as system
assets that, when needed, are activated in response to a situation that has emerged
in respect to the operation of the NSOI. The collective name for the set of
supporting systems is the integrated logistics support (ILS) system.

It is vital to have a holistic view when defining, producing, and operating system
products and services. In Figure 7, the relationship between system design and
development and the ILS requirements is portrayed.

The requirements for reliability, which in turn drive the need for maintainability and testability, are key driving factors.
Support Stage
In the support stage, the SoI is provided services that enable continued operation.
Modifications may be proposed to resolve supportability problems, to reduce
operational costs, or to extend the life of a system. These changes require SE
assessment to avoid loss of system capabilities while under operation. The
corresponding technical process is the maintenance process.
Retirement Stage
In the retirement stage, the SoI and its related services are removed from
operation. SE activities in this stage are primarily focused on ensuring that
disposal requirements are satisfied. In fact, planning for disposal is part of the
system definition during the concept stage. Experiences in the 20th century
repeatedly demonstrated the consequences when system retirement and disposal was
not considered from the outset. Early in the 21st century, many countries have
changed their laws to hold the creator of a SoI accountable for proper end-of-life
disposal of the system.

Life Cycle Reviews


To control the progress of a project, different types of reviews are planned. The
most commonly used are listed as follows, although the names are not universal:

The system requirements review (SRR) is planned to verify and validate the set of
system requirements before starting the detailed design activities.

The preliminary design review (PDR) is planned to verify and validate the set of
system requirements, the design artifacts, and justification elements at the end of
the first engineering loop (also known as the "design-to" gate).

The critical design review (CDR) is planned to verify and validate the set of
system requirements, the design artifacts, and justification elements at the end of
the last engineering loop (the "build-to" and "code-to" designs are released after
this review).

The integration, verification, and validation reviews are planned as the components
are assembled into higher level subsystems and elements. A sequence of reviews is
held to ensure that everything integrates properly and that there is objective
evidence that all requirements have been met. There should also be an in-process
validation that the system, as it is evolving, will meet the stakeholders'
requirements (see Figure 7).

The final validation review is carried out at the end of the integration phase.

Other management related reviews can be planned and conducted in order to control
the correct progress of work, based on the type of system and the associated risks.

Logical Steps Of Systems Engineering

Systems engineers use a structured, five-step process to guide projects from conception through development. The Systems Engineering Process Engine (SEPE) results in an efficiently created finished product that satisfies customers and performs as expected. The SEPE can be compared to an air traffic control system, which provides a wide range of information to aircraft, enabling them to successfully navigate complex and dynamic airspace.
Step One: Requirements Analysis and Management
The first step in any successful development project is the collection, refinement,
and organization of design inputs. Requirements Analysis is the art of making sure
design inputs can be translated into verifiable technical requirements.
Effective prioritization of these requirements enables a systems engineer to
formulate contingency plans for addressing risks and taking advantage of
opportunities as they present themselves. Organization of a hierarchical
requirement structure supports the management of complex products across
distributed development teams. Having a systems engineer in charge of design inputs
helps projects move smoothly through each stage of development.
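
To make the idea of a hierarchical requirement structure concrete, the sketch below (Python, with invented requirement IDs and "shall" statements; it is an illustration, not a structure prescribed by the SEPE) shows one way to record parent/child requirements with priorities and to trace every low-level requirement back to the design input it was derived from.

    # Minimal sketch of a hierarchical requirement structure with traceability.
    # The class name, fields, and example requirements are illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Requirement:
        req_id: str          # e.g. "SYS-01"
        text: str            # the verifiable "shall" statement
        priority: int        # 1 = highest, supports contingency planning
        children: List["Requirement"] = field(default_factory=list)

        def derive(self, req_id: str, text: str, priority: int) -> "Requirement":
            """Attach a lower-level (derived) requirement and return it."""
            child = Requirement(req_id, text, priority)
            self.children.append(child)
            return child

    def trace_to_leaves(req: Requirement, path=()):
        """Yield (chain, leaf) pairs so every low-level requirement can be
        traced back to the design input it came from."""
        chain = path + (req.req_id,)
        if not req.children:
            yield chain, req
        for child in req.children:
            yield from trace_to_leaves(child, chain)

    # Example: one user need decomposed across two disciplines.
    need = Requirement("USR-01", "The pump shall deliver medication within dose accuracy limits.", 1)
    sys_req = need.derive("SYS-01", "Delivered volume shall be within +/-5% of the programmed volume.", 1)
    sys_req.derive("MECH-01", "The drive shall advance the plunger in 0.1 mL increments.", 2)
    sys_req.derive("SW-01", "The controller shall stop the motor when the target volume is reached.", 1)

    for chain, leaf in trace_to_leaves(need):
        print(" -> ".join(chain), ":", leaf.text)

Walking the tree this way yields a simple traceability report that a distributed development team can regenerate at any time.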

Step Two: Functional Analysis and Allocation

The systems engineer leads the team in developing strategies to meet the
requirements. Formulation of these strategies will be an iterative process that
leverages trade-off studies, risk analyses, and verification considerations as
process inputs. Risk management is one of the activities where the systems engineer
can make the most significant contributions by increasing patient and user safety.
Users can have many different points of contact with a device, and without the
holistic approach of a systems engineer, it becomes difficult to mitigate risk
through all interfaces. Systems engineers formulate strategies to minimize not only
safety risks, but also technical and programmatic risks while at the same time
maximizing performance, reliability, extensibility, and profitability. A systems
engineer coordinates interdisciplinary design activities to reduce safety and
programmatic risk profiles while maximizing product and project performance.
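
As a hedged illustration of the risk side of this step, the sketch below ranks a few invented safety, technical, and programmatic risks by a simple likelihood-times-severity score. Real programs use richer scales and mitigation tracking; nothing here is taken from the manual itself.

    # Illustrative only: a crude likelihood x severity ranking of risks.
    # Scales (1-5) and the risk items themselves are invented for the example.
    risks = [
        # (description,                               likelihood, severity, category)
        ("Occlusion not detected by pressure sensor", 2,          5,        "safety"),
        ("Battery vendor slips delivery",             4,          3,        "programmatic"),
        ("Motor torque margin too small",             3,          4,        "technical"),
    ]

    def score(risk):
        _description, likelihood, severity, _category = risk
        return likelihood * severity  # higher score = address it first

    for risk in sorted(risks, key=score, reverse=True):
        description, likelihood, severity, category = risk
        print(f"{score(risk):>2}  [{category:<12}]  {description}  (L={likelihood}, S={severity})")
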
Step Three: Design Synthesis

During Design Synthesis, the systems engineer leads the team through a systematic
process of quantitatively evaluating each of the proposed design solutions against a
set of prioritized metrics. This can help the team formulate questions or

uncover problems that were not initially obvious. When Design Synthesis is well-
executed, it helps reduce the risk, cost, and time of product development. An
experienced Systems Engineer is able to distill informal input from key
stakeholders into actionable guidance and combine this information with formal
design input requirements to formulate a more accurate picture of the design intent
for the product and business goals of the enterprise.

Step Four: Systems Analysis and Control

Systems Analysis and Control activities enable the systems engineer to measure
progress, evaluate and select alternatives, and document decisions made during
development. Systems engineers help teams prioritize decisions by guiding them
through trade-off matrices, which rank many options against a range of pre-defined
criteria. Systems engineers look at a wide range of metrics, such as cost,
technical qualifications, and interfacing parameters in order to help the team make
decisions that will lead to the most successful project. A Systems engineer can
also provide assistance with modeling and simulation tasks.
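
A minimal sketch of such a trade-off matrix is shown below. The options, criteria, weights, and scores are all hypothetical, and the weighted total is an aid to discussion, not a substitute for engineering judgment.

    # Illustrative weighted trade-off matrix: options scored against
    # pre-defined, weighted criteria. All numbers are invented.
    criteria = {"unit cost": 0.40, "technical maturity": 0.35, "interface simplicity": 0.25}

    options = {
        "COTS controller":   {"unit cost": 8, "technical maturity": 9, "interface simplicity": 6},
        "Custom controller": {"unit cost": 4, "technical maturity": 5, "interface simplicity": 9},
    }

    def weighted_total(scores):
        # Sum of (criterion weight x option score); scores use a 1-10 scale.
        return sum(weight * scores[name] for name, weight in criteria.items())

    for name, scores in sorted(options.items(), key=lambda item: weighted_total(item[1]), reverse=True):
        print(f"{name:18} weighted score = {weighted_total(scores):.2f}")
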
Step Five: Verification

Verification is the process of evaluating the finished design using traceable and
objective methods for design confirmation. The goal of verification is to make
sure the design outputs satisfy the design inputs. The systems engineer coordinates
the efforts of the verification team to ensure that feedback from Quality
Engineering gets incorporated into the final product. An experienced Systems
Engineer knows how to leverage different verification methods to streamline the
verification process, providing maximum flexibility in addressing the inevitable
changes that occur during the development process. By properly compartmentalizing
design and verification activities, the systems engineer can minimize the extent of
retesting resulting from regression analyses.
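
The following sketch, with assumed requirement IDs and verification records, illustrates the kind of traceability check implied above: every design input should be covered by at least one objective, passing verification result, and failures point to targeted retest rather than blanket regression.

    # Assumed structure, not the manual's method: a verification coverage check.
    design_inputs = {"SYS-01", "MECH-01", "SW-01"}

    verification_results = [
        # (requirement id, method,       passed)
        ("SYS-01",         "test",       True),
        ("MECH-01",        "inspection", True),
        ("SW-01",          "test",       False),   # failed run triggers a targeted retest
    ]

    verified = {req for req, _method, passed in verification_results if passed}
    uncovered = design_inputs - {req for req, _method, _passed in verification_results}
    failing = design_inputs - verified - uncovered

    print("not yet covered by any verification activity:", sorted(uncovered))
    print("covered but not yet passing:", sorted(failing))
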
Need for Framework
We need a framework because systems engineering has failed to fulfill 50 years of promises of providing solutions to the complex problems facing society. Wymore (1994) pointed out that it was necessary for systems engineering to become an engineering discipline if it was to fulfill its promises and thereby survive. Nothing has changed in that respect since then. Wymore (1994) also stated that "Systems engineering is the intellectual, academic, and professional discipline, the principal concern of which is to ensure that all requirements for bioware/hardware/software systems are satisfied throughout the lifecycles of the systems." This statement defines systems engineering as a discipline, not as a process. The currently accepted processes of systems engineering are only implementations of systems engineering.
Elements of a discipline
Consider the elements that make up a discipline. One view was provided by Kline (1995, page 3), who states that "a discipline possesses a specific area of study, a literature, and a working community of paid scholars and/or paid practitioners".
Systems engineering has a working community of paid scholars and paid
practitioners. However, the area of study seems to be different in each academic
institution but with various degrees of commonality. This situation can be
explained by the recognition that (1) systems engineering has only been in
existence since the middle of the 20th century (Johnson, 1997; Jackson and Keys,
1984; Hall, 1962), and (2) as an emerging discipline, systems engineering is
displaying the same characteristics as did other now established disciplines in
their formative years. Thus, systems engineering may be considered as being in a
similar situation to the state of chemistry before the development of the periodic
table of the elements, or similar to the state of electrical engineering before the
development of Ohm's Law. This is why various academic institutions focus on
different areas of study but with some
degree of commonality in the systems development life cycle. Nevertheless, to be
recognized as a discipline, the degree of overlap of the various areas of study in
the different institutions needs to be much, much greater.
Elements relevant to research in a discipline
According to Checkland and Holwell (1998), research into a discipline needs the following three items: an Area of Concern (A), which might be a particular problem in a discipline (area of study), a real-world problem situation, or a system of interest; a particular linked Framework of Ideas (F) in which the knowledge about the area of concern is expressed, including current theories, bodies of knowledge, heuristics, etc., as documented in the literature as well as tacit knowledge.
ARCHITECTURAL FRAMEWORKS, MODELS, AND VIEWS

Definition: An architecture framework is an encapsulation of a minimum set of practices and requirements for artifacts that describe a system's architecture.
Models are representations of how objects in a system fit structurally in and
behave as part of the system. Views are a partial expression of the system from a
particular perspective. A viewpoint is a set of representations (views and models)
of an architecture that covers a stakeholder's issues.

Keywords: architecture, architecture description, architecture frameworks, models, viewpoint, views

Figure 1. Architecture Framework, Models, and Views Relationship [1]
MITRE SE Roles & Expectations: MITRE systems engineers (SE) are expected to assist
in or lead efforts to define an architecture, based on a set of requirements
captured during the concept development and requirements engineering phases of the
systems engineering life cycle. The architecture definition activity usually
produces operational, system, and technical views. This architecture becomes the
foundation for developers and integrators to create design and implementation
architectures and views. To effectively communicate and guide the ensuing system
development activities, the MITRE SE should have a sound understanding of
architecture frameworks and their use, and the circumstances under which each
available framework might be used. They also must be able to convey the appropriate
framework that applies to the various decisions and phases of the program.

Getting Started

Because systems are inherently multidimensional and have numerous stakeholders with
different concerns, their descriptions are as well. Architecture frameworks enable
the creation of system views that are directly relevant to stakeholders' concerns.
Often, multiple models and non-model artifacts are generated to capture and track
the concerns of all stakeholders.

By interacting with intra- and extra-program stakeholders, including users,
experimenters, acquirers, developers, integrators, and testers, key architectural
aspects that need to be captured and communicated in a program are determined.
These architecture needs then should be consolidated and rationalized as a basis
for the SE's recommendation to develop and use specific models and views that
directly support the program's key decisions and activities. Concurrently,
an architecture content and development governance structure should be developed to
manage and satisfy the collective needs. The figure below highlights the
architecture planning and implementation activities.
Figure 2. Architecture Planning and Implementation Activities

MITRE SEs should be actively involved in determining key architecture artifacts and
content, and guiding the development of the architecture and its depictions at the
appropriate levels of abstraction or detail. MITRE SEs should take a lead role in
standardizing the architecture modeling approach. They should provide a "reference
implementation" of the needed models and views with the goals of:
(1) setting the standards for construction and content of the models, and (2)
ensuring that the model and view elements clearly trace to the concepts and
requirements from which they are derived.

Determining the Right Framework

While many MITRE SEs have probably heard of the Department of Defense Architecture
Framework (DoDAF), there are other frameworks that should be considered. As shown
in Figure 3, an SE working at an enterprise level should also be versed in the
Federal Enterprise Architecture Framework (FEAF). To prevent duplicate efforts in
describing a system using multiple frameworks, establish overlapping description
requirements and ensure that they are understood among the SEs generating those
artifacts. The SEG article on Approaches to Architecture Development provides
details of the frameworks.
Figure 3. Applying Frameworks

Best Practices and Lessons Learned

A program may elect to not use architectural models and views, or elect to create
only those views dictated by policy or regulation. The resources and time required
to create architecture views may be seen as not providing a commensurate return on
investment in systems engineering or program execution. Consider these cultural
impediments. Guide your actions with the view that architecture is a tool that
enables and is integral to systems engineering. The following are best practices
and lessons learned for making architectures work in your program.
Purpose is paramount. Determine the purpose for the architecting effort and the views and models needed. Plan the architecting steps to generate only the views and models that serve that purpose. Ultimately, models and views should help each stakeholder reason about the structure and behavior of the system, or the part of the system they represent, so they can conclude that their objectives will be met. Frameworks help by establishing minimum guidelines for each stakeholder's interest. However, stakeholders can have other concerns, so use the framework requirements as a basis for discussion to help uncover as many concerns as possible.

A plan is a point of departure. There should be clear milestone development dates, and the needed resources should be established for the development of the architecture views and models. Some views are precursors for others. Ensure that it is understood which views are "feeds" for others.

Know the relationships. Models and views that relate to each other should be
consistent, concordant, and developed with reuse in mind. It is good practice to
identify the data or information that each view shares, and manage it centrally to
help create the different views. Refer to the SEG Architectural Patterns article
for guidance on patterns and their use/reuse.

Be the early bird. Inject the idea of architectures early in the process.
Continuously influence your project to use models and views throughout execution.
The earlier the better.

No one trusts a skinny cook. By using models as an analysis tool yourself, particularly in day-to-day and key discussions, you maintain focus on key architectural issues and demonstrate how architecture artifacts can be used to enable decision making.

Which way is right and how do I get there from here? Architectures can be used to
help assess today's alternatives and different evolutionary paths to the future.
Views of architecture alternatives can be used to help judge the strengths and
weaknesses of different approaches. Views of "as is" and "to be" architectures help
stakeholders understand potential migration paths and transitions.

Try before you buy. Architectures (or parts of them) can sometimes be "tried out"
during live exercises. This can either confirm an architectural approach for
application to real-world situations or be the basis for refinement that better
aligns the architecture with operational reality. Architectures also can be used as
a basis for identifying prototyping and experimentation activities to reduce
technical risk and engagements with operational users to better illuminate their
needs and operational concepts.

Taming the complexity beast. If a program or an effort is particularly large, models and views can provide a disciplined way of communicating how you expect the system to behave. Some behavioral models such as business process models, activity models, and sequence diagrams are intuitive, easy to use, and easy to change to capture consensus views of system behavior. Refer to the SEG Approaches to Architecture Development article for guidance on model characterization.

Keep it simple. Avoid diagrams that are complicated and non-intuitive, such as node
connectivity diagrams with many nodes and edges, especially in the early phases of
a program. This can be a deterrent for the uninitiated. Start with the operational
concepts, so your architecture efforts flow from information that users and many
other stakeholders already understand.
Determining the right models and views. Once the frameworks have been chosen, the
models and views will need to be determined. It is not unusual to have to refer to
several sets of guidance, each calling for a different set of views and models to
be generated.

But it looked so pretty in the window. Lay out the requirements for your architectures: what decisions they support, what they will help stakeholders reason about, and how they will do so. A simple spreadsheet can be used for this purpose. This should happen early and often throughout the system's life cycle to ensure that the architecture is used. Figure 4 provides an example of a worksheet that was used to gather architecture requirements for a major aircraft program.

Figure 4. Architecture Models and Views Development Analysis
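
Figure 4 itself is not reproduced here, but a minimal, hypothetical version of such a worksheet (column names and rows invented for illustration, not taken from the figure) could be captured in a few lines of Python and exported as a spreadsheet:

    # Hypothetical architecture-requirements worksheet; columns are invented.
    import csv

    rows = [
        {"decision supported": "Select integration sequence",
         "stakeholder": "Integrator",
         "question the architecture must answer": "Which interfaces must be live for build 1?",
         "candidate view/model": "Interface diagram"},
        {"decision supported": "Approve operational concept",
         "stakeholder": "User representative",
         "question the architecture must answer": "How does the operator recover from a failed mission step?",
         "candidate view/model": "Activity/sequence model"},
    ]

    with open("architecture_requirements.csv", "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)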

How do I create the right views? Selecting the right modeling approach to develop
accurate and consistent representations that can be used across program boundaries
is a critical systems engineering activity. Some of the questions to answer are:

Is a disciplined architecture approach embedded in the primary tool my team will be using, as in the case of Activity-Based Modeling (ABM) being embedded in System Architect, or do we have to enforce an approach ourselves?
Are the rules/standards of the modeling language enforced in the tool, as in the case of BPMN 2.0 being embedded in iGrafx?

Do I plan to generate executable models? If so, will my descriptions need to adhere to strict development guidelines to easily support the use of executable models to help reason about performance and timing issues of the system?

Bringing dolls to life. If your program is developing models for large systems
supporting missions and businesses with time-sensitive needs, insight into system
behavior is crucial. Seriously consider using executable models to gain it. Today,
many architecture tools support the development of executable models easily and at
reasonable cost. Mission-Level Modeling (MLM) and Model Driven or Architecture-
Based/Centric Engineering are two modeling approaches that incorporate executable
modeling. They are worth investigating to support reasoning about technology
impacts to mission performance and internal system behavior, respectively.

How much architecture is enough? The most difficult conundrum when deciding to
launch an architecture effort is determining the level of detail needed and when to
stop producing/updating artifacts. Architecture models and views must be easily
changeable. There is an investment associated with having a "living" architecture
that contains current information, and differing levels of abstraction and views to
satisfy all stakeholders. Actively discuss this sufficiency issue with stakeholders
so that the architecture effort is "right-sized." Refer to the Architecture
Specification for CANES [2].

Penny wise, pound-foolish. Generating architecture models and views takes effort, and it can seem easier not to do it. Before jumping on the "architecture is costly and has minimal utility" bandwagon, consider the following:

Will there be a need to tell others how the system works?
Will there be a need to train new personnel on a regular basis (every one to three years) in system operations?
Will there be a need to tell a different contractor how the system works so that
costs for maintaining and refreshing the system remain competitive?
Will there be a need to assess the system's viability to contribute to future
mission needs?

If the answer to one or more of these questions is "yes," then consider concise,
accurate, concordant, and consistent models of your system.

UNIT II – Systems Engineering Processes

Formulation of issues with a case study, Value system design, Functional analysis,
Business Process Reengineering, Quality function deployment, System synthesis,
Approaches for generation of alternatives.

Systems Engineering Management Is…

Systems engineering management is accomplished by integrating three major activities:
Development phasing that controls the design process and provides baselines that
coordinate design efforts,
A systems engineering process that provides a structure for solving design
problems and tracking requirements flow through the design effort, and
Life cycle integration that involves customers in the design process and ensures
that the system developed is viable throughout its life.

Each one of these activities is necessary to achieve proper management of a
development effort. Phasing has two major purposes: it controls the design effort
and is the major connection between the technical management effort and the overall
acquisition effort. It controls the design effort by developing design baselines
that govern each level of development. It interfaces with acquisition management by
providing key events in the development process, where design viability can be
assessed. The viability of the baselines developed is a major input for acquisition
management Milestone (MS) decisions. As a result, the timing and coordination
between technical development phasing and the acquisition schedule is critical to
maintain a healthy acquisition program.

The systems engineering process is the heart of systems engineering management. Its
purpose is to provide a structured but flexible process that transforms
requirements into specifications, architectures, and configuration baselines.
The discipline of this process provides the control and traceability to develop
solutions that meet customer needs. The systems engineering process may be repeated
one or more times during any phase of the development process.

Figure 1-1. Three Activities of Systems Engineering Management
Life cycle integration is necessary to ensure that the design solution is viable
throughout the life of the system. It includes the planning associated with product
and process development, as well as the integration of multiple functional
concerns into the design and engineering process. In this manner, product cycle-
times can be reduced, and the need for redesign and rework substantially reduced.
DEVELOPMENT PHASING

Development usually progresses through distinct levels or stages:

Concept level, which produces a system concept description (usually described in a concept study);

System level, which produces a system description in performance requirement terms; and

Subsystem/Component level, which produces first a set of subsystem and component product performance descriptions, then a set of corresponding detailed descriptions of the products' characteristics, essential for their production.

The systems engineering process is applied to each level of system development, one
level at a time, to produce these descriptions commonly called configuration
baselines. This results in a series of configuration baselines, one at each
development level. These baselines become more detailed with each level.

In the Department of Defense (DoD), the configuration baselines are called the functional baseline for the system-level description, the allocated baseline for the subsystem/component performance descriptions, and the product baseline for the subsystem/component detail descriptions. Figure 1-2 shows the basic relationships between the baselines. The triangles represent baseline control decision points, and are usually referred to as technical reviews or audits.

Levels of Development Considerations

Significant development at any given level in the system hierarchy should not occur until the configuration baselines at the higher levels are considered complete, stable, and controlled. Reviews and audits are used to ensure that the baselines are ready for the next level of development. As will be shown in the next chapter, this review and audit process also provides the necessary assessment of system maturity, which supports the DoD Milestone decision process.

Figure 1-2. Development Phasing

THE SYSTEMS ENGINEERING PROCESS

The systems engineering process is a top-down, comprehensive, iterative, and recursive problem-solving process, applied sequentially through all stages of development, that is used to:

Transform needs and requirements into a set of system product and process
descriptions (adding value and more detail with each level of development),

Generate information for decision makers, and

Provide input for the next level of development.

As illustrated by Figure 1-3, the fundamental systems engineering activities are Requirements Analysis, Functional Analysis and Allocation, and Design Synthesis, all balanced by techniques and tools collectively called System Analysis and Control. Systems engineering controls are used to track decisions and requirements, maintain technical baselines, manage interfaces, manage risks, track cost and schedule, track technical performance, verify requirements are met, and review/audit the progress.

During the systems engineering process, architectures are generated to better describe and understand the system. The word "architecture" is used in various contexts in the general field of engineering. It is used as a general description of how the subsystems join together to form the system. It can also be a detailed description of an aspect of a system: for example, the Operational, System, and Technical Architectures used in Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR), and software-intensive developments. However, Systems Engineering Management as developed in DoD recognizes three universally usable architectures that describe important aspects of the system: functional, physical, and system architectures. This book will focus on these architectures as necessary components of the systems engineering process.

Figure 1-3. The Systems Engineering Process

The Functional Architecture identifies and structures the allocated functional and performance requirements. The Physical Architecture depicts the system product by showing how it is broken down into subsystems and components. The System Architecture identifies all the products (including enabling products) that are necessary to support the system and, by implication, the processes necessary for development, production/construction, deployment, operations, support, disposal, training, and verification.

Life Cycle Integration

Life cycle integration is achieved through integrated development, that is, concurrent consideration of all life cycle needs during the development process. DoD policy requires integrated development, called Integrated Product and Process Development (IPPD) in DoD, to be practiced at all levels in the acquisition chain of command, as will be explained in the chapter on IPPD. Concurrent consideration of all life cycle needs can be greatly enhanced through the use of interdisciplinary teams. These teams are often referred to as Integrated Product Teams (IPTs).

The objective of an Integrated Product Team is to:

Produce a design solution that satisfies initially defined requirements, and


Communicate that design solution clearly, effectively, and in a timely manner.
Multi-functional, integrated teams:
Place balanced emphasis on product and process development, and

Require early involvement of all disciplines appropriate to the team task.

Design-level IPT members are chosen to meet the team objectives and generally have
distinctive competence in:

Technical management (systems engineering),

Life cycle functional areas (eight primary functions),

Technical specialty areas, such as safety, risk management, quality, etc., or

When appropriate, business areas such as finance, cost/budget analysis, and contracting.

Life Cycle Functions

Life cycle functions are the characteristic actions associated with the system life
cycle. As illustrated by Figure 1-4, they are development, production and
construction, deployment (fielding), operation, support, disposal, training, and
verification. These activities cover the "cradle to grave" life cycle process and
are associated with major functional groups that provide essential support to the
life cycle process. These key life
cycle functions are commonly referred to as the eight primary functions of systems
engineering.

The customers of the systems engineer perform the life-cycle functions. The system user's needs are emphasized because their needs generate the requirement for the system, but it must be remembered that all of the life-cycle functional areas generate requirements for the systems engineering process once the user has established the basic need. Those that perform the primary functions also provide life-cycle representation in design-level integrated teams.

Primary Function Definitions

Development includes the activities required to evolve the system from customer
needs to product or process solutions.

Manufacturing/Production/Construction includes the fabrication of engineering test models and "brass boards," low rate initial production, full-rate production of systems and end items, or the construction of large or unique systems or subsystems.

Deployment (Fielding) includes the activities necessary to initially deliver, transport, receive, process, assemble, install, checkout, train, operate, house, store, or field the system to achieve full operational capability.

Figure 1-4. Primary Life Cycle Functions

Operation is the user function and includes activities necessary to satisfy defined
operational objectives and tasks in peacetime and wartime environments.
Support includes the activities necessary to provide operations support,
maintenance, logistics, and material management.

Disposal includes the activities necessary to ensure that the disposal of decommissioned, destroyed, or irreparable system components meets all applicable regulations and directives.

Training includes the activities necessary to achieve and maintain the knowledge
and skill levels necessary to efficiently and effectively perform operations and
support functions.

Verification includes the activities necessary to evaluate progress and effectiveness of evolving system products and processes, and to measure specification compliance.
Systems Engineering Considerations

Systems engineering is a standardized, disciplined management process for development of system solutions that provides a constant approach to system development in an environment of change and uncertainty. It also provides for simultaneous product and process development, as well as a common basis for communication.

Systems engineering ensures that the correct technical tasks get done during
development through planning, tracking, and coordinating. Responsibilities of
systems engineers include:

Development of a total system design solution that balances cost, schedule, performance, and risk,

Development and tracking of technical information needed for decision making,

Verification that technical solutions satisfy customer requirements,

Development of a system that can be produced economically and supported throughout the life cycle,

Development and monitoring of internal and external interface compatibility of the system and subsystems using an open systems approach,

Establishment of baselines and configuration control, and

Proper focus and structure for system and major sub-system level design IPTs.

GUIDANCE

DoD guidance requires that an Integrated Product and Process approach be taken to design wherever practicable, and that a disciplined systems engineering process be used to translate operational needs and/or requirements into a system solution.

Tailoring the Process

Systems engineering is applied during all acquisition and support phases for large- and small-scale systems, new developments or product improvements, and single and multiple procurements. The process must be tailored for different needs and/or requirements. Tailoring considerations include system size and complexity, level of system definition detail, scenarios and missions, constraints and requirements, technology base, major risk factors, and organizational best practices and strengths.
For example, systems engineering of software should follow the basic systems engineering approach as presented in this book. However, it must be tailored to accommodate the software development environment and the unique progress measures that software development requires.
This book provides a conceptual-level description of systems engineering management. The specific techniques, nomenclature, and recommended methods are not meant to be prescriptive. Technical managers must tailor their systems engineering planning to meet their particular requirements and constraints, environment, technical domain, and schedule/budget situation.

However, the basic time-proven concepts inherent in the systems engineering approach must be retained to provide continuity and control. For complex system designs, a full and documented understanding of what the system must do should precede development of component performance descriptions, which should precede component detail descriptions. Though some parts of the system may be dictated as a constraint or interface, in general, solving the design problem should start with analyzing the requirements and determining what the system has to do before physical alternatives are chosen. Configurations must be controlled and risk must be managed.

Tailoring of this process has to be done carefully to avoid the introduction of
substantial unseen risk and uncertainty. Without the control, coordination, and
traceability of systems engineering, an environment of uncertainty results which
will lead to surprises. Experience has shown that these surprises almost invariably
lead to significant impacts to cost and schedule. Tailored processes that reflect
the general conceptual approach of this book have been developed and adopted by
professional societies, academia, industry associations, government agencies, and
major companies.
FORMULATION OF ISSUES WITH A CASE STUDY
Systems engineering principles described in the Systems Engineering Body of
Knowledge (SEBoK) Parts 1-6 are illustrated in Part 7, Systems Engineering
Implementation Examples. These examples describe the application of systems
engineering practices, principles, and concepts in real settings. These systems
engineering examples can be used to improve the practice of systems engineering by
illustrating to students and practitioners the benefits of effective practice and
the risks of poor practice. There are two kinds of SE implementation examples:
articles written for the SEBoK and those based on the SE literature.

List of Examples from the SE Literature
The following examples are included:
Successful Business Transformation within a Russian Information Technology
Company
Federal Aviation Administration Next Generation Air Transportation System
How Lack of Information Sharing Jeopardized the NASA/ESA Cassini/Huygens
Mission to Saturn
Hubble Space Telescope Case Study
Global Positioning System Case Study
Global Positioning System Case Study II
Medical Radiation Case Study
FBI Virtual Case File System Case Study
MSTI Case Study
Next Generation Medical Infusion Pump Case Study
Design for Maintainability
Complex Adaptive Operating System Case Study
Complex Adaptive Project Management System Case Study
Complex Adaptive Taxi Service Scheduler Case Study
Submarine Warfare Federated Tactical Systems Case Study
Northwest Hydro System
Systems engineering (SE) case studies can be characterized in terms of at least two relevant parameters: their degree of complexity and their engineering difficulty. Although a so-called quad chart is likely an oversimplification, a 2 x 2 array can be used to make a first-order characterization, as shown in Figure 1.

The x-axis depicts complicated, the simplest form of complexity, at the low end on the left, and complex, representing the range of all higher forms of complexity, on the right. The y-axis suggests how difficult it might be to engineer (or re-engineer) the system to be improved, using conventional (classical or traditional) SE at the low end on the bottom, and Complex SE, representing all more sophisticated forms of SE, on the top. This upper range is intended to cover system of systems (SoS) engineering (SoSE) and enterprise systems engineering (ESE), as well as Complex SE (CSE). The distinctions among these various forms of SE may be explored by visiting other sections of the SEBoK. In summary, the SEBoK case study editors have placed each case study in one of these four quadrants to provide readers with a suggested characterization of their case study's complexity and difficulty. For the sake of compactness, the following abbreviations have been used:

Business Transformation (Successful Business Transformation within a Russian Information Technology Company)
NextGen ATC (Federal Aviation Administration Next Generation Air Transportation
System)
Saturn Mission (How Lack of Information Sharing Jeopardized the NASA/ESA
Cassini/Huygens Mission to Saturn)
Hubble (Hubble Space Telescope Case Study)
GPS and GPS II (Global Positioning System Case Study)
Medical Radiator (Medical Radiation Case Study)
FBI Case Files (FBI Virtual Case File System Case Study)
Small Satellite MSTI (MSTI Case Study)
Medical Infusion Pump (Next Generation Medical Infusion Pump Case Study)
Incubator Maintainability Design (Design for Maintainability)
Complex Adaptive Operations (Complex Adaptive Operating System)
Taxi Scheduler (The Development of the First Real-Time Complex Adaptive Scheduler
for a London Taxi Service)
Project Management (The Development of a Real-Time Complex Adaptive Project
Management Systems)
SWFTS MBSE (Submarine Warfare Federated Tactical Systems Case Study)

Value of Case Studies

Case studies have been used for decades in medicine, law, and business to help students learn fundamentals and to help practitioners improve their practice. A Matrix of Implementation Examples is used to show the alignment of systems engineering case studies to specific areas of the SEBoK. This matrix is intended to provide linkages between each implementation example and the discussion of the systems engineering principles illustrated. The selection of case studies covers a variety of sources, domains, and geographic locations. Both effective and ineffective uses of systems engineering principles are illustrated.
The number of publicly available systems engineering case studies is growing. Case
studies that highlight the aerospace domain are more prevalent, but there is a
growing number of examples beyond this domain.
The United States Air Force Center for Systems Engineering (AF CSE) has developed a
set of case studies "to facilitate learning by emphasizing the long-term
consequences of the systems engineering/programmatic decisions on cost, schedule,
and operational effectiveness." (USAF Center for Systems Engineering 2011) The AF CSE is using these cases to enhance its SE curriculum. The cases are structured using the Friedman-Sage framework (Friedman and Sage 2003; Friedman and Sage 2004, 84-96), which decomposes a case into contractor, government, and shared responsibilities in the following nine concept areas:

Requirements Definition and Management
Systems Architecture Development
System/Subsystem Design
Verification/Validation
Risk Management
Systems Integration and Interfaces
Life Cycle Support
Deployment and Post Deployment
System and Program Management

This framework forms the basis of the case study analysis carried out by the AF
CSE. Two of these case studies are highlighted in this SEBoK section, the Hubble
Space Telescope Case Study and the Global Positioning System Case Study.
The United States National Aeronautics and Space Administration (NASA) has a
catalog of more than fifty NASA-related case studies (NASA 2011). These case
studies include insights about both program management and systems engineering.
Varying in the level of detail, topics addressed, and source organization, these
case studies are used to enhance
learning at workshops, training, retreats, and conferences. The use of case studies
is viewed as important by NASA since "organizational learning takes place when
knowledge is shared in usable ways among organizational members. Knowledge is most
usable when it is contextual" (NASA 2011). Case study teaching is a method for
sharing contextual knowledge to enable reapplication of lessons learned. The MSTI
Case Study is from this catalog.
Value of System Design
Systems design is an interdisciplinary engineering activity that enables the
realization of successful systems. Systems design is the process of defining the
architecture, product design, modules, interfaces, and data for a system to
satisfy specified requirements.
Systems design could be seen as the application of systems theory to product
development. There is some overlap with the disciplines of systems analysis,
systems architecture and systems engineering.
A system may be defined as an integrated set of components that accomplish a defined
objective. The process of systems design includes defining software and hardware
architecture, components, modules, interfaces, and data to enable a system to
satisfy a set of well-specified operational requirements.
In general, systems design, systems engineering, and systems design engineering all
refer to the same intellectual process of being able to define and model complex
interactions among many components that comprise a system, and being able to
implement the system with proper and effective use of available resources. Systems
design focuses on defining customer needs and required functionality early in the
development cycle, documenting requirements, then proceeding with design synthesis
and system validation while considering the overall problem consisting of:
Operations
Performance
Test and integration
Manufacturing
Cost and schedule
Deployment
Training and support
Maintenance
Disposal
Systems design integrates all of the engineering disciplines and specialty groups
into a team effort forming a structured development process that proceeds from
concept to production to operation. Systems design considerations include both the
business and technical requirements of customers with the goal of providing a
quality product that meets the user needs. Successful systems design is dependent
upon project management, that is, being able to control costs, develop timelines,
procure resources, and manage risks.
Information systems design is a related discipline of applied computer systems,
which also incorporates both software and hardware, and often includes networking
and telecommunications, usually in the context of a business or other enterprise.
The general principles of systems design engineering may be applied to information
systems design. In addition, information systems design focuses on data-centric
themes such as subjects, objects, and programs.
If the broader topic of product development "blends the perspective of marketing,
design, and manufacturing into a single approach to product development," then
design is the act of taking the marketing information and creating the design of
the product to be manufactured. Systems design is therefore the process of defining
and developing systems to satisfy specified requirements of the user.
The basic study of system design is the understanding of component parts and their
subsequent interaction with one another.[4]
Until the 1990s, systems design had a crucial and respected role in the data
processing industry. In the 1990s, standardization of hardware and software
resulted in the ability to build modular systems. The increasing
importance of software running on generic platforms has enhanced the discipline of
software engineering.
Architectural design

The architectural design of a system emphasizes the design of the system architecture, which describes the structure, behavior, and other views of that system.
Logical design

The logical design of a system pertains to an abstract representation of the data flows, inputs, and outputs of the system. This is often conducted via modelling, using an over-abstract (and sometimes graphical) model of the actual system. Logical design includes entity-relationship diagrams (ER diagrams).
Physical design

The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed. In physical design, the following requirements about the system are decided:

Input requirements,
Output requirements,
Storage requirements,
Processing requirements,
System control and backup or recovery.

Put another way, the physical portion of system design can generally be broken down
into three sub-tasks:

User Interface Design
Data Design
Process Design
User Interface Design is concerned with how users add information to the system and
with how the system presents information back to them. Data Design is concerned
with how the data is represented and stored within the system. Finally, Process
Design is concerned with how data moves through the system, and with how and where
it is validated, secured and/or transformed as it flows into, through and out of
the system. At the end of the system design phase, documentation describing the
three sub-tasks is produced and made available for use in the next phase.
Physical design, in this context, does not refer to the tangible physical design of an information system. To use an analogy, a personal computer's physical design involves input via a keyboard, processing within the CPU, and output via a monitor, printer, etc. It would not concern the actual layout of the tangible hardware, which for a PC would be a monitor, CPU, motherboard, hard drive, modems, video/graphics cards, USB slots, etc. Physical design involves a detailed design of the user and product database structures, the data processing, and the control processes, and a hardware/software specification is developed for the proposed system.
Functional Analysis and Allocation

Functional Analysis and Allocation is a top-down process of translating system-level requirements into detailed functional and performance design criteria. The result of the process is a defined Functional Architecture with allocated system requirements that are traceable to each system function.

Functional Analysis and Allocation

The Functional Analysis and Allocation bridges the gap between the high-level set of system requirements and constraints (from the Requirements Analysis) and the detailed set required (in Synthesis) to develop or purchase systems and implement programs. It is an integral part of both the Requirements Loop and the Design Loop. During this activity, an integrated Functional Architecture is defined in sufficient depth to support the synthesis of solutions in terms of people, products, and processes, and to allow identification and management of attendant risk. It is an iterative process, interacting and reacting to the ongoing activities in both the Requirements and Design Loops.
The initial step is to identify the lower-level functions required to perform the
various system functions. As this is accomplished, the system requirements are
allocated and functional architecture(s) are developed. These activities track and
interact so that as details evolve, they are continually validated against each
other. Should anomalies occur, alternate architectures and allocations may be
carried through early stages of this activity until the optimum approach becomes
apparent. The internal and external functional interfaces are defined as the
architecture matures. The functional architecture(s) and their companion functional
requirements are the input to the Synthesis activity. Completing the Design Loop,
the detailed results of the Synthesis are compared to the candidate architecture(s)
and allocated requirements to help zero in on the optimum approach and to assure
that all proposed solutions meet established requirements.
Decomposition: Decomposition to lower-level functions is the incoming interface for the Requirements Loop. The functions identified in the Requirements Analysis are analyzed to define successively lower levels of functions that accomplish the higher-level functional requirements. Alternate lower-level functional solutions covering all anticipated operating modes are proposed and evaluated to determine which provides the best fit to the parent requirements and the best balance between conflicting ones. The initial decomposition is the starting point for the development of the functional architecture and the allocation of requirements to the lower functional levels. Adjustments to the decomposition strategy may be necessary as details are developed.
Allocation: All requirements of the top-level functions must be allocated to the
lower-level functions. Traceability is an on-going record of the pedigree of
requirements imposed on system and subsystem elements. Because requirements are
derived or apportioned among several functions, they must be traceable across
functional boundaries to parent and child requirements. Traceability allows the
System Engineer to ascertain rapidly what effects any proposed changes in
requirements may have on related requirements at any system level. The allocated
requirements must be defined in measurable terms, contain applicable go/no go
criteria, and be in sufficient detail to be used as design criteria in the
subsequent Synthesis activity.
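To make the traceability idea concrete, the following minimal sketch in Python uses hypothetical requirement identifiers and allocations (not taken from this text) to record parent-child links between allocated requirements, so the downstream impact of a proposed change can be listed quickly.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Requirement:
    req_id: str
    text: str
    allocated_to: str                 # function or subsystem receiving the allocation
    parent: Optional[str] = None      # parent requirement this one is derived from
    children: list = field(default_factory=list)

class TraceabilityMatrix:
    def __init__(self):
        self.reqs = {}

    def add(self, req):
        self.reqs[req.req_id] = req
        if req.parent:
            self.reqs[req.parent].children.append(req.req_id)

    def impact_of_change(self, req_id):
        # Walk the derivation tree to list every lower-level requirement
        # derived from req_id, across functional boundaries.
        impacted, stack = [], list(self.reqs[req_id].children)
        while stack:
            child = stack.pop()
            impacted.append(child)
            stack.extend(self.reqs[child].children)
        return impacted

# Hypothetical usage: one system requirement decomposed to two subsystem requirements.
tm = TraceabilityMatrix()
tm.add(Requirement("SYS-01", "Detect target within 10 km", "Sensing function"))
tm.add(Requirement("SUB-01", "Radar detection range of at least 10 km", "Radar subsystem", parent="SYS-01"))
tm.add(Requirement("SUB-02", "Track update rate of at least 1 Hz", "Processing subsystem", parent="SYS-01"))
print(tm.impact_of_change("SYS-01"))   # both SUB-level requirements are impacted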
The four (4) steps that comprise the SE Process are:

Step 1: Requirements Analysis


Requirements Analysis (Step 1) is one of the first activities of the System
Engineering Process and functions somewhat as an interface between the internal
activities and the external sources providing inputs to the process. It examines,
evaluates, and translates the external inputs into a set of functional and
performance requirements that are the basis for the Functional Analysis and
Allocation. It links with the Functional Analysis and Allocation to form the
requirements loop of the System
Engineering Process. The goal of requirements analysis is to determine the needs that a system must meet in order to satisfy an overall need.

Step 2: System Analysis and Control


System Analysis and Control manages and controls the overall Systems Engineering Process. This activity identifies the work to be performed and develops the schedules and cost estimates for the effort. It coordinates all activities and assures that all are operating from the same set of requirements, agreements, and design iteration. It is the center for configuration management throughout the systems engineering process.
System Analysis and Control (see Figure Above) interacts with all the other
activities of the Systems Engineering Process. It evaluates the outputs of the
other activities and conducts independent studies to determine which of the
alternate approaches is best suited to the application. It determines when the
results of one activity require the action of another activity and directs the
action to be performed. The initial analyses performed in this activity are the
basis for the Systems
Engineering Plan (SEP) and the systems engineering entries in the
Integrated Master Plan (IMP) which define the overall systems engineering effort.
From the SEP and IMP, the Integrated Master Schedule (IMS) is developed to relate
the IMP events and SEP processes to calendar dates.
As the process progresses, trade-off studies and system/cost-effectiveness analyses are performed in support of the evaluation and selection processes of the other activities. Risk identification / risk mitigation studies are conducted to aid in Risk Management. Analyses also identify
critical parameters to be used in progress measurement. The management activity
directs all operations and also performs Configuration Management (CM),
Interface Management (IM) and data management (DM). It specifies the performance
parameters to be tracked for progress measurement. It conducts reviews and reports
progress.
The information from System Analysis and Control is a major part of the systems
engineering process database that forms the process output. The analysis activity
provides the results of all analyses performed, identifies approaches considered
and discarded, and the rationales used to reach all conclusions.

Step 3: Functional Analysis and Allocation


Step 4: Design Synthesis
Design Synthesis is the process of taking the functional architecture developed in
the Functional Analysis and Allocation step and decomposing those functions into
a Physical Architecture (a set of product, system, and/or software elements) that
satisfies the required system functions.

Synthesis is the process whereby the Functional Architectures and their associated
requirements are translated into physical architectures and one or more physical
sets of hardware, software, and personnel solutions. It is the output end of the
Design Loop. As the designs are formulated, their characteristics are compared to
the original requirements, developed at the beginning of the process, to verify the
fit. The output of this activity is a set of analysis-verified specifications
which describe a balanced, integrated system meeting the requirements, and a
database that documents the process and rationale used to establish these
specifications.
The first step of Synthesis is to group the functions into physical architectures.
This high-level structure is used to
define system concepts and products and processes, which can be used to implement
the concepts. Growing out of these efforts are the internal and external
interfaces. As concepts are developed they are fed back in the Design Loop to
ascertain that functional requirements have been satisfied. The mature concepts,
and product and process solutions are verified against the original system
requirements before they are released as the Systems Engineering Process product
output. Detailed descriptions of the activities of Synthesis are provided below.
Physical architecture is a traditional term. Despite the name, it includes software
elements as well as hardware elements. Among the characteristics of the physical
architecture (the primary output of Design Synthesis) are the following: [2]
The correlation with functional analysis requires that each physical or software component meets at least one (or part of one) functional requirement, though any component can meet more than one requirement (a small allocation-check sketch follows this list),
The architecture is justified by trade studies and effectiveness analyses,
A product Work Breakdown Structure (WBS) is developed from the physical
architecture,
Metrics are developed to track progress among Key Performance Parameters (KPP), and
All supporting information is documented in a database.
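As a rough illustration of the allocation check referenced in the list above, the fragment below uses hypothetical function and component names to confirm that every physical component traces to at least one functional requirement, and to print a simple product WBS outline derived from the physical architecture. It is a sketch only, not part of the source process description.

# Illustrative allocation check and WBS outline (hypothetical names only).
physical_architecture = {
    "Air Vehicle": {
        "Propulsion Unit": ["F1 Provide thrust"],
        "Guidance Computer": ["F2 Compute trajectory", "F3 Report status"],
        "Airframe": ["F4 Carry payload"],
    }
}

def unallocated_components(architecture):
    # Components that satisfy no functional requirement violate the
    # correlation rule and should be questioned or removed.
    return [component
            for subsystems in architecture.values()
            for component, functions in subsystems.items()
            if not functions]

def print_wbs(architecture):
    # A product WBS outline can be read directly off the physical hierarchy.
    for i, (element, components) in enumerate(architecture.items(), start=1):
        print(f"1.{i} {element}")
        for j, component in enumerate(components, start=1):
            print(f"  1.{i}.{j} {component}")

assert not unallocated_components(physical_architecture)
print_wbs(physical_architecture)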
Business Process Reengineering (BPR): Definition, Steps, and Examples
Your company is making great progress. You're meeting goals easily, but the way you meet goals is where the problem is. Business processes play an important role in driving goals, but they are not as efficient as you'd like them to be.

Making changes to a process gets more and more difficult as your business grows because of habits and investments in old methods. But in reality, you cannot improve processes without making changes. Processes have to be reengineered carefully, since experiments and mistakes bring in a lot of confusion.

What is business process re-engineering (BPR)?
Business process re-engineering is the radical redesign of business processes to achieve dramatic improvements in critical aspects like quality, output, cost, service, and speed. Business process reengineering (BPR) aims at cutting down enterprise costs and process redundancies on a very large scale.

Is business process reengineering (BPR) the same as business process improvement (BPI)?
On the surface, BPR sounds a lot like business process improvement (BPI). However, there are fundamental differences that distinguish the two. BPI might be about tweaking a few rules here and there. But reengineering is an unconstrained approach to look beyond the defined boundaries and bring in seismic changes.

While BPI is an incremental setup that focuses on tinkering with the existing processes to improve them, BPR looks at the broader picture. BPI doesn't go against the grain. It identifies the process bottlenecks and recommends changes in specific functionalities. The process framework principally remains the same when BPI is in play. BPR, on the other hand, rejects the existing rules and often takes an unconventional route to redo processes from a high-level management perspective. BPI is like upgrading the exhaust system on your project car; BPR is about rethinking the entire way the exhaust is handled.

Five steps of business process reengineering (BPR)
To keep business process reengineering fair, transparent, and efficient, stakeholders need to get a better understanding of the key steps involved in it. Although the process can differ from one organization to another, the steps listed below succinctly summarize the process:

Below are the 5 Business Process Re-engineering Steps:

Map the current state of your business processes

Gather data from all resources, both software tools and stakeholders. Understand how the process is performing currently.

Analyze them and find any process gaps or disconnects

Identify all the errors and delays that hold up a free flow of the process. Make sure all details are available in the respective steps for the stakeholders to make quick decisions.

Look for improvement opportunities and validate them

Check if all the steps are absolutely necessary. If a step exists solely to inform a person, remove the step and add an automated email trigger instead.

Design a cutting-edge future-state process map

Create a new process that solves all the problems you have identified. Don't be afraid to design a totally new process that is sure to work well. Designate KPIs for every step of the process.

Implement future-state changes and be mindful of dependencies

Inform every stakeholder of the new process. Only proceed after everyone is on board and educated about how the new process works. Constantly monitor the KPIs.

A real-life example of BPR
Many companies like Ford Motors, GTE, and Bell Atlantic tried out BPR during the 1990s to reshuffle their operations. The reengineering process they adopted made a substantial difference to them, dramatically cutting down their expenses and making them more effective against increasing competition.

The story
An American telecom company had several departments to address customer support regarding technical snags, billing, new connection requests, service termination, etc. Every time a customer had an issue, they were required to call the respective department to get their complaints resolved. The company was doling out millions of dollars to ensure customer satisfaction, but smaller companies with minimal resources were threatening its business.

The telecom giant reviewed the situation and concluded that it needed drastic measures to simplify things: a one-stop solution for all customer queries. It decided to merge the various departments into one, let go of employees to minimize multiple handoffs, and form a nerve center of customer support to handle all issues.

A few months later, they set up a customer care center in Atlanta and started training their repair clerks as "frontend technical experts" to do the new, comprehensive job. The company equipped the team with new software that allowed the support team to instantly access the customer database and handle almost all kinds of requests.
Now, if a customer called with a billing query, they could also have that erratic dial tone fixed or have a new service request confirmed without having to call another number. While they were still on the phone, they could also use the push-button phone menu to connect directly with another department to make a query or input feedback about the call quality.

The redefined customer-contact process enabled the company to achieve new goals.

Reorganized the teams and saved cost and cycle time
Accelerated the information flow, minimized errors, and prevented reworks
Improved the quality of service calls and enhanced customer satisfaction
Defined clear ownership of processes within the now-restructured team
Allowed the team to evaluate their performance based on instant feedback


When should you consider BPR

The problem with BPR is that the larger you are, the more expensive it is to
implement. A startup, five months after launch, might undergo a pivot including
business process reengineering that only has minimal costs to execute.

However, once an organization grows, it will have a harder and more expensive time completely reengineering its processes. But larger organizations are also the ones forced to change due to competition and unexpected marketplace shifts.

More than being industry-specific, though, the call for BPR is always based on what an organization is aiming for. BPR is effective when companies need to break the mold and turn the tables in order to accomplish ambitious goals. For such measures, adopting any other process management option would only be rearranging the deck chairs on the Titanic.

Introduction to Quality Function Deployment (QFD)

The average consumer today has a multitude of options available to select from for similar products and services. Most consumers make their selection based upon a general perception of quality or value. Consumers typically want "the most bang for their buck". In order to remain competitive, organizations must determine what is driving the consumer's perception of value or quality in a product or service. They must define which characteristics of the product, such as reliability, styling, or performance, form the customer's perception of quality and value. Many successful organizations gather and integrate the Voice of the Customer (VOC) into the design and manufacture of their products. They actively design quality and customer-perceived value into their products and services. These companies use a structured process to define their customers' wants and needs and transform them into specific product designs and process plans to produce products that satisfy those needs. The process or tool they are using is called Quality Function Deployment (QFD).

What is Quality Function Deployment (QFD)?

Quality Function Deployment (QFD) is a process and set of tools used to effectively define customer requirements and convert them into detailed engineering specifications and plans to produce the products that fulfill those requirements. QFD is used to translate customer requirements (or VOC) into measurable design targets and drive them from the assembly level down through the sub-assembly, component, and production process levels. QFD methodology provides a defined set of matrices utilized to facilitate this progression.
QFD was first developed in Japan by Yoji Akao in the late 1960s while he was working for Mitsubishi's shipyard. It was later adopted by other companies including Toyota and its supply chain. In the early 1980s, QFD was introduced in the United States mainly by the big three automotive companies and a few electronics manufacturers. Acceptance and growth of the use of QFD in the US was initially rather slow, but it has since gained popularity and is currently being used in manufacturing, healthcare, and service organizations.

Why Implement Quality Function Deployment (QFD)?

Effective communication is one of the most important and impactful aspects of any organization's success. QFD methodology effectively communicates customer needs to multiple business operations throughout the organization, including design, quality, manufacturing, production, marketing, and sales. This effective communication of the Voice of the Customer allows the entire organization to work together and produce products with high levels of customer-perceived value. There are several additional benefits to using Quality Function Deployment:
Customer Focused: QFD methodology places the emphasis on the wants and needs of the customer, not on what the company may believe the customer wants. The Voice of the Customer is translated into technical design specifications. During the QFD process, design specifications are driven down from machine level to system, sub-system, and component level requirements. Finally, the design specifications are controlled throughout the production and assembly processes to assure the customer needs are met.
VOC Competitor Analysis: The QFD "House of Quality" tool allows for direct comparison of how your design or product stacks up to the competition in meeting the VOC. This quick analysis can be beneficial in making design decisions that could place you ahead of the pack.
Shorter Development Time and Lower Cost: QFD reduces the likelihood of late design changes by focusing on product features and improvements based on customer requirements. Effective QFD methodology prevents valuable project time and resources from being wasted on the development of non-value-added features or functions.
Structure and Documentation: QFD provides a structured method and tools for recording decisions made and lessons learned during the product development process. This knowledge base can serve as a historical record that can be utilized to aid future projects.
Companies must bring new and improved products to market that meet the customer's actual wants and needs while reducing development time. QFD methodology is for organizations committed to listening to the Voice of the Customer and meeting their needs.

How to Implement Quality Function Deployment (QFD)

The Quality Function Deployment methodology is a 4-phase process that encompasses activities throughout the product development cycle. A series of matrices are utilized at each phase to translate the Voice of the Customer to design requirements for each system, sub-system, and component. The four phases of QFD are:
Product Definition: The Product Definition Phase begins with collection of VOC and translating the customer wants and needs into product specifications. It may also involve a competitive analysis to evaluate how effectively the competitor's product fulfills the customer wants and needs. The initial design concept is based on the particular product performance requirements and specifications.
Product Development: During the Product Development Phase, the critical parts and assemblies are identified. The critical product characteristics are cascaded down and translated to critical or key part and assembly characteristics or specifications. The functional requirements or specifications are then defined for each functional level.
Process Development: During the Process Development Phase, the manufacturing and assembly processes are designed based on product and component specifications. The process flow is developed and the critical process characteristics are identified.
Process Quality Control: Prior to production launch, the QFD process identifies critical part and process characteristics. Process parameters are determined and appropriate process controls are developed and implemented. In addition, any inspection and test specifications are developed. Full production begins upon completion of process capability studies during the pilot build.
Effective use of QFD requires team participation and the discipline inherent in the practice of QFD, which has proven to be an excellent team-building experience.
Level 1 QFD

The House of Quality is an effective tool used to translate the customer wants and needs into product or service design characteristics utilizing a relationship matrix. It is usually the first matrix used in the QFD process. The House of Quality demonstrates the relationship between the customer wants, or "Whats", and the design parameters, or "Hows". The matrix is data intensive and allows the team to capture a large amount of information in one place. The matrix earned the name "House of Quality" due to its structure resembling that of a house. A cross-functional team possessing thorough knowledge of the product, the Voice of the Customer, and the company's capabilities should complete the matrix. The different sections of the matrix and a brief description of each are listed below:
"Whats": This is usually the first section to be completed. This column is where the VOC, or the wants and needs of the customer, are listed.
Importance Factor: The team should rate each of the functions based on their level of importance to the customer. In many cases, a scale of 1 to 5 is used, with 5 representing the highest level of importance.
"Hows" or Ceiling: Contains the design features and technical requirements the product will need to align with the VOC.
Body or Main Room: Within the main body or room of the House of Quality, the "Hows" are ranked according to their correlation with, or effectiveness in fulfilling, each of the "Whats". The ranking system used is a set of symbols indicating either a strong, moderate, or weak correlation. A blank box represents no correlation or influence on meeting the "What", or customer requirement. Each of the symbols represents a numerical value of 0, 1, 3, or 9.
Roof: This matrix is used to indicate how the design requirements interact with each other. The interrelationships are ratings that range from a strong positive interaction (++) to a strong negative interaction (--), with a blank box indicating no interrelationship.
Competitor Comparison: This section visualizes a comparison of the competitor's product in regard to fulfilling the "Whats". In many cases, a scale of 1 to 5 is used for the ranking, with 5 representing the highest level of customer satisfaction. This section should be completed using direct feedback from customer surveys or other means of data collection.
Relative Importance: This section contains the results of calculating the total of the sums of each column when multiplied by the importance factor (a small computational sketch follows this section). The numerical values are represented as discrete numbers or percentages of the total. The data is useful for ranking each of the "Hows" and determining where to allocate the most resources.
Lower Level / Foundation: This section lists more specific target values for technical specifications relating to the "Hows" used to satisfy the VOC.
Upon completion of the House of Quality, the technical requirements derived from
the VOC can then be deployed to the appropriate teams within the organization and
populated into the Level 2 QFDs for more detailed analysis. This is the first step
in driving the VOC throughout the product or process design process.
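The Relative Importance calculation referenced above can be sketched in a few lines. The "Whats", "Hows", importance factors, and correlation scores below are invented; the point is simply that each 0/1/3/9 correlation is multiplied by the customer importance factor and the products are summed down each column to rank the "Hows".

# Illustrative House of Quality relative-importance calculation (made-up data).
whats = {
    # "What": (importance factor 1-5, {"How": correlation score 0/1/3/9})
    "Easy to clean":  (5, {"Smooth housing": 9, "Sealed seams": 3, "Low weight": 0}),
    "Light to carry": (3, {"Smooth housing": 0, "Sealed seams": 0, "Low weight": 9}),
    "Durable":        (4, {"Smooth housing": 1, "Sealed seams": 9, "Low weight": 1}),
}

def relative_importance(matrix):
    totals = {}
    for importance, scores in matrix.values():
        for how, score in scores.items():
            totals[how] = totals.get(how, 0) + importance * score
    grand_total = sum(totals.values())
    # Report each "How" as an absolute total and as a percentage of the total.
    return {how: (value, round(100 * value / grand_total, 1))
            for how, value in totals.items()}

for how, (total, pct) in relative_importance(whats).items():
    print(f"{how}: {total} ({pct}%)")
# Smooth housing: 49, Sealed seams: 51, Low weight: 31 (out of a grand total of 131)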
Level 2 QFD

The Level 2 QFD matrix is used during the Design Development Phase. Using the Level 2 QFD, the team can discover which of the assemblies, systems, sub-systems, and components have the most impact on meeting the product design requirements and identify key design characteristics. The information produced from performing a Level 2 QFD is often used as a direct input to the Design Failure Mode and Effects Analysis (DFMEA) process. Level 2 QFDs may be developed at the following levels:
System Level: The technical specifications and functional requirements, or "Hows", identified and prioritized within the House of Quality become the "Whats" for the system level QFD. They are then evaluated according to which of the systems or assemblies they impact. Any systems deemed critical would then progress to a sub-system QFD.
Sub-system Level: The requirements cascaded down from the system level are redefined to align with how the sub-system contributes to the system meeting its functional requirements. This information then becomes the "Whats" for the QFD, and the components and other possible "Hows" are listed and ranked to determine the critical components. The components deemed critical would then require progression to a component level QFD.
Component Level: The component level QFD is extremely helpful in identifying the key and critical characteristics or features that can be detailed on the drawings. The key or critical characteristics then flow down into the Level 3 QFD activities for use in designing the process. For purchased components, this information is valuable for communicating key and critical characteristics to suppliers during sourcing negotiations and as an input to the Production Part Approval Process (PPAP) submission.
Level 3 QFD

The Level 3 QFD is used during the Process Development Phase, where we examine which of the processes or process steps have any correlation to meeting the component or part specifications. In the Level 3 QFD matrix, the "Whats" are the component part technical specifications and the "Hows" are the manufacturing processes or process steps involved in producing the part. The matrix highlights which of the processes or process steps have the most impact on meeting the part specifications. This information allows the production and quality teams to focus on the Critical to Quality (CTQ) processes, which flow down into the Level 4 QFD for further examination.
Level 4 QFD
The Level 4 QFD is not utilized as often as the previous three. Within the Level 4 QFD matrix, the team should list all the critical processes or process characteristics in the "Whats" column on the left and then determine the "Hows" for assuring quality parts are produced, listing them across the top of the matrix. Through ranking of the interactions of the "Whats" and the "Hows", the team can determine which controls could be most useful and develop quality targets for each. This information may also be used for creating Work Instructions, Inspection Sheets, or as an input to Control Plans.
The purpose of Quality Function Deployment is not to replace an organization's existing design process but rather to support and improve it. QFD methodology is a systematic, proven means of embedding the Voice of the Customer into both the design and production processes. QFD is a method of ensuring customer requirements are accurately translated into relevant technical specifications from product definition to product design, process development, and implementation. The fact is that every business, organization, and industry has customers. Meeting the customer's needs is critical to success. Implementing QFD methodology can enable you to drive the voice of your customers throughout your processes to increase your ability to satisfy or even excite your customers.

System synthesis
Synthesis is one of the key automation techniques for improving productivity and
developing efficient implementations from a design specification. Synthesis refers
to the creation of a detailed model or blueprint of the design from an abstract
specification, typically a software model of the design. Synthesis takes different
forms during different stages of the design process. In hardware system design,
several synthesis steps automate various parts of the design process. For instance,
physical synthesis automates the placement of transistors and the routing of
interconnects from a gate level description, which itself has been created by logic
synthesis from a register transfer level (RTL) model. The same principle of
translation from a higher-level model to a lower-level model applies to system
synthesis.

The third phase in systems engineering is design synthesis. Before design synthesis, all use cases are ranked according to hierarchy.

Roles:

Chief engineers, systems engineers

Required tasks:

Architectural analysis

Architectural design

Artifacts:

Internal block diagram


Block definition diagram

During design
synthesis, you develop a physical architecture that can perform the functions that
you derived in the functional analysis phase. You also account for performance
constraints as you develop the physical architecture.

When you perform system architectural analysis, you merge realized use cases into an integrated architecture analysis project. This task is often based on a trade study
pertinent to the system you intend to design. During architectural analysis, use
cases are not mapped to functional interfaces. Instead, you take a black box
approach to examine functional entities and determine reuse of those entities.
After you examine functional entities, you can allocate the logical architecture
into a physical architecture.

You can use a white box activity diagram to allocate use cases to a physical or
logical architecture. Typically, this diagram is derived from a black box activity
diagram. The white box activity diagram is partitioned into swim lanes, which show
the hierarchical structure of the architecture. Then you can move system-level
operations into an appropriate swim lane.

You can use subsystem white box scenarios to decompose system blocks to the lowest level of functional allocation. At that level, you specify the
operations to be implemented in both the hardware and software of your system. You
can derive subsystem logical interfaces from white box sequence diagrams. The
interfaces belong to the blocks at the lowest level of your system.

Then, you can
define subsystem behavior, also known as leaf block behavior for each lowest level
of decomposition in your system. This type of derived behavior is the physical
implementation of decomposed subsystems and is shown in a state chart diagram.
Model execution of leaf block behavior is performed on both the leaf block behavior
itself, as well as the interaction between each of the decomposed subsystems.

Approaches for generation of alternatives.

A problem essentially means an area of decision making.

After understanding the situation thoroughly and realizing the need for action, a
manager may find the problem solving approach useful to devise action programmes.
The problem solving approach involves problem definition and identification of
decision area, generating decision making alternatives, and specifying criteria for
selection, assessing alternatives and the optimal selection, and developing an
action plan for implementation, including a contingency plan.

Defining the problem

Problem definition is one of the most crucial steps in the problem solving
approach. A wrong definition of the problem would not only fail to resolve the
issues involved but could also lead to more complicated problems. The following
steps have been found to be useful in defining problems:
Step 1: List all concerns (symptoms), particularly from the point of view of the decision-maker in the situation (i.e., the answer to 'Who?' and 'What?' of the situational analysis).

Step 2: Diagnose (from the answers to 'How?' and 'Why?') the concerns in order to establish real causes.

Step 3: Establish decision (problem) areas, and prioritize them in order of importance.

Step 4: Evaluate - if appropriate decisions are taken in these areas - whether the overall situation would improve, particularly from the decision-maker's point of view.

A knowledge of the problems encountered in similar organizations would be helpful in this exercise. Besides this, holistic as well as logical thinking would significantly help in understanding the nature of problems, their categorization into long or short term, and in prioritization.

Generating alternatives

Having identified the problem, the decision-maker needs to generate appropriate alternatives for resolving the problem. An understanding of organizational and external constraints as well as organizational resources helps in identifying the range of feasible action alternatives open to the decision-maker. A proper assessment of what is possible helps them to rule out infeasible options. Sometimes the alternatives for resolving different problems are obvious. However, more often than not, there could be a real possibility of generating comprehensive alternatives, which could address more than one problem area while respecting differing points of view. The next step, after generating alternatives, would be to rank them, before actually evaluating them. The decision-maker should check whether the alternatives generated cover the entire range (collectively and exhaustively) available, and whether each is distinct from the other (mutually exclusive).

The skills which could help in discovering alternatives would be holistic and
logical thinking to comprehend the situation, as well as creative skills in
generating the options which fit the situation. Knowledge of both the internal and
external environments of the organization and the subject matter pertinent to the
problem (human relations, how scientists can be motivated, etc.) would also help in
arriving at better alternatives.

Specifying criteria

The ultimate purpose of developing and specifying criteria is to evaluate alternatives and select the best one for resolving the problem. Criteria are developed from a proper understanding of the situation and the inherent goals, objectives and purposes of the organization and the decision-maker involved in the situation. They would also be influenced by the goals, objectives and purposes of other individuals, departments and organizations connected with the situation. Criteria could be economic, social or personal. For effective use, criteria should be specific and measurable through quantification or other means. They should also be prioritized to assist proper selection among alternatives.

The skills needed for improving the ability to specify criteria are basically two:

holistic skills, for identifying broader aims, goals, objectives and purposes in a situation, and
logical reasoning, for deducing the specific criteria and their prioritization from such higher-order considerations.

Evaluation and decision

Alternatives need to be evaluated against the specified criteria in order to resolve the problem. Also, the outcome of choosing any alternative is not known with certainty. Usually, any one alternative would not be uniformly superior by all criteria. As such, prioritization of criteria could help in identifying the best alternative. The decision-maker might explicitly consider trade-offs between alternatives in order to select the best. Assessments of alternatives among the criteria need to be made, given partial and limited information about the possible outcomes of the alternatives. A final check may yet be needed to see whether adoption of the best assessed option is:

consistent with the requirements of the situation, bearing in mind the uncertainty
involved,
implementable, and
convincing to others involved.

The skills needed for improving this phase would thus be the ability to analyse
logically, the ability to infer implications based on incomplete information and
uncertainty, and the skill to convince others about the decision taken so as to
obtain approval or help in proper implementation, or both.
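Where the criteria have been prioritized and made measurable, the comparison can be organized as a simple weighted-scoring matrix. The sketch below uses made-up criteria weights and 1-10 scores purely for illustration; it supplements, rather than replaces, the trade-off reasoning described above.

# Illustrative weighted-scoring comparison of alternatives (hypothetical data).
weights = {"cost": 0.40, "performance": 0.35, "risk": 0.25}   # prioritized criteria

alternatives = {
    "Alternative A": {"cost": 7, "performance": 6, "risk": 8},
    "Alternative B": {"cost": 5, "performance": 9, "risk": 6},
    "Alternative C": {"cost": 8, "performance": 5, "risk": 7},
}

def weighted_score(scores, weights):
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

# Rank alternatives by total weighted score, highest first.
ranked = sorted(alternatives.items(),
                key=lambda item: weighted_score(item[1], weights),
                reverse=True)

for name, scores in ranked:
    print(f"{name}: {weighted_score(scores, weights):.2f}")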

Developing an action plan

Once the alternatives are developed, an action plan has to be developed. This is
essentially the implementation phase. In this phase, the decision-maker needs to
decide who would do what, where, when, how, etc. The process of arriving at these
decisions is just like the steps involved in the problem solving approach, except
that the chosen alternative becomes an input to this step. This phase would require
coordination skills to properly organize a variety of resources (human, material
and fiscal) and develop a time-phased programme for implementation.

Feedback and contingency planning

For a variety of reasons, the original decision (chosen alternative) may not work
well and the decision-maker may have to be ready with a contingency plan. This
implies devising feedback mechanisms allowing monitoring of the status of the
situation, including results of the action plan. It also implies anticipating the
most likely points of failure and devising appropriate contingency plans to handle
the possible failures.
The additional skills required in this step would be those of devising control and
feedback mechanisms.

UNIT III
ANALYSIS OF ALTERNATIVES - I
Cross-impact analysis, Structural modeling tools, System Dynamics models with case studies, Economic models: present value analysis - NPV, benefits and costs over time, ROI, IRR; Work and Cost breakdown structure.
1.2 What is an AoA?
As defined in the A5R Guidebook, the AoA is an analytical comparison of the
operational effectiveness, suitability, risk, and life cycle cost of alternatives
under consideration to satisfy validated capability needs (usually stipulated in an
approved ICD). Other definitions of an AoA can be found in various official
documents. The following are examples from DoDI 5000.02 and the Defense Acquisition
Guidebook:
- The AoA assesses potential materiel solutions that could satisfy validated
capability requirement(s) documented in the Initial Capabilities Document, and
supports a decision on the most cost effective solution to meeting the validated
capability requirement(s). In developing feasible alternatives, the AoA will
identify a wide range of solutions that have a reasonable likelihood of providing
the needed capability.
- An AoA is an analytical comparison of the operational effectiveness, suitability,
and life cycle cost (or total ownership cost, if applicable) of alternatives that
satisfy established capability needs.
Though the definitions vary slightly, they all generally describe the AoA as a
study that is used to assess alternatives that have the potential to address
capability needs or requirements that are documented in a validated or approved
capability requirements document. The information provided in an AoA helps decision
makers select courses of action to satisfy an operational capability need.
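One recurring ingredient of such comparisons is life cycle cost expressed on a present-value basis, which also connects to the present value analysis, ROI, and IRR topics listed for this unit. The sketch below uses made-up cost streams and an assumed discount rate to show the basic net present value arithmetic for two hypothetical alternatives; it is an illustration of the calculation only, not a prescribed AoA cost model.

# Illustrative present-value comparison of alternative cost streams (made-up data).
def npv(cash_flows, rate):
    # Present value of a list of yearly amounts, year 0 first:
    # NPV = sum over years t of cost_t / (1 + rate)**t.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

discount_rate = 0.07   # assumed discount rate, for illustration only

# Year-by-year life cycle costs: acquisition in year 0, then operating costs.
alternative_costs = {
    "Alternative A": [100, 20, 20, 20, 20, 20],
    "Alternative B": [60, 35, 35, 35, 35, 35],
}

for name, costs in alternative_costs.items():
    print(f"{name}: present-value cost = {npv(costs, discount_rate):.1f}")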
1.3 What is the Purpose of the AoA?
According to the A5R Guidebook, the purpose of the AoA is to help decision-makers
understand the tradespace for new materiel solutions to satisfy an operational
capability need, while providing the analytic basis for performance attributes
documented in follow-on JCIDS documents. The AoA provides decision-quality analysis
and results to inform the Milestone Decision Authority (MDA) and other stakeholders
at the next milestone or decision point. In short, the AoA must provide compelling
evidence of the capabilities and military worth of the alternatives. The results
should enable decision makers to discuss the appropriate cost, schedule,
performance, and risk tradeoffs and assess the operational capabilities and
affordability of the alternatives assessed in the study. The AoA results help
decision makers shape and scope the courses of action for new materiel solutions to
satisfy operational capability needs and the Request for Proposal (RFP) for the
next acquisition phase. Furthermore, AoAs provide the foundation for the
development of documents required later in the acquisition cycle such as the
Acquisition Strategy, Test and Evaluation Master Plan (TEMP), and Systems
Engineering Plan (SEP).
The AoA should also provide recommended changes, as needed, to validated capability
requirements that appear unachievable or undesirable from a cost, schedule,
performance, or risk point of view. It is important to note that the AoA provides
the analytic basis for performance parameters documented in the appropriate
requirements documents (e.g., AF Form 1067, Joint DOTmLPF-P Change Request (DCR),
AF-only DCR, Draft Capability Development Document (CDD), Final CDD, or Capability
Production Document (CPD)).
1.4 When is the AoA Conducted?
As noted earlier, the AoA is an important element of both the capability
development and acquisition processes. As presented in the A5R Guidebook, Figure 1-1 highlights where the AoA is conducted in these processes. The capability development phases are shown across the top of the figure, while the lower right of the figure illustrates the acquisition phases, decision points, and milestones. In accordance with the Weapon Systems Acquisition Reform Act (WSARA) of 2009, DoDI 5000.02, and the A5R Guidebook, for all ACAT initiatives, the AoA is typically conducted during the Materiel Solution Analysis (MSA) phase. Follow-on AoAs, however, may be conducted later during the Technology Maturation & Risk Reduction and the Engineering & Manufacturing Development phases.
Cross-Impact Analysis
Cross-impact analysis, also known as cross-impact matrix or cross-impact balance
analysis, is a method used in systems thinking and scenario planning to explore the
potential interactions between different factors or variables in a complex system.
It is a tool for assessing the interdependencies and feedback loops between
different elements within a system.
The basic idea behind cross-impact analysis is to analyze how changes in one
variable can affect other variables in the system and vice versa. By understanding
these interconnections, it becomes possible to identify potential consequences,
unintended effects, and critical relationships within the system.
Here's how cross-impact analysis typically works:
Identify factors: The first step is to identify the relevant factors or variables
that influence the system being studied. These factors can be social, economic,
technological, environmental, political, or any other relevant aspect of the
system.
Construct a cross-impact matrix: A cross-impact matrix is created by systematically
evaluating how each factor influences or impacts the others. The matrix is usually
filled with qualitative judgments or expert opinions regarding the strength and
direction of the impact. The interactions are usually rated using a scale (e.g.,
strong positive, weak positive, neutral, weak negative, strong negative).
Analyze the matrix: Once the cross-impact matrix is completed, it can be analyzed
to identify the most critical relationships and potential feedback loops within the
system. Some variables may have significant impacts on many other variables, making
them central to the system's behavior.
Scenario development: Cross-impact analysis can be used to develop scenarios that
explore different future states of the system. By combining the interactions
identified in the matrix with different initial conditions or assumptions, multiple
scenarios can be constructed to understand the range of possible outcomes.
Policy implications: The analysis helps decision-makers understand the implications
of different policies or actions on the system. By identifying critical
relationships, decision-makers can focus on leveraging positive interactions and
mitigating negative ones.
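A minimal numerical sketch of steps 2 and 3 follows. The factors and impact ratings are invented, and the row and column sums of absolute ratings (how strongly a factor drives the others versus how strongly it is driven by them) are just one first-order way to flag the most active and most passive variables before deeper scenario work.

# Illustrative cross-impact matrix analysis (made-up factors and ratings).
# impact[i][j] = judged influence of factor i on factor j, on a -2..+2 scale.
factors = ["Fuel price", "EV adoption", "Grid capacity", "Battery cost"]

impact = [
    #  Fuel  EV   Grid  Batt
    [   0,   +2,   0,    0],   # Fuel price
    [   0,    0,  +2,   -1],   # EV adoption
    [   0,   +1,   0,    0],   # Grid capacity
    [   0,   +2,   0,    0],   # Battery cost
]

def activity_and_passivity(matrix):
    # Row sums of |impact| show how strongly each factor drives the system;
    # column sums of |impact| show how strongly it is driven by the system.
    n = len(matrix)
    active = [sum(abs(matrix[i][j]) for j in range(n)) for i in range(n)]
    passive = [sum(abs(matrix[i][j]) for i in range(n)) for j in range(n)]
    return active, passive

active, passive = activity_and_passivity(impact)
for name, drives, driven in zip(factors, active, passive):
    print(f"{name}: drives={drives}, driven={driven}")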
Cross-impact analysis is a valuable tool for understanding complex systems and
making informed decisions in scenarios where various factors interact in intricate
ways. It is commonly used in fields such as strategic planning, environmental
assessment, technology foresight, and risk analysis. However, it is important to
note that the accuracy and reliability of the analysis depend heavily on the
quality of data and expert judgment used to construct the cross-impact matrix.
4.1 Introduction
In today's competitive business environment, characterized by globalization, short product life cycles, open systems architecture, and diverse customer preferences, many managerial innovations have been developed, such as just-in-time inventory management, total quality management, Six Sigma quality, customer-supplier partnership, business process reengineering, and supply chain integration. Value improvement of services based on value engineering and the systems approach (Miles, 1984) is also considered a method of managerial innovation. It is indispensable for corporations to expedite the value improvement of services and to provide fine products that satisfy the required functions at reasonable cost.
This chapter provides a performance measurement system (PMS) for the value
improvement of services, which is considered an ill-defined problem with
uncertainty
(Terano, 1985). To recognize a phenomenon as a problem and then solve it, it will
be necessary
to grasp the essence (real substance) of the problem. In particular, for the value
improvement problems discussed in this chapter, they can be defined as complicated,
ill-defined problems since uncertainty in the views and experiences of decision
makers,
called "fuzziness," is present.
Building the method involves the following processes: (a) selecting measures and
building a system recognition process for management problems, and (b) providing
the
performance measurement system for the value improvement of services based on the
system
recognition process. We call (a) and (b) the PMS design process, also considered a
core
decision-making process, because in the design process strategy and vision are interpreted, articulated, and translated into a set of qualitative and/or quantitative measures under the "means to purpose" relationship.
We propose in this chapter a system recognition process that is based on system
definition,
system analysis, and system synthesis to clarify the essence of the ill-defined
problem.
Further, we propose and examine a PMS based on the system recognition process
as a value improvement method for services, in which the system recognition process
reflects the views of decision makers and enables one to compute the value indices
for
the resources. In the proposed system, we apply the fuzzy structural modeling for
building
the structural model of PMS. We introduce the fuzzy Choquet integral to obtain the
total value index for services by drawing an inference for individual linkages
between the
scores of PMS, logically and analytically. In consequence, the system we suggest
provides
decision makers with a mechanism to incorporate subjective understanding or insight
about the evaluation process, and also offers a flexible support for changes in the
business
environment or organizational structure.
A practical example is illustrated to show how the system works, and its
effectiveness
is examined.
4.2 System recognition process
Management systems are considered to cover large-scale, complicated problems.
However, for a decision maker, it is difficult to know where to start solving ill-
defined
problems involving uncertainty.
In general, problems are classified broadly into two categories. One is a problem with preferable conditions, the so-called well-defined problem (structured or programmable), which has an appropriate algorithm to solve it. The other is a problem with non-preferable conditions, the so-called ill-defined problem (unstructured or nonprogrammable), which may not have an existing algorithm to solve it, or there may be only a partial
algorithm. Problems involving human decision making or large-scale problems with a
complicated nature are applicable to that case. Therefore, uncertainties such as
fuzziness (ambiguity in decision making) and randomness (uncertainty of the
probability of an event) characterize the ill-defined problem.
In this chapter, the definition of management problems is extended to
semistructured
and/or unstructured decision-making problems (Simon, 1977; Anthony, 1965; Gorry and
Morton, 1971; Sprague and Carlson, 1982). It is extremely important to consider the
way to
recognize the essence of an "object" when necessary to solve some problems in the
fields
of social science, cultural science, natural science, etc.
This section will give a systems approach to the problem to find a preliminary way
to
propose the PMS for value improvement of services. In this approach, the three
steps taken
in natural recognition pointed out by Taketani (1968) are generally applied to the
process of
recognition development. These steps (phenomenal, substantial, and essential) regarding system recognition are the necessary processes to go through to recognize the object.
With the definitions and the concept of systems thinking, a conceptual diagram of
system recognition can be described as in Figure 4.1. The conceptual diagram of
system recognition will play an important role to the practical design and
development of the value improvement system for services. Phase 1, phase 2, and
phase 3 in Figure 4.1 correspond to the respective three steps of natural
recognition described above. At the phenomenal stage (phase 1), we assume that
there exists a management system as an object; for example, suppose a management
problem concerning

management strategy, human resource, etc., and then extract the characteristics of
the problem. Then, in the substantial stage, we may recognize the characteristics
of the problem as available information, which are extracted at the previous step,
and we perform systems analysis to clarify the elements, objective, constraints,
goal, plan, policy, principle, etc., concerning the problem. Next, the objective of
the problem is optimized subject to constraints arising from the viewpoint of
systems synthesis so that the optimal management system can be obtained. The result
of the optimization process, as feedback information, may be returned to phase 1 if
necessary, comparing with the phenomena at stage 1. The decision maker examines
whether the result will meet the management system he conceives in his mind (mental
model). If the result meets the management system conceived in the phenomenal
stage, it becomes the optimal management system and proceeds to the essential stage
(phase 3). The essential stage is considered a step to recognize the basic laws
(Rules) and principles residing in the object. Otherwise, going back to the
substantial stage becomes necessary, and the procedure is continued until the
optimal system is obtained.

4.3 PMS for value improvement of services


A PMS should act flexibly in compliance with changes in social and/or business
environments. In this section, a PMS for the value improvement of services is
suggested as shown
in Figure 4.2. At stage A, the algorithm starts at the initial stage, termed
structural modeling, in
which each model of the function and the cost with respect to services is built up
in its own way through the processes encircled with the dotted line in Figure 4.1.
For obtaining a concrete model for every individual case, we apply the fuzzy structural modeling method (FSM) (Tazaki and Amagasa, 1979; Amagasa, 2004) to depict an intuitive graphical hierarchy with well-preserved contextual relations among the measured elements. In FSM, a binary fuzzy relation within the closed interval [0, 1], based on fuzzy sets (Zadeh, 1965), is used to represent the subordination relations among the elements; it relaxes the transitivity constraint, in contrast to ISM (Interpretive Structural Modeling) (Warfield et al., 1975) or DEMATEL (Decision Making Trial and Evaluation Laboratory) (Gabus and Fontela, 1975). The major advantage of these methods lies in the intuitive appeal of the graphical picture to decision makers. First, the decision makers' mental model (imagination) about the given problem, which is the value improvement of services, is embedded in a subordination matrix and then reflected in a structural model.
Here, the measured elements are identified by methods such as nominal group
techniques (NGT) (Delbecq et al., 1975, 1995), survey with questionnaire, or
interview depending on the operational conditions. Thus, we may apply NGT in
extracting the measured elements composing the service value and regulating them,
clarifying the measurement elements and the attributes. Then, the contextual
relations among the elements are examined and represented on the assumption of
"means to purpose." The hierarchy of the measurement system is constructed and
regarded as an interpretative
structural model. Furthermore, to compare the structural model with the mental
model, a feedback for learning will be performed by group members (decision
makers). If an agreement among the decision makers is obtained, then the process
proceeds to the next stage, and the result is considered to be the outcome of stage
A. Otherwise, the modeling process restarts from the embedding process or from
drawing out and representing the measurement elements process. Then, the process
may continue to make progress in the same way as illustrated in Figure 4.2 until a
structural model with some consent is obtained. Thus, we obtain the models of the
function and the cost for services as the outcomes of stage A, which are useful for
applying to the value improvement of services. Further, we extract and regulate the
functions used to perform the value improvement of services by making use of the
NGT method described above.

4.3.1 Structural model of functions composing customer satisfaction


We provide, as shown in Figure 4.3, an example of a structural model of functions showing the relations between the elements (functions) used to find the value of services, which is identified by making use of FSM. In this example, customer satisfaction consists of a set of service functions such as "employee's behavior," "management of a store," "providing customers with information," "response to customers," "exchange of information," and "delivery service." In addition, for each function, "employee's behavior" is described by functions such as "ability to explain products," "telephone manner," and "attitude toward customers." For "management of stores," "sanitation control of stores," "merchandise control," and "dealing with elderly and disabled persons" are enumerated. "Providing customers with information" includes "campaign information," "information about new products," and "announcement of emergencies." "Response to customers" consists of "cashier's speed," "use of credit cards," "discount for a point card system," and "settlement of complaints." In "exchange of information," "communication among staff members," "contact with business acquaintances," and "information exchange with customers" are included. Finally, "delivery service" contains some functions of "set delivery charges," "delivery speed," and "arrival conditions."

4.3.2 Structural model of resources composing cost


Resources (subcosts) composing the cost are also extracted and regulated with the NGT method. An example is illustrated in Figure 4.4 to show the structural model with some resources (subcosts) constituting the cost that is used to offer services in this chapter. Resource (cost) consists of "human resources," "material resources," "financial resources," and "information resources," each of which is also identified by using FSM in the same way as the customer satisfaction model was identified. Furthermore, costs relevant to human resources consist of "employees' salaries," "cost of study training for work," and "employment of new graduates/mid-career workers." "Material resources" contain some subcosts such as "buying cost of products," "rent and utilities," and "depreciation and amortization." "Financial resources" consists of subcosts that are "interest payments," "expenses incurred in raising funds," and "expenses incurred for a meeting of stockholders." Subcosts for "information resources" are "communication expenses," "expenses for PR," and "costs for installation of a system." With the structural models of customer satisfaction and the resources (costs) mentioned above, we evaluate the value indices of services. At stage B shown in Figure 4.2, the value indices for the use of the four resources, which consist of human resources (R1), material resources (R2), financial resources (R3), and information resources (R4), are evaluated on the basis of the structural models identified at stage A
to perform the value improvement of services. The weights can be computed by using
the Frobenius theorem or the ratio approach with transitive law (Furuya, 1957;
Amagasa and Cui, 2009). In this chapter, we use the ratio approach to compute the
weights of the function and the cost in the structural models shown in Figures 4.3
and 4.4, and their weights are also used in multi-attribute decision making.

4.3.2.1 Ratio method

The importance degrees of service functions are computed by using the ratio between
the
functions as follows:
Let F be a matrix determined by paired comparisons among the functions. Assume that the reflexive law is not satisfied in F, and that only each element fi,i+1 (i = 1, 2, ..., n - 1) of the matrix is given as an evaluation value, where 0 ≤ fi,i+1 ≤ 1 and fi+1,i satisfies the relation fi+1,i = 1 - fi,i+1 (i = 1, 2, ..., k, ..., n - 1). Then, the weight vector E (= {Ei, i = 1, 2, ..., n}) of the functions (Fi, i = 1, 2, ..., n) can be found as below.

We apply the formulas mentioned above to find the weights of functions used. Then,
the matrix is constituted with paired comparisons by decision makers (specialists)
who
take part in the value improvement of services in the corporation. Figure 4.5 shows
stages
B and C of the PMS.
(1) Importance degree of functions composing customer satisfaction
Suppose, in this chapter, that the functions composing customer satisfaction are
extracted and regulated as a set as follows:
F = {Fi, i = 1, 2, ..., 6}
  = {Employee's behavior, Management of a store, Providing customers with information, Response to customers, Exchange of information, Delivery service}
Improvement of customer satisfaction becomes a main purpose of corporate management, and the Fi (i = 1, 2, ..., 6) are respectively defined as the functions used to achieve customer satisfaction.

Then, for example, let each cell of the matrix be filled in, intuitively and empirically, in a paired-comparison manner, with values given by the ratio method and taking into consideration the knowledge and/or experience of the decision makers (specialists):
Also, assume that as an evaluation standard to apply paired comparison, we
specify five different degrees of grade based on the membership functions.
Not important: [0.0, 0.2)
Not so important: [0.2, 0.4)
Important: [0.4, 0.6)
Very important: [0.6, 0.8)
Critically important: [0.8, 1.0)
For instance, if Fi is believed to be critically more important than Fj, the
decision
makers may make an entry of 0.9 in Fij. Each value is empirically given by the
decision makers (or specialists) who have their experiences and knowledge, with
the know-how for value improvement. As a result, the values yielded by the ratio
method are recognized as weights for the functions.
Thus, the weight vector E of the functions (Fi, i = 1, 2, ..., 6) is obtained as follows:
E = {0.046, 0.012, 0.017, 0.040, 0.027, 0.007}
Further, E can be standardized so that the weights sum to one:
E = {0.31, 0.08, 0.11, 0.27, 0.18, 0.05}
a. Importance degrees of constituent elements of "employee's behavior (F1)"
i. As it is clear from the structural model of the customer satisfaction shown
earlier in Figure 4.3, F1 consists of all subfunctions F1i (i = 1, 2, 3).
ii. Here, we compute the importance degrees of {F1i, i = 1, 2, 3} by the ratio
method
in the same way as F1 was obtained.
b. Importance degrees of subfunctions of "employee's behavior (F1)"

4.3.3 Computing for value indices of four resources

In general, the value index of an object in value engineering is defined by the following formula:

Value index = satisfaction of necessity / use of resources (4.2)

The value index is interpreted as showing the degree of satisfaction of necessity that is brought about by the resources when they are utilized. On the basis of this formula, in this study we define the value of services composed of the four resources as below:

Value of services = function of services / cost of services (4.3)

Therefore, the value index, which is based on the importance degree and the cost of each resource used to provide services, is obtained.

At stage C, the multi-attribute decision-making method (MADM) based on the Choquet integral (Grabisch, 1995; Modave and Grabisch, 1998) can be introduced, and a total value index of services (service value) is found by integrating the value indices of the human, material, financial, and information resources. Let Xi (i = 1, 2) be fuzzy sets on the universe of discourse X. Then the λ-fuzzy measure g of the union of these fuzzy sets, X1 ∪ X2, can be defined as follows:

g(X1 ∪ X2) = g(X1) + g(X2) + λ g(X1) g(X2)

where λ is a parameter with values −1 < λ < ∞; note that g becomes identical to a probability measure when λ = 0. Here, since it is assumed that, when the assessment of a corporation is considered, the correlations between the factors are usually negligible, the fuzzy sets X1 and X2 are taken to be independent, that is, λ = 0. Then, the total value index of services is expressed as in Equation 4.5; with λ = 0, the Choquet integral reduces to the weighted sum

Total value index of services = w1V1 + w2V2 + w3V3 + w4V4 (4.5)

where wi (0 ≤ wi ≤ 1; i = 1, 2, 3, 4) are weights for the respective resources and Vi is the value index of resource Ri.


At stage D, if the integrated evaluation value is examined and its validity is
shown, the
process goes to the final stage (stage E).
At stage E, the integrated value indices of services computed in the previous step are ranked using the fuzzy outranking method (Roy, 1991; Siskos and Oudiz, 1986), and the graphic structure of value control (Amagasa, 1986) is drawn. Then the process terminates. In this study, each of the value indices of services is represented in the graphic structure of value control.
4.4 Simulation for value improvement system of services
In this section, we carry out a simulation of the procedure to perform the value
improvement
system of services and examine the effectiveness of the proposed value improvement
system.
Here, as a specific services trade, we take up a fictitious household appliance store, DD Company. This store is a representative example of providing "a thing and services" to customers. The store sells "things" such as household electrical appliances, which are essential necessities of life and commercial items used in everyday life. In addition, it supplies customer services when customers purchase the "thing" itself.
DD
Company was established in 1947 and the capital is 19,294 million yen, the yearly
turnover
is 275,900 million yen, total assets are worth 144,795 million yen, the number of
the
stores is 703 (the number of franchise stores is 582 on March 31, 2007), and the
number
of employees is 3401. The store is well known to customers because it differentiates itself from other companies through a management technique designed around a customer-oriented style that pursues customer satisfaction. For example,
salespersons have sufficient knowledge about the products they sell and give
suitable
advice and suggestions according to customers� requirements, which often happens on
the sales floor. We conducted a survey for DD Company. The simulation was based on
the results of a questionnaire survey and performed by applying the PMS for the
value
improvement of services shown in Figure 4.2.
4.4.1 Stage A: Structural modeling
Figures 4.3 and 4.4 show the structural models with respect to the functions
composing
customer satisfaction, and the cost showing the use of resources.
4.4.2 Stage B: Weighting and evaluating
Table 4.2 shows the importance degrees of resources for functions of customer
satisfaction,
which is obtained by consensus among decision makers (specialists) with know-how
deeply related to the value improvement of services.
Table 4.4 shows the amounts distributed to the four resources and the actual ratios with which those resources are used to attain customer satisfaction with respect to the six functions.
From this, each of the value indices with respect to the respective resources used
to supply
customer services, for which human resources, material resources, financial
resources,
and information resources are considered, is obtained by using Tables 4.1 through
4.4 and
Equation 4.4.
(1) Value index of human resources = 45.64/42 (= 1.1)
(2) Value index of material resources = 4.08/28 (= 0.14)
(3) Value index of financial resources = 13.19/12 (= 1.08)
(4) Value index of information resources = 36.37/18 (= 2)
From the value indices for the resources mentioned above, the chart of the value control graphic structure is depicted as shown in Figure 4.6. Thus, the following results with respect to the value improvement of services, from the viewpoints of function and cost, can be ascertained from Figure 4.6.
(1) In this corporation, there is no need to perform value improvement for human resources, financial resources, or information resources, because these three of the four resources lie below the curved line, implying a good balance between the cost and the function of services.
(2) For material resources, it will be necessary to exert all possible efforts toward the value improvement of this resource, because its value index is 0.14, which is much smaller than 1.00.
(3) On the whole, the total value index of services comes to 1.23, as shown below, so that the value indices for the four resources fall within the optimal zone of the chart of the value control graphic structure shown in Figure 4.6. Therefore, it could be concluded that the corporation may not have to improve the overall value of the services of its organization.
4.4.3 Stage C: Integrating (value indices)
At the integrating stage, MADM based on the Choquet integral (Grabisch, 1995; Modave and Grabisch, 1998) can be introduced for the value improvement of services, and the
total value
index of services is obtained by integrating the value indices of the four
resources as follows:

As a result of the simulation, the value of the services of DD Company is at a considerably high level, because the total value index is 1.23 (>1.00), which belongs to the optimal region.
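A minimal sketch of the arithmetic behind stages B and C is shown below: each value index is the ratio of the function score to the cost (Equation 4.3), and, because λ = 0, the Choquet-integral aggregation reduces to a weighted sum. The function/cost figures are those quoted above; the weights are illustrative placeholders, not the ones used in the study, so the resulting total will generally differ from the chapter's 1.23.

# Minimal sketch of stages B and C: per-resource value indices and their
# weighted aggregate (the form the Choquet integral takes when lambda = 0).
# Function/cost figures are those quoted in the chapter; weights are assumed.

resources = {
    # resource: (function score, cost)
    "human":       (45.64, 42.0),
    "material":    (4.08,  28.0),
    "financial":   (13.19, 12.0),
    "information": (36.37, 18.0),
}
weights = {"human": 0.25, "material": 0.25, "financial": 0.25, "information": 0.25}

value_index = {r: f / c for r, (f, c) in resources.items()}
total = sum(weights[r] * value_index[r] for r in resources)

for r, v in value_index.items():
    print(f"value index of {r:12s} = {v:.2f}")
print(f"weighted total value index = {total:.2f}")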
Regarding DD Company, Nikkei Business reported that the store scored high in its evaluation. The store advocates that customers "have trust and satisfaction with buying the products all the time." Also, based on this management philosophy, the store supplies "attractive goods" at "a reasonable price" as well as "superior services" as a household appliance retail trade. Furthermore, the store realizes a customer-oriented and community-oriented business, and supplies smooth services reflecting area features and scale advantages by controlling the total stock across the whole group. From this, it can be said that the validity of the proposed method was verified by the result of this simulation experiment, which corresponds to the high assessment of DD Company by Nikkei Business, as described above.
4.5 Conclusion
It is important for administrative action to pursue the profit of a corporation by making effective use of the four resources: capable persons, materials, capital, and information. In addition, in the services trade it is important to have each employee attach great importance to services, willingly improve service quality, and thus enhance the degree of customer satisfaction. These surely promise to bring about profit improvement for the corporation.
We proposed in this chapter a system recognition process that is based on system definition, system analysis, and system synthesis, clarifying the "essence" of an ill-defined problem. Further, we suggested and examined the PMS as a method for the value improvement of services, in which the system recognition process reflects the views of decision makers and enables one to compute the effective service scores. As an illustrative example, we took up the evaluation problem of a household appliance store from the viewpoint of service functions, and came up with a new value improvement method by which the value indices of services are computed. To verify the effectiveness of the new method we suggested, we performed a questionnaire survey about the service functions of the household appliance store. As a result, it was determined that the proposed method is significant for the value improvement of services in corporations. Finally, the soundness of this system was verified by the result of this simulation. With this procedure, it is possible to build a PMS for services that is based on realities. This part of the study remains a future subject.
30.3 Investment analysis
There are various tools and techniques available for investment analysis. All of
those have
their own advantages and disadvantages. Some of the tools are very simple to use
while some
of them are not applicable to all types of investment analysis. This section briefly discusses two simple tools that are widely used in investment analysis. They are net
present
value (NPV) analysis and internal rate of return (IRR) or simply rate of return
(ROR) analysis.
30.3.1 Net present value analysis
The NPV criterion relies on the sound logic that economic decisions should be made
on
the basis of costs and benefits pertaining to the time of decision making. We can
apply our
best knowledge to forecast the future of investments and convert those into present
value
to compare. Most decisions of life are made in the present assuming that the future
will be
unchanged. Economic decisions should be no exception. In the NPV method, the
economic
data are analyzed through the "window" of the present.
In engineering projects, cash flows are determined in the future in terms of costs
and
benefits. Cost amounts are considered negative cash flows, whereas benefit amounts
are
considered positive cash flows. Once all the present and future cash flows are
identified
for a project, they are converted into present value. This conversion is necessary
due to the
time value of money.
The NPV method of solving engineering economics problems involves determination
of the present worth of the project or its alternatives. Using the criterion of
NPV, the better
of the two projects, or the best of the three or more projects, is selected. Thus,
the application
of this method involves the following tasks:
1. Identify all cash flows pertaining to the project(s); in most cases, this information is furnished by the investment experts.
2. Evaluate the NPV of the project(s).
3. Decide on the basis of the NPV criterion, according to which the project with the highest positive NPV is selected.
Figure 30.1 depicts the acceptable and unacceptable regions of project selection based on the NPV method. A project with a lower NPV may nevertheless be more attractive to investors if other project selection criteria are included in the decision making.
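The following Python sketch illustrates the NPV tasks listed above for two hypothetical projects; the cash flows and the 10% discount rate are illustrative assumptions, not data from this chapter.

# Minimal NPV sketch: a negative outlay at time 0 followed by yearly net
# benefits, discounted back to the present at rate i per period.

def npv(rate, cash_flows):
    """Discount each cash flow back to time 0 and sum them."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

project_a = [-10000, 3000, 3500, 4000, 4500]   # illustrative cash flows
project_b = [-8000, 2500, 2500, 3000, 3000]

for name, flows in [("A", project_a), ("B", project_b)]:
    print(f"Project {name}: NPV at 10% = {npv(0.10, flows):.2f}")
# Under the NPV criterion, the project with the highest positive NPV is selected.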
30.3.2 Internal rate of return
The IRR is a widely used investment analysis tool in industry. It is easier to
comprehend,
but its analysis is relatively complex, and the pertaining calculations are
lengthy. In NPV
analysis, the interest rate i is known; in the IRR analysis, i is unknown.
We evaluate a project's IRR to ensure that it is higher than the cost of capital, as expressed by the minimum acceptable rate of return (MARR). MARR is a cutoff value, determined on the basis of financial market conditions and the company's business "health," below which investment is undesirable. In general, MARR is significantly higher than what financial institutions charge for lending capital, because the MARR includes associated project risks and other business costs. If the MARR were only marginally higher than the lending rate, doing engineering business would not make much business sense.
The IRR method of solving engineering economics problems involves determination
of the effective rate of returns of the project or its alternatives. Using the
criterion of IRR,
the better of the two projects, or the best of the three or more projects, is the
one with the
highest IRR. Thus, the application of this method involves the following tasks:
1. Identify all cash flows pertaining to the project(s); in most cases, this information is furnished by the investment experts.
2. Calculate the interest rate that makes the net present value of the project(s) zero.
3. Decide on the basis of the IRR criterion: any project with an IRR greater than the MARR is acceptable, and among all acceptable projects the best one is the one with the highest IRR.
Figure 30.4 shows the acceptable RORs and unacceptable RORs of project selection
based on the IRR method.
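A minimal sketch of the IRR procedure is given below: it searches for the rate at which NPV equals zero (here by simple bisection, one of several possible root-finding methods) and compares the result with an assumed MARR. The cash flows and MARR are illustrative assumptions.

# Minimal IRR sketch: find the rate i at which NPV(i) = 0 by bisection,
# then compare it with an assumed MARR.

def npv(rate, cash_flows):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Bisection search for the root of NPV(i) = 0 on [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

flows = [-10000, 3000, 3500, 4000, 4500]   # illustrative project cash flows
marr = 0.12                                 # assumed minimum acceptable rate of return
rate = irr(flows)
print(f"IRR = {rate:.4f}; acceptable (IRR > MARR): {rate > marr}")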

30.4 Summary
Engineering economics plays an increasingly important role in investment analysis.
It is
basically a decision-making process. In engineering economics, one learns to solve
engineering
problems involving costs and benefits. Interest can be thought of as the rent for
using someone's money; the interest rate is a measure of the cost of this use. Interest
rates are
of two types: simple or compounded. Under simple interest, only the principal earns
interest.
Simple interest is non-existent in today�s financial marketplace. Under
compounding,
the interest earned during a period augments the principal; thus, interest earns
interest.
Compounding of interest is beneficial to the lender. Owing to its capacity to earn
interest,
money has time value. The time value of money is important in making decisions
pertaining
to engineering projects. Among various tools and techniques available to solve
engineering
economic problems, NPV analysis and IRR analysis are widely used in industry.
They are very simple but effective tools for investment analysis.

RELIABILITY

A significant advantage in working with reliability, rather than directly with human performance, is the ability to avail ourselves of basic system models. A system's functional and
physical decomposition can be used to construct a system-level reliability block
diagram, the
structure of which is used to compute reliability in terms of component and
subsystem reliabilities.
In the I-SPY case, we considered the reliability block diagram shown in Figure
11.11.
This diagram was derived, with some adaptation, from a front-end analysis of the
workflow
of an Air Force Predator pilot (Nagy et al., 2006). It was simplified such that the
functions
depicted could be reasonably matched with those tasks assessed by Schreiber and
colleagues
in their study: functions 3.2 and 3.4 correspond to the basic maneuvering task,
function 3.3
corresponds to the reconnaissance task, and function 3.5 matches up with the
landing task.
If we assume that functions 3.2 to 3.5 are functionally independent, then the set
of functions
constitutes a simple series system. Thus, total system reliability was estimated by
taking the
mathematical product of the three logistic regression models, meaning that we had
an expression
for total system reliability that was a function of the personnel and training
domains.
A good plan for choosing a source of I-SPY Predator pilots, particularly from a
system
sustainability perspective, is to seek a solution that most effectively utilizes
personnel
given total system reliability and training resource constraints. In such a
situation, the
quality of feasible solutions might then be judged in terms of maximizing total
system
reliability for the personnel and training costs expended. This approach was
adopted to
answer the I-SPY selection and training question. A non-linear program was
formulated
to determine the optimal solution in terms of cost-effectiveness, the latter
expressed as the
ratio of system reliability to total personnel and training costs. The feasible
solution space
was constrained by a lower limit (i.e., minimum acceptable value) on total system
reliability
and an upper limit (i.e., maximum acceptable value) on training time.
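A minimal sketch of the series-system reliability computation described above is given below. The three component reliabilities are hypothetical stand-ins for the maneuvering, reconnaissance, and landing task models; they are not values from the I-SPY study.

# Minimal sketch of series-system reliability: if the functions are
# independent and all must succeed, total reliability is the product
# of the component reliabilities.

task_reliability = {
    "basic maneuvering": 0.97,   # assumed
    "reconnaissance":    0.93,   # assumed
    "landing":           0.95,   # assumed
}

total = 1.0
for task, r in task_reliability.items():
    total *= r
print(f"total system reliability = {total:.3f}")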

Availability: When searching for additional information, sources that are more
easily
accessed or brought to mind will be considered first, even when other sources are
more diagnostic.
Reliability: The reliability of information sources is hard to integrate into the
decision making
process. Differences in reliability are often ignored or discounted.

Reliability, Availability, Maintainability, and Supportability (RAMS) are critical factors in the design, operation, and maintenance of complex systems, particularly
in engineering and technology fields. These concepts help organizations ensure that
their systems or products meet performance and operational requirements while
minimizing downtime and maximizing user satisfaction. Let's delve into each of
these models:
Reliability: Reliability refers to the ability of a system or product to perform
its intended function without failure over a specific period of time under given
operating conditions. It's often measured using metrics like Mean Time Between
Failures (MTBF) or Failure Rate. High reliability indicates a low probability of
failure and is essential for systems that must operate consistently and safely,
such as medical devices, aerospace systems, and critical infrastructure.
Availability: Availability is the proportion of time that a system or product is
operational and able to perform its intended function when needed. It's influenced
by factors such as maintenance practices, repair times, and system redundancy.
Availability can be calculated as (MTBF / (MTBF + Mean Time To Repair (MTTR))).
Maximizing availability is crucial for systems that need to provide uninterrupted
services, like data centers and communication networks.
Maintainability: Maintainability refers to the ease with which a system can be
repaired, restored, or updated. It encompasses factors such as how quickly faults
can be diagnosed, the availability of spare parts, and the simplicity of
maintenance procedures. High maintainability reduces downtime and repair costs,
making it easier to manage and operate systems over their lifecycle.
Supportability: Supportability encompasses the overall ability to provide effective
and efficient support throughout a system's lifecycle, including its development,
deployment, and operation phases. This involves aspects like training,
documentation, help desks, and remote diagnostics. Supportability ensures that
users and operators can receive assistance when needed and that the system can be
effectively managed by support teams.
Organizations often use various models, methodologies, and analyses to evaluate and
improve RAMS factors:
FMEA (Failure Modes and Effects Analysis): Identifies potential failure modes,
their effects, and their likelihoods to prioritize improvement efforts.
RBD (Reliability Block Diagram): Represents the reliability and redundancy of
components within a system using graphical diagrams.
Fault Tree Analysis: Analyzes the combinations of events and conditions that could
lead to system failures.
Reliability Testing: Involves subjecting systems to various stress conditions to
assess how they perform over time and identify weak points.
Life Cycle Cost Analysis: Evaluates the total cost of ownership over a system's
lifecycle, considering factors like maintenance, repairs, and downtime.
Spare Parts Management: Ensures that an appropriate inventory of spare parts is
maintained to minimize downtime during repairs.
User Training and Documentation: Ensures users and operators have the necessary
knowledge and resources to operate and maintain the system effectively.
By incorporating these models and practices, organizations can design, build, and
maintain systems that meet performance expectations, minimize downtime, and deliver
reliable and efficient services to users.
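As a small illustration of the availability formula quoted above, the following sketch computes steady-state availability from assumed MTBF and MTTR figures; the numbers are illustrative only.

# Minimal sketch of the steady-state availability calculation,
# A = MTBF / (MTBF + MTTR), with illustrative figures.

mtbf_hours = 1200.0   # mean time between failures (assumed)
mttr_hours = 8.0      # mean time to repair (assumed)

availability = mtbf_hours / (mtbf_hours + mttr_hours)
failure_rate = 1.0 / mtbf_hours   # failures per hour, assuming a constant rate

print(f"availability = {availability:.4f}")
print(f"failure rate = {failure_rate:.2e} per hour")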

Stochastic networks and Markov models

Stochastic networks and Markov models are fundamental concepts in the field of
probability theory, applied mathematics, and various fields such as computer
science, operations research, telecommunications, and more. Let's explore each of
these concepts:
Stochastic Networks: Stochastic networks deal with systems that involve a certain
degree of randomness or uncertainty. These systems may include multiple
interconnected components or nodes, where the behavior of each node is subject to
probabilistic influences. Examples of stochastic networks can be found in various
real-world scenarios, such as computer networks, communication systems,
transportation networks, and manufacturing processes.
Key features of stochastic networks:
Randomness: The behavior of the network components or the interactions between them
is subject to probabilistic or random effects.
Queuing Theory: Stochastic networks often involve queuing systems, where entities
(e.g., customers, data packets) wait in lines or queues before being processed by
network components.
Performance Analysis: Stochastic networks are analyzed to understand their
performance characteristics, such as throughput, delay, and resource utilization,
under various operating conditions.

Stochastic Modeling
A quantitative description of a natural phenomenon is called a mathematical model of that phenomenon. Examples abound, from the simple equation S = (1/2)gt^2, describing the distance S traveled in time t by a falling object starting at rest, to a complex computer program that simulates a biological population or a large industrial system. In the final analysis, a model is judged using a single, quite pragmatic factor: the model's usefulness. Some models are useful as detailed quantitative prescriptions of behavior, as, for example, an inventory model that is used to determine the optimal number of units to stock. Another model in a different context may provide only general qualitative information about the relationships among, and relative importance of, several factors influencing an event. Such a model is useful in an equally important but quite different way. Examples of diverse types of stochastic models are spread throughout this book. Such often mentioned attributes as realism, elegance, validity, and reproducibility are important in evaluating a model only insofar as they bear on that model's ultimate usefulness. For instance, it is both unrealistic and quite inelegant to view the sprawling city of Los Angeles as a geometrical point, a mathematical object of no size or dimension. Yet it is quite useful to do exactly that when using spherical geometry to derive a minimum-distance great circle air route from New York City, another "point."

There is no such thing as the best model for a given phenomenon. The pragmatic criterion of usefulness often allows the existence of two or more models for the same event, but serving distinct purposes. Consider light. The wave form model, in which light is viewed as a continuous flow, is entirely adequate for designing eyeglass and telescope lenses. In contrast, for understanding the impact of light on the retina of the eye, the photon model, which views light as tiny discrete bundles of energy, is preferred. Neither model supersedes the other; both are relevant and useful.
The word "stochastic" derives from the Greek (to aim, to guess) and means "random" or "chance." The antonym is "sure," "deterministic," or "certain." A deterministic model predicts a single outcome from a given set of circumstances. A stochastic model predicts a set of possible outcomes weighted by their likelihoods, or probabilities. A coin flipped into the air will surely return to earth somewhere. Whether it lands heads or tails is random. For a "fair" coin we consider these alternatives equally likely and assign to each the probability 1/2.
However, phenomena are not in and of themselves inherently stochastic or deterministic. Rather, to model a phenomenon as stochastic or deterministic is the choice of the observer. The choice depends on the observer's purpose; the criterion for judging the choice is usefulness. Most often the proper choice is quite clear, but controversial situations do arise. If the coin once fallen is quickly covered by a book so that the outcome "heads" or "tails" remains unknown, two participants may still usefully employ probability concepts to evaluate what is a fair bet between them; that is, they may usefully view the coin as random, even though most people would consider the outcome now to be fixed or deterministic. As a less mundane example of the converse situation, changes in the level of a large population are often usefully modeled deterministically, in spite of the general agreement among observers that many chance events contribute to their fluctuations.
Scientific modeling has three components: (i) a natural phenomenon under study, (ii) a logical system for deducing implications about the phenomenon, and (iii) a connection linking the elements of the natural system under study to the logical system used to model it. If we think of these three components in terms of the great-circle air route problem, the natural system is the earth with airports at Los Angeles and New York; the logical system is the mathematical subject of spherical geometry; and the two are connected by viewing the airports in the physical system as points in the logical system.
The modern approach to stochastic modeling is in a similar spirit. Nature does not dictate a unique definition of "probability," in the same way that there is no nature-imposed definition of "point" in geometry. "Probability" and "point" are terms in pure mathematics, defined only through the properties invested in them by their respective sets of axioms. (See Section 2.8 for a review of axiomatic probability theory.) There are, however, three general principles that are often useful in relating or connecting the abstract elements of mathematical probability theory to a real or natural phenomenon that is to be modeled. These are (i) the principle of equally likely outcomes, (ii) the principle of long run relative frequency, and (iii) the principle of odds making or subjective probabilities. Historically, these three concepts arose out of largely unsuccessful attempts to define probability in terms of physical experiences. Today, they are relevant as guidelines for the assignment of probability values in a model, and for the interpretation of the conclusions of a model in terms of the phenomenon under study.
We illustrate the distinctions between these principles with a long experiment. We will pretend that we are part of a group of people who decide to toss a coin and observe the event that the coin will fall heads up. This event is denoted by H, and the event of tails, by T. Initially, everyone in the group agrees that Pr{H} = 1/2. When asked why, people give two reasons: upon checking the coin construction, they believe that the two possible outcomes, heads and tails, are equally likely; and, extrapolating from past experience, they also believe that if the coin is tossed many times, the fraction of times that heads is observed will be close to one-half.
The equally likely interpretation of probability surfaced in the works of Laplace in 1812, where the attempt was made to define the probability of an event A as the ratio of the total number of ways that A could occur to the total number of possible outcomes of the experiment. The equally likely approach is often used today to assign probabilities that reflect some notion of a total lack of knowledge about the outcome of a chance phenomenon. The principle requires judicious application if it is to be useful, however. In our coin tossing experiment, for instance, merely introducing the possibility that the coin could land on its edge (E) instantly results in Pr{H} = Pr{T} = Pr{E} = 1/3.
The next principle, the long run relative frequency interpretation of probability, is a basic building block in modern stochastic modeling, made precise and justified within the axiomatic structure by the law of large numbers. This law asserts that the relative fraction of times in which an event occurs in a sequence of independent similar experiments approaches, in the limit, the probability of the occurrence of the event on any single trial. The principle is not relevant in all situations, however. When the surgeon tells a patient that he has an 80-20 chance of survival, the surgeon means, most likely, that 80 percent of similar patients facing similar surgery will survive it. The patient at hand is not concerned with the long run but, in vivid contrast, is vitally concerned only with the outcome of his, the next, trial.
Returning to the group experiment, we will suppose next that the coin is flipped into the air and, upon landing, is quickly covered so that no one can see the outcome. What is Pr{H} now? Several in the group argue that the outcome of the coin is no longer random, that Pr{H} is either 0 or 1, and that although we don't know which it is, probability theory does not apply. Others articulate a different view, that the distinction between "random" and "lack of knowledge" is fuzzy, at best, and that a person with a sufficiently large computer and sufficient information about such factors as the energy, velocity, and direction used in tossing the coin could have predicted the outcome, heads or tails, with certainty before the toss. Therefore, even before the coin was flipped, the problem was a lack of knowledge and not some inherent randomness in the experiment.
In a related approach, several people in the group are willing to bet with each other, at even odds, on the outcome of the toss. That is, they are willing to use the calculus of probability to determine what is a fair bet, without considering whether the event under study is random or not. The usefulness criterion for judging a model has appeared. While the rest of the mob were debating "random" versus "lack of knowledge," one member, Karen, looked at the coin. Her probability for heads is now different from that of everyone else. Keeping the coin covered, she announces the outcome "Tails," whereupon everyone mentally assigns the value Pr{H} = 0. But then her companion, Mary, speaks up and says that Karen has a history of prevarication.
The last scenario explains why there are horse races; different people assign different probabilities to the same event. For this reason, probabilities used in odds making are often called subjective probabilities. Then, odds making forms the third principle for assigning probability values in models and for interpreting them in the real world.
The modern approach to stochastic modeling is to divorce the definition of probability from any particular type of application. Probability theory is an axiomatic structure (see Section 2.8), a part of pure mathematics. Its use in modeling stochastic phenomena is part of the broader realm of science and parallels the use of other branches of mathematics in modeling deterministic phenomena. To be useful, a stochastic model must reflect all those aspects of the phenomenon under study that are relevant to the question at hand. In addition, the model must be amenable to calculation and must allow the deduction of important predictions or implications about the phenomenon.
Stochastic Processes
A stochastic process is a family of random variables Xt, where t is a parameter running over a suitable index set T. (Where convenient, we will write X(t) instead of Xt.) In a common situation, the index t corresponds to discrete units of time, and the index set is T = {0, 1, 2, ...}. In this case, Xt might represent the outcomes at successive tosses of a coin, repeated responses of a subject in a learning experiment, or successive observations of some characteristic of a certain population. Stochastic processes for which T = [0, ∞) are particularly important in applications. Here t often represents time, but different situations also frequently arise. For example, t may represent distance from an arbitrary origin, and Xt may count the number of defects in the interval (0, t] along a thread, or the number of cars in the interval (0, t] along a highway.
Stochastic processes are distinguished by their state space, or the range of possible values for the random variables Xt, by their index set T, and by the dependence relations among the random variables Xt. The most widely used classes of stochastic processes are systematically and thoroughly presented for study in the following chapters, along with the mathematical techniques for calculation and analysis that are most useful with these processes. The use of these processes as models is taught by example. Sample applications from many and diverse areas of interest are an integral part of the exposition.

12.6 Stochastic approximations/innovations concept


Stochastic approximation is a scheme for successive approximation of a sought quantity when the observations and the system dynamics involve random errors (Albert and Gardner, 1967). It is applicable to the statistical problem of
(i) finding the value of a parameter that causes an unknown noisy-regression function to take on some preassigned value,
(ii) finding the value of a parameter that minimizes an unknown noisy-regression function.
Stochastic approximation has wide applications to system modeling, data filtering, and data prediction. It is known that a procedure that is optimal in the decision-theoretic sense can be nonoptimal. Sometimes the algorithm is too complex to implement, for example, in situations where the nonlinear effects cannot be accurately approximated by linearization or the noise processes are strictly non-Gaussian. A theoretical solution is obtained by using the concepts of innovations and martingales. Subsequently, a numerically feasible solution is achieved through stochastic approximation. The innovations approach separates the task of obtaining a more tractable expression for the equation

Markov Models: Markov models are mathematical models used to describe systems that
exhibit a specific probabilistic property known as the Markov property or
memorylessness. This property states that the future behavior of the system depends
only on its current state and not on its past states. Markov models are widely used
in the analysis of systems that undergo transitions between a finite set of states.
Key features of Markov models:
State Transitions: The system moves from one state to another according to
probabilistic transition probabilities, which are often represented using a
transition matrix.
Markov Chains: A fundamental type of Markov model is the Markov chain, which is a
sequence of states with probabilistic transitions.
Applications: Markov models are used in a wide range of applications, including
reliability analysis, queueing systems, machine learning (e.g., Hidden Markov
Models), finance (e.g., Markov Chain Monte Carlo methods), and more.
Combining stochastic networks and Markov models allows for the analysis and
modeling of complex systems with randomness and state transitions. These concepts
provide valuable insights into system behavior, performance, and optimization,
making them essential tools in various fields where uncertainty and dynamic
behavior are prevalent.
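A minimal sketch of a discrete-time Markov chain is given below: it defines a small transition matrix and repeatedly applies it to the state distribution until the distribution settles at its long-run values. The states and transition probabilities are illustrative assumptions, not data from this text.

# Minimal sketch of a discrete-time Markov chain: a 3-state transition
# matrix (rows sum to 1) and its long-run state distribution obtained
# by iterating pi <- pi * P.

states = ["operational", "degraded", "failed"]
P = [
    [0.90, 0.08, 0.02],   # from operational
    [0.60, 0.30, 0.10],   # from degraded
    [0.50, 0.00, 0.50],   # from failed (after repair)
]

pi = [1.0, 0.0, 0.0]      # start in the operational state
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

for s, p in zip(states, pi):
    print(f"long-run fraction of time in {s:12s}: {p:.3f}")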

47.4 Queuing networks


Until now, our discussion on queuing theory has been limited to systems with a
single queue.
The system may have had multiple servers in the queue, but there was only a single
queue.
Many enterprise systems are better described as a queuing network. Queuing networks
describe a system with multiple queues in which the departure from one server
becomes
the arrival to another server in the same network. The dependencies created by the customer flow in the network create complexity that is not easy to handle analytically. Jackson showed
Jackson showed
for a queuing network with certain properties that a closed-form solution could be
found.
However, for general queuing networks, no closed-form solutions can be found.
Instead, most
researchers have turned to various approximation techniques. Here, we describe
different
classes of queuing networks and then briefly summarize what is called the two-
moment
parameter decomposition approach that has been developed by Whitt and others.
For queuing networks, we can draw a distinction between open and closed queuing
networks. Open queuing networks have customer arrivals from outside of the system
(coming
from a conceptually infinite calling population), and then they later leave the
system. In
closed queuing networks, the number of customers is constant, and no customer
enters or
leaves the system. Numerous manufacturing systems, such as just-in-time production
systems,
are modeled as closed queuing networks because the same number of parts is always
in the system. Most service systems are better modeled as open queuing networks
because
the customers arrive to the system, are served, and then depart the system.
A second way to classify queuing networks is to determine whether they serve a
single
customer class or multiple customer classes. In a single class queuing network,
there is only
one customer class such that all customers share the same characteristics�and
importantly,
the same route and service times. The single class has a single external arrival
rate to the
system. In multiple customer class queuing networks, there are many different types
of customers,
with different external arrival rates and different service rates at each node.
Figure
47.13 shows an example of a multiclass open queuing network. Here, there are two
customer

Figure 47.13 Comparison of triage policies. (From Giachetti, R., Queueing theory
chapter in Design
of Enterprise Systems: Theory, Architecture, and Methods. CRC Press, Boca Raton,
FL, 2010.)

Figure 47.14 Queuing network. (From Giachetti, R., Queueing theory chapter in
Design of Enterprise
Systems: Theory, Architecture, and Methods. CRC Press, Boca Raton, FL, 2010.)

classes that arrive to the queuing network at the first node with different arrival
rates. Each
customer class follows a different route through the network. The customers have
different
service times at each node they visit, which depends on the customer class they
come from.
Finally, after being served, the customers depart from the queuing network (Figure
47.14).
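A minimal sketch of an open queuing network analysis is given below, treating two nodes in series as independent M/M/1 queues under Jackson-style assumptions (Poisson external arrivals, exponential service, a single customer class, every customer visiting node 1 and then node 2). The arrival and service rates are illustrative assumptions.

# Minimal sketch of an open two-node queuing network analyzed as a chain
# of M/M/1 queues: utilization, average number in node, and average time
# in node (via Little's law) for each node.

lam = 4.0                                       # external arrival rate (per hour)
service_rate = {"node 1": 6.0, "node 2": 5.0}   # service rates (per hour)

for node, mu in service_rate.items():
    rho = lam / mu                # utilization (must be < 1 for stability)
    L = rho / (1.0 - rho)         # expected number in the M/M/1 node
    W = L / lam                   # expected time in the node (Little's law)
    print(f"{node}: utilization={rho:.2f}, avg number={L:.2f}, avg time={W:.2f} h")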

31.2 Cost estimation techniques


Cost estimation is the process of forecasting the present and future cash-flow
consequences
of engineering designs and investments (Canada et al., 1996). The process is useful
if it
reduces the uncertainty surrounding a revenue or cost element. In doing this, a
decision
should result that creates increased value relative to the cost of making the
estimate. Three groups of estimating techniques have proven to be very useful in preparing estimates for economic analysis: time-series, subjective, and cost engineering techniques.
31.2.1 Time-series techniques
Time-series data are revenue and cost elements that are functions of time, e.g.,
unit sales
per month and annual operating cost.
31.2.1.1 Correlation and regression analysis
Regression is a statistical analysis technique that fits a line through the data so as to minimize the squared error. With linear regression, the estimated model coefficients can be used to obtain an estimate of a revenue/cost element.
The relationship between x and y used to fit n data points (xi, yi), 1 ≤ i ≤ n, is

y = a + bx (31.1)

where
x = independent variable
y = dependent variable
x̄ = average of the independent variable
ȳ = average of the dependent variable
The mathematical expressions used to estimate a and b in Equation 31.1 are the standard least-squares formulas

b = Σ(xi − x̄)(yi − ȳ) / Σ(xi − x̄)²   and   a = ȳ − b·x̄

The correlation coefficient is a measure of the strength of the relationship between two variables only if the variables are linearly related. Let

r = Σ(xi − x̄)(yi − ȳ) / [Σ(xi − x̄)² Σ(yi − ȳ)²]^(1/2)

where
r = the correlation coefficient, which measures the degree of strength
r² = the coefficient of determination, which measures the proportion of the total variation that is explained by the regression line
A positive value of r indicates that the independent and the dependent variables increase at the same rate. When r is negative, one variable decreases as the other increases. If there is no relationship between these variables, r will be zero.
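A minimal sketch of the regression and correlation computations above, using a small made-up data set (x = units per month, y = monthly operating cost):

# Least-squares fit y = a + b*x and correlation coefficient r
# for illustrative cost data.

x = [10, 20, 30, 40, 50]
y = [120, 190, 260, 340, 400]

n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
sxx = sum((xi - x_bar) ** 2 for xi in x)
syy = sum((yi - y_bar) ** 2 for yi in y)

b = sxy / sxx            # slope
a = y_bar - b * x_bar    # intercept
r = sxy / (sxx * syy) ** 0.5

print(f"y = {a:.1f} + {b:.2f} x,  r = {r:.3f},  r^2 = {r*r:.3f}")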

31.2.1.2 Exponential smoothing


An advantage of the exponential smoothing method compared with the simple linear
regression for time-series estimates is that it permits the estimator to place
relatively more
weight on current data rather than treating all prior data points with equal
importance.
In addition, exponential smoothing is more sensitive to changes than linear
regression.
However, the basic assumption that trends and patterns of the past will continue
into the
future is a disadvantage. Hence, expert judgment should be used in interpreting
results.
Let
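The chapter's smoothing formula is not reproduced above. A minimal sketch, assuming the standard single exponential smoothing recursion F(t+1) = α·y(t) + (1 − α)·F(t) with smoothing constant 0 < α ≤ 1, is shown below; the demand data and α are illustrative.

# Simple exponential smoothing under the standard recursion (assumed form).

demand = [102, 110, 108, 115, 120, 118, 125]   # observed values by period
alpha = 0.3                                    # smoothing constant (assumed)

forecast = demand[0]                           # initialize with the first observation
for y in demand:
    forecast = alpha * y + (1.0 - alpha) * forecast

print(f"smoothed estimate for the next period = {forecast:.1f}")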

31.2.2 Subjective techniques


These techniques are subjective in nature; however, they are widely used in cases
where
the current investment events are not well enough understood to apply cost
engineering
or time-series techniques.
31.2.2.1 The Delphi method
This method is a progressive practice to develop consensus from different experts.
The
experts are usually selected on the basis of their experience and visibility in the organization. The Delphi method is usually both complex and poorly understood. In it, experts are asked to make forecasts anonymously and through an intermediary. The process
involved
can be summarized as follows (Canada et al., 1996):
� Each invited participant is given an unambiguous description of the forecasting
problem and the necessary background information.
� The participants are asked to provide their estimates based on the presented
problem
scenarios.
� An interquartile range of the opinions is computed and presented to the participants at the beginning of the second round (a small computational sketch of this step follows the list).
� The participants are asked in the second round to review their responses in the
first
round in relation to the interquartile range from that round.
� The participants can, at this stage, request additional information. They may maintain or change their responses.
� If there is a significant deviation in opinion in the second round, a third-round questionnaire may be given to the participants. During this round, participants
receive
a summary of the second-round responses and a request to reconsider and explain
their decisions in view of the second-round responses.
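A minimal sketch of the second-round feedback computation is shown below; the estimates are hypothetical, and the quartile convention (simple linear interpolation) is an assumption rather than a prescription from the source.

# Interquartile range of first-round expert estimates (illustrative sketch)
def quartiles(values):
    data = sorted(values)
    def percentile(p):
        # linear interpolation between closest ranks
        k = (len(data) - 1) * p
        lo, hi = int(k), min(int(k) + 1, len(data) - 1)
        return data[lo] + (data[hi] - data[lo]) * (k - lo)
    return percentile(0.25), percentile(0.5), percentile(0.75)

estimates = [45, 52, 48, 60, 55, 50, 47, 58]   # hypothetical cost estimates ($1000s)
q1, median, q3 = quartiles(estimates)
print("Feedback to panel: Q1 =", q1, "median =", median, "Q3 =", q3)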

31.2.2.2 Technological forecasting


This method can be used to estimate the growth and direction of a technology. It
uses
historical information of a selected technological parameter to extrapolate future
trends.
This method assumes that factors that affect historical data will remain constant
into the
future. Some of the commonly predicted parameters are speed, horsepower, and
weight.
This method cannot predict accurately when there are unforeseen changes in
technology
interactions.

31.2.3 Cost engineering techniques


Cost engineering techniques are usually used for estimating investment and working capital parameters. They can be easily applied because they make use of various cost/revenue indexes.
31.2.3.1 Unit technique
This is the most popular of the cost engineering techniques. It uses an assumed or estimated "per unit" factor such as, for example, maintenance cost per month. This factor is multiplied by the appropriate number of units to provide the total estimate. It is usually used for preliminary estimating.
31.2.3.2 Ratio technique
This technique is used for updating costs through the use of a cost index over a
period of
time.
Let C0 be the known cost at a reference time, I0 the value of the cost index at that time, and In the value of the index at the time of the estimate. In its standard form, the updated cost is estimated as

Cn = C0 (In / I0)

31.2.3.3 Factor technique


This is an extension of the unit technique in which one sums the products of one or more quantities and their unit factors and adds these to any components estimated directly. Let Cd be the sum of the directly estimated components, fi the unit factor for component i, and Ui the number of units of component i. In its standard form, the total estimate is then

C = Cd + Σ fi Ui
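A compact sketch of the unit, ratio, and factor techniques is given below; all figures and index values are hypothetical and serve only to illustrate the arithmetic.

# Cost engineering estimates (illustrative sketch, hypothetical figures)
def unit_estimate(per_unit_cost, n_units):
    return per_unit_cost * n_units

def ratio_estimate(known_cost, index_then, index_now):
    return known_cost * (index_now / index_then)

def factor_estimate(direct_costs, factored_items):
    # factored_items: list of (unit_factor, quantity) pairs
    return sum(direct_costs) + sum(f * q for f, q in factored_items)

print(unit_estimate(1200.0, 12))                       # e.g., maintenance cost per month x 12 months
print(ratio_estimate(250000.0, 180.0, 212.0))          # known cost updated by a cost index
print(factor_estimate([40000.0], [(95.0, 300), (15.0, 1200)]))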
31.2.3.4 Exponential costing
This is also called the "cost-capacity equation." It is good for estimating costs from design variables for equipment, materials, and construction. It recognizes that cost varies as some power of the change in capacity or size. Let CA be the known cost of a project of size SA, and let x be the cost-capacity (cost-exponent) factor. In its standard form, the cost CB of a similar project of size SB is estimated as

CB = CA (SB / SA)^x

The accuracy of the exponential costing method depends largely on the similarity
between the two projects and the accuracy of the cost-exponent factor. Generally,
error
ranges from ±10% to ±30% of the actual cost.
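The cost-capacity relationship above can be sketched as follows; the reference cost, sizes, and exponent are hypothetical.

# Exponential (cost-capacity) estimate: C_B = C_A * (S_B / S_A) ** x
def cost_capacity(known_cost, known_size, new_size, exponent=0.6):
    return known_cost * (new_size / known_size) ** exponent

# Hypothetical: a 200 m3 tank cost $120,000; estimate a 350 m3 tank with x = 0.6
print(round(cost_capacity(120000.0, 200.0, 350.0, 0.6)))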
31.2.3.5 Learning curves
In repetitive operations involving direct labor, the average time to produce an
item or
provide a service is typically found to decrease over time as workers learn their tasks better. As a result, cumulative average and unit times required to complete a task will drop considerably as output increases. Let T1 be the time required for the first unit and n the cumulative unit number. In the standard learning-curve model, the time for the nth unit is

Tn = T1 n^b

where b = log(learning rate)/log 2; for example, b = log(0.9)/log 2 ≈ −0.152 for a 90% learning curve.
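A short sketch of this calculation (hypothetical first-unit time and learning rate):

import math

# Unit time under a learning curve: T_n = T_1 * n ** b, with b = log(rate) / log(2)
def unit_time(t_first, n, learning_rate=0.9):
    b = math.log(learning_rate) / math.log(2.0)
    return t_first * n ** b

t1 = 40.0                                   # hours for the first unit (hypothetical)
times = [unit_time(t1, n) for n in range(1, 9)]
cumulative_average = sum(times) / len(times)
print([round(t, 1) for t in times], round(cumulative_average, 1))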

31.2.3.6 A range of estimates


To reduce the uncertainties surrounding estimating future values, a range of
possible
values rather than a single value is usually more realistic. A range could include
an
optimistic estimate (O), the most likely estimate (M), and a pessimistic estimate
(P).
Hence, the mean cost or revenue value is commonly estimated with a beta (PERT-type) weighting of the three values (Badiru, 1996; Canada et al., 1996):

Estimated mean = (O + 4M + P) / 6
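For example, with a hypothetical optimistic estimate of $80,000, a most likely estimate of $100,000, and a pessimistic estimate of $150,000, the estimated mean is (80,000 + 4 × 100,000 + 150,000)/6 = $105,000.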

DECISION ASSESSMENT

41.1 Introduction
Maintenance is a combination of all technical, administrative, and managerial
actions
during the life cycle of an item intended to keep it in or restore it to a state in
which it can
perform the required function (Komonen, 2002) (see Figure 41.1). Traditionally, maintenance has been perceived as an expense account with performance measures developed to track direct costs or surrogates such as the headcount of tradesmen and the total duration of forced outages during a specified period. Fortunately, this perception is changing (Tsang, 1998; Kumar and Liyanage, 2001, 2002a; Kutucuoglu et al., 2001b). In the 21st

Figure 41.1 The maintenance management system.

century, plant maintenance has evolved as a major area in the business environment
and
is viewed as a value-adding function instead of a "bottomless pit of expenses" (Kaplan and Norton, 1992). The role of plant maintenance in the success of business is
crucial in
view of the increased international competition, market globalization, and the
demand for
profitability and performance by the stakeholders in business (Labib et al., 1998;
Liyanage
and Kumar, 2001b; Al-Najjar and Alsyouf, 2004). Today, maintenance is acknowledged
as
a major contributor to the performance and profitability of business organizations
(Arts
et al., 1998; Tsang et al., 1999; Oke, 2005). Maintenance managers therefore
explore every
opportunity to improve on profitability and performance and achieve cost savings
for
the organization (Al-Najjar and Alsyouf, 2004). A major concern has been the issue
of
what organizational structure ought to be adopted for the maintenance system:
should
it be centralized or decentralized? Such a policy should offer significant savings
as well
(HajShirmohammadi and Wedley, 2004).
The maintenance organization is confronted with a wide range of challenges that
include quality improvement, reduced lead times, setup time and cost reductions, capacity expansion, managing complex technology and innovation, improving the reliability
of systems, and related environmental issues (Kaplan and Norton, 1992; Dwight,
1994,
1999; De Groote, 1995; Cooke and Paulsen, 1997; Duffua and Raouff, 1997; Chan et
al.,
2001). However, trends suggest that many maintenance organizations are adopting
total
productive maintenance, which is aimed at the total participation of plant
personnel in
maintenance decisions and cost savings (Nakajima, 1988, 1989; HajShirmohammadi and
Wedley, 2004). The challenges of intense international competition and market globalization have placed enormous pressure on maintenance systems to improve efficiency

and reduce operational costs (Hemu, 2000). These challenges have forced maintenance

managers to adopt tools, methods, and concepts that could stimulate performance
growth and minimize errors, and to utilize resources effectively toward making the
organization a "world-class manufacturing" or a "high-performance manufacturing" plant.
Maintenance information is an essential resource for setting and meeting maintenance management objectives and plays a vital role within and outside the maintenance organization. The need for adequate maintenance information is motivated by the following four factors: (1) an increasing amount of information is available and data and information are required on hand and to be accessible in real-time for decision-making (Labib, 2004); (2) data lifetime is diminishing as a result of shop-floor realities (Labib, 2004); (3) the way data is being accessed has changed (Labib, 2004); and (4) it helps in building knowledge and in measurement of the overall performance of the organization. The computerized maintenance management system (CMMS) is now a central component of many companies' maintenance departments, and it offers support on a variety of levels in the organizational hierarchy (Labib, 2004). Indeed, a CMMS is a means of achieving world-class maintenance, as it offers a platform for decision analysis and thereby acts as a guide to management (Labib, 1998; Fernandez et al., 2003).
Consequently, maintenance information systems must contain modules that can
provide management with value-added information necessary for decision support
and decision-making. Computerized maintenance management systems are computer-based software programs used to control work activities and resources, as well as to monitor and report work execution. Computerized maintenance management systems are tools for data capture and data analysis. However, they should also offer the capability to provide maintenance management with a facility for decision analysis (Bamber et al., 2003).

With the current advancements in technology, a number of contributions have


been made by the development and application of sophisticated techniques that have
enhanced the quality of decision-making using CMMSs. Data mining and web data
management are two new concepts that are gradually finding applications in maintenance management. The utilization of data-mining applications in maintenance has helped in the discovery, selection, and development of core knowledge in the management of large maintenance databases hitherto unknown. Data-mining applications in maintenance encourage adequate and systematic database analysis for correct maintenance management decisions. With today's sophistication in data mining, evaluation and interpretation of relational, transactional, or multimedia databases in maintenance are much easier than before. We can classify, summarize, predict, describe, and contrast maintenance data characteristics in a manufacturing milieu for efficient maintenance data management and high productivity. With the emergence of the Internet and World Wide Web (WWW), numerous communication problems in maintenance have been solved. This new communication mechanism quickly distributes information to distant and multiple locations (HajShirmohammadi and Wedley, 2004). This challenges

many firms to reassess how they organize and manage their resources (Blanchard,
1997).
This is particularly important where information exchange is very vital. Examples
of
such systems are multinationals, which have manufacturing plants scattered all over
the
world where the exchange of resources takes place.
With the users connected to the Internet, information exchange is possible since
plants can communicate with one another through high-speed data transmission paths.

With the emergence of the Internet and web data management, significant improvements have been made by the maintenance organization in terms of updating information on equipment management, equipment manufacturer directory services, and security systems in maintenance data. The web data system has tremendously assisted the maintenance organization to source the highly skilled manpower needed for maintenance management, to train and retrain the maintenance workforce, to locate equipment specialists and manufacturers, and to exchange maintenance professionals across the globe.
41.2 The importance of maintenance
Industrial maintenance has two essential objectives: (1) a high availability of production equipment and (2) low maintenance costs (Komonen, 2002). However, a strong factor militating against the achievement of these objectives is the nature and intensity of equipment failures in plants. Since system failure can lead to costly stoppages of an organization's operation, which may result in low human, material, and equipment utilization, the occurrence of failure must therefore be reduced or eliminated. An organization can have its customers build confidence in it by having an uninterrupted flow in operations. Thus, maintenance ensures system sustenance by avoiding factors that can disrupt effective productivity, such as machine breakdown and its several attendant consequences. In order to carry out effective maintenance activities, the team players must be dedicated, committed, unflagging, and focused on achieving good maintenance practices. Not only are engineers and technicians involved, but also every other employee, especially those involved in production and having physical contact with equipment. Thus, maintenance is not only important for these reasons, but its successful implementation also leads to maximum capacity utilization, improved product quality, customer satisfaction, and adequate equipment life span, among other benefits.
Equipment does not have to break down completely before maintenance is carried out.
Implementing a good maintenance policy prevents system failures and leads to high
productivity (Vineyard et al., 2000).
41.3 Maintenance categories
In practice, failure of equipment could be partial or total. Even with the current sophistication of equipment automation, failure is still a common phenomenon that generates serious consideration of standard maintenance practices. Nonetheless, the basic categories of maintenance necessary for the control of equipment failures are traditionally divided into three main groups: (1) preventive maintenance (PM) (condition monitoring, condition-based actions, and scheduled maintenance); (2) corrective/breakdown maintenance (BM); and (3) improvement maintenance (Komonen, 2002). Breakdown maintenance is repair work carried out on equipment only when a failure has occurred. Preventive maintenance is carried out to keep equipment in good working state. It is deployed before failure occurs, thereby reducing the likelihood of failure. It prevents breakdown through repeated inspection and servicing.

41.3.1 Implementing preventive maintenance


Preventive maintenance is basically of two types: (1) a control necessitating machine stoppages and (2) a tool change, which results in machine stoppage. In order to attain constant system functioning, it is necessary to be aware of when a system is susceptible to failure or when it requires servicing. This PM will enable these faults to be forestalled before failure occurs. Failure can occur at any time in the life of equipment. It is known as infant mortality when failure occurs during the initial stages of equipment use. Thus, manufacturers guard against this initial "setback" through various tests before sales. More often than not, these initial failures are the result of improper usage by the user. Thus, operators must be adequately trained in the handling of equipment and processes. Once the machine is operating normally, we can then determine the MTBF (mean time between failures) distribution. Any departure from this distribution is an indication that a fault may be developing; PM can then be carried out. The MTBF distribution is an indication of how economical maintenance will be. The narrower the distribution, the more expensive maintenance will be.
Sometimes, the cost of PM may roughly equal the cost of repair of breakdown. In
this case,
equipment may be left to break down before repair work is done. Occasionally, the
cost of
PM may be low, even though there is a wide departure from the distribution. Minor breakdowns must not be overlooked, as they may result in major problems. These minor faults should be tackled with readily available tools.
Figure 41.2 shows the relationship between PM cost and BM cost under normal and inflation conditions. The operations manager should find the balance between the two costs. Utilizing inventory, personnel, and money in PM can reduce the breakdowns experienced by a firm. But at some point the increase in PM costs will exceed the decrease in BM costs, leading to an increase in total cost (TC). Beyond this optimal point, it could be better to allow breakdowns to occur before repair is carried out. This analysis does not consider the full costs of breakdown and neglects many costs not associated with the breakdown; this omission does not nullify the validity of the approach. Here, we will consider two costs usually neglected: (i) the cost of inventory held to compensate for downtime and (ii) low employee morale resulting from downtime. In theory, it is possible to find the optimal level of maintenance activity considering downtime and its associated cost. Here, the distribution of breakdown probabilities, the maintenance cost, and the number of repairs over a period must be precisely computed.
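The trade-off can be sketched numerically. The snippet below is a minimal illustration rather than the chapter's model: it assumes a hypothetical historical distribution of breakdowns per month and hypothetical repair and PM program costs, and compares the expected monthly cost of running to failure with the cost of a preventive policy.

# Expected-cost comparison of breakdown vs. preventive maintenance (illustrative sketch)
prob_breakdowns = {0: 0.2, 1: 0.3, 2: 0.3, 3: 0.2}   # hypothetical P(number of breakdowns per month)
repair_cost = 1500.0          # average cost per breakdown repair ($)
pm_program_cost = 2200.0      # monthly cost of the preventive program ($)
breakdowns_under_pm = 0.6     # expected breakdowns per month with PM in place

expected_breakdowns = sum(n * p for n, p in prob_breakdowns.items())
bm_policy_cost = expected_breakdowns * repair_cost
pm_policy_cost = pm_program_cost + breakdowns_under_pm * repair_cost

print("Run-to-failure expected cost:", bm_policy_cost)     # 1.5 breakdowns * $1500 = $2250
print("Preventive policy expected cost:", pm_policy_cost)  # $2200 + 0.6 * $1500 = $3100
# In this hypothetical case PM costs more, illustrating that PM is not always the cheaper policy.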
41.3.2 Breakdown maintenance
Breakdown maintenance is sometimes referred to as emergency maintenance. It is a maintenance strategy in which equipment is allowed to fail before it is repaired. Here, efforts are made to restore the equipment to operational mode in order to avoid the serious consequences that may result from the breakdown of the equipment. Such consequences may take the dimension of safety, economic losses, or excessive idle time. When equipment breaks down, it may pose safety and environmental risks to workers if it produces fumes that may be injurious to the health of workers or excessive noise that could damage the hearing mechanism of human beings. Other consequences may include high production loss, which would result in economic losses for the company. Consequently, the maintenance manager must restore the facility to its operating condition immediately.

33.11 Group decision-making


Group decision making is an essential activity in many domains such as the financial, engineering, and medical fields. Group decision making basically solicits opinions from experts and combines these judgments into a coherent group decision. Experts typically express their opinions in numerous different formats belonging to two categories: quantitative evaluations and qualitative ones.
Oftentimes, experts cannot express judgment in accurate numerical terms and use linguistic labels or fuzzy preferences. The use of linguistic labels makes expert judgment more reliable and informative for decision making.
This chapter presents a review of group decision-making methods with emphasis on using fuzzy preference relations and linguistic labels. In this chapter, we explore various methods to aggregate individual opinions into a group decision and show ways to calculate a consensus level, which represents the degree of consistency and agreement between the experts.
This chapter discusses the benefits and limitations of these methods and provides
numerous examples.
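As a simple numerical preview of these ideas, the sketch below aggregates hypothetical expert scores for a set of options by averaging and reports a crude consensus level as one minus the mean absolute deviation from the group score; the aggregation rule and consensus measure are illustrative assumptions, not the specific fuzzy methods reviewed later.

# Aggregating expert scores and measuring a simple consensus level (illustrative sketch)
def group_decision(scores_by_expert):
    # scores_by_expert: {expert: {option: score in [0, 1]}}
    options = next(iter(scores_by_expert.values())).keys()
    group_scores, consensus = {}, {}
    for opt in options:
        values = [s[opt] for s in scores_by_expert.values()]
        mean = sum(values) / len(values)
        group_scores[opt] = mean
        consensus[opt] = 1 - sum(abs(v - mean) for v in values) / len(values)
    return group_scores, consensus

experts = {
    "E1": {"A": 0.8, "B": 0.4, "C": 0.6},
    "E2": {"A": 0.7, "B": 0.5, "C": 0.9},
    "E3": {"A": 0.9, "B": 0.3, "C": 0.5},
}
print(group_decision(experts))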

35.1 Introduction: decision making


Decision making, as a specialized field of Operations Research, is the process of
specifying
a problem or opportunity, identifying alternatives and criteria, evaluating the
alternatives,
and selecting a preferred alternative from among the possible ones. Typically,
there are
three types of decision-making approaches:
Multicriteria decision making
Multiple objective decision making
Group decision making
35.1.1 Multicriteria decision making
Multicriteria decision making (MCDM) is one of the most widely used methods in the
decision-making area (Hwang and Yoon, 1981). The objective of MCDM is to select the

best alternative from several mutually exclusive alternatives based on their general performance regarding various criteria (or attributes) decided by the decision maker. Depending on the type and the characteristics of the problem, a number of MCDM methods have been developed, such as the simple additive weighting method, the Analytic Hierarchy Process (AHP) method, outranking methods, maximin methods, and lexicographic methods. Introduced by Thomas Saaty in the early 1970s, AHP has gained wide popularity and acceptance in decision making. AHP is a procedure that supports a hierarchical structure of the problem and uses pairwise comparison of all objects and alternative solutions. The lexicographic method is appropriate for solving problems in which the weight relationship among criteria is dominant and non-compensatory (Liu and Chi, 1995).
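A minimal sketch of the simple additive weighting method mentioned above is shown below; the alternatives, criterion scores, and weights are hypothetical, and the scores are assumed to be already normalized to a common 0-1 scale.

# Simple additive weighting (SAW): score each alternative as the weighted sum of criterion scores
weights = {"cost": 0.4, "quality": 0.35, "delivery": 0.25}        # weights sum to 1
alternatives = {
    "Supplier A": {"cost": 0.7, "quality": 0.9, "delivery": 0.6},
    "Supplier B": {"cost": 0.9, "quality": 0.6, "delivery": 0.8},
    "Supplier C": {"cost": 0.5, "quality": 0.8, "delivery": 0.9},
}

def saw_score(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(alternatives, key=lambda a: saw_score(alternatives[a], weights), reverse=True)
for a in ranking:
    print(a, round(saw_score(alternatives[a], weights), 3))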
35.1.2 Multiple objective decision making
In multiple objective decision making, the decision maker wants to attain more than
one
objective or goal in electing the course of action while satisfying the constraints
dictated
by environment, processes, and resources. This problem is often referred to as a
vector
maximum problem (VMP). There are two approaches for solving the VMP (Hwang and
Masud, 1979). The first approach is to optimize one of the objectives while
appending the
other objectives to a constraint set so that the optimal solution would satisfy these objectives at least up to a predetermined level. This method requires the decision maker to rank
decision maker to rank
the objectives in order of importance. The preferred solution obtained by this
method
is one that maximizes the objectives starting with the most important and
proceeding
according to the order of importance of the objectives.
The second approach is to optimize a superobjective function created by multiplying

each objective function with a suitable weight and then by adding them together.
One
well-known approach in this category is goal programming, which requires the
decision
maker to set goals for each desired objective. A preferred solution is then defined
as the
one that minimizes deviations from the set goals.
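The second (weighted superobjective) approach can be illustrated with a short sketch; the objective values and weights below are hypothetical, and the example simply evaluates a discrete set of candidate solutions rather than running a formal optimizer.

# Weighted superobjective: combine objectives with weights and pick the best candidate
candidates = {
    "Plan 1": {"profit": 120.0, "reliability": 0.92},
    "Plan 2": {"profit": 150.0, "reliability": 0.85},
    "Plan 3": {"profit": 135.0, "reliability": 0.90},
}
weights = {"profit": 0.6, "reliability": 0.4}
# Normalize each objective by its maximum so the weighted sum is dimensionless
max_vals = {obj: max(c[obj] for c in candidates.values()) for obj in weights}

def superobjective(values):
    return sum(weights[obj] * values[obj] / max_vals[obj] for obj in weights)

best = max(candidates, key=lambda name: superobjective(candidates[name]))
print(best, round(superobjective(candidates[best]), 3))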
35.1.3 Group decision making
Group decision making has gained prominence owing to the complexity of modern-day decisions, which involve complex social, economic, technological, political, and many other critical domains. Oftentimes, a group of experts needs to make a decision that represents the individual opinions and yet is mutually agreeable.
Such group decisions usually involve multiple criteria accompanied by multiple attributes. Clearly, the complexity of MCDM encourages group decision making as a way to
combine
interdisciplinary skills and improve management of the decision process. The theory
and
practice of multiple objectives and multiple attribute decision making for a single
decision
maker has been studied extensively in the past 30 years. However, extending this methodology to group decision making is not so simple. This is due to the
complexity introduced
by the conflicting views of the decision makers and their varying significance or
weight in
the decision process.
Moreover, the problem of group decision making is complicated by several additional factors. Usually, one expects such a decision process to follow a precise mathematical model. Such a model can enforce consistency and precision in the decision generated. Human decision makers, however, are quite reluctant to follow a decision generated by a formal model unless they are confident of the model assumptions and methods. Oftentimes, the input to such a decision model cannot be precisely quantified, conflicting with the perceived accuracy of the model. Intuitively, the act of optimizing the group decision, as a mathematical model would, is contradictory to the concept of consensus and group agreement.
The benefits from group decision making, however, are quite numerous, justifying
the
additional efforts required. Some of these benefits are as follows:
1. Better learning. Groups are better than individuals at understanding problems.
2. Accountability. People are held accountable for decisions in which they
participate.
3. Fact screening. Groups are better than individuals at catching errors.
4. More knowledge. A group has more information (knowledge) than any one member.
Groups can combine this knowledge to create new knowledge. More and more creative alternatives for problem solving can be generated, and better solutions can be derived (e.g., by group stimulation).
5. Synergy. The problem-solving process may generate better synergy and communication among the parties involved.
6. Creativity. Working in a group may stimulate the creativity of the participants.
7. Commitment. Many times, group members have their egos embedded in the decision, and so they will be more committed to the solution.
8. Risk propensity is balanced. Groups moderate high-risk takers and encourage
conservatives.
Generally, there are three basic approaches toward group decision making (Hwang
and Lin, 1987):
1. Game theory. This approach implies a conflict or competition between the
decision
makers.
2. Social choice theory. This approach represents voting mechanisms that allow the
majority to express a choice.
3. Group decision using expert judgment. This approach deals with integrating the
preferences of several experts into a coherent and just group position.
35.1.3.1 Game theory
Game theory can be defined as the study of mathematical models of conflict and cooperation between intelligent and rational decision makers (Myerson, 1991). Modern game theory gained prominence after the publication of Von Neumann's work in 1928 and in 1944
(Von Neumann and Morgenstern, 1944). Game theory became an important field during
World War II and the ensuing Cold War, culminating with the famous Nash
Equilibrium.
The objective of the games as a decision tool is to maximize some utility function
for all
decision makers under uncertainty. Because this technique does not explicitly accommodate multicriteria for selection of alternatives, it will not be considered in this review.
35.1.3.2 Social choice theory
The social choice theory deals with MCDM since this methodology considers votes of
many individuals as the instrument for choosing a preferred candidate or
alternative. The
candidates can exhibit many characteristics such as honesty, wisdom, and experience
as
the criteria evaluated. The complexity of this seemingly simple problem of voting
can be
illustrated by the following example: a committee of nine people needs to select an
office
holder from three candidates, a, b, and c. The votes that rank the candidates are
as follows:
Three votes have the order a, b, c.
Three votes agree on the order b, c, a.
Two votes have the preference of c, b, a.
One vote prefers the order c, a, b.
After quickly observing the results, one can see that each candidate received three first-place votes, resulting in an inconclusive choice.
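The tally can be reproduced with a short sketch; the ballots below encode the nine preference orders listed above.

# First-choice tally for the nine-member committee example
ballots = (
    [("a", "b", "c")] * 3 +
    [("b", "c", "a")] * 3 +
    [("c", "b", "a")] * 2 +
    [("c", "a", "b")] * 1
)

first_choice = {}
for ranking in ballots:
    top = ranking[0]
    first_choice[top] = first_choice.get(top, 0) + 1

print(first_choice)   # {'a': 3, 'b': 3, 'c': 3} -- a three-way tie, hence inconclusive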
The theory of social choice was studied extensively with notable theories such as
Arrow's impossibility theorem (Arrow, 1963; Arrow and Raynaud, 1986). This type of decision making is based on the ranking of choices by the individual voters, whereas the scores
that each decision maker gives to each criterion of each alternative are not
considered
explicitly. Therefore, this methodology is less suitable for MCDM in which each
criterion
in each alternative is carefully weighed by the decision makers.

35.1.3.3 Expert judgment approach


Within the expert judgment approach, there are two minor styles denoted as "team decision" and "group decision" (terminology based on Zimmermann, 1987). Both styles differ
in the degree of disagreement that the experts are allowed to have while
constructing the
common decision.
Generally, expert judgment methods can be divided into the following categories:
� Methods of generating ideas. These methods include brainstorming in verbal or
written forms.
� Methods of polling ideas. These methods produce quick estimates of the
preferences
of the experts. Surveys, the Delphi method, and conferencing are implementations
of polling ideas.
� Simulation models. These models include cognitive maps and the SPAN method (Successive Proportional Additive Network, also known as Social Participatory Allocative Network).
There is a vast amount of literature available on this topic, and this chapter
provides
the most basic review in order to provide the background for a more detailed
discussion
on fuzzy group decision making. A good review of the general MCDM field can be
found
in the work of Triantaphyllou (2000).
The essence of group decision making can be summarized as follows: there is a set
of
options and a set of individuals (termed experts) who provide their preferences
over the
set of options. The problem is to find an option (or a set of options) that is best
acceptable
to the group of experts. Such a solution entertains the concept of majority, which
is further
explored below.
This chapter explores the application of fuzzy logic toward generating a group decision using expert judgment. As a part of the decision evaluation, the chapter explains the concept of consensus and develops various measures for that property.
The motivation behind using fuzzy sets in group decision making comes from several
sources:
1. The available information about the true state of nature lacks evidence, and thus the representation of such a piece of knowledge by a probability function is not possible.
2. The user preferences are assessed by means of linguistic terms instead of numerical values. These terms are in many cases subjective and inconsistent.
3. The decision maker's objectives, criteria, or preferences are vaguely established and cannot be induced with a crisp relation.

Many decision situations are complex and poorly understood. No one person has all
the
information to make all decisions accurately. As a result, crucial decisions are
made by a
group of people. Some organizations use outside consultants with appropriate
expertise
to make recommendations for important decisions. Other organizations set up their
own
internal consulting groups without having to go outside the organization. Decisions
can
be made through linear responsibility, in which case one person makes the final
decision
based on inputs from other people. Decisions can also be made through shared responsibility, in which case a group of people share the responsibility for making joint decisions.
The major advantages of group decision-making are:

1. Ability to share experience, knowledge, and resources. Many heads are better
than one. A
group will possess greater collective ability to solve a given decision problem.
2. Increased credibility. Decisions made by a group of people often carry more
weight in
an organization.
3. Improved morale. Personnel morale can be positively influenced because many
people
have the opportunity to participate in the decision-making process.
4. Better rationalization. The opportunity to observe other people�s views can lead
to an
improvement in an individual�s reasoning process.
Some disadvantages of group decision-making are:
1. Difficulty in arriving at a decision. Individuals may have conflicting
objectives.
2. Reluctance of some individuals in implementing the decisions.
3. Potential for conflicts among members of the decision group.
4. Loss of productive employee time.
33.11.1 Brainstorming
Brainstorming is a way of generating many new ideas. In brainstorming, the decision

group comes together to discuss alternate ways of solving a problem. The members of

the brainstorming group may be from different departments, may have different backgrounds and training, and may not even know one another. The diversity of the participants helps create a stimulating environment for generating different ideas from different
viewpoints. The technique encourages free outward expression of new ideas no matter

how far-fetched the ideas might appear. No criticism of any new idea is permitted during the brainstorming session. A major concern in brainstorming is that extroverts may
take control of the discussions. For this reason, an experienced and respected
individual
should manage the brainstorming discussions. The group leader establishes the
procedure
for proposing ideas, keeps the discussions in line with the group�s mission,
discourages
disruptive statements, and encourages the participation of all members.
After the group runs out of ideas, open discussions are held to weed out the
unsuitable
ones. It is expected that even the rejected ideas may stimulate the generation of
other ideas,
which may eventually lead to other favored ideas. Guidelines for improving brainstorming sessions are presented as follows:
� Focus on a specific problem.
� Keep ideas relevant to the intended decision.
� Be receptive to all new ideas.
� Evaluate the ideas on a relative basis after exhausting new ideas.
� Maintain an atmosphere conducive to cooperative discussions.
� Maintain a record of the ideas generated.
33.11.2 Delphi method
The traditional approach to group decision-making is to obtain the opinion of
experienced
participants through open discussions. An attempt is made to reach a consensus
among
the participants. However, open group discussions are often biased because of the influence or subtle intimidation from dominant individuals. Even when the threat of a dominant individual is not present, opinions may still be swayed by group pressure. This is called the "bandwagon effect" of group decision-making.
The Delphi method, developed in 1964, attempts to overcome these difficulties
by
requiring individuals to present their opinions anonymously through an
intermediary.
The method differs from the other interactive group methods because it eliminates face-to-face confrontations. It was originally developed for forecasting
applications, but it has
been modified in various ways for application to different types of decision-
making. The
method can be quite useful for project management decisions. It is particularly
effective
when decisions must be based on a broad set of factors. The Delphi method is
normally
implemented as follows:
1. Problem definition. A decision problem that is considered significant is
identified and
clearly described.
2. Group selection. An appropriate group of experts or experienced individuals is
formed
to address the particular decision problem. Both internal and external experts may
be involved in the Delphi process. A leading individual is appointed to serve as
the
administrator of the decision process. The group may operate through the mail or
gather together in a room. In either case, all opinions are expressed anonymously
on
paper. If the group meets in the same room, care should be taken to provide enough
room so that each member does not have the feeling that someone may accidentally
or deliberately observe their responses.
3. Initial opinion poll. The technique is initiated by describing the problem to be
addressed
in unambiguous terms. The group members are requested to submit a list of major
areas of concern in their specialty areas as they relate to the decision problem.
4. Questionnaire design and distribution. Questionnaires are prepared to address
the
areas of concern related to the decision problem. The written responses to the questionnaires are collected and organized by the administrator. The administrator aggregates the responses in a statistical format. For example, the average, mode, and median of the responses may be computed. This analysis is distributed to the decision group. Each member can then see how his or her responses compare with the anonymous views of the other members.

5. Iterative balloting. Additional questionnaires based on the previous


responses are
passed to the members. The members submit their responses again. They may
choose to alter or not to alter their previous responses.
6. Silent discussions and consensus. The iterative balloting may involve anonymous written discussions of why some responses are correct or incorrect. The process is continued until a consensus is reached. A consensus may be declared after five or
six
iterations of the balloting or when a specified percentage (e.g., 80%) of the group
agrees on the questionnaires. If a consensus cannot be declared on a particular
point, it may be displayed to the whole group with a note that it does not
represent a
consensus.
In addition to its use in technological forecasting, the Delphi method has been
widely
used in other general decision-making processes. Its major characteristics of
anonymity of
responses, statistical summary of responses, and controlled procedure make it a
reliable
mechanism for obtaining numeric data from subjective opinion. The major limitations
of
the Delphi method are:
1. Its effectiveness may be limited in cultures where strict hierarchy, seniority,
and age
influence decision-making processes.
2. Some experts may not readily accept the contribution of nonexperts to the group
decision-making process.
3. Since opinions are expressed anonymously, some members may take the liberty
of making ludicrous statements. However, if the group composition is carefully
reviewed, this problem may be avoided.
33.11.3 Nominal group technique
The nominal group technique is a silent version of brainstorming. It is a method of reaching consensus. Rather than asking people to state their ideas aloud, the team
leader asks
each member to jot down a minimum number of ideas, for example, five or six. A
single
list of ideas is then written on a chalkboard for the whole group to see. The group
then discusses the ideas and weeds out some iteratively until a final decision is
made. The nominal
group technique is easier to control. Unlike brainstorming where members may get
into
shouting matches, the nominal group technique permits members to silently present
their
views. In addition, it allows introverted members to contribute to the decision
without the
pressure of having to speak out too often.
In all of the group decision-making techniques, an important aspect that can
enhance
and expedite the decision-making process is to require that members review all
pertinent
data before coming to the group meeting. This will ensure that the decision process
is
not impeded by trivial preliminary discussions. Some disadvantages of group decision-making are:
1. Peer pressure in a group situation may influence a member�s opinion or
discussions.
2. In a large group, some members may not get to participate effectively in the
discussions.
3. A member's relative reputation in the group may influence how well his or her opinion is rated.
4. A member with a dominant personality may overwhelm the other members in the
discussions.
5. The limited time available to the group may create a time pressure that forces
some
members to present their opinions without fully evaluating the ramifications of the

available data.
6. It is often difficult to get all members of a decision group together at the
same time.
Despite the noted disadvantages, group decision-making definitely has many advantages that may nullify shortcomings. The advantages as presented earlier will have varying levels of effect from one organization to another. The Triple C principle
presented in
Chapter 2 may also be used to improve the success of decision teams. Teamwork can
be
enhanced in group decision-making by adhering to the following guidelines:
1. Get a willing group of people together.
2. Set an achievable goal for the group.
3. Determine the limitations of the group.
4. Develop a set of guiding rules for the group.
5. Create an atmosphere conducive to group synergism.
6. Identify the questions to be addressed in advance.
7. Plan to address only one topic per meeting.
For major decisions and long-term group activities, arrange for team training,
which
allows the group to learn the decision rules and responsibilities together. The
steps for the
nominal group technique are:
1. Silently generate ideas, in writing.
2. Record ideas without discussion.
3. Conduct group discussion for clarification of meaning, not argument.
4. Vote to establish the priority or rank of each item.
5. Discuss vote.
6. Cast final vote.
33.11.4 Interviews, surveys, and questionnaires
Interviews, surveys, and questionnaires are important information-gathering
techniques.
They also foster cooperative working relationships. They encourage direct
participation and
inputs into project decision-making processes. They provide an opportunity for
employees
at the lower levels of an organization to contribute ideas and inputs for decision-
making. The
greater the number of people involved in the interviews, surveys, and
questionnaires, the
more valid the final decision. The following guidelines are useful for conducting
interviews,
surveys, and questionnaires to collect data and information for project decisions:
1. Collect and organize background information and supporting documents on the
items to be covered by the interview, survey, or questionnaire.
2. Outline the items to be covered and list the major questions to be asked.
3. Use a suitable medium of interaction and communication: telephone, fax,
electronic
mail, face-to-face, observation, meeting venue, poster, or memo.
4. Tell the respondent the purpose of the interview, survey, or questionnaire, and indicate how long it will take.
5. Use open-ended questions that stimulate ideas from the respondents.
6. Minimize the use of yes or no types of questions.
7. Encourage expressive statements that indicate the respondent�s views.
8. Use the who, what, where, when, why, and how approach to elicit specific
information.
9. Thank the respondents for their participation.
10. Let the respondents know the outcome of the exercise.
33.11.5 Multivote
Multivoting is a series of votes used to arrive at a group decision. It can be used
to assign
priorities to a list of items. It can be used at team meetings after a
brainstorming session
has generated a long list of items. Multivoting helps reduce such long lists to a
few items,
usually three to five. The steps for multivoting are
1. Take a first vote. Each person votes as many times as desired, but only once per

item.
2. Circle the items receiving a relatively higher number of votes (i.e., majority
vote) than
the other items.
3. Take a second vote. Each person votes for a number of items equal to one half
the
total number of items circled in step 2. Only one vote per item is permitted.
4. Repeat steps 2 and 3 until the list is reduced to three to five items, depending
on the
needs of the group. It is not recommended to multivote down to only one item.
5. Perform further analysis of the items selected in step 4, if needed.
