Fig: Software engineering — a layered technology (tools, methods, process, a quality focus)
A Quality Focus
Any engineering approach must rest on an organizational commitment to quality; the quality focus is the bedrock that supports software engineering.
Process
The software process forms the basis for management control of software projects and is the glue that holds the technology layers together.
Methods
Software engineering methods provide the technical how-to’s for building software.
Methods include tasks like communication, requirements analysis, design modeling,
program construction, testing, and support.
Tools
Software engineering tools provide automated support for the process and the methods.
When tools are integrated, the information created by one tool can be used by another.
1.2 SOFTWARE PROCESS
A software process is a collection of activities, actions, and tasks that are required to
build high-quality software.
A process defines who is doing what, when, and how to reach a certain goal.
The aim of a software process is the effective, on-time delivery of quality software.
1.2.1 Elements of a Software Process
Activity:
An activity helps to achieve a broad objective (e.g., communication with stakeholders)
and is applied regardless of the application domain, size, complexity and degree of rigor
with which software engineering is to be applied.
Action
An action consists of a set of tasks that together produce a major work product.
Task
A task focuses on a small, but well-defined objective that produces a tangible outcome.
A generic process framework for software engineering defines five framework activities: communication, planning, modeling, construction, and deployment.
Deployment
The software is delivered to the customer, who evaluates the product and
provides feedback based on the evaluation.
Generally, the framework activities are applied iteratively as a project progresses.
Each project iteration produces a software increment that provides stakeholders with a
subset of overall software features and functionality.
As each increment is produced, the software becomes more and more complete.
The process framework activities are complemented by a number of umbrella activities
which help to manage and control progress, quality, change, and risk.
Umbrella activities:
Software project tracking and control helps to assess progress against the
project plan and take any necessary action to maintain the schedule.
Risk management assesses risks that may affect the outcome and quality of the
project.
Software quality assurance defines the activities required to ensure software
quality.
Technical reviews assess the products to uncover and remove errors before they
are propagated to the next activity.
Measurement defines and collects process, project, and product measures that
assist the team in delivering software that meets stakeholders’ needs.
Software configuration management manages the effects of change throughout
the software process.
Reusability management defines criteria for work product reuse and establishes
mechanisms to achieve reusable components.
Work product preparation and production encompasses the activities required
to create work products such as models, documents, and lists.
So a process adopted for one project might be significantly different from a process
adopted for another project.
Fig: A software process framework — each framework activity contains software engineering actions, and each action is defined by a task set (work tasks, work products, quality assurance points, and project milestones); umbrella activities are applied across the entire process.
For a small software project, the communication activity includes contacting the
stakeholder, discussing the requirements, generating a brief statement of the requirements,
and getting the stakeholder's review and approval of it.
For complex Projects, the communication activity might have six distinct actions:
inception, elicitation, elaboration, negotiation, specification, and validation.
1.3.2 Identifying a Task Set
A task set defines the actual work to be done to accomplish the objectives of a software
engineering action.
A list of the tasks to be accomplished
A list of the work products to be produced
A list of the quality assurance filters to be applied
Eg: For a small project, the task set may include:
Make a list of stakeholders for the project.
Informal meeting with stakeholders to identify the functions required.
Discuss requirements and build a final list.
Prioritize requirements.
Choose the task sets that achieve the goal and still maintain quality and agility.
Process patterns can be defined at three levels:
1. Task patterns define a software engineering action or work task that is part of the process (e.g., requirements gathering).
2. Stage patterns define a framework activity for the process (e.g., communication).
3. Phase patterns define the sequence of framework activities that occurs within the
process. Eg: Spiral model or prototyping.
o Initial context - This describes the conditions under which the pattern applies.
o Problem - The specific problem to be solved by the pattern.
o Solution - Describes how to implement the pattern successfully and how the initial
state of the process is modified as a consequence of the initiation of the pattern.
o Resulting Context - Describes the conditions that will result once the pattern
has been successfully implemented.
o Related Patterns - Provide a list of all process patterns that are directly related to
this one. Eg: the stage pattern Communication encompasses the task patterns: Project
Team, Collaborative Guidelines, Scope Isolation, Requirements Gathering, Constraint
Description, and Scenario Creation.
o Known Uses and Examples - Indicate the specific instances in which the pattern
is applicable.
Process patterns provide an effective mechanism for addressing problems associated with
any software process.
Once developed, process patterns can be reused for the definition of process variants.
1.4 PRESCRIPTIVE PROCESS MODELS
Prescriptive process models stress detailed definition, identification, and application of
process activities and tasks.
Their intent is to improve system quality, make projects more manageable, make
delivery dates and costs more predictable, and guide teams of software engineers as they
perform the work required to build a system.
1.4.1 Waterfall Model
The waterfall model suggests a systematic, sequential (linear) approach to software development. Because each activity must be completed before the next begins, some team members must wait in blocking states for other members to complete dependent
tasks. The time spent waiting can exceed the time spent on productive work. The
blocking states tend to be more prevalent at the beginning and end of a linear sequential
process.
It can serve as a useful process model in situations where requirements are fixed and
work is to proceed to completion in a linear manner.
1.4.2 V-Model
A variation of the waterfall model, referred to as the V-model, includes the quality
assurance actions associated with communication, modeling, and early code construction
activities.
The team first moves down the left side of the V to refine the problem requirements.
Once code is generated, the team moves up the right side of the V, performing a series
of tests that validate each of the models created.
The V-model provides a way of visualizing how verification and validation actions are
applied at the different stages of development.
Fig: V - Model
1.4.3 Incremental Process Models
Incremental models construct a partial implementation of a total system and then slowly
add increased functionality
The incremental model prioritizes requirements of the system and then implements them
in groups.
Each subsequent release of the system adds function to the previous release, until all
designed functionality has been implemented.
It combines elements of both linear and parallel process flows.
Each linear sequence produces deliverable increments of the software.
The first increment is often a core product: basic requirements are addressed, but many supplementary features remain undelivered.
After implementation and evaluation, a plan is developed for the next increment.
The plan addresses the modification of the core product to better meet the needs of the
customer and includes additional features and functionality.
This process is repeated following the delivery of each increment, until the complete
product is produced.
If the core product is well received, then additional staff (if required) can be added to implement
the next increment.
Increments can be planned to manage technical risks.
RAD Model
RAD (Rapid Application Development) is an incremental process model that emphasizes a short development cycle built on component-based construction.
Advantages
Use of reusable components helps to reduce the cycle time of the project.
Encourages user involvement
Reduced cost.
Flexible and adaptable to changes
Disadvantages
For large scalable projects, RAD requires sufficient human resources to create the right
number of RAD teams.
If developers and customers are not committed to the rapid-fire activities, then the project
will fail.
If a system cannot properly be modularized, building the components necessary for RAD
will be problematic.
The use of powerful and efficient tools requires highly skilled professionals.
Customer involvement is required throughout the life cycle.
Spiral Model
Spiral model couples the iterative nature of prototyping with the systematic aspects of
the waterfall model
It is a risk-driven process model generator that is used to guide multi-stakeholder
concurrent engineering of software intensive systems.
Two main distinguishing features:
- Cyclic approach for incrementally growing a system’s degree of definition and
implementation while decreasing its degree of risk.
- Set of anchor point milestones for ensuring stakeholder commitment to feasible
and mutually satisfactory system solutions.
A series of evolutionary releases are delivered.
During the early iterations, the release might be a model or prototype. During later
iterations, increasingly more complete versions of the engineered system are produced.
The first circuit in the clockwise direction might result in the product specification;
subsequent passes around the spiral might be used to develop a prototype and then
progressively more sophisticated versions of the software.
After each iteration, the project plan has to be refined. Cost and schedule are adjusted
based on feedback, and the project manager adjusts the planned number of iterations.
For example, in the concurrent development model the communication activity may have completed its first iteration and be in the awaiting-changes state, while the modeling activity, which existed in the inactive state, now makes a transition into the under-development state.
A well-designed agile process may “flatten” the cost of change curve by coupling the
incremental delivery with agile practices such as continuous unit testing and pair
programming. Thus the team can accommodate changes late in the software project without
dramatic cost and time impact.
11. The best architectures, requirements, and designs emerge from self–organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes
and adjusts its behavior accordingly.
1.6.4 Human Factors
“Agile development focuses on the talents and skills of individuals, molding
the process to specific people and teams.” i.e. The process molds to the needs
of the people and team, not the other way around.
The key traits that must exist among the people on an agile team are:
Competence.
Common focus.
Collaboration.
Decision-making ability.
Fuzzy problem-solving ability.
Mutual trust and respect.
Self-organization.
1.6.5 Agile Process
An agile software process addresses a number of key assumptions:
1. It is difficult to predict in advance which software requirements will persist and which
will change. It is equally difficult to predict how customer priorities will change as the
project proceeds.
2. For many types of software, design and construction are interleaved. It is difficult to
predict how much design is necessary before construction is used to prove the design.
3. Analysis, design, construction, and testing are not as predictable (from a planning point
of view) as we might like.
An agile process must be adaptable in order to manage the unpredictability.
An agile software process must adapt incrementally.
The agile SDLC model combines iterative and incremental process models, with a focus on
process adaptability and customer satisfaction through rapid delivery of a working
software product.
Agile Methods break the product into small incremental builds.
Each build is incremental in terms of features and the final build holds all the
features required by the customer.
Every iteration involves cross-functional teams working simultaneously on various areas
such as:
Planning
Requirements Analysis
Design
Coding
Testing
In an agile methodology there is no detailed up-front plan; there is clarity only about the
features that need to be developed next. The team adapts to the changing product
requirements dynamically.
The product is tested very frequently, through the release iterations, minimizing the risk of
any major failures in future.
Advantages:
Rapid Functionality development
Promotes team work
Adaptable to changing requirements
Delivers early partial working solutions.
Flexible and Easy to manage.
Disadvantages
Not suitable for complex projects/dependencies.
Depends heavily on customer interaction, so if the customer is not clear, the team can be
driven in the wrong direction.
Prioritizing changes can be difficult where there are multiple stakeholders.
Note:
Agile methods include Rational Unified Process, Scrum, Extreme Programming, Adaptive
Software Development, Crystal Clear, Feature Driven Development, and Dynamic Systems
Development Method (DSDM). These are now collectively referred to as agile methodologies.
1.7.1 XP Values
Extreme Programming (XP) is built on a set of five values: communication, simplicity, feedback, courage, and respect.
Each of these values is used as a driver for specific XP activities, actions, and tasks.
Communication
To achieve effective communication, XP emphasizes close, informal (verbal)
collaboration between customers and developers.
Simplicity
To achieve simplicity, XP restricts developers to design only for immediate needs, rather
than consider future needs. The design can be refactored later if necessary.
Feedback
Feedback is derived from three sources: the implemented software itself, the customer,
and other software team members.
Courage
Team should be prepared to make hard decisions that support the other principles and
practices.
Respect
By following each of these values, the agile team inculcates respect among its members,
between other stakeholders and team members, and indirectly, for the software itself.
1.7.2 XP Process
Extreme Programming uses an object-oriented approach that encompasses a set of rules
and practices that occur within the context of four framework activities:
Planning
Design
Coding
Testing
XP Planning
The planning activity is also called the planning game.
Planning begins with a requirements-gathering activity in which customers and developers
negotiate requirements in the form of user stories captured on index cards.
User stories describe the required output, features, and functionality for software to be
built. Each story is written by the customer and is placed on an index card.
The customer assigns a value (i.e., a priority) to the story based on the overall business
value of the function.
Agile team assesses each story and assigns a cost measured in development weeks.
If the story is estimated to require more than three development weeks, the customer is
asked to split the story into smaller stories and the assignment of value and cost occurs
again.
Stories are grouped to form a deliverable increment, and a commitment is made on a delivery
date.
Stories are developed in one of three ways:
1. All stories will be implemented immediately.
2. The stories with highest value will be moved up in the schedule and implemented first.
3. The riskiest stories will be moved up in the schedule and implemented first.
Fig: Sample Index card with User stories
#5
Customer can change his/her personal information in
the system
Priority: 2 (high/low/medium)
Estimate: 4 (development weeks)
After the first increment, the XP team computes project velocity.
Project velocity is the number of customer stories implemented during the first release.
Project velocity can be used to
1. help estimate delivery dates and schedule for subsequent releases.
2. determine whether an over commitment has been made for all stories across the
entire development project.
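Eg (an illustrative Python sketch, not from the text — the story counts and estimates are assumed): computing project velocity and using it for a rough forecast of the remaining releases.

import math

completed_first_release = [3, 2, 4, 1, 2]   # estimates (development weeks) of stories finished in release 1
remaining_backlog = [4, 2, 3, 3, 1, 2, 5]   # estimates of stories not yet implemented

velocity = len(completed_first_release)      # number of stories implemented during the first release
releases_needed = math.ceil(len(remaining_backlog) / velocity)

print("Project velocity:", velocity, "stories per release")
print("Estimated releases remaining:", releases_needed)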
During development, the customer can add new stories, change the value of an existing
story, split stories, or eliminate them. The XP team then reconsiders all remaining releases and modifies its
plans accordingly.
XP Design
XP design follows the KIS (keep it simple) principle.
A simple design is always preferred over a more complex representation.
Design provides implementation guidance for the story.
XP encourages the use of CRC cards. CRC (class-responsibility collaborator) cards
identify and organize the object-oriented classes that are relevant to the current software
increment.
Eg: sample CRC card for an ATM system
Class: User Menu
Responsibilities              Collaborators
Display main menu             Bank System
Ask PIN
Validate PIN
Select transaction type
Debit amount                  Printer
Print balance, etc.
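The CRC card above can be read almost directly as a class skeleton. The following Python sketch is only an illustration (the method bodies and the BankSystem/Printer interfaces are assumed, not given in the text): each responsibility becomes an operation, and the collaborators become objects the class holds references to.

class BankSystem:                       # collaborator named on the card
    def validate_pin(self, account_no, pin): ...
    def debit(self, account_no, amount): ...
    def balance(self, account_no): ...

class Printer:                          # collaborator named on the card
    def print_receipt(self, text): ...

class UserMenu:
    # responsibilities from the CRC card, one method per responsibility
    def __init__(self, bank: BankSystem, printer: Printer):
        self.bank = bank
        self.printer = printer

    def display_main_menu(self): ...
    def ask_pin(self): ...
    def validate_pin(self, account_no, pin):
        return self.bank.validate_pin(account_no, pin)
    def select_transaction_type(self, choice): ...
    def debit_amount(self, account_no, amount):
        self.bank.debit(account_no, amount)
    def print_balance(self, account_no):
        self.printer.print_receipt(f"Balance: {self.bank.balance(account_no)}")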
The CRC cards are the only design work product produced as part of the XP process.
For difficult design problems, XP recommends creation of an intermediate operational
prototype called a spike solution.
XP encourages “refactoring”—an iterative refinement of the internal program design
XP Coding
XP recommends the construction of a unit test for a story before coding commences.
A key concept during the coding activity is “pair programming”
This provides a mechanism for real-time problem solving and real-time quality assurance.
As pair programmers complete their work, the code they develop is integrated with the
work of others.
This “continuous integration” strategy helps to avoid compatibility and interfacing
problems and provides a “smoke testing” environment that helps to uncover errors early.
XP Testing
XP uses Unit and Acceptance Testing.
All unit tests are executed daily
o “Acceptance tests” are defined by the customer and executed to assess customer
visible functionality
The unit tests that are created should be implemented using a framework that enables
them to be automated.
This encourages a regression testing strategy whenever code is modified.
As the individual unit tests are organized into a "universal testing suite," integration and
validation testing of the system can occur on a daily basis.
This provides the XP team with a continual indication of progress and also can raise
warning flags early if things go awry.
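Eg (a minimal sketch using Python's unittest framework; the withdraw() function is a hypothetical example, not from the text): because the tests live in an automated suite, the whole suite can be rerun after every code change, which is the regression-testing strategy described above.

import unittest

def withdraw(balance, amount):
    # return the new balance; reject overdrafts and non-positive amounts
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    def test_normal_withdrawal(self):
        self.assertEqual(withdraw(100, 40), 60)

    def test_overdraft_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, 150)

if __name__ == "__main__":
    unittest.main()    # running the module executes every test in the suite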
XP acceptance tests (customer tests), are specified by the customer and focus on overall
system features and functionality. Acceptance tests are derived from user stories.
1.7.3 Industrial XP
IXP incorporates six new practices that are designed to help ensure that an XP project
works successfully for significant projects within a large organization.
1. Readiness assessment:
The assessment ascertains whether (1) an appropriate development environment
exists to support IXP, (2) the team will be populated by the proper set of stakeholders,
(3) the organization has a distinct quality program and supports continuous
improvement, (4) the organizational culture will support the new values of an agile team,
and (5) the broader project community will be populated appropriately.
2. Project community:
Classic XP suggests that the right people be used to populate the agile team to
ensure success. The people on the team must be well-trained, adaptable and skilled, and
have the proper temperament to contribute to a self-organizing team.
3. Project chartering:
Chartering also examines the context of the project to determine how it
complements, extends, or replaces existing systems or processes.
4. Test-driven management:
Test-driven management establishes a series of measurable “destinations “and
then defines mechanisms for determining whether or not these destinations have been
reached.
5. Retrospectives:
An IXP team conducts a specialized technical review after a software increment
is delivered. Called a retrospective, the review examines “issues, events, and lessons-
learned” across a software increment and/or the entire software release.
6. Continuous learning:
Because learning is a vital part of continuous process improvement, members of
the XP team are encouraged to learn new methods and techniques that can lead to a
higher quality product.
UNIT-II
REQUIREMENTS ANALYSIS AND SPECIFICATION
Requirements of a system are the descriptions of the services provided by the system and
its operational constraints.
The requirements reflect the needs of customers for a system that helps solve the problem
Requirements specify what the system is supposed to do but not how the system is to
accomplish the task.
Requirements should be precise, complete, and consistent
Precise - They should state exactly what is desired of the system
Complete - They should include descriptions of all facilities required
Consistent - There should be no conflicts in the descriptions of the system
facilities
Requirement analysis
specifies software’s operational characteristics
indicates software's interface with other system elements
establishes constraints that software must meet
Requirements analysis allows the software engineer to:
elaborate on basic requirements established during earlier requirement
engineering tasks
Build models that depict user scenarios, functional activities, problem classes and
their relationships, system and class behavior, and the flow of data as it is
transformed.
Requirement can be
Functional
Non-functional
Domain
Non functional requirements are the constraints on the services offered by the system
such as timing constraints, constraints on the development process, standards, etc.
Non functional requirements are not directly concerned with specific functions delivered
by the system.
They specify system performance, security, availability, and other emergent properties.
Types of non-functional requirements:
1. Product requirements:
These requirements specify the product behavior.
Ex: execution speed, memory requirement, reliability requirements that set out the
acceptable failure rate; portability requirements; and usability requirements etc
2. Organisational requirements
These requirements are derived from the policies and procedures of the organization.
Ex: process standards that must be used; implementation requirements such as the
programming language or design method used; and delivery requirements that specify
when the product and its documentation are to be delivered.
3. External requirements
These requirements are derived from factors external to the system and its development process.
Ex: interoperability requirements, legislative requirements, and ethical requirements.
Non-functional requirements
Graphical models are most useful to show how state changes and to describe a sequence
of actions
Eg: sequence diagram for ATM withdrawal
Fig: The spiral view of the requirements engineering process, whose output is the requirements document
The amount of time and effort for each activity in the iteration depends on the stage of
the overall process and the type of system being developed.
Early stage includes understanding the non-functional requirements and the user
requirements. Later stages are devoted to system requirements engineering and system
modeling.
The number of iterations around the spiral can vary
Requirements engineering is the process of applying a structured analysis method such as
object-oriented analysis
This involves analyzing the system and developing a set of graphical system models,
such as use-case models, that then serve as a system specification.
2.2.1 FEASIBILITY STUDY
Feasibility study is used to determine if the user needs can be satisfied with the available
technology and budget
Feasibility study checks the following:
Does the system contribute to organisational objectives?
Can the system be implemented using current technology and within budget?
Can the system be integrated with other systems that are already in use?
If a system does not support these objectives, it has no real value to the business.
Carrying out a feasibility study involves information assessment, information collection, and
report writing.
Sample Questions that may be asked for information collection are:
1. What if the system wasn’t implemented?
2. What are current process problems?
3. How will the proposed system help?
4. What will be the integration problems?
5. Is new technology needed? What skills?
6. What facilities must be supported by the proposed system?
Information sources are the managers of the departments, software engineers, technology
experts and end-users of the system.
The feasibility study should be completed in two or three weeks.
After collecting the information, the feasibility report is created.
The report may propose changes to the scope, budget, and schedule of the system
and suggest further high-level requirements for the system.
2.2.2 REQUIREMENTS ELICITATION AND ANALYSIS
Requirements elicitation involves identifying the application domain, the services
that the system should provide, and the system's operational constraints.
Requirements are gathered from the stakeholders
Stakeholders are any person or group who will be affected by the system, directly or
indirectly. Eg: end-users, managers, engineers, domain experts etc
Eliciting and understanding stakeholder requirements is difficult because:
1. Stakeholders don’t know what they really want.
2. Stakeholders express requirements in their own terms.
3. Different stakeholders may have conflicting requirements.
4. Organisational and political factors may influence the system requirements.
5. The requirements change during the analysis process. New stakeholders may
emerge and the business environment change.
Steps involved in requirements elicitation and analysis
1. Requirements discovery
This is the process of interacting with stakeholders in the system to collect their
requirements. Domain requirements are also identified.
2. Requirements classification and organisation
This activity takes the unstructured collection of requirements, groups related
requirements and organises them into coherent clusters.
3. Requirements prioritisation and negotiation
Since multiple stakeholders are involved, requirements will conflict. This activity
is concerned with prioritizing the requirements, and finding and resolving requirements
conflicts through negotiation.
4. Requirements documentation
The requirements are documented and input into the next round of the spiral for
further requirements discovery. Formal or informal requirements documents may be
produced.
Fig: The requirements elicitation and analysis process — requirements discovery, requirements classification and organisation, requirements prioritisation and negotiation, and requirements documentation
Almost all organisational systems must interoperate with other systems in the
organisation. When a new system is planned, the interactions with other systems must be
planned which may place requirements and constraints on the new system.
Finally organise and structure the viewpoints into a hierarchy.
Eg: viewpoint hierarchy for LIBSYS
Once viewpoints have been identified and structured, identify the most important
viewpoints and start with them to discover the system requirements.
Interviewing
The RE team asks stakeholders questions about the system they currently use and the system to
be developed, and derives the requirements from their answers.
Interviews may be of two types:
1. Closed interviews where the stakeholder answers a predefined set of questions.
2. Open interviews where there is no predefined agenda and a range of issues are
explored with stakeholders.
Completely open-ended discussions rarely work well; most interviews require some
questions to get started and to keep the interview focused on the system to be developed.
Interviews are good for getting an overall understanding of what stakeholders do, how
they might interact with the system and the difficulties that they face with current
systems.
Interviews are not good for understanding the domain requirements
Interviews are not an effective technique for eliciting knowledge about organisational
requirements and constraints
It is hard to elicit domain knowledge during interviews for two reasons:
o Requirements engineers cannot understand domain-specific terminology;
o Some domain knowledge is so familiar that people find it hard to explain, or they think it
is so fundamental that it isn't worth mentioning.
Scenarios
Scenarios are real-life examples of how a system can be used; they are often written as use cases.
A set of use cases should describe all possible interactions with the system.
Actors in the process are represented as stick figures, and each class of interaction is
represented as a named ellipse.
Sequence diagrams
Sequence diagrams are used to add detail to use-cases by showing the sequence of event
processing in the system.
Eg:
1.4 References
1.5 Overview of the remainder of the document
2. General description
2.1 Product perspective
2.2 Product functions
2.3 User characteristics
2.4 General constraints
2.5 Assumptions and dependencies
3. Specific requirements cover functional, non functional and interface
requirements.
The requirements may document external interfaces, describe system
functionality and performance, specify logical database requirements, design
constraints, emergent system properties and quality characteristics.
4. Appendices
5. Index
It is a general framework that can be tailored and adapted to define a standard attuned to the
needs of a particular organization.
Requirements documents are essential when an outside contractor is developing the
software system.
The focus will be on defining the user requirements and high-level, non-functional
system requirements.
When the software is part of a large system engineering project that includes interacting
hardware and software systems, it is essential to define the requirements to a fine level of
detail.
2.2.4 REQUIREMENTS VALIDATION
Requirements validation is concerned with showing that the requirements actually define
the system that the customer wants.
Requirements error costs are high so validation is very important
When requirements change, the system design and implementation must also be
changed, and the system must then be tested again. So the cost of fixing a requirements
problem is much greater than that of repairing design or coding errors.
Requirements checking:
1. Validity checks
A user needs the system to perform certain functions.
Does the system provide the functions which best support the customer’s
needs?
2. Consistency checks
Are there any requirements conflicts?
3. Completeness checks
The requirements document should include requirements, which define all
functions, and constraints intended by the system user.
Are all functions required by the customer included?
4. Realism checks
Can the requirements be implemented given available budget and technology?
5. Verifiability
To reduce the potential for dispute between customer and contractor, system
requirements should always be written so that they are verifiable.
Can the requirements be checked?
Requirements validation techniques
1. Requirements reviews
The requirements are analysed systematically by a team of reviewers.
2. Prototyping
An executable model of the system is demonstrated to end-users and customers to see
if it meets their real needs
3. Test-case generation
Requirements should be testable. This approach is for developing tests for
requirements to check testability.
If the tests for the requirements are devised as part of the validation process, this often
reveals requirements problems. If a test is difficult or impossible to design, then the
requirements will be difficult to implement and should be reconsidered.
Requirements reviews
A requirements review is a manual process that involves people from both client and
contractor organisations to check the requirements document for anomalies and
omissions.
Requirements reviews can be informal or formal.
Informal reviews involve contractors discussing requirements with the system
stakeholders. Good communications can help to resolve problems at an early stage.
In a formal requirements review, the development team explains the implications of each
requirement to the client. The review team should check each requirement for
consistency as well as completeness.
Reviewers may also check for:
1. Verifiability. Is the requirement realistically testable?
2. Comprehensibility. Is the requirement properly understood?
3. Traceability. Is the origin of the requirement clearly stated? Traceability is important
as it allows the impact of change on the rest of the system to be assessed
4. Adaptability. Can the requirement be changed without a large impact on other
requirements?
Conflicts, contradictions, errors and omissions in the requirements should be pointed out
by reviewers and formally recorded in the review report.
It is then up to the users, the system procurer and the system developer to negotiate a
solution to these identified problems.
2.3 REQUIREMENTS MANAGEMENT
Requirements management is the process of managing changing requirements during
the requirements engineering process and system development.
Requirements management is the process of understanding and controlling changes to
system requirements.
Requirements are incomplete and inconsistent because:
o New requirements arise during the process as business needs change and a
better understanding of the system is developed;
o Different viewpoints have different requirements and these are
often contradictory.
Consequential requirements
Requirements that result from the introduction of the computer system.
Introducing the computer system may change the organisations processes and open up
new ways of working which generate new system requirements
Compatibility requirements
Requirements that depend on the particular systems or business processes within
an organisation. As these change, the compatibility requirements on the commissioned or
delivered system may also have to evolve.
Requirements management planning:
During the requirements engineering process, you have to plan:
o Requirements identification
Each requirement must be uniquely identified so that it can be cross-
referenced by other requirements and so that it may be used in traceability
assessments.
o A change management process
This is the set of activities that assess the impact and cost of changes
o Traceability policies
These policies define the relationships between requirements, and between
the requirements and the system design that should be recorded and how
these records should be maintained.
o CASE tool support
The tool support required to help manage requirements change;
Traceability
Traceability is concerned with the relationships between requirements, their sources and
the system design
Types of traceability information:
o Source traceability
Links from requirements to stakeholders who proposed these
requirements;
o Requirements traceability
Links between dependent requirements;
Fig: Requirements change management — an identified requirements problem is analysed, the change is implemented, and the incorporated changes appear in the revised requirements
DATA MODELLING:
Entity-Relationship Diagram is a very useful method for data modeling.
It represents:
o data objects, object attributes, and relationships between objects
Eg:
PROCESS MODELLING
Process model represents the system’s function.
These graphically represent the processes that capture, manipulate, store, and distribute data
between the system and its environment.
Eg: Data Flow Diagram
DATA FLOW DIAGRAM
A Data Flow Diagram (DFD) is a graphical representation of the flow of data through an
information system, modeling its process aspects.
This is used to create an overview of the system
Elements of DFD:
external entity - people or organisations that send data into the system or receive data
from the system
process - models what happens to the data i.e. transforms incoming data into outgoing
data
data store - represents permanent data that is used by the system
data flow - models the actual flow of the data between the other elements
Developing DFDs
Context diagram is an overview of an organizational system that shows:
o The system boundaries.
o External entities that interact with the system.
o Major information flows between the entities and the system.
Note: only one process symbol, and no data stores shown
Eg:
Level-n Diagram shows how the system is divided into sub-systems (processes), each of
which deals with one or more of the data flows to or from an external agent, and which
together provide all of the functionality of the system as a whole.
Eg:
Fig: level 2 DFD for Process 1: Register student for the course
Class Diagram
A class is a collection of objects with common structure, common behavior, common
relationships and common semantics
Classes are found by examining the objects in sequence and collaboration diagram
A class is drawn as a rectangle with three compartments
A class diagram shows the existence of classes and their relationships in the logical view
of a system
UML modeling elements in class diagrams
Classes and their structure and behavior
Association, aggregation, dependency, and inheritance relationships
Multiplicity and navigation indicators
Role names
A transition is firable or enabled when there are sufficient tokens in its input places.
After firing, tokens will be transferred from the input places (old state) to the output
places, denoting the new state.
Eg: firing event
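Eg (an illustrative Python sketch of the firing rule; the place names, token counts, and function names are assumed): a transition is enabled when every input place holds the required tokens, and firing moves tokens from the input places to the output places.

def enabled(marking, inputs):
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, inputs, outputs):
    if not enabled(marking, inputs):
        raise ValueError("transition is not enabled")
    new = dict(marking)
    for p, n in inputs.items():
        new[p] -= n
    for p, n in outputs.items():
        new[p] = new.get(p, 0) + n
    return new

marking = {"p1": 1, "p2": 2, "p3": 0}
# transition t consumes one token from p1 and one from p2, and produces one token in p3
print(fire(marking, inputs={"p1": 1, "p2": 1}, outputs={"p3": 1}))
# -> {'p1': 0, 'p2': 1, 'p3': 1}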
UNIT-III
SOFTWARE DESIGN
Quality Guidelines
A design should exhibit an architecture that is created using recognizable architectural
styles or patterns and composed of components that exhibit good design characteristics
and can be implemented in an evolutionary fashion
A design should be modular
A design should contain distinct representations of data, architecture, interfaces, and
components.
A design should lead to data structures that are appropriate for the classes to be
implemented and are drawn from recognizable data patterns.
A design should lead to components that exhibit independent functional characteristics.
A design should lead to interfaces that reduce the complexity of connections between
components and with the external environment.
A design should be derived using a repeatable method that is driven by information
obtained during software requirements analysis.
A design should be represented using a notation that effectively communicates its
meaning.
Design Principles
The design process should not suffer from ‘tunnel vision.’
The design should be traceable to the analysis model.
The design should exhibit uniformity and integration.
The design should be structured to accommodate change.
The design should be structured to degrade gently, even when aberrant data, events, or
operating conditions are encountered.
Design is not coding, coding is not design.
The design should be assessed for quality as it is being created, not after the fact.
The design should be reviewed to minimize semantic errors.
Quality Attributes. [FURPS]
Functionality is assessed by evaluating the feature set and capabilities of the program
and the security of the overall system etc.
Usability is assessed by considering human factors, consistency and documentation.
Reliability is evaluated by measuring the frequency and severity of failure, the mean-
time-to-failure (MTTF), the ability to recover from failure etc.
Performance is measured by considering processing speed, response time, resource
consumption, throughput, and efficiency.
Supportability combines the ability to extend the program (extensibility), adaptability,
serviceability
The Evolution of Software Design
Early design work concentrated on criteria for the development of modular programs and
methods for refining software structures in a top down manner.
Procedural aspects of design definition evolved into structured programming
Later work proposed methods for the translation of data flow into a design definition.
Newer design approaches proposed an object-oriented approach to design derivation.
The design patterns can be used to implement software architectures and lower levels of
design abstractions.
Aspect-oriented methods, model-driven development, and test-driven development
emphasize techniques for achieving more effective modularity and architectural structure
in the designs.
Characteristics of design methods:
(1) A mechanism for the translation of the requirements model into a design
representation,
(2) A notation for representing functional components and their interfaces,
(3) Heuristics for refinement and partitioning, and
(4) Guidelines for quality assessment.
3.2 DESIGN CONCEPTS
Concepts that have to be considered while designing the software are:
Abstraction, Modularity, Architecture, pattern, Functional independence,
refinement, information hiding, refactoring and design classes
3.2.1 Abstraction
There are many levels of abstraction
At the highest level of abstraction, a solution is stated in broad terms and at lower levels
of abstraction, a more detailed description of the solution is provided.
Types:
Procedural abstraction refers to a sequence of instructions that have a specific and
limited function
Example: open for a door. Open implies a long sequence of procedural steps (e.g.,
walk to the door, reach out and grasp knob, turn knob and pull door, step away from
moving door, etc.).
Data abstraction is a named collection of data that describes a data object
Eg: Data abstraction for door would encompass a set of attributes that describe the
door (e.g., door type, swing direction, opening mechanism, weight, dimensions).
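Eg (a hedged Python sketch of the two abstractions above; the attribute list and the open() steps are assumed for illustration): the Door class is a data abstraction (a named collection of attributes), while its open() method is a procedural abstraction (one name standing for a sequence of steps).

from dataclasses import dataclass

@dataclass
class Door:
    door_type: str            # e.g., "wooden"
    swing_direction: str      # e.g., "inward"
    opening_mechanism: str    # e.g., "knob"
    weight: float             # kg
    width: float              # cm
    height: float             # cm

    def open(self):
        # the single name "open" hides the steps: walk to the door,
        # grasp the knob, turn it, pull, step away from the moving door
        print(f"Opening the {self.door_type} door ({self.opening_mechanism})")

front = Door("wooden", "inward", "knob", 25.0, 90.0, 210.0)
front.open()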
3.2.2 Architecture
Software architecture is the overall structure of the software and the ways in which that
structure provides conceptual integrity for a system
Properties:
Structural properties. This representation defines the components of a system and how
they interact with one another.
Extra-functional properties. The architectural design should address how the design
architecture achieves requirements for performance, capacity, reliability, security,
adaptability.
Families of related systems. The architectural design should draw upon repeatable
patterns that are commonly encountered in the design of families of similar systems
Models to represent the Architectural design:
Structural models represent architecture as an organized collection of program
components.
Framework models increase the level of design abstraction by attempting to identify
repeatable architectural design frameworks that are encountered in similar types of
applications.
Dynamic models address the behavioural aspects of the program architecture, indicating
how the structure changes when an external event occurs.
Process models focus on the design of the business or technical process that the system
must accommodate.
Functional models can be used to represent the functional hierarchy of a system.
The design model also creates a new set of design classes that implement a software infrastructure to support the
business solution.
Types of Design Classes
User interface classes – define all abstractions necessary for human-computer
interaction
Business domain classes – refined from analysis classes; identify attributes and methods
that are required to implement some element of the business domain
Process classes – implement business abstractions required to fully manage the business
domain classes
Persistent classes – represent data stores (e.g., a database) that will persist beyond the
execution of the software
System classes – implement software management and control functions that enable the
system to operate and communicate within its computing environment and the outside
world
Characteristics of a Well-Formed Design Class
Complete and sufficient
Primitiveness - Each method of a class focuses on accomplishing one service for the class
High cohesion
Low coupling
Advantages:
Data-centered architectures promote integrability
Data can be passed among clients using the blackboard mechanism.
Client components can execute the processes independently.
Data Flow Architectures
This architecture is used when input has to be transformed into output through a series of
computational components.
The main goal is modifiability
Pipe-and-filter style
o A pipe-and-filter pattern has a set of components, called filters, connected by
pipes that transmit data from one component to the next.
o The filters incrementally transform the data
o The filter does not require knowledge of the workings of its neighboring filters.
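Eg (a minimal Python sketch of the pipe-and-filter style; the filters and the sample input are invented for illustration): each filter transforms the stream incrementally and knows nothing about the internals of its neighbours — the "pipes" are simply the generator connections between them.

def read_lines(text):                 # source filter
    for line in text.splitlines():
        yield line

def strip_blanks(lines):              # filter: drop empty lines
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):                  # filter: transform each item
    for line in lines:
        yield line.upper()

pipeline = to_upper(strip_blanks(read_lines("alpha\n\nbeta\ngamma")))
print(list(pipeline))                 # ['ALPHA', 'BETA', 'GAMMA']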
– Peer-level systems are those systems that interact on a peer-to-peer basis with
target system to produce or consume data
– Actors are the people or devices that interact with target system to produce or
consume data
Fig: Data flow model with transforms a–j
Fig: Transform mapping — the data flow model is mapped into a program structure in which control modules x1–x4 invoke the transforms a–j as subordinate modules
Step 7. Refine the first iteration program structure using design heuristics for
improved software quality.
Eg: a class Withdraw can include attributes such as account number, name, and balance,
and operations such as withdraw() and update().
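A possible Python sketch of that design class (the attribute and operation names follow the example above; the method bodies are assumptions added for illustration):

class Withdraw:
    def __init__(self, account_no, name, balance):
        self.account_no = account_no
        self.name = name
        self.balance = balance

    def withdraw(self, amount):
        # reduce the balance only if sufficient funds are available
        if amount <= 0 or amount > self.balance:
            raise ValueError("invalid withdrawal amount")
        self.balance -= amount
        return self.balance

    def update(self):
        # placeholder for persisting the new balance
        print(f"Account {self.account_no}: balance updated to {self.balance}")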
Workflow analysis
Workflow analysis defines how a work process is completed when several people are
involved
For workflow analysis, a swimlane diagram is used.
Eg:
4. Indicate how the user interprets the state of the system from information provided
through the interface
Applying Interface Design steps
Interface objects and actions are obtained from a grammatical parse of the use cases and
the software problem statement
Interface objects are categorized into 3 types: source, target, and application
o A source object (e.g., a report icon) is dragged and dropped onto a target object (e.g., a
printer icon), for example to create a hardcopy of a report
o A target object is the destination of the drag-and-drop action (e.g., the printer icon)
o An application object represents application-specific data that are not directly
manipulated as part of screen interaction, such as a list
After identifying objects and their actions, an interface designer performs screen layout
which involves
o Graphical design and placement of icons
o Definition of descriptive screen text
o Specification and titling for windows
o Definition of major and minor menu items
o Specification of a real-world metaphor to follow
User Interface Design Patterns
A design pattern is an abstraction that prescribes a design solution to a specific, well-
bounded design problem.
Eg:
o CalendarStrip that produces a continuous, scrollable calendar in which the
current date is highlighted and future dates may be selected by picking them from
the calendar.
Design Issues
Four common design issues in any user interface
o System response time (both length and variability)
Length is the amount of time taken by the system to respond.
Variability is the deviation from average response time
o User help facilities
The design of a help facility must address questions such as: when is it available,
how is it accessed, how is it represented to the user, how is it structured, and what
happens when help is exited.
• Open-closed principle
– A component should be open for extension but closed for modification
– The designer should specify the component in a way that allows it to be extended
without the need to make internal code or design modifications to the existing
parts of the component
• Liskov substitution principle
– The Subclasses should be substitutable for their base classes
– A component that uses a base class should continue to function properly if a
subclass of the base class is passed to the component instead
• Dependency inversion principle
– This principle states: depend on abstractions; do not depend on concretions
– The more a component depends on other concrete components, the more difficult
it will be to extend
• Interface segregation principle
– Many client-specific interfaces are better than one general purpose interface
– For a server class, specialized interfaces should be created to serve major
categories of clients
– Only those operations that are relevant to a particular category of clients should
be specified in the interface
Component Packaging Principles
• Release reuse equivalency principle
– The granularity of reuse is the granularity of release
– Group the reusable classes into packages that can be managed, upgraded, and
controlled as newer versions are created
• Common closure principle
– Classes that change together belong together
– Classes should be packaged cohesively; they should address the same functional
or behavioral area on the assumption that if one class experiences a change then
they all will experience a change
• Common reuse principle
– Classes that aren't reused together should not be grouped together
– Classes that are grouped together may go through unnecessary integration and
testing when they have experienced no changes but when other classes in the
package have been upgraded
3.8.2 Component-Level Design Guidelines
• Components
– Naming conventions should be established for components that are specified as
part of the architectural model and then refined and elaborated as part of the
component-level model
– architectural component names should be drawn from the problem domain and
should have meaning to all stakeholders who view the architectural model (e.g., Calculator)
– infrastructure component names should reflect their implementation-specific
meaning (e.g., Stack)
• Dependencies and inheritance in UML
– Dependencies should be modelled from left to right and inheritance from top
(base class) to bottom (derived classes)
• Interfaces
– Interfaces provide important information about communication and collaboration
– lollipop representation of an interface should be used in UML approach
– For consistency, interfaces should flow from the left-hand side of the component
box;
– only those interfaces that are relevant to the component under consideration
should be shown
3.8.3 Cohesion
Cohesion is the "single-mindedness" of a component
It implies that a component or class encapsulates only attributes and operations that are
closely related to one another and to the class or component itself
The objective is to keep cohesion as high as possible
The kinds of cohesion can be ranked in order from highest (best) to lowest (worst)
o Functional
Exhibited primarily by operations
Stamp coupling
o A whole data structure or class instantiation is passed as a parameter to an
operation
Control coupling
o Operation A() invokes operation B() and passes a control flag to B() that
directs logical flow within B() (a small sketch appears after this list of coupling types)
o Consequently, a change in B() can require a change to be made to the
meaning of the control flag passed by A(), otherwise an error may result
Routine call coupling
o Occurs when one operation invokes another.
Type use coupling
o Occurs when a Component A uses a data type defined in component B
o If the type definition changes, every component that declares a variable of
that data type must also change
Inclusion or import coupling
o Occurs when component A imports or includes a package or the content of
component B.
External coupling
o Occurs when a component communicates or collaborates with
infrastructure components that are entities external to the software (e.g.,
operating system functions, database functions, networking functions)
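Eg (the sketch referred to under control coupling above; the functions are hypothetical): A() passes a flag that steers the logic inside B(), so any change to how B() interprets the flag forces a matching change in A().

def format_report(data, mode):        # B(): its logic is directed by the flag
    if mode == "summary":
        return f"{len(data)} records"
    elif mode == "detailed":
        return "\n".join(str(d) for d in data)
    raise ValueError("unknown mode")

def print_report(data):               # A(): must know which flag values B() accepts
    print(format_report(data, mode="summary"))

print_report([1, 2, 3])               # -> "3 records"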
3.9 DESIGNING CONVENTIONAL COMPONENTS
Conventional design constructs emphasize the maintainability of a functional domain
The constructs include Sequence, condition, and repetition
Each construct has a predictable logical structure where control enters at the top and exits
at the bottom, enabling a maintainer to easily follow the procedural flow
Various notations used for designing these constructs
1. Graphical design notation
• Sequence, if-then-else, selection, repetition
2. Tabular design notation
3. Program design language
Eg:
UNIT-IV
TESTING AND IMPLEMENTATION
– Changes to the software during testing are infrequent and do not invalidate
existing tests
Understandable
– The more information the tester has, the smarter the testing will be.
– The architectural design is well understood; documentation is available and
organized
4.2 INTERNAL AND EXTERNAL VIEWS OF TESTING
Black-box testing (External view of testing)
Knowing the specified function that a component has been designed to perform, tests are
conducted to check whether each function is fully operational and error free; this is termed black-box testing.
Black box testing will not test the internal logical structure of the software
White-box testing (Internal view of testing)
White box testing checks whether the internal operations perform according to the
specification
This is also referred as glass box testing
This involves tests that concentrate on close examination of procedural detail
Logical paths are also tested.
This uses the control structure of component design to derive the test cases
The test cases
o Guarantee that all independent paths within a module have been exercised at least
once
o Exercise all logical decisions on their true and false sides
o Execute all loops at their boundaries and within their operational bounds
o Exercise internal data structures to ensure their validity
4.3 WHITE-BOX TESTING
4.3.1 BASIS PATH TESTING
Basis path testing is one of the white-box testing techniques.
This helps to derive a logical complexity measure of a procedural design
Test cases derived to exercise the basis set should execute every statement in the program
at least one time during testing
4) Prepare test cases that will force execution of each path in the basis set
Graph Matrices
Graph matrix is a data structure used for developing a software tool that assists in basis
path testing.
A graph matrix is a square matrix whose size is equal to the number of nodes on the flow
graph.
Each row and column corresponds to an identified node, and matrix entries correspond to
connections between nodes.
Eg:
A link weight can be added to each matrix entry to provide additional information about
control flow.
In its simplest form, the link weight is 1 (a connection exists) or 0 (a connection does not exist).
Eg:
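Eg (an assumed five-node flow graph, shown as a Python connection matrix): entry [i][j] = 1 means an edge from node i+1 to node j+1; summing (connections - 1) over the rows and adding 1 gives the cyclomatic complexity, which is one way such a matrix supports basis path testing.

matrix = [
    [0, 1, 0, 0, 0],   # node 1 -> 2
    [0, 0, 1, 1, 0],   # node 2 -> 3, 4  (predicate node: two connections)
    [0, 0, 0, 0, 1],   # node 3 -> 5
    [0, 0, 0, 0, 1],   # node 4 -> 5
    [0, 0, 0, 0, 0],   # node 5 (exit)
]

complexity = sum(max(sum(row) - 1, 0) for row in matrix) + 1
print("Cyclomatic complexity:", complexity)   # -> 2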
Every DU chain must be covered at least once. This strategy is known as the DU testing strategy.
Loop Testing
Loop testing is a white-box testing technique that is used to test the validity of loop
constructs.
Four different classes of loops can be defined: simple loops, concatenated loops, nested
loops, and unstructured loops
Testing occurs by varying the loop boundary values
A directed link (represented by an arrow) indicates that a relationship moves in only one
direction. A bidirectional link, also called a symmetric link, implies that the relationship
applies in both directions. Parallel links are used when a number of different relationships
are established between graph nodes.
Behavioral testing methods that can make use of graphs:
1. Transaction flow modeling.
The nodes represent steps in some transaction and the links represent the logical
connection between steps. The data flow diagram can be used to assist in creating
graphs of this type.
2. Finite state modeling.
The nodes represent different user-observable states of the software and the links
represent the transitions that occur to move from state to state. The state diagram can
be used to assist in creating graphs of this type.
3. Data flow modeling.
The nodes are data objects, and the links are the transformations that occur to
translate one data object into another.
4. Timing modeling.
The nodes are program objects, and the links are the sequential connections between
those objects. Link weights are used to specify the required execution times.
4.5.2 Equivalence Partitioning
Equivalence partitioning is a black-box testing method which divides the input domain
into classes of data and then derives the test cases from it.
An ideal test case single-handedly uncovers a complete class of errors and thereby reduces the
total number of test cases that have to be developed.
Test case design is based on an evaluation of equivalence classes for an input condition
An equivalence class represents a set of valid or invalid states for input conditions
From each equivalence class, test cases are selected so that the largest number of
attributes of an equivalence class are exercised at once.
Guidelines for Defining Equivalence Classes
• If an input condition specifies a range, one valid and two invalid equivalence classes are
defined
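Eg (a small Python sketch of the range guideline; the "month must be 1..12" input condition and the representative values are assumed): one valid class and two invalid classes are identified, and a single representative value is tested from each class instead of every possible input.

def is_valid_month(month):
    return 1 <= month <= 12

equivalence_classes = {
    "valid (1..12)":   6,    # representative of the valid class
    "invalid (< 1)":   0,    # representative of the first invalid class
    "invalid (> 12)":  15,   # representative of the second invalid class
}

for name, value in equivalence_classes.items():
    print(f"{name:15} -> input {value:3} -> accepted: {is_valid_month(value)}")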
Region fault is an error category associated with faulty logic within a software
component.
Eg:
To illustrate the difference between orthogonal array testing and more conventional “one
input item at a time” approaches, consider a system that has three input items, X, Y, and
Z.
Each of these input items has three discrete values associated with it. There are 3³ = 27
possible test cases.
To illustrate the use of the L9 orthogonal array, consider the send function for a fax
application. Four parameters, P1, P2, P3, and P4, are passed to the send function.
Each takes on three discrete values.
For example, P1 takes on values:
P1= 1, send it now
P1=2, send it one hour later
P1= 3, send it after midnight
If a “one input item at a time” testing strategy were chosen, the following
sequence of tests (P1, P2, P3, P4) would be specified: (1, 1, 1, 1), (2, 1, 1, 1), (3, 1, 1, 1),
(1, 2, 1, 1), (1, 3, 1, 1), (1, 1, 2, 1), (1, 1, 3, 1), (1, 1, 1, 2), and (1, 1, 1, 3).
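The same sequence can be generated mechanically; the short Python sketch below (the parameter values 1..3 come from the example, everything else is illustrative) starts from the all-ones baseline and varies one parameter at a time, producing the nine test cases listed above — far fewer than the 3^4 = 81 possible combinations, but with no coverage of parameter interactions.

baseline = (1, 1, 1, 1)
tests = [baseline]
for position in range(4):                 # P1..P4
    for value in (2, 3):                  # the two non-baseline values
        case = list(baseline)
        case[position] = value
        tests.append(tuple(case))

print(len(tests), "test cases")           # 9
print(tests)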
Eg:
• Validation testing
– Requirements are validated against the constructed software
– Validation testing provides final assurance that the software meets all functional,
behavioral, and performance requirements
• System testing
– The software and other system elements are tested as a whole
– This verifies that all system elements communicate properly and that overall system
function and performance is achieved
4.6.1 UNIT TESTING
Unit testing focuses on testing each individual module separately.
This concentrates on the internal processing logic and data structures
Unit testing is simplified when a module is designed with high cohesion
o Reduces the number of test cases
o Allows errors to be more easily predicted and uncovered
Unit testing concentrates on critical modules and those with high cyclomatic complexity
when testing resources are limited
Independent paths
Paths are exercised to ensure that all statements in a module have been executed at
least once
Error handling paths
Ensure that the algorithms respond correctly to specific error conditions
Common Errors in Execution Paths
Misunderstood or incorrect arithmetic precedence
Mixed mode operations (e.g., int, float, char)
Incorrect initialization of values
Precision inaccuracy and round-off errors
Incorrect symbolic representation of an expression (int vs. float)
Unit Test Procedures
Driver
– A simple main program that accepts test case data, passes such data to the
component being tested, and prints the returned results
Stubs
– Stubs serve to replace modules that are subordinate to the component to be tested
– A stub is a “dummy subprogram” that uses the subordinate module’s interface,
does minimal data manipulation, prints verification of entry, and returns control to
the module undergoing testing.
Drivers and stubs both represent overhead
– Both must be written but don’t constitute part of the installed software product
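A minimal driver/stub sketch, assuming a hypothetical compute_invoice component whose subordinate tax-lookup module is not yet available:

# Minimal driver/stub sketch (hypothetical component names, not from the text).

# --- Stub: stands in for a subordinate module that is not yet available. ---
def get_tax_rate_stub(region: str) -> float:
    """Dummy subprogram: uses the subordinate module's interface, does minimal
    data manipulation, prints verification of entry, and returns control."""
    print(f"[stub] get_tax_rate called with region={region!r}")
    return 0.10  # fixed dummy value

# --- Component under test: would normally call the real tax-lookup module. ---
def compute_invoice(amount: float, region: str, get_tax_rate=get_tax_rate_stub) -> float:
    return round(amount * (1.0 + get_tax_rate(region)), 2)

# --- Driver: a simple "main" that feeds test-case data and prints results. ---
def driver():
    test_cases = [(100.0, "north"), (59.99, "south")]
    for amount, region in test_cases:
        print(f"compute_invoice({amount}, {region!r}) = {compute_invoice(amount, region)}")

if __name__ == "__main__":
    driver()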
Top-down Integration
Integration proceeds downward through the control hierarchy, beginning with the main
control module.
Advantages
This approach verifies major control or decision points early in the test process
Disadvantages
Stubs need to be created to substitute for modules that have not been built or tested
yet; this code is later discarded
Because stubs are used to replace lower level modules, no significant data flow can
occur until much later in the integration/testing process
Steps followed in top-down integration:
1. The main control module is used as a test driver and stubs are substituted for all
components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first),
subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real
component.
5. Regression testing may be conducted to ensure that new errors have not been
introduced.
Eg:
Bottom-up Integration
Integration and testing starts with the most atomic modules in the control hierarchy
Advantages
o This approach verifies low-level data processing early in the testing process
o Need for stubs is eliminated
Disadvantages
o Driver modules need to be built to test the lower-level modules; this code is later
discarded or expanded into a full-featured version
o Drivers inherently do not contain the complete algorithms that will eventually use
the services of the lower-level modules; consequently, testing may be incomplete
or more testing may be needed later when the upper level modules are available
Steps for bottom up integration:
1. Low-level components are combined into clusters (sometimes called builds) that perform
a specific software subfunction.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
Eg:
4.7 DEBUGGING
Debugging Process
Debugging is the process of removing the defects.
It occurs as a consequence of successful testing
The debugging process begins with the execution of a test case. The results are assessed, and
a difference between expected and actual performance is encountered
The debugging process matches the symptom with the cause and leads to error correction
The debugging process will usually have one of two outcomes:
(1) The cause will be found and corrected
(2) The cause will not be found. In this case, the debugger may suspect a cause, design a
test case to help validate that suspicion, and work toward error correction in an iterative
fashion.
Three debugging strategies are commonly used:
– Brute force
– Backtracking
– Cause elimination
Brute Force
Brute force is the most commonly used and least efficient method. This is used when all
else fails
This approach involves the use of memory dumps, run-time traces, and output statements
This leads to wasted effort and time
Backtracking
This can be used successfully in small programs
The method starts at the location where a symptom has been uncovered
The source code is then traced backward until the location of the cause is found
In large programs, the number of potential backward paths may become unmanageably
large
Cause Elimination
This approach involves the use of induction or deduction and introduces the concept of
binary partitioning
– Induction (specific to general): Prove that a specific starting value is true; then
prove the general case is true
– Deduction (general to specific): Show that a specific conclusion follows from a
set of general premises
Data related to the error occurrence are organized to isolate potential causes
A cause hypothesis is defined, and the hypothesis is proved or disproved using the data
collected.
Alternatively, a list of all possible causes is developed, and tests are conducted to
eliminate each cause
If initial tests indicate that a particular cause hypothesis shows promise, data are refined
in an attempt to isolate the bug
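A sketch of the binary-partitioning idea behind cause elimination, assuming a hypothetical batch of input records in which a single bad record causes the failure; the suspect data are halved repeatedly until the cause is isolated:

# Binary-partitioning sketch for cause elimination (hypothetical failing input).

def process_batch(records):
    """Hypothetical unit: fails (raises) if any record is negative."""
    if any(r < 0 for r in records):
        raise ValueError("bad record in batch")

def isolate_failing_record(records):
    """Repeatedly halve the suspect data until a single failing record remains."""
    while len(records) > 1:
        mid = len(records) // 2
        first, second = records[:mid], records[mid:]
        try:
            process_batch(first)
        except ValueError:
            records = first      # failure reproduced in the first half
        else:
            records = second     # otherwise the cause must be in the second half
    return records[0]

data = [3, 8, 1, 9, 4, -7, 2, 6]
print("failing record:", isolate_failing_record(data))   # -> -7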
Correcting the Error
Three Questions to ask Before Correcting the Error
• Is the cause of the bug reproduced in another part of the program?
• What "next bug" might be introduced by the fix that is about to be made?
• What could have been done to prevent this bug in the first place?
4.9 REENGINEERING
Reengineering is the process of rebuilding software products so that they exhibit increased
functionality, better performance, greater reliability, and easier maintenance.
Software reengineering involves inventory analysis, document restructuring, reverse
engineering, program and data restructuring, and forward engineering.
The final product for any reengineering process is a reengineered business process and/or
the reengineered software to support it.
4.9.1 REENGINEERING PROCESS MODEL
Software reengineering process model defines six activities that occur in a linear
sequence, but this is not always the case.
First the unstructured (“dirty”) source code is restructured so that it contains only the
structured programming constructs which makes the source code easier to read and
provides the basis for all the subsequent reverse engineering activities.
The core of reverse engineering is an activity called extract abstractions. In this stage, the
old program is evaluated, and a meaningful specification of the processing, the user interface,
and the database that is used is developed from the source code.
Output of reverse engineering is a clear, unambiguous final specification that helps in
easy understanding of the code.
Reverse Engineering to Understand Data
Internal data structures - Reverse engineering techniques for internal program data focus
on the definition of classes of objects. Program code is examined with the intention of
grouping related program variables.
Database structure – A database allows the definition of data objects and supports some
method for establishing relationships among the objects. Therefore, reengineering one
database schema into another requires an understanding of existing objects and their
relationships.
Reverse Engineering to Understand Processing
Here the source code is analyzed at varying levels of detail (system, program, component,
pattern, statement) to understand procedural abstractions and overall functionality
Reverse Engineering User Interfaces
To fully understand an existing user interface, the structure and behavior of the interface
must be specified.
Three basic questions that must be answered are:
1. What are the basic actions (e.g., key strokes or mouse operations) processed by the
interface?
2. What is a compact description of the system's behavioral response to these actions?
3. What concept of equivalence of interfaces is relevant here?
Each business system (also called business function) is composed of one or more
business processes, and each business process is defined by a set of sub processes.
BPR can be applied at any level of the hierarchy.
BPR MODEL
BPR is an iterative model and has six activities
Business definition: Business goals are identified within the context of four key drivers:
cost reduction, time reduction, quality improvement, and personnel development and
empowerment.
Process identification: Processes that are critical to achieving the goals defined in the
business definition are identified.
UNIT-V
SOFTWARE PROJECT MANAGEMENT
Estimation – LOC, FP Based Estimation, Make/Buy Decision COCOMO I & II Model – Project
Scheduling – Scheduling, Earned Value Analysis Planning – Project Plan, Planning Process, RFP
Risk Management – Identification, Projection – Risk Management-Risk Identification-RMMM
Plan-CASE TOOLS.
==========================================================================
INTRODUCTION
Software project management is managing people, process and problems during a
software project
It is the discipline of planning, organizing, and managing resources for the successful
completion of specific project goals and objectives
Software project management activities include:
o Project Planning
o Estimation of the work
o Estimation of resources required
o Project scheduling
o Risk management
5.1 ESTIMATION
Estimation serves as a foundation for all other project planning actions.
Estimation of resources, cost, and schedule for a software engineering effort requires
experience, access to good historical information, and the courage to commit to
quantitative predictions when qualitative information is all that exists.
Estimation carries inherent risk and this risk leads to uncertainty.
5.1.1 Factors influencing estimation risk
Project complexity:
Project size
Problem decomposition
Degree of structural uncertainty
The lines of code and function points are the measures from which productivity metrics
can be computed.
LOC and FP data are used in two ways during software project estimation:
(1) As estimation variables to "size" each element of the software, and
(2) As baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.
LOC and FP estimation are distinct estimation techniques.
5. The expected value for the estimation variable (size) S can be computed as a
weighted average of the optimistic (Sopt), most likely (Sm), and pessimistic (Spess)
estimates:
S = (Sopt + 4Sm + Spess) / 6
6. Once the expected value for the estimation variable has been determined, historical
LOC or FP productivity data are applied.
Local domain averages should then be computed by the project domain. That is, projects
should be grouped by team size, application area, complexity, and other relevant
parameters.
The LOC and FP estimation techniques differ in the level of detail required for
decomposition and the target of the partitioning.
When LOC is used as the estimation variable, the greater the degree of partitioning, the
more likely reasonably accurate estimates of LOC can be developed.
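A small sketch of steps 5 and 6 above, using hypothetical decomposed functions, three-point size estimates, and hypothetical historical productivity and cost baselines:

# LOC-based estimation sketch (hypothetical functions, sizes, and productivity data).

# Optimistic, most likely, pessimistic LOC estimates for each decomposed function.
functions = {
    "user interface and control": (1800, 2400, 2650),
    "2-D geometric analysis":     (4100, 5200, 7400),
    "database management":        (2900, 3400, 3600),
}

def expected_size(s_opt, s_m, s_pess):
    """Three-point estimate: S = (Sopt + 4*Sm + Spess) / 6."""
    return (s_opt + 4 * s_m + s_pess) / 6

total_loc = sum(expected_size(*est) for est in functions.values())

# Hypothetical historical baselines for this application domain.
productivity_loc_pm = 620.0   # LOC per person-month
cost_per_loc = 13.0           # currency units per LOC

effort_pm = total_loc / productivity_loc_pm
cost = total_loc * cost_per_loc

print(f"estimated size  : {total_loc:,.0f} LOC")
print(f"estimated effort: {effort_pm:.1f} person-months")
print(f"estimated cost  : {cost:,.0f}")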
Problem:
Develop a software package for a computer-aided design application for mechanical
components. The software is to execute on an engineering workstation and must interface
with various computer graphics peripherals including a mouse, digitizer, high-resolution
color display, and laser printer.
Preliminary statement of software scope can be developed:
The mechanical CAD software will accept two- and three-dimensional geometric data
from an engineer. The engineer will interact and control the CAD system through a user
interface that will exhibit characteristics of good human/machine interface design. All
geometric data and other supporting information will be maintained in a CAD database.
Design analysis modules will be developed to produce the required output, which will be
displayed on a variety of graphics devices. The software will be designed to control and
interact with peripheral devices that include a mouse, digitizer, laser printer, and plotter.
This statement of scope is preliminary—it is not bounded.
The function point (FP) metric can be used effectively as a means for measuring the
functionality delivered by a system.
Using historical data, the FP metric can then be used to
(1) Estimate the cost or effort required to design, code, and test the software;
(2) Predict the number of errors that will be encountered during testing; and
(3) Forecast the number of components and/or the number of projected source lines in the
implemented system.
Function points are derived using an empirical relationship based on countable (direct)
measures of software’s information domain and qualitative assessments of software
complexity.
Information domain values are defined in the following manner:
Number of external inputs (EIs).
Number of external outputs (EOs).
Number of external inquiries (EQs).
Number of internal logical files (ILFs).
Number of external interface files (EIFs).
Steps involved in FP based estimation:
1. Find the bounded statement of software scope and
2. Decompose the statement of scope into problem functions that can each be
estimated individually.
3. Estimate the information domain characteristics—inputs, outputs, data files,
inquiries, and external interfaces—as well as the 14 complexity adjustment values
4. Using the estimates derive an FP value that can be tied to past data and used to
generate an estimate.
5. Using historical data, estimate an optimistic, most likely, and pessimistic size
value for each function or count for each information domain value.
6. A three-point or expected value can then be computed.
7. The expected value for the estimation variable (size) S can be computed as a
weighted average of the optimistic (Sopt), most likely (Sm), and pessimistic (Spess)
estimates:
S = (Sopt + 4Sm + Spess) / 6
8. Once the expected value for the estimation variable has been determined,
historical LOC or FP productivity data are applied.
Once the information data have been collected, calculate the FP values by associating a
complexity value with each count as shown below
Eg:
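In addition to the tabular example referenced above, the FP computation can be sketched as follows (the counts are hypothetical; the average complexity weights and the adjustment formula FP = count total x [0.65 + 0.01 x sum(Fi)] follow the standard function-point definition):

# Function-point computation sketch (hypothetical counts; standard average weights).

# (count, weight) pairs: weights shown are the usual "average" complexity weights.
information_domain = {
    "external inputs (EI)":           (24, 4),
    "external outputs (EO)":          (16, 5),
    "external inquiries (EQ)":        (22, 4),
    "internal logical files (ILF)":   (4, 10),
    "external interface files (EIF)": (2, 7),
}

count_total = sum(count * weight for count, weight in information_domain.values())

# Fi are the 14 complexity adjustment values, each rated 0 (no influence) to 5 (essential).
value_adjustment_factors = [3, 4, 2, 5, 3, 4, 2, 3, 4, 3, 2, 3, 4, 3]

fp = count_total * (0.65 + 0.01 * sum(value_adjustment_factors))
print(f"count total = {count_total}")
print(f"FP = {fp:.0f}")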
Figure depicts a decision tree for a software- based system X. In this case, the software
engineering organization can (1) build system X from scratch, (2) reuse existing partial-
experience components to construct the system, (3) buy an available software product
and modify it to meet local needs, or (4) contract the software development to an outside
vendor.
If the system is to be built from scratch, there is a 70 percent probability that the job will
be difficult. The project planner estimates that a difficult development effort will cost
$450,000. A “simple” development effort is estimated to cost $380,000.
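The expected cost along a decision-tree path is the sum of (path probability x estimated path cost). Using only the "build" figures given above, and taking the probability of a "simple" effort as the remaining 30 percent:

# Expected cost of the "build" branch of the decision tree (figures from the text).
# The same computation is repeated for the reuse, buy, and contract branches;
# the path with the lowest expected cost guides the make/buy decision.
expected_cost_build = 0.70 * 450_000 + 0.30 * 380_000
print(f"expected cost (build) = ${expected_cost_build:,.0f}")   # $429,000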
Some insight into the basic COCOMO model can be obtained by plotting the estimated
characteristics for different software sizes.
Fig. shows a plot of estimated effort versus product size.
It is important to note that the effort and duration estimates obtained using the
COCOMO model are called the nominal effort estimate and the nominal duration estimate.
Intermediate COCOMO model
The basic COCOMO model assumes that effort and development time are functions of
the product size alone.
But other project parameters also affect the effort required to develop the product as well
as the development time. Therefore, in order to obtain an accurate estimation of the effort
and project duration, the effect of all relevant parameters must be taken into account.
The intermediate COCOMO model recognizes this and refines the initial estimate
obtained using the basic COCOMO expressions by using a set of 15 cost drivers
(multipliers) based on various attributes of software development.
In general, the cost drivers can be classified as being attributes of the following items:
Product: The characteristics of the product that are considered include the inherent
complexity of the product, reliability requirements of the product, etc.
Computer: Characteristics of the computer that are considered include the execution
speed required, storage space required etc.
Personnel: The attributes of development personnel that are considered include the
experience level of personnel, programming capability, analysis capability, etc.
Development Environment: Development environment attributes capture the
development facilities available to the developers. An important parameter that is
considered is the sophistication of the automation (CASE) tools used for software
development.
Complete COCOMO model
A major shortcoming of both the basic and intermediate COCOMO models is that they
consider a software product as a single homogeneous entity. However, most large
systems are made up of several smaller sub-systems. These sub-systems may have widely
different characteristics.
For example, some sub-systems may be considered as organic type, some semidetached,
and some embedded, also for some subsystems the reliability requirements may be high,
for some the development team might have no previous experience of similar
development, and so on.
COCOMO II requires sizing information, and three sizing options are available as part of
the model hierarchy:
o object points
o function points
o lines of source code
An object point is an indirect software measure that is computed using counts of the number
of
(1) Screens
(2) Reports, and
(3) Components likely to be required to build the application.
Each object instance is classified into one of three complexity levels as given below
Complexity is a function of the number and source of the client and server data tables
that are required to generate the screen or report and the number of views or sections
presented as part of the screen or report.
The object point count is then determined by multiplying the number of object instances
by the weighting factor and summing to obtain a total object point count.
The percent of reuse (%reuse) is estimated and the object point count is adjusted to give the
number of new object points (NOP):
NOP = (object points) x [(100 - %reuse) / 100]
Once the productivity rate (PROD) has been determined, an estimate of project effort is
computed using
Estimated effort = NOP / PROD
In more advanced COCOMO II models, a variety of scale factors, cost drivers, and
adjustment procedures are required.
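A sketch of the application-composition computation, with hypothetical object-instance counts, complexity weights, percent reuse, and productivity rate (the NOP and effort expressions follow the COCOMO II application composition model):

# COCOMO II application-composition sketch (hypothetical counts and productivity rate).

# (instance count, complexity weight) for each object type.
object_instances = {
    "screens":    (8, 2),    # medium-complexity screens
    "reports":    (4, 5),    # medium-complexity reports
    "components": (3, 10),   # 3GL components
}

object_points = sum(count * weight for count, weight in object_instances.values())

percent_reuse = 25.0
nop = object_points * (100.0 - percent_reuse) / 100.0   # new object points

prod = 13.0   # hypothetical productivity rate (NOP per person-month)
effort_pm = nop / prod

print(f"object points = {object_points}, NOP = {nop:.1f}, effort = {effort_pm:.1f} PM")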
COCOMO Cost Drivers
• Personnel Factors
– Programming language experience
– Personnel capability and experience
– Language and tool experience etc
• Product Factors
– Database size
– Required reusability
– Product reliability and complexity etc
• Project Factors
– Use of software tools
– Required development schedule
– Multi-site development etc
5.5 PROJECT SCHEDULING
Software project scheduling is the activity of allocating the estimated effort to specific
software engineering tasks.
The schedule evolves over time. During early stages of project planning, a macroscopic
schedule is developed. This schedule identifies all major process framework activities
and the product functions to which they are applied.
As the project gets developed, each entry on the macroscopic schedule is refined into a
detailed schedule. Here, specific software actions and tasks required to accomplish an
activity are identified and scheduled.
Basic Principles
Compartmentalization
Decompose the project into a number of manageable activities and tasks.
Interdependency
The interdependency between each module must be determined. Some tasks must
occur in sequence, while others can occur in parallel. Some may be independent.
Time allocation
Each task to be scheduled must be allocated some number of work units (e.g.,
person-days of effort). Each task must be assigned a start date and a completion date .
Effort validation
Every project has a defined number of people on the software team. So ensure
that no more than the allocated number of people has been scheduled at any given time.
Defined responsibilities
Every task that is scheduled should be assigned to a specific team member.
Defined outcomes
Every task that is scheduled should have a defined outcome.Work products are
often combined in deliverables.
Defined milestones
Every task or group of tasks should be associated with a project milestone. A
milestone is accomplished when one or more work products has been reviewed for
quality and has been approved.
Each of these principles is applied as the project schedule evolves.
A task set is a collection of software engineering work tasks, milestones, work products,
and quality assurance filters that must be accomplished to complete a particular project.
To develop a project schedule, a task set must be distributed on the project time line.
The task set will vary depending upon the project type and the degree of rigor with which
the software team decides to do its work.
6. Customer reaction to the concept solicits feedback on a new technology concept and
targets specific customer applications.
The two project scheduling methods that can be applied to software development are
Program evaluation and review technique (PERT)
Critical path method (CPM)
Interdependencies among tasks are defined using a task network.
Tasks, sometimes called the project work breakdown structure (WBS), are defined for the
product as a whole or for individual functions.
Both PERT and CPM provide quantitative tools that allow the planner to
(1) Determine the critical path—the chain of tasks that determines the duration of the project,
(2) Establish “most likely” time estimates for individual tasks by applying statistical models,
(3) Calculate “boundary times” that define a time “window” for a particular task.
Time-Line Charts
A time-line chart is also called a Gantt chart.
A time-line chart can be developed for the entire project or a separate chart can be
developed for each project function.
All project tasks are listed in the left hand column
The next few columns may list the following for each task: projected start date, projected
stop date, projected duration, actual start date, actual stop date, actual duration, task inter-
dependencies (i.e., predecessors)
To the right are columns representing dates on a calendar
The length of a horizontal bar on the calendar indicates the duration of the task
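A minimal sketch that prints such a time line for a few hypothetical tasks (task names, start weeks, and durations are invented; a zero-duration entry stands for a milestone, shown as a diamond):

# Minimal text time-line (Gantt-style) sketch for hypothetical tasks.
# Each task: (name, planned start week, planned duration in weeks).
tasks = [
    ("Identify needs and benefits", 1, 2),
    ("Define desired output",       2, 3),
    ("Define behavior/function",    4, 3),
    ("Isolate software elements",   6, 2),
    ("Milestone: scope defined",    8, 0),   # zero-duration milestone
]

total_weeks = max(start + duration for _, start, duration in tasks)

for name, start, duration in tasks:
    bar = " " * (start - 1) + ("*" * duration if duration else "◆")
    print(f"{name:<32}|{bar:<{total_weeks}}|")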
Eg:
After a problem has been diagnosed, additional resources may be focused on the problem
area: staff may be redeployed or the project schedule can be redefined.
When faced with severe deadline pressure, experienced project managers sometimes use
a project scheduling and control technique called time-boxing.
Time-boxing strategy:
The time-boxing strategy recognizes that the complete product may not be deliverable by
the predefined deadline.
An incremental software paradigm is applied to the project
The tasks associated with each increment are “time-boxed” (i.e., given a specific start and
stop time) by working backward from the delivery date
A “box” is put around each task. When a task hits the boundary of its time box, work
stops and the next task begins.
This approach succeeds based on the premise that when the time-box boundary is
encountered, it is likely that 90% of the work is complete
The remaining 10% of the work can be
Delayed until the next increment
Completed later if required
Rather than becoming “stuck” on a task, the project proceeds toward the delivery date.
1. The budgeted cost of work scheduled (BCWS) is determined for each work task
represented in the schedule. To determine progress at a given point, the value of BCWS
is the sum of the BCWSi values for all work tasks that should have been completed by that
point in time on the project schedule.
2. The BCWS values for all work tasks are summed to derive the budget at
completion (BAC).
BAC = ∑ (BCWSk) for all tasks k
3. Next, the value for budgeted cost of work performed (BCWP) is computed.
The value for BCWP is the sum of the BCWS values for all work tasks that
have actually been completed by a point in time on the project schedule.
BCWS represents the budget of the activities that were planned to be completed and
BCWP represents the budget of the activities that actually were completed.
Given values for BCWS, BAC, and BCWP, important progress indicators can be
computed:
Schedule performance index, SPI = BCWP / BCWS
Schedule variance, SV = BCWP - BCWS
where,
SPI is an indication of the efficiency with which the project is utilizing scheduled
resources.
SPI value close to 1.0 indicates efficient execution of the project schedule.
SV is simply an absolute indication of variance from the planned schedule.
Percentage scheduled for completion provides an indication of the percentage of work
that should have been completed by time t.
Percent scheduled for completion = BCWS / BAC
Percent complete provides a quantitative indication of the percent of completeness of the
project at a given point in time t.
Percent complete = BCWP / BAC
The actual cost of work performed (ACWP) is the sum of the effort actually expended on
work tasks that have been completed by a point in time on the project schedule.
It is then possible to compute
Cost performance index, CPI= BCWP /ACWP
Cost variance, CV =BCWP - ACWP
A CPI value close to 1.0 provides a strong indication that the project is within its
defined budget.
CV is an absolute indication of cost savings (against planned costs) or shortfall at a
particular stage of a project.
Earned value analysis illuminates scheduling difficulties before they might otherwise be
apparent. This enables the project manager to take corrective action before a project crisis
develops.
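A small worked sketch of the indicators defined above, using hypothetical BCWS, BCWP, and ACWP values in person-days:

# Earned value analysis sketch (hypothetical effort values in person-days).
bcws_all_tasks = [4.0, 6.0, 5.0, 3.0, 7.0, 5.0]   # planned effort for every task
bac = sum(bcws_all_tasks)                          # budget at completion

bcws = 4.0 + 6.0 + 5.0      # tasks that SHOULD have been completed by time t
bcwp = 4.0 + 6.0            # tasks that HAVE actually been completed by time t
acwp = 12.0                 # effort actually expended by time t

spi = bcwp / bcws                       # schedule performance index
sv = bcwp - bcws                        # schedule variance
pct_scheduled = bcws / bac              # percent scheduled for completion
pct_complete = bcwp / bac               # percent complete
cpi = bcwp / acwp                       # cost performance index
cv = bcwp - acwp                        # cost variance

print(f"SPI={spi:.2f}  SV={sv:+.1f}  CPI={cpi:.2f}  CV={cv:+.1f}")
print(f"scheduled for completion: {pct_scheduled:.0%}, complete: {pct_complete:.0%}")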
The objective of software project planning is to provide a framework that enables the
manager to make reasonable estimates of resources, cost, and schedule.
Estimates should attempt to define best-case and worst-case scenarios so that project
outcomes can be bounded.
Functions described in the statement of scope are evaluated and refined to provide more
detail at the beginning of estimation.
Software feasibility has four solid dimensions:
Technology— Is a project technically feasible? Determine whether the technology
needed to develop the project is available.
Finance—Is it financially feasible? Can development be completed at a cost the software
organization, its client, or the market can afford?
Time—Will the project’s time-to-market beat the competition?
Resources— Does the organization have the resources needed to succeed?
5.7.2 RESOURCES
The three major categories of software engineering resources are:
People
Reusable software components
Development environment (hardware and software tools).
Each resource is specified with four characteristics:
description of the resource
a statement of availability
time when the resource will be required
Duration of time that the resource will be applied.
Human Resources
The planner begins by evaluating software scope and selecting the skills required to
complete development.
Both organizational position (e.g., manager, senior software engineer) and specialty (e.g.,
telecommunications, database, client-server) are specified.
For relatively small projects, a single individual may perform all software engineering
tasks, consulting with specialists as required.
For larger projects, the software team may be geographically dispersed across a number
of different locations.
The number of people required for a software project can be determined only after the
estimation of development effort
Reusable Software Resources
New components.
Software components must be built by the software team specifically for the
needs of the current project.
Risk analysis and management are the actions which help to understand and manage
uncertainty.
A risk is a potential problem which may or may not happen.
To manage risk, first identify the risk, assess its probability of occurrence, estimate its
impact, and establish a contingency plan should the problem actually occur
RISK CATEGORIZATION
The two broad categories of risks are:
Product specific risk
- Risks that can be identified only with a clear understanding of the
technology, the people, and the environment that is specific to the software
that is to be built
Generic risk - Risks that are a potential threat to every software project
Categories of Product specific risk:
Project risks
– Project risks threaten the project plan
– If they occur, then the project schedule will slip and the costs will increase
Technical risks
– These threaten the quality and timeliness of the software to be produced
– If they occur, implementation becomes difficult or impossible
Business risks
– They threaten the viability of the software to be built
– If they become real, they jeopardize the project or the product
Sub-categories of Business risks
– Market risk – building an excellent product or system that no one really
wants
– Strategic risk – building a product that no longer fits into the overall
business strategy for the company
– Sales risk – building a product that the sales force doesn't understand how
to sell
– Management risk – losing the support of senior management due to a
change in focus or a change in people
– Budget risk – losing budgetary or personnel commitment
Categories of Generic risks:
Known risks
– Known risks are those risks that can be uncovered after careful evaluation of the
project plan, the business and technical environment in which the project is being
developed, and other reliable information sources (e.g., unrealistic delivery date)
Predictable risks
– Predictable risks are extrapolated from past project experience (e.g., past
turnover)
Unpredictable risks
– Those risks that are extremely difficult to identify in advance are referred as
unpredictable risks.
Reactive vs. Proactive Risk Strategies
Reactive risk strategies
– Nothing is done about risks until something goes wrong
• The team then flies into action in an attempt to correct the problem rapidly
(also referred to as fire-fighting mode)
– Crisis management is the choice of management techniques
Proactive risk strategies
By identifying known and predictable risks, the project manager takes a first step toward
avoiding them when possible and controlling them when necessary
To identify risks, create a risk item checklist.
Risk Item Checklist
This focuses on known and predictable risks in specific subcategories
The check list can be organized in several ways
1. A list of characteristics relevant to each risk subcategory
2. Questionnaire that leads to an estimate on the impact of each risk
3. A list containing a set of risk component and drivers and their probability
of occurrence
1. Known and Predictable Risk Categories
Product size – risks associated with overall size of the software to be built
Business impact – risks associated with constraints imposed by management or the
marketplace
Customer characteristics – risks associated with sophistication of the customer and the
developer's ability to communicate with the customer in a timely manner
Process definition – risks associated with the degree to which the software process has
been defined and is followed
Development environment – risks associated with availability and quality of the tools to
be used to build the project
Technology to be built – risks associated with complexity of the system to be built and
the "newness" of the technology in the system
Staff size and experience – risks associated with overall technical and project experience
of the software engineers who will do the work
2. Sample Questionnaire on Project Risk
1) Are requirements fully understood by the software engineering team and its customers?
2) Have customers been involved fully in the definition of requirements?
3) Is the project scope stable?
4) Does the software engineering team have the right mix of skills?
5) Are project requirements stable?
6) Does the project team have experience with the technology to be implemented?
– Risk Category – one of seven risk categories (Problem size, business impact etc )
– Probability – estimation of risk occurrence based on group input
– Impact – (1) catastrophic (2) critical (3) marginal (4) negligible
– RMMM – Pointer to a paragraph in the Risk Mitigation, Monitoring, and
Management Plan
The meetings with the customer should ensure that the customer and our organization
understand each other and the requirements for the product.
Management
Should the development team come to the realization that their idea of the product
specifications differs from those of the customer, the customer should be immediately
notified and whatever steps necessary to rectify this problem should be done. Preferably a
meeting should be held between the development team and the customer to discuss at
length this issue.
The RMMM Plan
The RMMM plan may be a part of the software development plan or may be a separate
document
Once RMMM has been documented and the project has begun, the risk mitigation, and
monitoring steps begin
– Risk mitigation is a problem avoidance activity
– Risk monitoring is a project tracking activity
Risk monitoring has three objectives
– To assess whether predicted risks do, in fact, occur
– To ensure that risk aversion steps defined for the risk are being properly applied
– To collect information that can be used for future risk analysis
The findings from risk monitoring may allow the project manager to ascertain what risks
caused which problems throughout the project.
UNIT V
PROJECT MANAGEMENT
Estimation – FP Based, LOC Based, Make/Buy Decision, COCOMO II - Planning – Project Plan,
Planning Process, RFP Risk Management – Identification, Projection, RMMM - Scheduling and Tracking –
Relationship between people and effort, Task Set & Network, Scheduling, EVA – Process and Project Metrics
5.1 INTRODUCTION
The objective of software project planning is to provide a framework that enables the
manager to make reasonable estimates of resources, cost, and schedule.
Estimates should attempt to define best-case and worst-case scenarios so that project
outcomes can be bounded.
5.3 ESTIMATION
4. Change sizing
This approach is used when a project encompasses the use of existing software
that must be modified in some way as part of a project.
The planner estimates the number and type (e.g., reuse, adding code, changing
code, deleting code) of modifications that must be accomplished.
The lines of code and function points are the measures from which productivity metrics
can be computed.
LOC and FP data are used in two ways during software project estimation:
(1) As estimation variables to "size" each element of the software, and
(2) As baseline metrics collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.
LOC and FP estimation are distinct estimation techniques.
6. Once the expected value for the estimation variable has been determined, historical
LOC or FP productivity data are applied.
Local domain averages should then be computed by the project domain. That is, projects
should be grouped by team size, application area, complexity, and other relevant
parameters.
The LOC and FP estimation techniques differ in the level of detail required for
decomposition and the target of the partitioning.
When LOC is used as the estimation variable, the greater the degree of partitioning, the
more likely reasonably accurate estimates of LOC can be developed.
Problem:
Develop a software package for a computer-aided design application for mechanical
components. The software is to execute on an engineering workstation and must interface
with various computer graphics peripherals including a mouse, digitizer, high-resolution
color display, and laser printer.
The function point (FP) metric can be used effectively as a means for measuring the
functionality delivered by a system.
Using historical data, the FP metric can then be used to
(1) Estimate the cost or effort required to design, code, and test the software;
(2) Predict the number of errors that will be encountered during testing; and
(3) Forecast the number of components and/or the number of projected source lines in the
implemented system.
Function points are derived using an empirical relationship based on countable (direct)
measures of software’s information domain and qualitative assessments of software
complexity.
Information domain values are defined in the following manner:
Number of external inputs (EIs).
Number of external outputs (EOs).
Number of external inquiries (EQs).
Number of internal logical files (ILFs).
Number of external interface files (EIFs).
Steps involved in FP based estimation:
1. Find the bounded statement of software scope and
2. Decompose the statement of scope into problem functions that can each be
estimated individually.
3. Estimate the information domain characteristics—inputs, outputs, data files,
inquiries, and external interfaces—as well as the 14 complexity adjustment values
4. Using the estimates derive an FP value that can be tied to past data and used to
generate an estimate.
5. Using historical data, estimate an optimistic, most likely, and pessimistic size
value for each function or count for each information domain value.
6. A three-point or expected value can then be computed.
7. The expected value for the estimation variable (size) S can be computed as a
weighted average of the optimistic (Sopt), most likely (Sm), and pessimistic (Spess)
estimates:
S = (Sopt + 4Sm + Spess) / 6
8. Once the expected value for the estimation variable has been determined,
historical LOC or FP productivity data are applied.
Once the information data have been collected, calculate the FP values by associating a
complexity value with each count as shown below
Semidetached:
A development project is considered to be of semidetached type if the development team
consists of a mix of experienced and inexperienced staff; team members may have
experience on related systems but may be unfamiliar with some aspects of the system being
developed.
Embedded:
A development project is considered to be of embedded type if the software being
developed is strongly coupled to complex hardware, or if stringent regulations on the
operational procedures exist.
COCOMO MODEL
COCOMO (Constructive Cost Model) was proposed by Boehm
According to Boehm, software cost estimation should be done through three stages:
o Basic COCOMO
o Intermediate COCOMO and
o Complete COCOMO.
Basic COCOMO Model
The basic COCOMO model gives an approximate estimate of the project parameters.
The basic COCOMO estimation model is given by the following expressions:
Effort = a1 x (KLOC)^a2 PM
Tdev = b1 x (Effort)^b2 months
Where
• KLOC is the estimated size of the software product expressed in Kilo Lines of Code,
• a1, a2, b1, b2 are constants for each category of software products,
• Tdev is the estimated time to develop the software, expressed in months,
• Effort is the total effort required to develop the software product, expressed in
person-months (PMs).
The effort estimation is expressed in units of person-months (PM).
Some insight into the basic COCOMO model can be obtained by plotting the estimated
characteristics for different software sizes.
Fig. shows a plot of estimated effort versus product size.
systems are made up of several smaller sub-systems. These sub-systems may have widely
different characteristics.
For example, some sub-systems may be considered as organic type, some semidetached,
and some embedded, also for some subsystems the reliability requirements may be high,
for some the development team might have no previous experience of similar
development, and so on.
The complete COCOMO model considers these differences in characteristics of the
subsystems and estimates the effort and development time as the sum of the estimates for
the individual subsystems.
The cost of each subsystem is estimated separately.
This approach reduces the margin of error in the final estimate.
Example:
Assume that the size of an organic type software product has been estimated to be
32,000 lines of source code. Assume that the average salary of software engineers be Rs.
15,000/- per month. Determine the effort required to develop the software product and
the nominal development time.
From the basic COCOMO estimation formula for organic software:
Effort = 2.4 x (32)^1.05 = 91 PM
Nominal development time = 2.5 x (91)^0.38 = 14 months
Cost required to develop the product = 14 x 15,000 = Rs. 210,000/-
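The same calculation can be sketched in a few lines (the a1, a2, b1, b2 values are the standard basic COCOMO constants for each product category; the cost figure follows the example above):

# Basic COCOMO sketch reproducing the organic-mode example above.
# Constants (a1, a2, b1, b2) per product category (standard basic COCOMO values).
constants = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, category: str = "organic"):
    a1, a2, b1, b2 = constants[category]
    effort_pm = a1 * kloc ** a2          # nominal effort in person-months
    tdev_months = b1 * effort_pm ** b2   # nominal development time in months
    return effort_pm, tdev_months

effort, tdev = basic_cocomo(32, "organic")
cost = round(tdev) * 15_000              # Rs. 15,000 per month, as in the example
print(f"effort ≈ {effort:.0f} PM, Tdev ≈ {tdev:.0f} months, cost ≈ Rs. {cost:,}")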
As the project gets developed, each entry on the macroscopic schedule is refined into a
detailed schedule. Here, specific software actions and tasks required to accomplish an
activity are identified and scheduled.
Basic Principles
Compartmentalization
Decompose the project into a number of manageable activities and tasks.
Interdependency
The interdependency between each module must be determined. Some tasks must
occur in sequence, while others can occur in parallel. Some may be independent.
Time allocation
Each task to be scheduled must be allocated some number of work units (e.g.,
person-days of effort). Each task must be assigned a start date and a completion date .
Effort validation
Every project has a defined number of people on the software team. So ensure
that no more than the allocated number of people has been scheduled at any given time.
Defined responsibilities
Every task that is scheduled should be assigned to a specific team member.
Defined outcomes
Every task that is scheduled should have a defined outcome.Work products are
often combined in deliverables.
Defined milestones
Every task or group of tasks should be associated with a project milestone. A
milestone is accomplished when one or more work products has been reviewed for
quality and has been approved.
A task set is a collection of software engineering work tasks, milestones, work products,
and quality assurance filters that must be accomplished to complete a particular project.
To develop a project schedule, a task set must be distributed on the project time line.
The task set will vary depending upon the project type and the degree of rigor with which
the software team decides to do its work.
1. Concept development projects that are initiated to explore some new business concept or
application of some new technology.
2. New application development projects that are undertaken as a consequence of a specific
customer request.
3. Application enhancement projects that occur when existing software undergoes major
modifications to function, performance, or interfaces that are observable by the end user.
4. Application maintenance projects that correct, adapt, or extend existing software in ways
that may not be immediately obvious to the end user.
5. Reengineering projects that are undertaken with the intent of rebuilding an existing
system in whole or in part.
The two project scheduling methods that can be applied to software development are
Program evaluation and review technique (PERT)
Critical path method (CPM)
Interdependencies among tasks are defined using a task network.
Tasks, sometimes called the project work breakdown structure (WBS), are defined for the
product as a whole or for individual functions.
Both PERT and CPM provide quantitative tools that allow the planner to
(1) Determine the critical path—the chain of tasks that determines the duration of the
project,
(2) Establish “most likely” time estimates for individual tasks by applying statistical
models,
(3) Calculate “boundary times” that define a time “window” for a particular task.
Time-Line Charts
When multiple bars occur at the same time interval on the calendar, this implies task
concurrency
A diamond in the calendar area of a specific task indicates that the task is a milestone; a
milestone has a time duration of zero
Eg:
Software project scheduling tools produce project tables from the information available
in the time – line chart
Project table is a tabular listing of all project tasks, their planned and actual start and end
dates, and a variety of related information.
Project tables help to track progress.
Eg: Project table
The project schedule can be tracked in a number of ways:
1. Conducting periodic project status meetings in which each team member reports
progress and problems
2. Evaluating the results of all reviews conducted throughout the software
engineering process
3. Determining whether formal project milestones have been accomplished by the
scheduled date
4. Comparing the actual start date to the planned start date for each project task
listed in the resource table
5. Meeting informally with practitioners to obtain their subjective assessment of
progress to date and problems on the horizon
Quantitative approach:
6. Using earned value analysis to assess progress quantitatively
Time-boxing strategy:
The time-boxing strategy recognizes that the complete product may not be deliverable by
the predefined deadline.
An incremental software paradigm is applied to the project
The tasks associated with each increment are “time-boxed” (i.e., given a specific start and
stop time) by working backward from the delivery date
A “box” is put around each task. When a task hits the boundary of its time box, work
stops and the next task begins.
This approach succeeds based on the premise that when the time-box boundary is
encountered, it is likely that 90% of the work is complete
The remaining 10% of the work can be
Delayed until the next increment
Completed later if required
Rather than becoming “stuck” on a task, the project proceeds toward the delivery date.
3. Next, the value for budgeted cost of work performed (BCWP) is computed.
The value for BCWP is the sum of the BCWS values for all work tasks that
have actually been completed by a point in time on the project schedule.
BCWS represents the budget of the activities that were planned to be completed and
BCWP represents the budget of the activities that actually were completed.
Given values for BCWS, BAC, and BCWP, important progress indicators can be
computed:
SPI is an indication of the efficiency with which the project is utilizing scheduled
resources.
SPI value close to 1.0 indicates efficient execution of the project schedule.
The actual cost of work performed (ACWP) is the sum of the effort actually expended on
work tasks that have been completed by a point in time on the project schedule.
It is then possible to compute
Cost performance index, CPI= BCWP /ACWP
A CPI value close to 1.0 provides a strong indication that the project is within its
defined budget.
Earned value analysis illuminates scheduling difficulties before they might otherwise be
apparent. This enables the project manager to take corrective action before a project crisis
develops.
Risk analysis and management are the actions which help to understand and manage
uncertainty.
A risk is a potential problem which may or may not happen.
To manage risk, first identify the risk, assess its probability of occurrence, estimate its
impact, and establish a contingency plan should the problem actually occur
RISK CATEGORIZATION
The two broad categories of risks are:
Product specific risk
- Risks that can be identified only with a clear understanding of the
technology, the people, and the environment that is specific to the software
that is to be built
Generic risk - Risks that are a potential threat to every software project
Project risks
– Project risks threaten the project plan
– If they occur, then the project schedule will slip and the costs will increase
Technical risks
– These threaten the quality and timeliness of the software to be produced
– If they occur, implementation becomes difficult or impossible
Business risks
– They threaten the viability of the software to be built
– If they become real, they jeopardize the project or the product
Sub-categories of Business risks
– Market risk – building an excellent product or system that no one really
wants
– Strategic risk – building a product that no longer fits into the overall
business strategy for the company
– Sales risk – building a product that the sales force doesn't understand how
to sell
– Management risk – losing the support of senior management due to a
change in focus or a change in people
– Budget risk – losing budgetary or personnel commitment
Categories of Generic risks:
Known risks
– Known risks are those risks that can be uncovered after careful evaluation of the
project plan, the business and technical environment in which the project is being
developed, and other reliable information sources (e.g., unrealistic delivery date)
Predictable risks
– Predictable risks are extrapolated from past project experience (e.g., past
turnover)
Unpredictable risks
– Those risks that are extremely difficult to identify in advance are referred as
unpredictable risks.
By identifying known and predictable risks, the project manager takes a first step toward
avoiding them when possible and controlling them when necessary
To identify risks, create a risk item checklist.
Risk Item Checklist
This focuses on known and predictable risks in specific subcategories
The check list can be organized in several ways
1. A list of characteristics relevant to each risk subcategory
2. Questionnaire that leads to an estimate on the impact of each risk
3. A list containing a set of risk component and drivers and their probability
of occurrence
1. Known and Predictable Risk Categories
Product size – risks associated with overall size of the software to be built
Business impact – risks associated with constraints imposed by management or the
marketplace
Customer characteristics – risks associated with sophistication of the customer and the
developer's ability to communicate with the customer in a timely manner
Process definition – risks associated with the degree to which the software process has
been defined and is followed
Development environment – risks associated with availability and quality of the tools to
be used to build the project
Technology to be built – risks associated with complexity of the system to be built and
the "newness" of the technology in the system
Staff size and experience – risks associated with overall technical and project experience
of the software engineers who will do the work
2. Sample Questionnaire on Project Risk
1) Are requirements fully understood by the software engineering team and its customers?
2) Have customers been involved fully in the definition of requirements?
3) Is the project scope stable?
4) Does the software engineering team have the right mix of skills?
5) Are project requirements stable?
6) Does the project team have experience with the technology to be implemented?
7) Is the number of people on the project team adequate to do the job?
3. Risk Components and Drivers
The project manager identifies the risk drivers that affect the following risk components
– Performance risk - the degree of uncertainty that the product will meet its
requirements and be fit for its intended use
– Cost risk - the degree of uncertainty that the project budget will be maintained
– Support risk - the degree of uncertainty that the resultant software will be easy to
correct, adapt, and enhance
– Schedule risk - the degree of uncertainty that the project schedule will be
maintained and that the product will be delivered on time
The impact of each risk driver on the risk component is divided into one of four impact
levels
– Negligible
– Marginal
– Critical
– Catastrophic
Risk drivers can be assessed as impossible, improbable, probable, and frequent
Risk projection (or estimation) attempts to rate each risk in two ways
– The probability that the risk is real
– The consequence of the problems associated with the risk
Steps involved in Risk Estimation:
1) Establish a scale that reflects the perceived likelihood of a risk (e.g., 1-low, 10-high)
2) Delineate the consequences of the risk
3) Estimate the impact of the risk on the project and product
4) Assess the overall accuracy of the risk projection so that there will be no
misunderstandings
Risk Table
A risk table provides a project manager with a simple technique for risk projection
It consists of five columns
– Risk Summary – short description of the risk
– Risk Category – one of seven risk categories (Problem size, business impact etc )
– Probability – estimation of risk occurrence based on group input
– Impact – (1) catastrophic (2) critical (3) marginal (4) negligible
– RMMM – Pointer to a paragraph in the Risk Mitigation, Monitoring, and
Management Plan
After identifying the category, probability of occurrence, and impact, sort the rows by
probability and impact in descending order
Draw a horizontal cutoff line in the table that indicates the risks that will be given
further attention
Risks that fall below the line are reevaluated to accomplish second-order prioritization.
Probability has a distinct influence on management concern.
A risk factor that has a high impact but a very low probability of occurrence should not
absorb a significant amount of management time. However, high-impact risks with
moderate to high probability and low-impact risks with high probability should be carried
forward into the risk analysis steps that follow.
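A sketch of building and prioritizing such a risk table (the risks, categories, probabilities, and impact ratings below are hypothetical):

# Risk table sketch: sort hypothetical risks and apply a management cutoff line.
# Impact scale follows the text: 1-catastrophic, 2-critical, 3-marginal, 4-negligible.
risks = [
    ("Size estimate may be significantly low", "PS", 0.60, 2),
    ("Larger number of users than planned",    "PS", 0.30, 3),
    ("End users resist system",                "BU", 0.40, 3),
    ("Funding will be lost",                   "CU", 0.40, 1),
    ("Staff inexperienced",                    "ST", 0.30, 2),
    ("Lack of training on tools",              "DE", 0.80, 3),
]

# Sort by probability (descending), then by severity of impact (1 is worst).
risk_table = sorted(risks, key=lambda r: (-r[2], r[3]))

cutoff = 0.35   # probability cutoff line for first-order management attention
print(f"{'Risk':<42}{'Cat':<5}{'Prob':<7}{'Impact'}")
for name, category, probability, impact in risk_table:
    print(f"{name:<42}{category:<5}{probability:<7.0%}{impact}")

above_line = [r for r in risk_table if r[2] >= cutoff]
print(f"{len(above_line)} risks above the cutoff line receive RMMM attention")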
An effective strategy for dealing with risk must consider three issues
– Risk mitigation (i.e., avoidance)
– Risk monitoring
– Risk management and contingency planning
Mitigation : How can we avoid the risk?
Monitoring : What factors can we track that will enable us to determine if the risk is
becoming more or less likely?
Management : What contingency plans do we have if the risk becomes a reality?
Example:
Risk: Technology Does Not Meet Specifications
Mitigation
In order to prevent this from happening, meetings (formal and informal) will be held
with the customer on a routine basis. This ensures that the product we are producing and
the specifications of the customer are equivalent.
Monitoring
The meetings with the customer should ensure that the customer and our organization
understand each other and the requirements for the product.
Management
Should the development team come to the realization that their idea of the product
specifications differs from those of the customer, the customer should be immediately
notified and whatever steps necessary to rectify this problem should be done. Preferably a
meeting should be held between the development team and the customer to discuss at
length this issue.
Example: Risk of high staff turnover
Strategy for Reducing Staff Turnover
Meet with current staff to determine causes for turnover (e.g., poor working
conditions, low pay, competitive job market)
Mitigate those causes that are under our control before the project starts
Once the project commences, assume turnover will occur and develop techniques to
ensure continuity when people leave
Organize project teams so that information about each development activity is widely
dispersed
Define documentation standards and establish mechanisms to ensure that documents
are developed in a timely manner
Conduct peer reviews of all work (so that more than one person is "up to speed")
Assign a backup staff member for every critical technologist
During risk monitoring, the project manager monitors factors that may provide an
indication of whether a risk is becoming more or less likely
Risk management and contingency planning assume that mitigation efforts have
failed and that the risk has become a reality
RMMM steps incur additional project cost
Large projects may have identified 30 – 40 risks
Risk is not limited to the software project itself
Risks can occur after the software has been delivered to the user
Software safety and hazard analysis
– These are software quality assurance activities that focus on the identification and
assessment of potential hazards that may affect software negatively and cause an
entire system to fail
– If hazards can be identified early in the software process, software design features
can be specified that will either eliminate or control potential hazards
Complexity is a function of the number and source of the client and server data tables that are
required to generate the screen or report and the number of views or sections presented as part of
the screen or report.
The object point count is then determined by multiplying the number of object instances by the
weighting factor and summing to obtain a total object point count.
The percent of reuse (%reuse) is estimated and the object point count is adjusted:
Productivity rate is based on different levels of developer’s experience and environment maturity
Once the productivity rate has been determined, an estimate of project effort is computed using
In more advanced COCOMO II models, a variety of scale factors, cost drivers, and adjustment
procedures are required.
COCOMO Cost Drivers
• Personnel Factors
– Programming language experience
– Personnel capability and experience
– Language and tool experience etc
• Product Factors
– Database size
– Required reusability
– Product reliability and complexity etc
• Project Factors
– Use of software tools
– Required development schedule
– Multi-site development etc