
UNIT-I

• Introduction to Software Engineering: Software, Software Crisis, Software Engineering definition, Evolution of Software Engineering Methodologies, Software Engineering Challenges.
• Software Processes: Software Process, Software development life cycle, Software Development Process Models.
WHAT IS SOFTWARE?

• We all know the common definition of software: a collection of computer programs that are executed together with data to provide desired outcomes.

• According to IEEE, however, software has a broader meaning that extends the definition we all know.
DEFINITION OF SOFTWARE
• IEEE defines software as: “Software is a collection of
computer programs, together with data, procedure,
rules and associated documentation, which operate in
a specified environment with certain constraints to
provide the desired outcomes.”
➢ Data: It plays an important role in program execution
for providing useful information in some form. Data
may be simple or complex.
➢ Procedure: Each standard program has a certain
procedure for its execution and operation.
➢ Rules: Programs run with specified constraints and
environment, with defined rules of execution.
➢ Documentation: The documentation of a program is
also necessary to guide the operating procedure of the
software.
SOFTWARE (Cont.)

Software is developed by software engineers for an organization at the requirement of a customer, and it is used by the end users.
Ex: NTES (National Train Enquiry System) Application
Here,
Developer: CRIS (Centre for Railway Information Systems)
Customer: Indian Railways
End users: Passengers
CHARACTERISTICS OF SOFTWARE

1. Software has logical properties rather than physical
2. Software is produced in an engineering manner rather than in a classical sense
3. Software is flexible to change
4. Software becomes obsolete but does not wear out or die
5. Software has a certain operating environment, end user and customer
6. Software development is a labor-intensive task
CHARACTERISTICS OF SOFTWARE (Cont.)

1) Software has logical properties rather than physical:
Software is an intangible product and has no physical properties. It has no physical shape, no volume, no color and no odour. Software logically consists of several programs connected through well-defined logical interfaces.
2) Software is produced in an engineering manner rather than in a classical sense:
Software is produced in an engineering manner, unlike other products, which are manufactured in the classical manner. The engineering mechanism provides activities with defined approaches for software production, such as feasibility study, analysis, design, coding, testing and deployment.
3) Software is flexible to change:
Software is a highly flexible product that can easily be changed. These changes may arise due to the changing requirements of users and technological advancements.
CHARACTERISTICS OF SOFTWARE (Cont.)

4) Software becomes obsolete but does not wear out or die:
Software becomes obsolete due to the increasing requirements of the users and rapidly changing technologies. Software products do not wear out or die as they do not have any physical properties.
5) Software has a certain operating environment, end user and customer:
Software products run in a specified environment with some defined constraints. Some software products are platform independent while others are platform specific.
6) Software development is a labor-intensive task:
Software products are developed from the beginning, and all engineering steps are followed for software production. All the activities of the software development life cycle (SDLC) are followed to produce good and efficient software.
SOFTWARE CLASSIFICATIONS

Software can either be Generic or Customized.

1) Generic: Generic software products are developed for general purpose, regardless of the type of business.
Ex: Word processors, calculators, database software etc.

2) Customized: Customized software products are developed to satisfy the needs of a particular customer in an organization. Here requirements are specific to the business and are given by the stakeholders of the organization.
Ex: Order processing software, patient diagnosis software etc.


SOFTWARE CLASSIFICATIONS (Cont.)
Generic and customized software products can again be divided into several categories, depending upon the type of customer, business, technology and computer support, as shown in the figure:
SOFTWARE CLASSIFICATIONS (Cont.)
1) System software: System software is a collection of programs written to service other programs. It is designed to operate the computer hardware and manage the functioning of the application software running on it.

Ex: Device drivers, boot program, OS etc.

2) Application software: Application software is designed to accomplish certain specific needs of the end user.

Ex: Educational software, video editing software, word processing software, database software.

3) Programming software: It is a class of system software that assists programmers in writing computer programs using
different programming languages in a convenient manner.

Ex: Compilers, Interpreters, Debuggers, Linkers, Loaders and so on

4) Artificial intelligence software: AI software is made to think like human beings and therefore it is useful in solving
complex problems automatically.

Ex: Robotics, expert systems, pattern recognition, game playing, speech recognition etc..
SOFTWARE CLASSIFICATIONS (Cont.)
5) Embedded software: Embedded software is a type of software that is built into hardware systems. It resides within a product or
system and is used to control, monitor or assist the operation of the equipment. It is an integral part of the system.
Ex: Controllers, communication protocols etc..
6) Engineering/scientific software: Engineering problems and quantitative analysis are carried out using automated tools.
Scientific software is typically used to solve mathematical functions and calculations.
Ex: CAD/CAM, ESS etc..
7) Web software: Web software has evolved from a simple website to search engines to web computing. Web applications are
spread over a network. Ex: Internet, Intranet
Web applications are based on client server architecture, where client requests information and the server retrieves information
from the web. Ex: Web 2.0, HTML, PHP, search engines.
8) Product line software: Product line software is a set of software intensive systems that share a common, managed set of
features to satisfy the specific needs of a particular market segment or mission.
Ex: Multimedia, database software, word processing software etc.
SOFTWARE CRISIS
• The software crisis started in the late 1960s and early 1970s. Since then, the importance of software and the software industry have grown rapidly.
• Nowadays the development of programs and software has become complex, with increasing requirements of users, technological advancements and computer awareness among people.
• The modern software crisis has some notable symptoms which are
1. Complexity
2. Hardware vs Software cost
3. Lateness
4. Costliness
5. Poor quality
6. Lack of planning
7. Unmanageable nature
8. Immaturity
9. Management practices
SOFTWARE CRISIS (Cont.)
• Complexity: An important cause of the software crisis is the complexity of the software development process. In the early days of computing, programs were simple; they were specified, written, operated and maintained by the same person, and the software was used by few people in a very limited domain. As computing matured, project sizes grew larger and programs began to be developed by teams of people. The expectations of users also increased. At the same time, programs became complex, and the earlier methodologies of software development were no longer useful for larger and complex projects.
• H/w vs S/w cost: As the computing scenario became complex, the hardware cost has remained stable or even declined. The same hardware is capable of handling existing and new applications. Hardware takes one-time design and analysis, as it can be manufactured in the classical sense. On the other hand, software is designed in an engineering manner, following each and every step carefully. Development, maintenance, change and re-engineering processes require logical thinking and labor. Ultimately, the cost of software is higher than the hardware cost.
• Lateness: The time required to develop software and its cost began to exceed all estimates. Many projects were cancelled or challenged because they ran behind schedule and exceeded their budgets.
SOFTWARE CRISIS (Cont.)
• Quality: Quality of the software has become a challenging issue in the software industry. Software must be of high quality with respect to product operation, product transition and product migration. Product operation includes usability, safety, correctness, security, reliability and efficiency. Product migration involves maintainability, flexibility, changeability, modifiability, testability and the ability to reengineer legacy software. Product transition includes interoperability, dependability and reusability characteristics.
• Planning and Management: The project must be planned with the expected requirement of resources (hardware, software, people), cost, time and effort. Project managers follow the project plan for effective management of the team and the project so that the software can be delivered on time and within budget.
• Maintenance: Maintenance and changes require a proper understanding of the software. Change in software is a major problem for software practitioners, and it accounts for around 40% of the total development cost. After changing the code, the entire software has to be tested for reliable functioning of the system. Therefore, systematic approaches are needed for maintenance and changes.
The software crisis has inspired people to improve their processes. The solution to the software crisis is to introduce systematic software engineering practices for the systematic development, maintenance, operation, planning and management of software.
SOFTWARE ENGINEERING
“Software Engineering is an engineering, technological and managerial discipline that provides a systematic
approach to the development, operation and maintenance of software.”

In the above definition, the keywords have specific meanings.

Engineering provides a step-by-step procedure for software development, i.e., project planning, analysis, architecture and design, coding, testing and maintenance.

These activities are performed with the help of technological tools that ease their execution, for example: project management tools, analysis tools, design tools, coding tools and testing tools.

Management skills are necessary for a project manager to coordinate these activities and communicate information among the team. Systematic development of software helps to understand problems and satisfy client needs.
IEEE DEFINITION OF SOFTWARE ENGINEERING

“The systematic approach to the development, operation, maintenance and retirement of software.”

The main goal of software engineering is to understand customer needs and develop software with improved
quality, on time and within budget. The view of software engineering is shown in the figure:
EVOLUTION OF SOFTWARE ENGINEERING METHODOLOGIES

A software engineering methodology is a set of procedures followed from the beginning to the completion of the development process. Software engineering methodologies have evolved with increasing complexities in programming and advancements in programming technologies.

The most popular software methodologies are:

1. Exploratory methodology

2. Structure oriented methodology

3. Data structure oriented methodology

4. Object oriented methodology

5. Component based development methodology.


The evolution of software engineering methodologies is shown in the figure above.
EVOLUTION OF SOFTWARE ENGINEERING METHODOLOGIES (Cont.)

1.Exploratory methodology:

During the 1950s, most programs were written in assembly language and were limited to a few hundred lines of code. Every programmer developed programs in his own individual style, based on intuition. This type of programming was called Exploratory Programming. It involves experimentation and exploring the program through step-by-step programming, where each step depends on the results of the previous ones.

The exploratory style uses unstructured programming, where the main program operates on global data items. Later, as the size and complexity of programs kept increasing, the exploratory style proved insufficient. It is also difficult to understand and maintain programs written by others.
EVOLUTION OF SOFTWARE ENGINEERING METHODOLOGIES (Cont.)

2. Structure Oriented Methodology:

The next significant development, which occurred during the 1960s, was the structure-oriented methodology. Structured methodology focuses on a procedural approach. It uses the features of unstructured programming and provides certain improvements. It has three basic elements, illustrated in the sketch below:

Sequence: The computer runs your code in order, one line at a time, from the top to the bottom of the program.

Selection: This is achieved using if-else conditional statements. Ex: If the condition is met, the "then" part is executed; otherwise control jumps to the "else" part.

Iteration: When we want to execute the same lines of code several times, we use loops.

Structure-oriented methodology uses a variety of notations such as data flow diagrams (DFD), control flow graphs (CFG), entity relationship (ER) diagrams etc.
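
A minimal sketch of the three elements in Java (the class, values and messages are illustrative, not from the original slides):

    public class StructuredElements {
        public static void main(String[] args) {
            // Sequence: statements run in order, one line at a time, top to bottom.
            int marks = 72;

            // Selection: an if-else chooses the "then" part or the "else" part.
            if (marks >= 40) {
                System.out.println("Pass");
            } else {
                System.out.println("Fail");
            }

            // Iteration: a loop executes the same lines several times.
            for (int i = 1; i <= 3; i++) {
                System.out.println("Attempt " + i);
            }
        }
    }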
EVOLUTION OF SOFTWARE ENGINEERING METHODOLOGIES (Cont.)

3. Data structure oriented methodology:

Data structure oriented methodology concentrates more on designing data structures than on the design of the control structure, as data plays an important role in the execution of a program.

A very popular example of a data structure oriented methodology is the Jackson Structured Design (JSD) methodology, developed by Michael Jackson in the 1970s. It expresses how functionality fits in with the real world. JSD-based development proceeds in two stages: first, "what" the specifications are is determined; second, "how" the implementation is done. It is good for real-world scenarios, but it is complex and difficult to understand.
EVOLUTION OF SOFTWARE ENGINEERING METHODOLOGIES (Cont.)

4.Object oriented methodology:


Object oriented methodology was introduced in the 1980s and is a widely used technique for the development of applications in a variety of domains.
This methodology deals with the concepts of objects and classes. Real-world entities are treated as "objects", and objects having characteristics in common are grouped into "classes".
Some important concepts of OOP, illustrated in the sketch below, are:
• Data Encapsulation
• Data Abstraction
• Inheritance
• Polymorphism
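
A minimal Java sketch of these four concepts (the Shape and Circle classes are illustrative, not from the original slides):

    // Data Abstraction: Shape exposes what a shape does, not how it does it.
    abstract class Shape {
        // Data Encapsulation: the name field is private, reached only via a method.
        private final String name;

        Shape(String name) { this.name = name; }

        String getName() { return name; }

        abstract double area();
    }

    // Inheritance: Circle reuses and extends the Shape class.
    class Circle extends Shape {
        private final double radius;

        Circle(double radius) {
            super("circle");
            this.radius = radius;
        }

        // Polymorphism: each subclass supplies its own area() implementation.
        @Override
        double area() { return Math.PI * radius * radius; }
    }

    public class OopDemo {
        public static void main(String[] args) {
            Shape s = new Circle(2.0); // a Circle object used through its Shape class
            System.out.println(s.getName() + " area = " + s.area());
        }
    }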
EVOLUTION OF SOFTWARE ENGINEERING METHODOLOGIES (Cont.)

5. Component based Development methodology:

Component based development (CBD) methodology was introduced as a solution to application development in the 21st century. It is a system analysis and design methodology that has evolved from the object oriented methodology. It is an approach to software development that relies on software reuse. It became a significant methodology for communication among different stakeholders.

A component is an independent executable entity that can be made up of one or more executable objects. Component fabrication consists of various phases, such as domain analysis, component identification, component design, integration and testing, acceptance and roll-out, and deployment.
SOFTWARE ENGINEERING CHALLENGES

The software engineering discipline has been facing a number of problems since the software crisis. The primary focus of software companies is to produce quality software within budget and with a short cycle time.

Some of the challenges are understanding the user requirements, frequently changing technology, the increasing market for software reuse, platform independence and so on. These challenges affect the development and maintenance processes. We will briefly discuss some of the challenges:
SOFTWARE ENGINEERING CHALLENGES (Cont.)

1.Problem Understanding:

It is a difficult task to understand the exact problem and requirements of the customer in the overall software development and maintenance process. There are several issues involved in problem understanding.

i) Usually customers are from different backgrounds and do not have a clear understanding of their problems and requirements. Also, customers often lack technical knowledge, especially those living in remote areas.

ii) Similarly, developers do not have knowledge of all application domains, the detailed requirements of the problems, and the expectations of the customers.

iii) The lack of communication between software engineers and customers makes it hard for the software engineers to clearly understand the customer needs. Sometimes the customers do not have sufficient time to explain their problems to the development organization.
SOFTWARE ENGINEERING CHALLENGES (Cont.)

2.Quality and Productivity:


Software quality and productivity have become the most considerable challenges in the development of software.
Quality: A good quality product implements the features that are required by the customers. Quality products provide customer satisfaction. A systematic software engineering approach produces products that have certain quality attributes, such as reliability, usability, efficiency, maintainability, portability and functionality.
Productivity: Production of software is measured in terms of KLOC per person-month (PM), where KLOC stands for thousands (kilo) of lines of code. Software companies focus on improving productivity, i.e., increasing the number of KLOC per PM. Higher productivity means that the cycle time and the cost of the product can be reduced; a worked example follows below.
Quality and productivity of software depend on several factors, such as programmers' ability, type of technology, level of experience, nature of projects and their complexity, etc.
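
A small worked example of the productivity measure (the figures are illustrative, not from the original slides):

    productivity = size (KLOC) / effort (person-months)

If a team of 4 engineers delivers a 24 KLOC product in 6 months, the effort is 4 × 6 = 24 PM, so productivity = 24 / 24 = 1 KLOC per PM. Raising productivity to 2 KLOC per PM would let the same team deliver the product in about 3 months, which is how higher productivity shortens the cycle time and lowers cost.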
SOFTWARE ENGINEERING CHALLENGES (Cont.)

3.Cycle time and Cost:

Cycle time: The customer always expects faster and cheaper production of software products. Therefore, software companies put effort into reducing the cycle time of product delivery and minimizing the product cost. Due to competitive reasons and the needs of the customer, programmers in most cases are under pressure to deliver the product in a short cycle time. But delivering ahead of schedule sometimes compromises product quality.

Cost: The cost of a software product is generally the cost of hardware, software and manpower resources. It is calculated on the basis of the number of persons engaged in a project and for how long, as in the example below. The cost of the product also depends on the project size. The higher the cycle time, the higher the product cost.

A systematic engineering approach can reduce the cycle time and product cost.
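
A small worked cost example (the team size, duration and rate are illustrative assumptions):

    manpower cost = persons × months × rate per person-month

If 5 persons work for 8 months at Rs. 1,00,000 per person-month, the manpower cost is 5 × 8 × 1,00,000 = Rs. 40,00,000, to which hardware and software resource costs are added. Cutting the cycle time to 4 months with the same team roughly halves this figure, which is why a higher cycle time means a higher product cost.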
SOFTWARE ENGINEERING CHALLENGES (Cont.)

4. Reliability:
Reliability is one of the most important quality attributes. Reliability is the successful operation of the software within the specified environment and duration under certain conditions. A quality product can be achieved by emphasizing the individual development phases, which are analysis, design, coding and testing. Verification and validation techniques are used to ensure the reliability of the software product. Bug detection and prevention is a prerequisite for high reliability in the product. There are several automated testing tools for bug detection and removal.

Software becomes unreliable due to logical errors present in the programs of the software. Project complexity is the major cause of this unreliability.
SOFTWARE ENGINEERING CHALLENGES (Cont.)

5.Change and Maintenance:


Change and maintenance of software come after the software is delivered and deployed at the customer site. They occur when there is a change in business operations, errors in the software, or the addition of new features. A change in one part of the software may require changes in other parts as well. After any change or maintenance operation, the software is rigorously tested to keep it reliable.
Change and maintenance of software are flexible but very expensive tasks. Sometimes the maintenance cost becomes more than the development cost. The challenge is to accommodate changes under controlled cost and reliability.
6.Usability and Reusability:
Usability means the ease of use of a product in terms of efficiency, effectiveness and customer satisfaction. Software engineering has always concentrated on providing a usable product by incorporating customer suggestions and addressing technological issues. Reusability increases reliability because reusable components are well tested before being integrated into software development.
SOFTWARE ENGINEERING CHALLENGES (Cont.)

7. Repeatability and Process Maturity:

A software engineering process can be repeated in similar projects, which improves productivity and quality. Repeatability can help to plan project schedules, fix deadlines for product delivery, manage configuration and identify locations of bug occurrences. Repeatability promotes process maturity.

8.Estimation and Planning:

It is observed that the project failure rate is greater than the success rate. Most projects fail due to underestimation of the budget and time needed to complete the project. The effectiveness of a project plan depends on the accuracy of estimation and the understanding of the problem.
SOFTWARE PROCESS
What is a software process?
We know that software engineering is defined as the systematic approach to the development, operation, maintenance and retirement of software. Here the systematic approach is nothing but a software process.
Definition: A software process is a set of ordered activities carried out to produce a software product. It specifies the way to produce software. Each activity has a well-defined objective, task and outcome.
An activity is a specified task performed to achieve the process objectives. The outcomes of all activities are compiled and integrated together to design the software. The development of software is done with the help of software process methodologies. Thus a software process provides the method for developing software.
PROCESS, PROJECT AND PRODUCT

A software process is a complex entity in which each activity is executed with supporting tools and techniques. The outcome of a process depends on how it is executed.

A software project is a cross-functional entity with a defined start and end. Every project must follow some systematic process for its successful completion. A successful project is one that conforms to the project constraints (cost, schedule and quality criteria).

A software product is the outcome of a software project produced through software processes. A project can have more than one product, called work products. A work product is the intermediate outcome of the processes, while the final work product is referred to as the product or software. A product satisfies the needs of clients and also fulfills the project constraints.

The relationship between process, project and product is shown in the figure below:
SOFTWARE PROCESS MODEL

A software process model is a generic representation of a software process, instantiated for each specific project. A process model is a set of activities that have to be accomplished to achieve the process objectives. Process models are basically idealizations of processes and are very difficult to execute in the real world, but idealization in a process model can reduce the chaos of software development. A process model is made practical by taking into account the concepts, technologies, implementation environment, process constraints and so on. Process models specify the activities, work products, relationships, milestones, etc. Some examples of process models are the data flow model, life cycle model, quality model, etc.
A generic view of the software process model is shown in the figure:
SOFTWARE PROCESS MODEL (Cont.)
The generic process model has three phases that are coordinated and supported by umbrella activities. The phases in the process model are:

Definition phase: This phase concentrates on understanding the problem and planning for the process model. The activities may include problem formulation, problem analysis, system engineering, and project planning for the process.

Development phase: This phase focuses on determining the solution of the problem with the help of umbrella activities. The main activities of this phase are designing the architecture and algorithms of the system, writing code, and testing the software.

Implementation phase: Deployment, change management, defect removal, and maintenance activities are performed in this phase. Reengineering may be taken up due to changes in technology and business.

The umbrella activities are responsible for ensuring the proper execution of the definition, development and implementation phases. These phases are managed and controlled by the umbrella activities.


ELEMENTS OF A SOFTWARE PROCESS
A software process comprises various essential elements, which are used together to produce a product. The elements are discussed as follows:
Artifacts: Artifacts are tangible work products produced during the development of software. They are specified in advance in the development process so that each activity can be performed accordingly, and an existing artifact can be used as a raw input to generate new artifacts.
Ex: Software architecture, project plan etc.
Activity: An activity specifies the tasks to be carried out implicitly or explicitly. There may be various activities in a software process. Each activity receives some input, executes its task under the laid-down constraints, and produces certain work products that can be used as input for some other activity.
Ex: Analysis, design etc..
ELEMENTS OF A SOFTWARE PROCESS (Cont.)
Constraint: A constraint refers to a condition that must be met by the software product. It allows a process to progress toward achieving the maximum outcome within defined constraints, as in the sketch below.
Ex: A machine allows only 5 users to log in at a time.
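
A minimal sketch of how such a constraint might be enforced in code; the LoginGate class and MAX_USERS limit are illustrative, not from the original slides:

    import java.util.concurrent.Semaphore;

    public class LoginGate {
        // The constraint from the example: at most 5 users logged in at a time.
        private static final int MAX_USERS = 5;
        private final Semaphore slots = new Semaphore(MAX_USERS);

        // Returns true only if one of the 5 login slots is free.
        public boolean login() {
            return slots.tryAcquire();
        }

        public void logout() {
            slots.release();
        }

        public static void main(String[] args) {
            LoginGate gate = new LoginGate();
            for (int user = 1; user <= 6; user++) {
                System.out.println("User " + user + " login: " + gate.login());
            }
            // The sixth attempt prints false: the constraint rejects it.
        }
    }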
People: People are the persons or stakeholders who are directly or indirectly involved in the process. People may have expertise in some or all of the specified activities in the process. Stakeholders play an important role in achieving project goals.
Ex: Software tester, quality checker etc.
Tools and technology: Tools and technology provide technical support to the methods or techniques used for performing the activities. They help human beings find the solution to the problem.
Ex: Fortran is suitable for scientific problems; CASE tools support software development.
ELEMENTS OF A SOFTWARE PROCESS (Cont.)
Method or technique: A method or technique specifies the way to perform an activity using tools and technology. It provides a detailed mechanism to carry out the activity.
Ex: Object oriented analysis (OOA), binary search etc.
Relationship: A relationship specifies the link among various activities or entities. It assists in the execution sequence of activities, where the outcome of one activity can be used as an input to the subsequent activity.
Ex: Maintenance follows implementation.
Organizational structure: It specifies the team of people that should be coordinated and managed during
software development. All organizations have a structure with defined roles and responsibilities of individuals.
Organizational structure reflects the success of software projects and processes.
Ex: A project manager monitors the workflow of the various activities assigned to the software engineers.
CHARACTERISTICS OF A SOFTWARE PROCESS

There are certain common characteristics of a software process, which are discussed below:

Understandability: A software process must be explicitly defined, i.e., it should be comprehensible to its users. This is a prerequisite for performing any task. The process specification must be easy to understand, easy to learn and easy to apply.

Effectiveness: A process must deliver the required deliverables, meet customer expectations, and follow the specified procedure. The produced product should adhere to the schedule and quality constraints. However, the effectiveness of a process also depends on programmers' skills, fund availability, etc.

Predictability: Predictability is about forecasting the outcomes before the completion of a process. It is the basis on which the cost, quality and resource requirements are specified in a project.
CHARACTERISTICS OF A SOFTWARE PROCESS (Cont.)
Maintainability: Maintainability is the flexibility to maintain the software through changing requirements, defect detection and correction, and adaptation to new operating environments. Maintenance is a lifelong process and sometimes its cost exceeds the actual software development cost.

Reliability: Reliability refers to the capability of performing the intended tasks. Rigorous testing procedures are carried out before applying a process in any production setting. An unreliable process causes product failures and wastes time and money.

Changeability: Changeability is the acceptability of changes made to the software. A change has some effect on the software, which is the difference in outcome before and after the change occurred. Changeability is classified into robustness, modifiability and scalability.

Improvement: Improvement concentrates on identifying and prototyping the possibilities (strengths and weaknesses) for improvements in the process itself. Improvements in a process help enhance the quality of the delivered products, providing more satisfactory services to the users.
CHARACTERISTICS OF A SOFTWARE PROCESS (Cont.)

Monitoring and tracking: Monitoring and tracking a process in a project can help to determine predictability and productivity. It helps to monitor and track the progress of the project based upon past experience of the process.

Rapidity: Rapidity is the speed with which a process produces products under the specifications for timely completion. Understandability and tracking of the process can accelerate production.

Repeatability: Repeatability measures the consistency of a process so that it can be used in various similar projects. A process is said to be repeatable if it is able to produce an artifact a number of times without loss of quality attributes. There may be variations in operation, cost and time, but the quality of the artifacts will be the same.

There are various other desirable features of a software process, such as quality, adaptability, acceptability, visibility, supportability and so on.
Software Development Life Cycle
Software or product development is a complex and long-running process whose aim is to produce quality software products. Therefore, product development is carried out as a series of activities for software production. Each activity in the process is also referred to as a phase. Generally, the activities include feasibility study, analysis, design, coding, testing, implementation, and maintenance. Collectively, these activities are called the software development life cycle (SDLC).

Software development organizations follow some life cycle for each project when developing a software product. The proposed life cycle model for a project is generally accepted by both parties (i.e., customer and developer) because it helps in deciding, managing and controlling the various activities of the project. The software development life cycle with its various activities is pictorially represented in the figure below:
Software Development Life Cycle (Cont.)
1) Project Initiation:
This is the first activity of the SDLC and it mainly involves three steps:
(i) Preliminary investigation (PI) (ii) Feasibility study (iii) Project plan
(i) Preliminary investigation: It is the initial step that gives a clear picture of what the physical system actually is. It goes through problem identification, the background of the physical system, and the system proposal for a candidate system.
(ii) Feasibility study: The purpose of the feasibility study is to determine whether the implementation of the proposed system will support the mission and objectives of the organization. The feasibility study ensures that the candidate system is able to satisfy the client's needs. Various types of feasibility study are performed, such as technical, economic and operational feasibility. Based on these, a feasibility report is prepared and submitted to top-level management. A positive report leads to project initiation.
(iii) Project plan: A high-level plan is designed to cover the schedule, cost, scope and objectives, resources, etc. It is the job of the project manager to secure the required resources, design project teams and prepare a detailed project plan to initiate the project.
Software Development Life Cycle (Cont.)

2) Requirements Analysis:
Requirements analysis is the process of collecting the factual data, defining the problem and producing a document for software development. The analysis phase consists of three main activities: requirements elicitation, requirements specification, and requirements verification and validation.

Requirements elicitation is about understanding the problem. Once the problem has been understood, it is described in the requirements specification document, which is referred to as the Software Requirements Specification (SRS). This document describes the product to be delivered, not the process of how it is to be developed. Requirements verification and validation ascertain that the correct requirements are stated (validation) and that these requirements are stated correctly (verification).
Software Development Life Cycle (Cont.)

3) Software Design:

Software design focuses on the solution domain of the project on the basis of the requirements document prepared during the analysis phase. It places stress on how to develop the product. The goal of the design phase is to transform the collected requirements into a structure that is suitable for implementation in programming languages. The software designer begins by making architectures, outlining the hierarchical structure and writing algorithms for each component in the system.

The design phase has two aspects: physical design and logical design. Physical design concentrates on identifying the different modules or components in a system that interact with each other to create the architecture of the system. In logical design, the internal logic of a module or component is described in pseudo code or in an algorithmic manner, as in the sketch below.
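
A minimal sketch of a logical design for a hypothetical "authenticate user" module, with the algorithm written as comments beside Java code (the module, its steps and all names are illustrative, not from the original slides):

    import java.util.Map;

    public class AuthModule {
        // Logical design of authenticate(user, password):
        //   Step 1: look up the stored credential for the user.
        //   Step 2: compute the credential for the supplied password.
        //   Step 3: grant access only if the two match.
        public boolean authenticate(String user, String password, Map<String, Integer> store) {
            Integer stored = store.get(user);            // Step 1
            int supplied = password.hashCode();          // Step 2 (stand-in for a real hash)
            return stored != null && stored == supplied; // Step 3
        }
    }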
Software Development Life Cycle (Cont.)

4) Coding:

The coding phase is concerned with the development of the source code that will implement the design. This code is written in a formal language called a programming language, such as assembly language, C++ or Java. Although the major constraints and decisions are already specified during design, utmost care is still taken while programming. Good coding efforts can reduce testing and maintenance tasks.

The programs written during the coding phase must be easy to read and understand. If necessary, the source code must be documented for future purposes. Proper guidelines and standards must be followed while programming the design. Rules must be followed for the declaration of data structures, variables, header files, function calls and so on.
Software Development Life Cycle (Cont.)

5) Testing:
Before the deployment of software, testing is performed to remove the defects in the developed system. After the coding phase, a test plan for the system is developed and run on the specified test data.
Testing uncovers errors introduced during the requirements, design and coding phases. Requirements errors may arise due to improper understanding of the customer needs. Design errors occur if the algorithms are not implemented properly. Coding errors are mainly logical and syntactical errors.
Testing is performed at different levels: unit testing, integration testing, system testing and acceptance testing. Unit testing is carried out on individual modules at the code level. After testing each module, the interfaces among the various modules are checked with integration testing. System testing ensures that the system satisfies the requirements specified by the customer. Acceptance testing is done for customer satisfaction.
Software Development Life Cycle (Cont.)

6) Deployment:
After acceptance by the customer during the testing phase, deployment of the software begins. The purpose of software deployment is to make the software available for operational use; the release of the software starts here. During deployment, all the program files are loaded onto the user's computer. After installation of all the modules of the system, training of the users starts. Documentation is also an important activity in software development, as it is the description of the system from the user's point of view, detailing how to use or operate the system.
7) Maintenance:
The maintenance phase comes after the software product is released and put into operation through the deployment process. Software maintenance is performed to adapt to changes in a new environment, correct bugs if any, and enhance performance by adding new features. The software will age in the near future and enter the retirement stage. In extreme cases, the software will be reengineered onto a different platform.
SOFTWARE PROCESS MODELS
Software development organizations follow some development process models when developing a software
product. Each process model has a life cycle of software production. The general activities of software life cycle
models are feasibility study, analysis, design, coding, testing, deployment and maintenance. Each life cycle
model has certain advantages, applications and limitations.
Various software development process models have been proposed due to varying nature of software
applications. These models can be differentiated by the feedback and control methods employed during
development. Some of the models are listed below:
• Classical waterfall model
• Iterative waterfall model
• Prototyping model
• Incremental model
• Spiral Model
• Agile process model
• RUP (Rational unified process) process model.
CLASSICAL WATERFALL MODEL
CLASSICAL WATERFALL MODEL (Cont.)
▪ The Waterfall Model was the first process model to be introduced.
▪ It is also referred to as a linear-sequential life cycle model.
▪ It is very simple to understand and use.
▪ In a waterfall model, each phase must be completed fully before the next phase can begin.
▪ This type of software development model is basically used for projects which are small and have no uncertain requirements. At the end of each phase, a review takes place to determine whether the project is on the right path and whether or not to continue or discard the project.
▪ In this model, software testing starts only after the development is complete.
▪ In the waterfall model, phases do not overlap.
CLASSICAL WATERFALL MODEL (Cont.)

Advantages of waterfall model:


▪ This model is simple and easy to understand and use.
▪ It is easy to manage due to the rigidity of the model – each phase has specific deliverables and
a review process.
▪ In this model phases are processed and completed one at a time. Phases do not overlap.
▪ Waterfall model works well for smaller projects where requirements are very well understood.
CLASSICAL WATERFALL MODEL (Cont.)

Disadvantages of waterfall model:


▪ Once an application is in the testing stage, it is very difficult to go back and
change something that was not well-thought out in the concept stage.
▪ No working software is produced until late during the life cycle.
▪ High amounts of risk and uncertainty.
▪ Not a good model for complex and object-oriented projects. Poor model for long
and ongoing projects.
▪ Not suitable for the projects where requirements are at a moderate to high risk of
changing
CLASSICAL WATERFALL MODEL (Cont.)

When to use the waterfall model:


▪ This model is used only when the requirements are very well known, clear and fixed.
▪ Product definition is stable.
▪ Technology is understood.
▪ There are no ambiguous requirements.
▪ Ample resources with the required expertise are available freely.
▪ The project is of short duration.
▪ Very little customer interaction is involved during the development of the product. Only once the product is ready can it be demoed to the end users. Once the product is developed, if any failure occurs, the cost of fixing such issues is very high, because we need to update everything from the documents to the logic.
ITERATIVE WATERFALL MODEL
ITERATIVE WATERFALL MODEL (Cont.)
▪ The Iterative Waterfall Model is an extension of the Waterfall Model.
▪ This model is almost the same as the waterfall model, except that some modifications are made to improve the performance of the software development.
▪ The iterative waterfall model provides feedback paths from each phase to its previous phases.
▪ There is no feedback path for the feasibility study phase, so if any change is required in that phase, the model has no scope for modification or making corrections.
▪ The iterative waterfall model allows going back to a previous phase to change the requirements, and modifications can be made if necessary.
ITERATIVE WATERFALL MODEL (Cont.)

Advantages of Iterative waterfall model:


▪ The iterative waterfall model is very easy to understand and use.
▪ Every phase contains a feedback path to its previous phase.
▪ It is simple to make changes or modifications at any phase.
▪ Customer involvement is not required during the software development.
▪ This model is suitable for large and complex projects.
ITERATIVE WATERFALL MODEL (Cont.)

Disadvantages of Iterative waterfall model:


▪ There is no feedback path for the feasibility study phase.
▪ This model is not suitable if the requirements are not clear.
▪ It can be more costly.
▪ There is no process for risk handling.
▪ This model does not work well for short projects.
▪ If modifications are required repeatedly, the project can become more complex.
INCREMENTAL MODEL
INCREMENTAL MODEL (Cont.)
▪ In this model, each module passes through the requirements, design, implementation and testing phases. A working version of the software is produced during the first module, so you have working software early in the software life cycle. Each subsequent release of a module adds functionality to the previous release. The process continues till the complete system is achieved.
▪ As the diagram below shows, when we work incrementally we add the software piece by piece, but expect each piece to be fully finished. We keep adding pieces until the system is complete.
INCREMENTAL MODEL (Cont.)
Advantages of Incremental model:
▪ Generates working software quickly and early during the software life cycle.
▪ This model is more flexible; it is less costly to change scope and requirements.
▪ It is easier to test and debug during a smaller iteration.
▪ In this model, the customer can respond to each build. It lowers the initial delivery cost.
▪ It is easier to manage risk because risky pieces are identified and handled during their iterations.
Disadvantages of Incremental model:
▪ Needs good planning and design.
▪ Needs a clear and complete definition of the whole system before it can be broken down and built incrementally.
▪ The total cost is higher than with the waterfall model.
INCREMENTAL MODEL (Cont.)

When to use the Incremental model:


▪ This model can be used when the requirements of the complete system are clearly defined and understood.
▪ Major requirements must be defined; however, some details can evolve with time.
▪ There is a need to get a product to the market early.
▪ A new technology is being used.
▪ Resources with the needed skill set are not available.
▪ There are some high-risk features and goals.
PROTOTYPING MODEL
PROTOTYPING MODEL (Cont.)

▪ The basic idea of the prototype model is that instead of freezing the requirements before design or coding can proceed, a throwaway prototype is built to understand the requirements.
▪ This prototype is developed based on the currently known requirements.
▪ By using this prototype, the client can get an "actual feel" of the system, since interactions with the prototype enable the client to better understand the requirements of the desired system.
▪ Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements.
▪ Prototypes are usually not complete systems, and many of the details are not built into the prototype. The goal is to provide a system with overall functionality.
PROTOTYPING MODEL (Cont.)

Advantages of Prototype model:


▪ Users are actively involved in the development.
▪ Since a working model of the system is provided, the users get a better understanding of the system being developed.
▪ Errors can be detected much earlier.
▪ Quicker user feedback is available, leading to better solutions.
▪ Missing functionality can be identified easily.
▪ Confusing or difficult functions can be identified, and requirements can be validated, in each and every iteration.
▪ Quick implementation of an incomplete but functional application is possible.
PROTOTYPING MODEL (Cont.)
Disadvantages of Prototype model:
▪ Leads to an "implement and then repair" way of building systems.
▪ Practically, this methodology may increase the complexity of the system, as the scope of the system may expand beyond original plans.
▪ An incomplete application may result in the application not being used.
▪ Incomplete or inadequate problem analysis.
When to use Prototype model:
▪ The prototype model should be used when the desired system needs to have a lot of interaction with the end users.
▪ Typically, online systems and web interfaces, which have a very high amount of interaction with end users, are best suited for the prototype model.
▪ It might take a while to build a system that allows ease of use and needs minimal training for the end user.
▪ Prototyping ensures that the end users constantly work with the system and provide feedback which is incorporated in the prototype, resulting in a usable system. Prototypes are excellent for designing good human-computer interface systems.
SPIRAL MODEL
SPIRAL MODEL (Cont.)

▪ The Spiral model of software development is shown in the figure given above.
▪ The diagrammatic representation of this model appears like a spiral with many
loops.
▪ The exact number of loops in the spiral is not fixed.
▪ Each loop of the spiral represents a phase of the software process.
▪ For example, the innermost loop might be concerned with feasibility study, the next
loop with requirements specification, the next one with design, and so on.
▪ Each phase in this model is split into four sectors (or quadrants) as shown in the
figure.
▪ The following activities are carried out during each phase of a spiral model.
SPIRAL MODEL (Cont.)
First quadrant (Objective Setting)
During the first quadrant, the objectives of the phase are identified and the risks associated with these objectives are examined. A detailed analysis is carried out for each identified project risk.
Second Quadrant (Risk Assessment and Reduction)
Steps are taken to reduce the risks. For example, if there is a risk that the requirements are
inappropriate, a prototype system may be developed.
Third Quadrant (Development and Validation)
Develop and validate the next level of the product after resolving the identified risks.
Fourth Quadrant (Review and Planning)
Review the results achieved so far with the customer and plan the next iteration around the
spiral.
A progressively more complete version of the software gets built with each iteration around the spiral.
SPIRAL MODEL (Cont.)

Circumstances to use spiral model

The spiral model is called a Meta model since it encompasses all other life cycle
models. Risk handling is inherently built into this model. The spiral model is suitable
for development of technically challenging software products that are prone to several
kinds of risks. However, this model is much more complex than the other models – this
is probably a factor deterring its use in ordinary projects.
AGILE MODEL
AGILE MODEL (Cont.)

The Agile development model is also a type of incremental model. Software is developed in incremental, rapid cycles. This results in small incremental releases, with each release building on previous functionality. Each release is thoroughly tested to ensure software quality is maintained. The model is used for time-critical applications. Extreme Programming (XP) and Scrum are currently the most well-known agile development life cycle models.
AGILE MODEL (Cont.)

Extreme Programming:
XP is a lightweight, efficient, low-risk, flexible, predictable, scientific, and fun way to develop software. Extreme Programming (XP) was conceived and developed to address the specific needs of software development by small teams in the face of vague and changing requirements. Extreme Programming is one of the Agile software development methodologies. It provides values and principles to guide team behavior, and the team is expected to self-organize. Extreme Programming provides specific core practices, where each practice is simple and self-complete, and combinations of practices produce more complex and emergent behavior.
AGILE MODEL (Cont.)

▪ Extreme Programming in a nutshell involves:


▪ Writing unit tests before programming and keeping all of the tests running at all times. The unit tests are automated and eliminate defects early, thus reducing costs (see the sketch after this list).
▪ Starting with a simple design just enough to code the features at hand and redesigning when required.
▪ Programming in pairs (called pair programming), with two programmers at one screen taking turns to use the keyboard. While one of them is at the keyboard, the other constantly reviews and provides input.
▪ Integrating and testing the whole system several times a day.
▪ Putting a minimal working system into production quickly and upgrading it whenever required.
▪ Keeping the customer involved all the time and obtaining constant feedback. Iterating facilitates accommodating changes as the software evolves with the changing requirements.
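
A minimal sketch of the test-first practice, assuming JUnit 4 is on the classpath (the Calculator class and its test are illustrative, not from the original slides):

    // Step 1: write the unit test first; it fails until Calculator exists.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class CalculatorTest {
        @Test
        public void addsTwoNumbers() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

    // Step 2: write just enough production code to make the test pass,
    // then keep the test running at all times as the design evolves.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }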
AGILE MODEL (Cont.)
Advantages (problems that XP addresses):
Slipped schedules: Short and achievable development cycles ensure timely deliveries.
Cancelled projects: Focus on continuous customer involvement ensures transparency with the
customer and immediate resolution of any issues.
Costs incurred in changes: Extensive and ongoing testing makes sure the changes do not break
the existing functionality. A running working system always ensures sufficient time for
accommodating changes such that the current operations are not affected.
Production and post-delivery defects: Emphasis is on the unit tests to detect and fix the defects
early.
Misunderstanding the business and/or domain: Making the customer a part of the team ensures
constant communication and clarifications.
Business changes: Changes are considered to be inevitable and are accommodated at any point of
time.
Staff turnover: Intensive team collaboration ensures enthusiasm and good will. Cohesion of
multi-disciplines fosters the team spirit.
AGILE MODEL (Cont.)
Scrum Programming:
Scrum is an agile way to manage a project, usually software development. Agile software development with Scrum is often perceived as a methodology; but rather than viewing Scrum as a methodology, think of it as a framework for managing a process.
Scrum Overview - Introduction to Scrum Terms
An introduction to Scrum would not be complete without knowing the Scrum terms you'll be using. This section
in the Scrum overview will discuss common concepts in Scrum.
Scrum team: A typical scrum team has between five and nine people, but Scrum projects can easily scale into
the hundreds. However, Scrum can easily be used by one-person teams and often is. This team does not include
any of the traditional software engineering roles such as programmer, designer, tester or architect. Everyone on
the project works together to complete the set of work they have collectively committed to complete within a
sprint. Scrum teams develop a deep form of camaraderie and a feeling that “we’re all in this together.”
Product owner: The product owner is the project’s key stakeholder and represents users, customers and others in
the process. The product owner is often someone from product management or marketing, a key stakeholder or a
key user.
AGILE MODEL (Cont.)
Scrum Master: The Scrum Master is responsible for making sure the team is as productive as possible. The
Scrum Master does this by helping the team use the Scrum process, by removing impediments to progress, by
protecting the team from outside, and so on.
Product backlog: The product backlog is a prioritized features list containing every desired feature or change to
the product. Note: The term “backlog” can get confusing because it’s used for two different things. To clarify, the
product backlog is a list of desired features for the product. The sprint backlog is a list of tasks to be completed in
a sprint.
Sprint planning meeting: At the start of each sprint, a sprint planning meeting is held, during which the product
owner presents the top items on the product backlog to the team. The Scrum team selects the work they can
complete during the coming sprint. That work is then moved from the product backlog to a sprint backlog, which
is the list of tasks needed to complete the product backlog items the team has committed to complete in the
sprint.
Daily Scrum: Each day during the sprint, a brief meeting called the daily scrum is conducted. This meeting helps
set the context for each day’s work and helps the team stay on track. All team members are required to attend the
daily scrum.
AGILE MODEL (Cont.)

Sprint review meeting: At the end of each sprint, the team demonstrates the completed functionality at a sprint
review meeting, during which, the team shows what they accomplished during the sprint. Typically, this takes the
form of a demonstration of the new features, but in an informal way; for example, PowerPoint slides are not
allowed. The meeting must not become a task in itself nor a distraction from the process.
Sprint retrospective: Also at the end of each sprint, the team conducts a sprint retrospective, which is a meeting
during which the team (including its Scrum Master and product owner) reflect on how well Scrum is working for
them and what changes they may wish to make for it to work even better.
Scrum is an iterative framework to help teams manage and progress through a complex project. It is most
commonly used in Software Development by teams that implement the Agile Software Development
methodology. However it is not limited to those groups. Even if your team does not implement Agile Software
Development, you can still benefit from holding regular scrums with your teams.
AGILE MODEL (Cont.)

Scrum participants fall into two categories: they are either pigs or chickens. Participants at a scrum are either fully committed to the project (pigs) or simply participants (chickens). Let's look at who these various roles really are.
Pig Roles
Actual Team Members: These are the developers, artists or product managers that comprise the core of the team. These are the people who are actually doing the daily work to bring the project to fruition. These members are fully committed to the project.
Scrum Master: The scrum master might be one of the team members, or might not be. It is important to call this person out separately here, though, because the scrum master has the primary role of ensuring that the scrum moves forward without problems and is effective for the team.
Project Owner: This may be a Product Manager who is also part of the team, or it may not be. Again, it is important to call this person's role out here, as this person represents the voice of the end customer. This person needs to ensure that the product achieves its goals and provides the necessary end product to the customers.
AGILE MODEL (Cont.)
Chicken Roles
Managers: At first glance you might think that managers are naturally pigs. However, in the scrum context
managers are generally more concerned with the people involved in a project and their respective health. They
are not as focused on the product and its particular customer-oriented goals. For this reason they are considered
chickens in the scrum context.
Stakeholders: Stakeholders are individuals who will benefit from or have a vested interest in the project, but who
do not necessarily have the authority to dictate direction or to be held accountable for the product. They can be
consulted for opinions and insight; however, the product owner retains final rights in the decision-making
process.
Why are the roles important?
The chicken and pig roles are vital to scrum because they dictate who in the scrum should be an active participant.
Chickens should not be active participants in a scrum meeting. They may attend, but they should be there as
guests only and are not required to share their current statuses. Pigs, on the other hand, need to share their current
progress and any blockers that they are encountering.
The reason that chickens should not be active participants is that they will too easily take over the direction of
the scrum and lead it away from the goals of the entire team. It is the Scrum Master's job to ensure that the scrum
stays on target and covers the topics that need to be covered. If someone goes off topic (chicken or pig), it is the
Scrum Master's job to bring the group back to the topic at hand.
AGILE MODEL (Cont.)
Advantages of Agile model:
▪ Customer satisfaction through rapid, continuous delivery of useful software.
▪ People and interactions are emphasized rather than processes and tools. Customers, developers, and testers
constantly interact with each other.
▪ Working software is delivered frequently (weeks rather than months).
▪ Face-to-face conversation is the best form of communication.
▪ Close, daily cooperation between business people and developers.
▪ Continuous attention to technical excellence and good design.
▪ Regular adaptation to changing circumstances.
▪ Even late changes in requirements are welcomed.

Disadvantages of Agile model:
▪ In case of some software deliverables, especially the large ones, it is difficult to assess the effort required at the
beginning of the software development life cycle.
▪ There is a lack of emphasis on necessary design and documentation.
▪ The project can easily get taken off track if the customer representative is not clear about the final outcome
they want.
▪ Only senior programmers are capable of making the kinds of decisions required during the development process.
Hence it has little place for newbie programmers, unless they are combined with experienced resources.
AGILE MODEL (Cont.)
When to use Agile model:
▪ When new changes need to be implemented. The freedom agile gives to change is very
important; new changes can be implemented at very little cost because of the frequency of the new
increments that are produced.
▪ To implement a new feature, the developers need to lose only a few days' work, or even
only hours, to roll back and implement it.
▪ Unlike the waterfall model, in the agile model very limited planning is required to get started with
the project. Agile assumes that end users' needs are ever changing in a dynamic business and
IT world. Changes can be discussed, and features can be added or removed based on
feedback. This effectively gives the customer the finished system they want or need.
▪ System developers and stakeholders alike find they get more freedom of time and
options than if the software were developed in a more rigid sequential way. Having options gives
them the ability to leave important decisions until more or better data, or even entire hosting
programs, are available; this means the project can continue to move forward without fear of
reaching a sudden standstill.
RUP PROCESS MODEL
RUP stands for "Rational Unified Process." RUP is a software development process from Rational, a division of
IBM. It divides the development process into four distinct phases, each of which involves business modeling,
analysis and design, implementation, testing, and deployment. The four phases are:
1. Inception - The idea for the project is stated. The development team determines if the project is worth
pursuing and what resources will be needed.
2. Elaboration - The project's architecture and required resources are further evaluated. Developers consider
possible applications of the software and costs associated with the development.
3. Construction - The project is developed and completed. The software is designed, written, and tested.
4. Transition - The software is released to the public. Final adjustments or updates are made based on feedback
from end users.

The RUP development methodology provides a structured way for companies to envision and create software
programs. Since it provides a specific plan for each step of the development process, it helps prevent resources
from being wasted and reduces unexpected development costs.
RUP PROCESS MODEL (Cont.)

Advantages of RUP Software Development
1. This is a complete methodology in itself with an emphasis on accurate documentation
2. It is proactively able to resolve the project risks associated with the client's evolving
requirements, which require careful change request management.
3. Less time is required for integration as the process of integration goes on throughout the
software development life cycle.
4. The development time required is less due to reuse of components.
5. There is online training and tutorial available for this process.
RUP PROCESS MODEL (Cont.)

Disadvantages of RUP Software Development
1. The team members need to be expert in their field to develop software under this methodology.
2. The development process is too complex and disorganized.
3. On cutting-edge projects which utilize new technology, the reuse of components will not be
possible. Hence the expected time savings cannot be realized.
4. Integration throughout the process of software development sounds good in theory. But on
particularly big projects with multiple development streams, it will only add to the
confusion and cause more issues during the stages of testing.
RUP PROCESS MODEL (Cont.)
The 6 RUP Best Practices
1. Develop Iteratively
The software requirements specification (SRS) keeps evolving throughout the development process, and
iterations are planned to accommodate the new requirements without inflating the cost of development.
2. Manage Requirements
The business requirements documentation and project management requirements need to be gathered properly
from the user in order to reach the targeted goal.
3. Use Components
The components of large projects which are already tested and in use can conveniently be used in other projects.
This reuse of components reduces production time.
4. Model Visually
Use of Unified modeling language (UML) facilitates the analysis and design of various components. Diagrams and
models are used to represent various components and their interactions.
5. Verify Quality
Testing and implementing effective project quality management should be a major part of each and every phase of
the project from initiation to delivery (aka the project management life cycle).
6. Control Changes
Synchronization of various parts of the system becomes all the more challenging when the parts are being
developed by various teams working from different geographic locations on different development platforms.
Hence special care should be taken in this direction so that the changes can be controlled.
UNIT-2
Requirements Engineering

Introduction

• The requirements of a system are the descriptions of the features or services that the
system exhibits within the specified constraints.
• The requirements collected from the customer are organized in some systematic manner
and presented in a formal document called the software requirements specification (SRS)
document.
• Requirements engineering is the process of gathering, analyzing, documenting, validating,
and managing requirements.
• The main goal of requirements engineering is to clearly understand the customer
requirements and systematically organize them in the SRS.
Software Requirements

• A requirement is a detailed, formal description of system functionalities. It specifies a
function that a system or component must be able to perform for customer satisfaction.
• IEEE defines a requirement as:
1. “a condition or capability of a system required by a customer to solve a problem or
achieve an objective.”
2. “a capability that a product must possess or something a product must do in order to
ultimately satisfy a customer need, contract, constraint, standard, specification, or
other formally imposed documents.”
3. “a documented representation of a condition or capability as in (1) or (2).”
1. Business Requirements:

• Understanding the business rules or the processes of an organization is vital to software
development.
• Business requirements define the project goal and the expected business benefits of doing
the project.
• The enterprise mission, values, priorities, and strategies must be known to understand the
business requirements, which cover the higher-level data models and the scope of the models.
• The business analyst is well versed in understanding the business flow as well as the
processes being followed in the organization.
• The business analyst guides the client through the complex process of eliciting the
requirements of their business.
2. User Requirements

• User requirements are the high-level abstract statements supplied by the customer, end
users, or other stakeholders.
• These requirements are translated into system requirements keeping the users' views in mind.
• They are generally represented in some natural language, with pictorial representations or
tables, to make the requirements easy to understand.
• User requirements may be ambiguous or incomplete in description; little product
specification and few hardware/software configurations are stated in them.
• There may be composite requirements with several complexities and confusions.
• In an ATM, for example, user requirements allow users to withdraw and deposit cash.
3. System Requirements

• System requirements are the detailed and technical functionalities, written in a systematic
manner, that are implemented in the business process to achieve the goal of the user
requirements.
• They are considered a contract between the client and the development organization.
• System requirements are often expressed as documents in a structured manner using
technical representations.
• For an ATM, the system requirements consider customer ID, account type, bank name,
consortium, PIN, communication link, hardware, and software. Also, an ATM will service
one customer at a time.
4. Functional Requirements

• Functional requirements are the behaviors or functions that the system must support.
• These are the attributes that characterize what the software does to fulfil the needs of the
customer.
• They can be business rules, administrative tasks, transactions, cancellations,
authentication, authorization, external interfaces, legal or regulatory requirements, audit
tracking, certification, reporting requirements, and historical data.
5. Non-functional Requirements

• Non-functional requirements specify how a system must behave. These are qualities,
standards, and constraints upon the system's services, specified with respect to the
product, the organization, and the external environment.
• Non-functional requirements are related to functional requirements, i.e., how efficiently,
with how much volume, how fast, at what quality, how safely, etc., a function is performed
by a particular system.
• Examples of non-functional requirements are reliability, maintainability, performance,
usability, security, scalability, capacity, availability, recoverability, serviceability,
manageability, integrity, and interoperability.
Example: ATM (Automated Teller Machine)

Business requirements:

A business requirement for an ATM can be stated as: design a computerized banking network
that will enable customers to avail themselves of simple bank account services through ATMs
that may be at locations remote from the bank campus and that need not be owned and operated
by the customer's bank.

User and system requirements:

User requirements allow users to withdraw and deposit cash, make balance enquiries, obtain
mini statements, etc.

System requirements consider customer ID, account type, bank name, PIN, communication link,
hardware, and software.

Functional and non-functional requirements:

Functional requirements of an ATM can be customer authentication, balance update, enquiries,
and withdrawal.

Non-functional requirements of an ATM can be the integrity of user authentication, the response
time of withdrawal, the efficiency of balance transactions, and usability features.
Requirement Engineering Process

• Requirement engineering is the key phase in software development which decides
what to build, and it provides the outline of the quality of the final product.
• Requirement engineering is a disciplined, process-oriented approach to the
development and management of requirements.
• The main issues involved in requirement engineering are:
– Discovering the requirements,
– Technical organization of the requirements,
– Documenting the requirements,
– Ensuring correctness and completeness of the requirements,
– Managing requirements that change over time.
• A typical requirement engineering process has two main aspects:
– Requirement development
– Requirement management
• Requirement development includes various activities, such as elicitation, analysis,
specification, and validation of requirements.
• Requirement management is concerned with managing requirements that change
dynamically, controlling the baseline requirements, and monitoring the commitments
and consistency of requirements throughout software development.
1. Requirements Elicitation

• Requirement elicitation aims to gather requirements from different perspectives to
understand the customer needs in the real scenario.
• The goals of requirement elicitation are to identify the different parties involved in the
project as sources of requirements, gather requirements from the different parties, write
the requirements in their original form as collected from the parties, and integrate these
requirements.
• The original requirements may be inconsistent, ambiguous, incomplete, and infeasible for
the business.
• Therefore, the system analyst involves domain experts, software engineers, clients, end
users, sponsors, managers, vendors and suppliers, and other stakeholders, and follows
standards and guidelines to elicit requirements.
System analyst

• The role of the system analyst is multifunctional, fascinating, and challenging compared to
the roles of other people in the organization.
• A system analyst is a person who interacts with different people, understands the business
needs, and has knowledge of computing.
• The skills of the system analyst include programming experience, problem solving,
interpersonal savvy, IT expertise, and political savvy. He acts as a broker and needs to be a
team player.
• He should also have good communication and decision-making skills.
• He should be able to motivate others and should have sound business awareness.

Fig: System Analyst Interaction

Challenges in requirements elicitation

• The main challenges of requirements elicitation are:
– Identification of the problem scope,
– Identification of stakeholders,
– Understanding of the problem, and
– Volatility of requirements.
• Stakeholders are generally unable to express the complete requirements at one time and in an
appropriate language.
• There may be conflicts in the views of stakeholders.
• It may happen that the analyst is not aware of the problem domain and the business
scenario.
• To face these challenges, the analyst follows certain techniques to gather the
requirements, which are called fact-finding techniques.
Fact-Finding Techniques

• Fact-finding techniques are used to discover the information pertaining to system
development.
• Some of the popular techniques are:
– Interviewing,
– Questionnaires,
– Joint Applications Development (JAD),
– Onsite observation,
– Prototyping,
– Viewpoints, and
– Review of records.
• Interviewing
– Interviewing involves eliciting requirements through face-to-face interaction with
clients, end users, domain experts, or other stakeholders associated with the project.
– Interviews are conducted to gather facts, verify and clarify facts, identify requirements,
and determine ideas and opinions.
– There are two types of interviews: unstructured interviews and structured interviews.
– An unstructured interview aims at eliciting various issues related to the stakeholders and
the existing business system perspectives. There is no predefined agenda, and thus
general questions are asked of the stakeholders.
– A structured interview is conducted to elicit specific requirements, and therefore the
stakeholders are asked to answer specific questions.
• Questionnaires
– Questionnaires are used to collect and record large amounts of qualitative as well as
quantitative data from a number of people.
– The system analyst prepares a series of questions in a logical manner, supplemented by
multiple-choice answers and instructions for filling in the questionnaire.
– It is a quick method of data collection and inexpensive as compared to interviews.
– It provides standardized and uniform responses from people, as the questions are
designed objectively.
– However, it is very difficult to design logical and unambiguous questions.
– Very few people take interest in responding to and carefully filling in questionnaires.
• Joint Applications Development (JAD)
– It is a structured group meeting, much like a workshop, where the customer, designers, and
other experts meet together for the purpose of identifying and understanding the
problems in order to define requirements.
– The JAD session has a predefined agenda and purpose.
– It includes various participants with well-defined roles, such as facilitator,
sponsors, end users, managers, IT staff, designers, etc.
– A JRD (Joint Requirements Development) meeting is conducted to identify and
understand problems, resolve conflicts, generate ideas, and discuss alternatives and the
best solutions.
– Brainstorming and focus group techniques are used for JRD.
• Onsite observation
– The system analyst personally visits the client organization, observes the
functioning of the system, and understands the flow of documents and the users of the
system.
– It helps the system analyst gain insight into the working of the system rather than
relying on documentation alone.
– Through constant observation, the system analyst identifies critical requirements
and their influence on system development.
– However, users may feel uncomfortable if someone observes their work.
• Prototyping
– The prototyping technique is helpful in situations where it is difficult to discover and
understand the requirements and where early feedback from the customer is needed.
– An initial version of the final product, called a prototype, is developed that can give
clues to the client to think about additional requirements. The initial version is then
changed and a new prototype is developed.
– Prototype development is a repetitive process that continues until the product
meets the business needs or requirements.
– In this way, the prototype is corrected, enhanced, or refined to reflect the new
requirements.
– Prototyping can be costly if it is repeated many times.
• Viewpoints
– A viewpoint is a way of structuring requirements to represent the perspectives of
different stakeholders, as there is no single correct way to analyze the system
requirements.
– Stakeholders may be classified under different viewpoints:
• direct (who directly interact with the system),
• indirect (who indirectly interact with the system), and
• domain (representing domain standards and constraints) viewpoints.
– The collected viewpoints are classified and evaluated under the system's
circumstances, and finally they are integrated into the normal requirements engineering
process.
• Review records
– Reviewing the existing documents is a good way of gathering requirements.
– Information related to the system is found in documents such as manuals of the
working process, newspapers, magazines, journals, etc.
– The existing forms, reports, and procedures help in understanding the guidelines of a
process and in identifying the business rules, discrepancies, and redundancies.
– This method is especially beneficial to new employees or consultants working on the project.
2. Requirements Analysis

• In requirements analysis, we analyze stakeholders' needs, constraints, assumptions,
qualifications, and other information relevant to the proposed system and organize them in a
systematic manner.
• It is performed in several iterations with elicitation to clarify requirements, resolve conflicts,
and ensure their correctness and completeness.
• It includes various activities, such as classification, organization, prioritization, and
modeling of requirements.
• During requirements analysis, models are prepared to analyze the requirements.
• The following analysis techniques are generally used for the modeling of requirements:
1. Structured analysis
2. Data-oriented analysis
3. Object-oriented analysis
4. Prototyping analysis
1. Structured Analysis

• Structured analysis is also referred to as process modeling or data flow modeling.
• It focuses on transforming the documented requirements into the processes of the system.
• During transformation, it follows a top-down functional decomposition process in which the
system is considered a single process that is further decomposed into several sub-processes
to solve the problem.
• Thus, the aim of structured analysis is to understand the workflow of the system that the
user performs in the existing system of the organization.

Structured analysis uses a graphical tool called the data flow diagram (DFD), which represents
the system behavior.
Data Flow Diagram (DFD)

• A DFD is a graphical tool that describes the flow of data through a system and the
functions performed by the system.
• It shows the processes that receive inputs, perform a series of transformations, and
produce the desired outcomes.
• It does not show control information (the time at which processes are executed).
• A DFD is also called a bubble chart, process model, or information flow model.

A DFD has four different symbols:

• Process: A process is represented by a circle and denotes the transformation of the input
data to produce the output data.
• Data flow: Data flows represent the movement of data, i.e., leaving one process and entering
another process. They are represented by arrows connecting one data transformation to
another.
• Data store: A data store is data at rest. It is represented by parallel lines.
• Actor: An actor is an external entity that represents the source or sink (destination) of data.
It is represented by a rectangle.

Fig: DFD for ATM Withdrawal
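To make the four symbols concrete, here is a minimal sketch of the ATM-withdrawal DFD written as
plain Python data; the process and flow names are illustrative assumptions, not a standard notation:

    # Sketch: a DFD captured as element lists plus (source, data, destination) flows.
    actors = ["Customer"]
    processes = ["Validate_PIN", "Process_Withdrawal"]
    data_stores = ["Accounts"]

    flows = [
        ("Customer", "PIN", "Validate_PIN"),
        ("Validate_PIN", "account_id", "Process_Withdrawal"),
        ("Process_Withdrawal", "balance_query", "Accounts"),
        ("Accounts", "balance", "Process_Withdrawal"),
        ("Process_Withdrawal", "cash_and_receipt", "Customer"),
    ]

    for source, data, destination in flows:
        print(f"{source} --[{data}]--> {destination}")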
Constructing DFD

• The construction of the DFD starts with the high-level functionality of the system, which
incorporates the external inputs and outputs.
• This abstract DFD is further decomposed into smaller functions with the same inputs and
outputs.
• The decomposed DFD is an elaborated, nested DFD with more concrete functionalities.
• Decomposition of the DFD at various levels to design a nested DFD is called leveling of the DFD.
• Sometimes, dotted lines are used to represent control flow information. Control flow
helps decide the sequencing of the operations in the system.
Conventions in constructing DFD

• Data flow diagrams at each level must be numbered for reference purposes, for example,
level 0, level 1, etc.
• Multiple data flows can be shown on a single data flow line. A bidirectional arrow can be used
for input and output flows (if the same data is used), or separate lines can be used for input and
output.
• External agents/actors and data flows are represented using nouns, for example, stock, PIN,
university, transporter, etc.
• Processes should be represented with verbs followed by nouns. Longer names must be
connected with underscores ("_"), and these should be short but meaningful, e.g.,
sales_detail.
• Avoid the representation of control logic in the DFD.
• Each process and data store must have at least one incoming data flow into it and one
outgoing data flow leaving it.
DFD vs. Flowcharts

• DFDs represent the flow of data through the system, while flowcharts represent the flow of
control in a program.
• DFDs do not have branching and iteration of data, whereas flowcharts have conditional and
repetitive representations of processes.
• Flowcharts represent the flow of control through an algorithm.
• Flowcharts do not show the input, output, and storage of data in the system.
• A DFD can show parallel processes executing in the system, while flowcharts have only
one process active at a time.
Data Dictionary

• Data flows are used by programmers in designing data structures, and they are also used by
testers to design test cases.
• Such data structures may be primitive or composite. Composite data structures may consist of
several primitive data structures.
• Longer composite data structures are difficult to write on the data flow lines of a DFD.
Therefore, data flows and data structures are described in the data dictionary.
• A data dictionary is metadata that describes the composite data structures defined in the DFD.
• A data dictionary is written using special symbols, such as "+" for composition, "*"
for repetition, and "|" for selection.
• Combinations of repeated data are written by enclosing the composed items in brackets
followed by "*", i.e., "[ ... ]*".
• For example, a data dictionary for a restaurant bill is as follows:

restaurant_bill = dish_price + sales_tax + service_charge + seat_charge + grand_total
dish_price = [dish_name + quantity + price]*
seat_charge = [reserved_table | common_table]
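The same notation maps naturally onto nested data structures. The following sketch (an illustration,
not a standard tool) renders the restaurant bill entry as a Python value, with "+" composition as
dictionary fields, "[ ]*" repetition as a list, and "|" selection as one chosen alternative:

    # Sketch: the restaurant_bill data dictionary entry as a Python value.
    restaurant_bill = {
        "dish_price": [                      # [ ... ]* -> repetition: a list of groups
            {"dish_name": "Dosa", "quantity": 2, "price": 60.0},
            {"dish_name": "Idli", "quantity": 1, "price": 40.0},
        ],
        "sales_tax": 8.0,                    # "+" -> composition: fields of one record
        "service_charge": 10.0,
        "seat_charge": "reserved_table",     # "|" -> selection: one of the alternatives
        "grand_total": 178.0,                # 2*60 + 1*40 + 8 + 10
    }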
Structured Analysis Method

• Structured analysis is a systematic approach to requirements analysis that employs
techniques and graphical tools to represent the scenario of the working system in an
organization.
• Structured analysis uses data flow diagrams and a data dictionary for requirements analysis.
• This approach begins with studying the existing system:
– to understand the working of the system,
– to design the context diagram,
– to apply functional decomposition to subdivide the functional requirements,
– to mark the boundary of the system to know what can be automated and what will be
manual, and to prepare the data dictionary of the system.

Fig: Structured Analysis Approach
1. Prepare Context Diagram

1. The context diagram (or level 0 DFD) is the high-level representation of the proposed
system, prepared after studying the physical DFD of the existing system.
2. The entire system is treated as a single process, called a bubble, with all its external
entities.
3. That is, the system is represented with its main input and output data, the process, and the
external entities.
4. A physical DFD is the representation of the real physical environment of a non-automated
system.
5. A logical DFD describes logical rather than physical entities. Processes might be
implemented as programs, data might be held in files or a database, etc.

FIG: CONTEXT DIAGRAM FOR CHEQUES IN BANK
2. Construct Level 1 Logical DFD

• The level 1 logical DFD includes the main functions supported by the proposed system.
• The equivalent logical DFD for the existing physical DFD of the system is designed
with the additional services required by the customer.
• The system will provide resources to the customer equivalent to those in the non-
automated or existing system.
3. Decomposition of Level 1 DFD

• It follows top-down analysis and functional decomposition to refine the level 1 DFD into
smaller functional units.
• Functional decomposition of each function is also called exploding the DFD or factoring.
• The process of decomposition is repeated until a function needs no more subdivision.
• Each successive level of DFD provides a more detailed view of the system.
• The goal of decomposition is to develop a balanced or leveled DFD. That is, data flows,
data stores, external entities, and processes are matched between levels, from the context
diagram to the lowest level.
• For example, consider the level 2 DFDs for the decomposed process of verification of an
account.
4. Identify Man-Machine Boundary

After constructing the final DFD, boundary conditions are identified, which determine what
will be automated and what can be done manually or by another machine. For example, in an ATM
system, a user will select options, cancel operations, and collect the cash and the receipt. All these
tasks will be done by the user manually.

5. Prepare Data Dictionary and Process Descriptions

• The data flows and processes are defined in the data dictionary with proper structures and
formats.

Fig: Data Dictionary for processing cheques in bank
Benefits of Structured Analysis

– Data-flow-based structured analysis makes it easy to present the customer's problems in a
pictorial form that can easily be converted into a structured design.
– The top-down decomposition approach enables producing a model in an organized manner.
– The DFD-based approach can be used to support other methodologies.
– It does not require much technical expertise.
– It helps in understanding the system scope and its boundaries.
– It provides proper communication of system knowledge to the users.

Limitations of Structured Analysis

– It is difficult to understand the final DFD, and it does not reveal the sequence in which
processes are performed in the system.
– The structured analysis methodology can become a time-consuming and tedious job,
managing voluminous data and producing several levels of DFD.
– Although the step-by-step approach is suitable for the waterfall model, the system
requirements and user requirements must be frozen early in the life cycle.
– A complete DFD can be constructed only if all the requirements are available at the
beginning of the structured analysis.
2. Data-Oriented Analysis

• Data-oriented analysis, also referred to as data-oriented modeling, aims at a
conceptual representation of the business requirements.
• A data model is the abstraction of the data structures required by a database, rather than the
operations on those data structures.
• Without analyzing data models, it is not possible to design the database.
• Data models ensure that all data objects required by the database are completely and
accurately represented.
• Data models are composed of data entities, associations among different entities, and the
rules which govern operations on the data.
• Data models are accompanied by functional models that describe how the data will be
processed in the system.
• Producing data models and functional models together is called conceptual database
design.
• Data-oriented analysis is performed using entity relationship modeling (ERM).
Entity Relationship Modeling (ERM)

• Entity relationship modeling (ERM) is a pictorial method of data representation.
• ERM is expressed through the E-R diagram, which represents data and organizes them in a
graphical manner that helps in designing the final database.
• An entity or entity class is analogous to the class in object orientation; it represents a
collection of similar objects.
• An entity may represent a group of people, places, things, events, or concepts.
• For example, student, course, university, fees, etc., represent entities.
• An entity instance is a single instance of an entity class.
• Entities are classified as independent or dependent entities.
• An independent entity, or strong entity, is one that does not rely on another for identification.
• A dependent entity, or weak entity, is one that relies on another for identification.
• A weak entity set is represented by a doubly outlined box,
• and its relationship is represented by a doubly outlined diamond.
• The discriminator of a weak entity set is underlined with a dashed line.
• Attributes are the properties or descriptors of an entity; e.g., the entity course contains ID,
name, credits, and faculty attributes. Attributes are represented by ellipses.
• Logically grouped attributes are called compound attributes.
• An attribute can be single-valued or multi-valued.
• A multi-valued attribute is represented by a double ellipse.
• The attribute which is used to uniquely identify an entity is called a key attribute or an
identifier, and it is indicated by an underline.
• Derived attributes are attributes whose values are derived from other attributes. They
are indicated by a dotted ellipse.
ERM Relationships

• A relationship represents an association between two or more entities.
• It is represented by a diamond box and two connecting lines, with the name of the relationship
between the entities.
• Relationships are classified in terms of degree, connectivity, cardinality, direction, and
participation.
• The number of entities associated with a relationship is called the degree of the relationship. It
can be recursive, binary, ternary, or n-ary.
• A recursive relationship occurs when an entity is related to itself.
• A binary relationship associates two entities.
• A ternary relationship involves three entities.
• An n-ary relationship involves many entities.

ERM Cardinality

• Cardinality defines the number of occurrences of entities which are related to each other.
• It can be one-to-one (1:1), one-to-many (1:M), or many-to-many (N:M).
• The direction of a relationship indicates the originating entity of a binary relationship. The
entity from which a relationship originates is called the parent entity, and the entity where the
relationship terminates is called the child entity.
• The figures given below illustrate recursive, binary, and ternary relationships with
cardinalities.

Fig: Relationships with Cardinalities
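Cardinalities eventually shape the database or class structure. Here is a minimal sketch of a
one-to-many (1:M) relationship, with Department and Employee as assumed example entities:

    # Sketch: 1:M -- one parent (Department) relates to many children (Employees),
    # and each Employee refers back to exactly one Department.
    class Department:
        def __init__(self, name):
            self.name = name
            self.employees = []               # the "many" side

    class Employee:
        def __init__(self, name, department):
            self.name = name
            self.department = department      # the "one" (parent) side
            department.employees.append(self)

    sales = Department("Sales")
    Employee("Asha", sales)
    Employee("Ravi", sales)
    print([e.name for e in sales.employees])  # ['Asha', 'Ravi']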
ERM: Aggregation, Generalization, and Specialization

• Participation denotes whether the existence of an entity instance is dependent upon the
existence of another related entity instance.
• Aggregation is an abstraction of entities that represents the "part-of" or "part-whole"
relationship between entities.
• Specialization represents the "is-a" relationship. It designates entities at two levels, viz., a
high-level entity set and a low-level entity set.
• Generalization is the relationship between an entity and one or more refined sets of it. It
combines entity sets that share common features into a higher-level entity set.
Data-Oriented Analysis Method

1. Identify entities and relationships
2. Construct the basic E-R diagram
3. Add key attributes to the basic E-R model
4. Add non-key attributes to the basic E-R model
5. Apply hierarchical relations
6. Perform normalization
7. Add integrity rules to the model

Fig: ER Diagram for Course Registration
3. Object-Oriented Analysis

• The object-oriented approach also combines both data and processes into single entities called
objects.
• The object-oriented approach has two aspects: object-oriented analysis (OOA) and object-
oriented design (OOD).
• The idea behind OOA is to consider the whole system as a single complex entity called an
object, to break down the system into its various objects, and to combine the data and
operations in objects.
• OOA increases the understanding of problem domains, promotes a smooth transition from
the analysis phase to the design phase, and provides a more natural way of organizing
specifications.
• There exist various object-oriented approaches for OOA and OOD, e.g., the Object Modeling
Technique (OMT).

Objects and classes

• Objects are real-world entities that can be uniquely identified and distinguished
from other objects.
• Each object has certain attributes (state) and operations (behavior).
• Similar objects are grouped together to form a class.
• An instance, or individual object, has identity and can be distinguished from other objects in
a class.
• Attributes are the data values of an object.
• Operations are the services or functions of an object in a class.
• An object communicates with other objects through message passing or signatures.
• In OMT, a class is represented through a class diagram, which is drawn as a rectangle with
three parts: the first part for the class name, the second part for its attributes, and the third
part for its operations.
• Objects are shown through object diagrams, which are represented with rounded
rectangles.
• For example, the class diagram for an employee is shown in the figure given below:

Fig: Class Diagram and Association of an Employee in a Company
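The three compartments of a class diagram translate directly into a class definition. A sketch of
the employee example, with attribute and operation names assumed for illustration:

    # Sketch: the Employee class diagram rendered as code.
    # Compartment 1 -> class name; compartment 2 -> attributes; compartment 3 -> operations.
    class Employee:
        def __init__(self, name, designation, salary):
            self.name = name                  # attributes (state)
            self.designation = designation
            self.salary = salary

        def promote(self, new_designation, raise_amount):   # operation (behavior)
            self.designation = new_designation
            self.salary += raise_amount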
• Association
• A relationship between classes is known as an association.
• An association can be binary, ternary, or n-ary.
• Multiplicity defines how many instances of a class may relate to a single instance of another
class. It can be one-to-one, one-to-many, or many-to-many.
• An association can also have attributes, which are represented by a box connected to the
association with a loop.
• Role names may also be attached at the ends of the association line.
• Sometimes, a qualifier can be attached to the association at the "many" side of the association
line.
Aggregation, Generalization, and Specialization

• An aggregation models a "whole-part" relationship between objects. In aggregation, a single
object is treated as a collection of objects.
• Generalization represents the relationship between the superclass and the subclass.
• The most general class is at the top, with the more specific object types shown as
subclasses.
• Generalization and specialization are helpful in describing systems that should be
implemented using inheritance in an object-oriented language.

Fig: Aggregation, Generalization and Specialization
Object-Oriented Analysis Method

• OOA in the OMT approach starts with the problem statement of the real-world situation
expressed by the customer.
• Based on the problem statement, the following three kinds of modeling are performed to
produce the object-oriented analysis model:
– Object modeling
– Dynamic modeling
– Functional modeling
• The object model describes the structural aspects of a system;
• the dynamic model represents the behavioral aspects; and
• the functional model covers the transformation aspects of the system.

I. Object Modeling

• Object modeling begins with the problem statement of the application domain.
• The problem statement describes the services that will be performed by the system.
• The object model captures the static structure of the objects in the system.
• It describes the identity, attributes, relationships to other objects, and the operations
performed on the objects.
• Object models are constructed using class diagrams after the analysis of the application
domain.

Object modeling method:
1. Identify objects and classes
2. Prepare a data dictionary
3. Identify associations
4. Identify attributes
5. Refine with inheritance

Fig: Class Diagram for ATM system
II. Dynamic Modeling

• Once the structure of an object is found, its dynamic behavior and its relationships over time
in the system are modeled in a dynamic model.
• During dynamic modeling, state diagrams are constructed, which consist of states and
transitions caused by events.
• The state diagram is constructed after analyzing the system behavior using the event
sequence diagram.
• An event sequence diagram is composed of participating objects drawn as vertical lines and
events passing from one object to another drawn as horizontal lines between the object lines.
• The sequence diagram as well as the state chart diagram for the ATM system are shown in
the figures below.

Fig: Sequence Diagram for Withdrawal Service in ATM

Fig: State Chart Diagram for ATM system
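A state chart can be approximated in code as a transition table. A minimal sketch with assumed
state and event names for the ATM:

    # Sketch: ATM states and event-driven transitions as a lookup table.
    transitions = {
        ("Idle", "card_inserted"): "Awaiting_PIN",
        ("Awaiting_PIN", "pin_valid"): "Menu",
        ("Awaiting_PIN", "pin_invalid"): "Idle",
        ("Menu", "withdraw"): "Dispensing",
        ("Dispensing", "cash_taken"): "Idle",
    }

    state = "Idle"
    for event in ["card_inserted", "pin_valid", "withdraw", "cash_taken"]:
        state = transitions[(state, event)]
        print(event, "->", state)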
III. Functional Modeling

• The functional model describes what actions are performed, without specifying how or
when they are performed.
• A functional model involves inputs, transformations, and outcomes. Outputs are generated
from some input after applying certain transformations to it.
• The functional model is represented through data flow diagrams (DFD).
4. Prototyping Analysis

• Prototyping is most suitable where requirements are not known in advance, rapid delivery
of the product is required, and customer involvement is necessary in software development.
• It is an iterative approach that begins with the partial development of an executable model
of the system, which is then demonstrated to the customer to collect feedback.
• The development of the prototype is repeated until it satisfies all the needs and is
considered the final system.
• A prototype can be developed either using automated tools, such as Visual Basic, PHP
(Hypertext Pre-processor), or 4GLs (fourth-generation languages), or by paper sketching.
• There are two types of prototyping approaches widely used for elicitation, analysis, and
requirements validation:
o Throwaway prototyping
o Evolutionary prototyping

• In throwaway prototyping, a prototype is built as quickly as possible for the purpose of
observing the product's viability.
• If the prototype is not acceptable to the customer, it is totally discarded and the project
begins from scratch.
• The various versions of the prototype are developed from customer requirements until the
customer is satisfied.

Fig: Throwaway Prototyping

• In evolutionary prototyping, a prototype is built with the expectation that the working
prototype will become the final system.
• The process begins with the customer requirements.
• The prototypes are produced in several iterations.
• They are shown to the customers for their acceptance, and customer suggestions are
incorporated until the final prototype is constructed as the final product.
• This type of prototyping uses the rapid application development (RAD) approach, in which
automated tools and CASE tools are used for prototype development.

Fig: Evolutionary Prototyping
3. Requirements Specification

• The main focus of the problem analysis approaches is to understand the internal behavior of
the software.
• Requirements are described in a formal document called the software requirements
specification (SRS).
• The SRS document is a formal document that provides the complete description of the
proposed software, i.e., what the software will do without describing how it will do it.
• The software requirements specification is one of the most important documents in
software development.
SRS

• An SRS is needed for a variety of reasons:
• Customers and users rely more on, and better understand, a written formal document
than a technical specification.
• It provides the basis for the later stages of software development, viz., design, coding,
testing, standards compliance, delivery, and maintenance.
• It acts as the reference document for the validation and verification of the work
products and the final software.
• It is treated as an agreement between the customer and the supplier on the features
to be incorporated in the final system or project.
• A good quality SRS ensures a high quality software product.
• A high quality SRS reduces development effort (schedule, cost, and resources) because
unclear requirements always lead to unsuccessful projects.
Characteristics of the SRS

• Correctness
• Unambiguity
• Completeness
• Consistency
• Should be ranked for importance and/or stability
• Verifiability
• Modifiability
• Testability
• Validity
• Traceability

Components of an SRS

• The main focus of specifying requirements is to cover all the specific levels of detail that
will be required for the subsequent phases of software development and that are agreed
upon by the client.
• The specific aspects that the requirements document deals with are as follows:
• Functional requirements
• Performance requirements
• Design constraints (hardware and software)
• External interface requirements
Functional Requirements

– Functional requirements describe the behavior that the system is supposed to have in the
software product.
– They specify the functions that accept inputs, perform processing on these inputs, and
produce outputs.
– Function descriptions should include the validity checks on the input and output data, the
parameters affected by the operation, the formulas, and the other operations that must be
used to transform the inputs into the corresponding outputs.
– For example, an ATM should not process a transaction if the requested amount is greater
than the available balance. Thus, each functional requirement is specified
with valid/invalid inputs and outputs and their influences.
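The balance rule above is precise enough to be stated directly as an input validity check. A sketch,
with the function and parameter names assumed for illustration:

    # Sketch: the functional requirement "do not process a withdrawal greater
    # than the available balance" expressed as a validity check.
    def can_withdraw(amount, available_balance):
        if amount <= 0:                        # invalid input
            return False
        return amount <= available_balance     # the stated business rule

    print(can_withdraw(500, 1200))    # True  -> process the transaction
    print(can_withdraw(1500, 1200))   # False -> reject the transaction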
Performance Requirements

– This component of the SRS specifies the performance characteristics, or the
non-functional aspects, of the software system.
– The performance outcomes depend on the operation of the functional requirements.
– Performance requirements are classified as:
• Static requirements: These are fixed and do not impose constraints on the
execution of the system.
• Dynamic requirements: These specify constraints on the execution behavior of the
system. They typically include system efficiency, response time,
throughput, system priorities, fault recovery, integrity, and availability
constraints on the system.
Design Constraints

• There are certain design constraints that can be imposed by standards, hardware
limitations, and software constraints in the client's environment.
• These constraints may be related to standards compliance and policies, hardware
constraints (e.g., resource limits, operating environment, etc.), and software constraints.
• Hardware constraints limit the execution of the software that operates on the hardware.
• Software constraints impose restrictions on software operation and maintenance.

External Interface Requirements

• The external interface specification covers all interactions of the software with people,
hardware, and other software.
• The working environment of the user and the interface features of the software must be
specified in the SRS.
• The external interface requirements should specify the interface with other software.
• This includes the interface with the operating system and other applications.
Structure of an SRS

• The structure of the SRS describes the organization of the software requirements document.
• The best way to specify requirements is to use predefined requirements specification
templates.
• The specific requirements subsection of the SRS should contain all of the software
requirements at a level of detail sufficient to enable designers to design a system that
satisfies those requirements, and testers to test that the system satisfies those requirements.
• It includes functional requirements, performance requirements, design constraints, and
external interface requirements.

IEEE structure of SRS
4. Requirements Validation

• The SRS document may contain errors and unclear requirements, often caused by human
error.
• The most common errors that occur in SRS documents are omission, inconsistency,
incorrect facts, and ambiguity.
• Requirements validation is an iterative process in the requirements development process
that ensures that customer requirements are accurately and clearly specified in the SRS
document.
• There are various methods of requirements validation, such as requirements review,
inspection, test case generation, reading, and prototyping.
➢ Requirements Review

• It is a formal process of requirements validation, which is performed by a group of people
from both sides, i.e., clients and developers.
• Requirements review is one of the most widely used and successful techniques for detecting
errors in requirements.
• Requirements review helps to address problems at the early stages of software development.
• Although requirements reviews take time, they pay back by minimizing the changes and
alterations in the software.
➢ Requirements Inspection

• It is an effective way of requirements validation that detects defects at early stages.
• Inspection is a costly and time-consuming process because a large number of requirements
artifacts are analyzed, searched, and sorted.
• Inspection can detect up to 90% of defects [95].
• The inspection process gives correct results for small and less complex projects.
• It is not a common practice in industry due to the cost associated with it.
➢ Test Case Generation

• The purpose of this technique is to ensure that the requirements are good enough for the
product and planning activities.
• It removes defects before a project starts and during project execution and development.
• In this technique, the product manager and the tester select and review the high-priority
requirements, and the least-priority requirements are discarded from the initial specification.
• Writing test cases early may result in some rework if the requirements change, but this
rework cost will be lower than the cost of finding and fixing defects in the later stages.
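An early, requirement-derived test case can be written before any code exists. A sketch:
can_withdraw below is a stub of the withdrawal rule used earlier in this unit (an assumed name);
the real implementation replaces it later:

    # Sketch: test cases derived directly from a high-priority requirement.
    def can_withdraw(amount, available_balance):   # stub of the rule under test
        return 0 < amount <= available_balance

    def test_withdrawal_within_balance():
        assert can_withdraw(500, 1200) is True     # valid: amount <= balance

    def test_withdrawal_exceeding_balance():
        assert can_withdraw(1500, 1200) is False   # invalid: amount > balance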
➢ Reading

• Reading is a technique of reviewing requirements in which the reader applies his knowledge
to find defects.
• There are various reading techniques, such as ad-hoc based, checklist based, etc.
• Detection of defects depends upon the knowledge and experience of the reviewer.
• Checklist-based reading is one of the most commonly used techniques, in which a set of
questions is given to the reviewer as a checklist.
➢ Prototyping

• Prototyping is mostly used to understand, analyze, and validate the requirements of a
system.
• In requirements validation, prototyping helps to identify errors such as missing, incomplete,
and incorrect requirements in the requirements document.
• Prototyping works by developing an executable model of the proposed system, built in an
incremental manner.
• In each increment, defects are identified and discussed with the stakeholders, and corrective
actions are taken accordingly.
5. Requirements Management

• During the requirements engineering process and the later stages of development,
requirements are always changing.
• Customer requirements may be unclear even at the final stage of system development, which
is one of the important causes of project failures.
• Therefore, it becomes necessary for project managers to monitor and effect any changes that
may be necessary as the project work advances.
• Requirements management is the process of systematically collecting, organizing,
documenting, prioritizing, and negotiating the requirements for a project.
• Requirements management planning is a continuous and cross-sectional process that
continues throughout the project life span.
• It is performed after development as well as during maintenance.
• The main activities of requirements management are as follows:
• Planning for the project requirements
• Focusing on the requirements identification process
• Managing the requirements changes
• Controlling and tracking the changes
• Agreeing on the requirements among stakeholders
• Performing regular requirements reviews
• Performing impact analysis for the required changes
Conclusion:

• Requirements analysis and specification is an extremely important phase of software
development because it is the basis of the later stages.
• Requirements ultimately reflect in the quality of the final product.
• Requirements development focuses on preparing validated requirements, which includes
activities such as elicitation, analysis, specification, and validation of requirements.
• Requirements management is concerned with managing requirements that change
dynamically, controlling the baseline requirements, and monitoring the commitments and
consistency of requirements throughout software development.
UNIT - 3
Software Design

Definition
Software design is the process by which an agent creates a specification of a software artifact,
intended to accomplish goals, using a set of primitive components and subject to constraints.

A software product is considered a collection of software modules. A module is a part of a
software product which has data and functions together and an interface with other modules to
produce some outcome.

For example: a banking software system consists of various modules like the ATM interface, online
transactions, loan management, deposits, and so on. Each of these modules interacts with the other
modules to accomplish the banking activities.
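As a loose illustration of "data and functions together behind an interface" (module and method
names invented for the example), one such banking module might be sketched as:

    # Sketch: a deposit module -- private data plus functions behind a small interface.
    class DepositModule:
        def __init__(self):
            self._accounts = {}                  # module-internal data

        def open_account(self, account_id):      # interface used by other modules
            self._accounts[account_id] = 0.0

        def deposit(self, account_id, amount):
            self._accounts[account_id] += amount
            return self._accounts[account_id]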

For assessing user requirements, an SRS (Software Requirements Specification) document is
created, whereas for coding and implementation there is a need for more specific and detailed
requirements in software terms. The output of the design process can be used directly for
implementation in programming languages.

Software design is the first step in the SDLC (Software Development Life Cycle), which moves the
concentration from the problem domain to the solution domain. It tries to specify how to fulfil the
requirements mentioned in the SRS.

Software Design Process

Software design exists between requirements engineering and programming. It yields three
levels of results:

• Architectural Design - The architectural design is the highest abstract version of the system.
It identifies the software as a system with many components interacting with each other. At
this level, the designers get the idea of the proposed solution domain. The external design
considers the architectural aspects related to business, technology, major data stores, and the
structure of the product.
• High-Level Design (Physical Design) - The high-level design breaks the 'single entity,
multiple components' concept of the architectural design (the conceptual view) into a less-
abstracted view of sub-systems and modules and depicts their interaction with each other.
High-level design focuses on how the system, along with all of its components, can be
implemented in the form of modules. It recognizes the modular structure of each sub-system
and their relations and interactions with each other.
• Detailed Design - Detailed design deals with the implementation part of what is seen as a
system and its sub-systems in the previous two designs. It is more detailed towards the
modules and their implementations. It defines the logical structure of each module and their
interfaces for communicating with other modules.
Characteristics of good software design:
The quality of a software design can be characterized by the application domain. For example, real-
time software will focus more on efficiency and reliability issues, whereas academic automation s/w
will concentrate on understandability and usability issues. A designer always tries to produce a good
design. The desirable characteristics that a good s/w design should have are as follows:
1. Correctness: A design is said to be correct if it is correctly produced according to the stated
requirements of customers in the SRS. It should fulfil all the functional features, satisfy
constraints, and follow the guidelines. A correct design is more likely to produce accurate
outcomes.
2. Efficiency: It is concerned with performance related issues; for example, optimal utilization
of resources. The design should consume less memory and processor time. Software design
and its implementation should be as fast as required by the user.
3. Understandability: It should be easy to understand what the module is, how it is connected to
other modules, what data structure is used, and its flow of information. Documentation of a
design can also make it more understandable. An understandable design will make the
maintenance and implementation tasks easier.
4. Maintainability: A difficult and complex design takes a long time to be understood and
modified. Therefore, the design should be easy to modify, should allow new features to be
added, should not have unnecessary parts, and should be easy to migrate onto another platform.
5. Simplicity: A simple design will improve understandability and maintainability. Introducing a
simple design is rare because a design follows certain steps and criteria. Still designers always
think to “keep it simple” rather than “make it complex”.
6. Completeness: It means that the design includes all the specifications of the SRS. A complete
design is not necessarily correct, but a correct design must be complete.
7. Verifiability: It should be possible to verify the design against the requirements documents
and programs. Interfaces between the modules are necessary for integration and function
prototyping.
8. Portability: The external design mainly focuses on the interface, business, and technology
architectures. These architectures must be able to move a design to another environment. This
may be required when the system is to be migrated onto different platforms.
9. Modularity: A modular design will be easy to understand and modify. Once a modular system
is designed, it allows easy development and repairing of required modules independently.
10. Reliability: This factor depends on the measurement of completeness, consistency, and
robustness in the software design. Nowadays, most people depend on software to always
work and yield correct results; any unreliable part of the software can cause major
dangers.
11. Reusability: The software design should be standard and generic so that it can be used for
mass production of quality products with small cycle time and reduced cost. The object code,
classes, design patterns, packages, etc., are the reusable parts of software.
Design Principles:
Every software process is characterized by basic concepts along with certain practices or methods.
Methods represent the manner through which the concepts are applied. As new technology replaces
older technology, many changes occur in the methods that are used to apply the concepts for the
development of software. However, the fundamental concepts underlying the software design process
remain the same, some of which are described here.
1. Abstraction
Abstraction refers to a powerful design tool, which allows software designers to consider
components at an abstract level, while neglecting the implementation details of the
components. IEEE defines abstraction as 'a view of a problem that extracts the
essential information relevant to a particular purpose and ignores the remainder of the
information.' The concept of abstraction can be used in two ways: as a process and as an entity. As
a process, it refers to a mechanism of hiding irrelevant details and representing only the essential
features of an item so that one can focus on important things at a time. As an entity, it refers to a
model or view of an item.
Each step in the software process is accomplished through various levels of abstraction. At
the highest level, an outline of the solution to the problem is presented whereas at the lower levels, the
solution to the problem is presented in detail. For example, in the requirements analysis phase, a
solution to the problem is presented using the language of problem environment and as we proceed
through the software process, the abstraction level reduces and at the lowest level, source code of the
software is produced.
There are three commonly used abstraction mechanisms in software design namely,
functional abstraction, data abstraction and control abstraction. All these mechanisms allow us to
control the complexity of the design process by proceeding from the abstract design model to concrete
design model in a systematic manner.
1. Functional abstraction: This involves the use of parameterized subprograms. Functional abstraction
can be generalized as collections of subprograms referred to as 'groups'. Within these groups there
exist routines which may be visible or hidden. Visible routines can be used within the containing
groups as well as within other groups, whereas hidden routines are hidden from other groups and can
be used within the containing group only.
2. Data abstraction: This involves specifying data that describes a data object. For example, the data
object window encompasses a set of attributes (window type, window dimension) that describe the
window object clearly. In this abstraction mechanism, representation and manipulation details are
ignored.
3. Control abstraction: This states the desired effect, without stating the exact mechanism of control.
For example, if and while statements in programming languages (like C and C++) are abstractions of
machine code implementations, which involve conditional instructions. In the architectural design
level, this abstraction mechanism permits specifications of sequential subprogram and exception
handlers without the concern for exact details of implementation.
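As a brief, hedged illustration of data abstraction (a sketch only; the Window class and its
members are assumed, following the window example above), the essential attributes and operations
are exposed while representation and manipulation details are ignored:

#include <string>

// Data abstraction: the class states WHAT a window is (its attributes
// and operations) while hiding HOW it is represented internally.
class Window {
public:
    Window(const std::string& type, int width, int height)
        : type_(type), width_(width), height_(height) {}

    // Operations form the abstract view; callers never see the data layout.
    void resize(int width, int height) { width_ = width; height_ = height; }
    int area() const { return width_ * height_; }

private:
    std::string type_;   // window type
    int width_;          // window dimensions
    int height_;
};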
2. Architecture
Software architecture refers to the structure of the system, which is composed of various
components of a program/ system, the attributes (properties) of those components and the relationship
amongst them. The software architecture enables the software engineers to analyze the software
design efficiently. In addition, it also helps them in decision-making and handling risks. The software
architecture does the following.

 Provides an insight to all the interested stakeholders that enable them to communicate with each
other
 Highlights early design decisions, which have great impact on the software engineering activities
(like coding and testing) that follow the design phase
 Creates intellectual models of how the system is organized into components and how these
components interact with each other.

Currently, software architecture is represented in an informal and unplanned manner. Though
the architectural concepts are often reflected in the infrastructure (for supporting particular
architectural styles) and in the initial stages of a system configuration, the lack of an explicit,
independent characterization of architecture restricts the advantages of this design concept in the
present scenario.
Note that software architecture comprises two elements of design model, namely, data design
and architectural design.
3. Patterns
A pattern provides a description of the solution to a recurring design problem of some specific
domain in such a way that the solution can be used again and again. The objective of each pattern is to
provide an insight to a designer who can determine the following.
1. Whether the pattern can be reused
2. Whether the pattern is applicable to the current project
3. Whether the pattern can be used to develop a similar but functionally or
structurally different design pattern.
Types of Design Patterns
Software engineer can use the design pattern during the entire software design
process. When the analysis model is developed, the designer can examine the problem
description at different levels of abstraction to determine whether it complies with one or
more of the following types of design patterns.
1. Architectural patterns: These patterns are high-level strategies that refer to the
overall structure and organization of a software system. That is, they define the elements of a
software system such as subsystems, components, classes, etc. In addition, they also indicate
the relationship between the elements along with the rules and guidelines for specifying these
relationships. Note that architectural patterns are often considered equivalent to software
architecture.
2. Design patterns: These patterns are medium-level strategies that are used to solve
design problems. They provide a means for the refinement of the elements (as defined by
architectural pattern) of a software system or the relationship among them. Specific design
elements such as relationship among components or mechanisms that affect component-to-
component interaction are addressed by design patterns. Note that design patterns are often
considered equivalent to software components.
3. Idioms: These patterns are low-level patterns, which are programming-language
specific. They describe the implementation of a software component, the method used for
interaction among software components, etc., in a specific programming language. Note that
idioms are often termed as coding patterns.
4. Modularity: Modularity is achieved by dividing the software into uniquely named
and addressable components, which are also known as modules. A complex system (large
program) is partitioned into a set of discrete modules in such a way that each module can be
developed independent of other modules. After developing the modules, they are integrated
together to meet the software requirements. Note that the larger the number of modules a system
is divided into, the greater will be the effort required to integrate the modules.
Fig: Modules in Software Programs
Modularizing a design helps to plan the development in a more effective manner,
accommodate changes easily, conduct testing and debugging effectively and efficiently, and carry out
maintenance work without adversely affecting the functioning of the software.
4. Information Hiding
Modules should be specified and designed in such a way that the data structures and
processing details of one module are not accessible to other modules. They pass only that much
information to each other, which is required to accomplish the software functions. The way of hiding
unnecessary details is referred to as information hiding. IEEE defines information hiding as 'the
technique of encapsulating software design decisions in modules in such a way that the module's
interfaces reveal as little as possible about the module's inner workings; thus each module is a black
box to the other modules in the system.'

Fig: Information Hiding

Information hiding is of immense use when modifications are required during the testing and
maintenance phase. Some of the advantages associated with information hiding are listed below.
1. Leads to low coupling
2. Emphasizes communication through controlled interfaces
3. Decreases the probability of adverse effects
4. Restricts the effects of changes in one component on others
5. Results in higher quality software.
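A minimal C++ sketch of information hiding (the Account module and its members are invented
for illustration): the interface reveals as little as possible, so the module behaves as a black box
to other modules.

class Account {                       // hypothetical module
public:
    void deposit(long amount) { balance_ += amount; }
    bool withdraw(long amount) {      // access only through the interface
        if (amount > balance_) return false;
        balance_ -= amount;
        return true;
    }
    long balance() const { return balance_; }
private:
    long balance_ = 0;                // hidden: no direct outside access
};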
5. Stepwise Refinement
Stepwise refinement is a top-down design strategy used for decomposing a system from a
high level of abstraction into a more detailed level (lower level) of abstraction. At the highest level of
abstraction, function or information is defined conceptually without providing any information about
the internal workings of the function or internal structure of the data. As we proceed towards the
lower levels of abstraction, more and more details are available.
Software designers start the stepwise refinement process by creating a sequence of
compositions for the system being designed. Each composition is more detailed than the previous one
and contains more components and interactions. The earlier compositions represent the significant
interactions within the system, while the later compositions show in detail how these interactions are
achieved.
To have a clear understanding of the concept, let us consider an example of stepwise
refinement. Every computer program comprises input, process, and output.
1. INPUT
 Get user's name (string) through a prompt.
 Get user's grade (integer from 0 to 100) through a prompt and validate.
2. PROCESS
3. OUTPUT
This is the first step in refinement. The input phase can be refined further as given here.
1. INPUT
o Get user's name through a prompt.
o Get user's grade through a prompt.
o While (invalid grade)
Ask again
2. PROCESS
3. OUTPUT
Note: Stepwise refinement can also be performed for PROCESS and OUTPUT phase.
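A minimal C++ sketch of the refined INPUT step above (assuming numeric input; the prompts
and the 0-100 validation rule come from the example):

#include <iostream>
#include <string>

int main() {
    std::string name;
    int grade = -1;

    // Get user's name through a prompt.
    std::cout << "Enter your name: ";
    std::getline(std::cin, name);

    // Get user's grade through a prompt; while (invalid grade) ask again.
    std::cout << "Enter your grade (0-100): ";
    std::cin >> grade;
    while (grade < 0 || grade > 100) {
        std::cout << "Invalid grade, try again: ";
        std::cin >> grade;
    }

    // PROCESS and OUTPUT would be refined in the same manner.
    return 0;
}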
6. Refactoring
Refactoring is an important design activity that reduces the complexity of module design
keeping its behaviour or function unchanged. Refactoring can be defined as a process of modifying a
software system to improve the internal structure of design without changing its external behavior.
During the refactoring process, the existing design is checked for any type of flaws like redundancy,
poorly constructed algorithms and data structures, etc., in order to improve the design. For example, a
design model might yield a component which exhibits low cohesion (like a component performs four
functions that have a limited relationship with one another). Software designers may decide to refactor
the component into four different components, each exhibiting high cohesion. This leads to easier
integration, testing, and maintenance of the software components.
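As a hedged sketch of the refactoring described above (all class and method names are invented),
a component with low cohesion is split into components with high cohesion, while the externally
visible behaviour stays unchanged:

// Before refactoring: one component performing four loosely related functions.
class ReportUtility {
public:
    void readInput() {}
    void validateInput() {}
    void formatReport() {}
    void emailReport() {}
};

// After refactoring: the same behaviour, regrouped into cohesive components.
class InputReader     { public: void readInput() {} };
class InputValidator  { public: void validateInput() {} };
class ReportFormatter { public: void formatReport() {} };
class ReportMailer    { public: void emailReport() {} };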
7. Structural Partitioning
When the architectural style of a design follows a hierarchical nature, the structure of the
program can be partitioned either horizontally or vertically. In horizontal partitioning, separate
branches of the module hierarchy are defined for each major program function; control modules are
used to coordinate communication between the functions and to execute them. Structural partitioning
provides the following benefits.

 The testing and maintenance of software becomes easier.
 The negative impacts (side effects) spread slowly.
 The software can be extended easily.
Besides these advantages, horizontal partitioning has some disadvantage also. It requires
passing more data across the module interface, which makes the control flow of the problem more
complex. This usually happens in cases where data moves rapidly from one function to another.

In vertical partitioning, the functionality is distributed among the modules in a top-down
manner. The modules at the top level, called control modules, perform the decision-making and do
little processing, whereas the modules at the low level, called worker modules, perform all input,
computation and output tasks.
8. Concurrency
Computer has limited resources and they must be utilized efficiently as much as possible. To
utilize these resources efficiently, multiple tasks must be executed concurrently. This requirement
makes concurrency one of the major concepts of software design. Every system must be designed to
allow multiple processes to execute concurrently, whenever possible. For example, if the current
process is waiting for some event to occur, the system must execute some other process in the mean
time.
However, concurrent execution of multiple processes sometimes may result in undesirable
situations such as an inconsistent state, deadlock, etc. For example, consider two processes A and B
and a data item Q1 with the value '200'. Further, suppose A and B are being executed concurrently
and firstly A reads the value of Q1 (which is '200') to add '100' to it. However, before A updates es the
value of Q1, B reads the value ofQ1 (which is still '200') to add '50' to it. In this situation, whether A
or B first updates the value of Q1, the value of would definitely be wrong resulting in an inconsistent
state of the system. This is because the actions of A and B are not synchronized with each other. Thus,
the system must control the concurrent execution and synchronize the actions of concurrent processes.
One way to achieve synchronization is mutual exclusion, which ensures that two concurrent
processes do not interfere with the actions of each other. To ensure this, mutual exclusion may use
locking technique. In this technique, the processes need to lock the data item to be read or updated.
The data item locked by some process cannot be accessed by other processes until it is unlocked. It
implies that the process, that needs to access the data item locked by some other process, has to wait.
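A minimal C++ sketch of the locking technique described above, using the standard std::mutex
(the data item Q1 and the +100/+50 updates follow the example; everything else is assumed):

#include <iostream>
#include <mutex>
#include <thread>

int q1 = 200;        // shared data item Q1
std::mutex q1_lock;  // lock protecting Q1

void add(int amount) {
    // The read-modify-write happens under the lock, so processes A and B
    // cannot interleave and lose an update.
    std::lock_guard<std::mutex> guard(q1_lock);
    q1 += amount;
}

int main() {
    std::thread a(add, 100);  // process A adds 100
    std::thread b(add, 50);   // process B adds 50
    a.join();
    b.join();
    std::cout << q1 << '\n';  // always 350; never an inconsistent value
    return 0;
}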

Modular Design:
A modular design focuses on minimizing the interconnection b/w modules. In a modular
design, several independent and executable modules are composed together to construct an executable
application program. The programming language support, interfaces, and the information hiding
principles ensure modular system design. There are various modularization criteria to measure the
modularity of a system. The most common criteria are functional independence, levels of
abstraction, information hiding, functional diagrams (such as DFDs), modular programming languages,
coupling, and cohesion. An effective modular system has low coupling and high cohesion. So,
coupling and cohesion are the most popular criteria used to measure the modularity of a system.
1. Coupling:
Coupling between two modules is a measure of the degree of interdependence or interaction
between the two modules. A module having high cohesion and low coupling is said to be functionally
independent of other modules. If two modules interchange large amounts of data, then they are highly
interdependent. The degree of coupling between two modules depends on their interface complexity.
The interface complexity is basically determined by the number and types of parameters that are
interchanged while invoking the functions of the module. Modules can be tightly or loosely
coupled based on their dependencies.

Fig: Module Coupling


Classification of Coupling:
Even if there are no techniques to precisely and quantitatively estimate the coupling between
two modules, classification of the different types of coupling will help to quantitatively estimate the
degree of coupling between two modules. Six types of coupling can occur between any two modules.
This is shown in the figure given below:

Fig: Types of Coupling


i) Data coupling: Two modules are data coupled, if they communicate through a parameter. An
example is an elementary data item passed as a parameter between two modules, e.g. an integer, a
float, a character, etc. This data item should be problem related and not used for the control purpose.
ii) Stamp coupling: Two modules are stamp coupled, if they communicate using a composite data
item such as a record in PASCAL or a structure in C.
iii) Control coupling: Control coupling exists between two modules, if data from one module is used
to direct the order of instructions execution in another. An example of control coupling is a flag set in
one module and tested in another module.
iv) External Coupling: It occurs when two modules share an externally imposed data format,
communication protocol, or device interface. All the modules share the same I/O device or the
external environment.
v) Common coupling: Two modules are common coupled, if they share data through some global
data items.
vi) Content coupling: Content coupling exists between two modules, if they share code, e.g. a branch
from one module into another module.
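Hedged C++ fragments illustrating three of these coupling types (all names are invented for the
sketch):

// Data coupling: modules communicate through an elementary data item.
double computeInterest(double principal, double rate) { return principal * rate; }

// Stamp coupling: modules communicate through a composite data item
// (a structure, as in C).
struct Employee { int id; double basicSalary; };
double computePay(const Employee& e) { return e.basicSalary; }

// Control coupling: a flag set in one module directs the order of
// instruction execution in another.
void printReport(bool summaryOnly) { /* branches on the caller's flag */ }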
2. Cohesion
Most researchers and engineers agree that a good software design implies clean
decomposition of the problem into modules, and the neat arrangement of these modules in a
hierarchy. The primary characteristics of neat module decomposition are high cohesion and low
coupling. Cohesion is a measure of functional strength of a module. A module having high cohesion
and low coupling is said to be functionally independent of other modules. By the term functional
independence, we mean that a cohesive module performs a single task or function. A functionally
independent module has minimal interaction with other modules.
Classification of cohesion:
The different classes of cohesion that a module may possess are depicted in fig. given below:

Fig: Types of Cohesion


i) Coincidental cohesion: A module is said to have coincidental cohesion, if it performs a set of tasks
that relate to each other very loosely, if at all. In this case, the module contains a random collection of
functions. It is likely that the functions have been put in the module out of pure coincidence without
any thought or design. For example, in a transaction processing system (TPS), the get-input, print-
error, and summarize-members functions are grouped into one module. The grouping does not have
any relevance to the structure of the problem.
ii) Logical cohesion: A module is said to be logically cohesive, if all elements of the module perform
similar operations, e.g. error handling, data input, data output, etc. An example of logical cohesion is
the case where a set of print functions generating different output reports are arranged into a single
module.
iii) Temporal cohesion: When a module contains functions that are related by the fact that all the
functions must be executed in the same time span, the module is said to exhibit temporal cohesion.
The set of functions responsible for initialization, start-up, shutdown of some process, etc. exhibit
temporal cohesion.
iv) Procedural cohesion: A module is said to possess procedural cohesion, if the set of functions of
the module are all part of a procedure (algorithm) in which certain sequence of steps have to be
carried out for achieving an objective, e.g. the algorithm for decoding a message.
v) Communicational cohesion: A module is said to have communicational cohesion, if all functions
of the module refer to or update the same data structure, e.g. the set of functions defined on an array
or a stack.
vi) Sequential cohesion: A module is said to possess sequential cohesion, if the elements of a module
form the parts of sequence, where the output from one element of the sequence is input to the next.
For example, in a TPS, the get-input, validate-input, sort-input functions are grouped into one module.
vii) Functional cohesion: Functional cohesion is said to exist, if different elements of a module
cooperate to achieve a single function. For example, a module containing all the functions required to
manage employees’ pay-roll exhibits functional cohesion. Suppose a module exhibits functional
cohesion and we are asked to describe what the module does, then we would be able to describe it
using a single sentence.
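As a hedged sketch of functional cohesion (the module and method names are assumed, following
the pay-roll example above), every element of the module cooperates to achieve the single function
"manage the employees' pay-roll":

class PayrollManager {                        // hypothetical module
public:
    double computeGrossPay(int employeeId)   { return 0.0; /* sketch */ }
    double computeDeductions(int employeeId) { return 0.0; /* sketch */ }
    void   issuePaySlip(int employeeId)      { /* sketch */ }
};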
Design Methodologies:
A Design methodology provides the techniques and guidelines for the design process of a
system. There are different design processes for different design methodologies. The goal of all
design methodologies is to produce a design for the solution of a system. A design process consists of
various design activities. The most popular design methodologies are:
1) Function Oriented Design
2) Object Oriented Design
1) Function Oriented Design:
The following are the salient features of a typical function-oriented design approach:
1. A system is viewed as something that performs a set of functions. Starting at this high-level view of
the system, each function is successively refined into more detailed functions. For example, consider a
function create-new library-member which essentially creates the record for a new member, assigns a
unique membership number to him, and prints a bill towards his membership charge. This function
may consist of the following sub functions:
• Assign-membership-number
• Create-member-record
• Print-bill
Each of these sub-functions may be split into more detailed sub functions and so on.
2. The system state is centralized and shared among different functions, e.g. data such as member-
records is available for reference and updation to several functions such as:
• Create-new-member
• Delete-member
• Update-member-record
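A hedged C++-style sketch tying the two points together (the record layout, array size, and
function bodies are assumed): the high-level function is refined into its three sub-functions, and
the member records form the centralized, shared system state.

// Centralized system state shared by several functions.
struct MemberRecord { int membershipNumber; /* other fields */ };
MemberRecord member_records[1000];
int member_count = 0;

void assign_membership_number(MemberRecord& m) { m.membershipNumber = member_count; }
void create_member_record(const MemberRecord& m) { member_records[member_count++] = m; }
void print_bill(const MemberRecord& m) { /* print membership charge */ }

// create-new-library-member refined into its sub-functions.
void create_new_library_member() {
    MemberRecord m{};
    assign_membership_number(m);
    create_member_record(m);
    print_bill(m);
}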
2) Object Oriented Design:
In the object-oriented design approach, the system is viewed as collection of objects (i.e.
entities). The state is decentralized among the objects and each object manages its own state
information. For example, in a Library Automation Software, each library member may be a separate
object with its own data and functions to operate on these data. In fact, the functions defined for one
object cannot refer or change data of other objects. Objects have their own internal data which define
their state. Similar objects constitute a class. In other words, each object is a member of some class.
Classes may inherit features from super class. Conceptually, objects communicate by message
passing.
Function-oriented vs. object-oriented design approach
The following are some of the important differences between function-oriented and object-
oriented design.
• Unlike function-oriented design methods, in OOD the basic abstractions are not real-world functions
such as sort, display, track, etc., but real-world entities such as employee, picture, machine, radar
system, etc. For example, in OOD an employee pay-roll software is not developed by designing
functions such as update-employee-record, get-employee-address, etc., but by designing objects such as
employees, departments, etc. Grady Booch sums up this difference as “identify verbs if you are after
procedural design and nouns if you are after object-oriented design”.
• In OOD, state information is not represented in a centralized shared memory but is distributed
among the objects of the system. For example, while developing an employee pay-roll system, the
employee data such as the names of the employees, their code numbers, basic salaries, etc. are usually
implemented as global data in a traditional programming system; whereas in an object-oriented
system these data are distributed among different employee objects of the system. Objects
communicate by message passing. Therefore, one object may discover the state information of
another object by interrogating it. Of course, somewhere or other the real-world functions must be
implemented. In OOD, the functions are usually associated with specific real-world entities (objects);
they directly access only part of the system state information.
• Function-oriented techniques such as SA/SD group functions together if, as a group, they constitute
a higher-level function. On the other hand, object-oriented techniques group functions together on the
basis of the data they operate on.
To illustrate the differences between the object-oriented and the function-oriented design approaches,
an example can be considered.
Example: Fire-Alarm System
The owner of a large multi-stored building wants to have a computerized fire alarm system for his
building. Smoke detectors and fire alarms would be placed in each room of the building. The fire
alarm system would monitor the status of these smoke detectors. Whenever a fire condition is
reported by any of the smoke detectors, the fire alarm system should determine the location at which
the fire condition has occurred and then sound the alarms only in the
neighbouring locations. The fire alarm system should also flash an alarm message on the computer
console. Fire fighting personnel man the console round the clock. After a fire condition has been
successfully handled, the fire alarm system should support resetting the alarms by the fire fighting
personnel.

Function-Oriented Approach:
/* Global data (system state) accessible by various functions */
BOOL detector_status[MAX_ROOMS];
int detector_locs[MAX_ROOMS];
BOOL alarm_status[MAX_ROOMS];      /* alarm activated when status is set */
int alarm_locs[MAX_ROOMS];         /* room number where alarm is located */
int neighbor_alarm[MAX_ROOMS][10]; /* each detector has at most 10 neighboring locations */

The functions which operate on the system state are:

interrogate_detectors();
get_detector_location();
determine_neighbor();
ring_alarm();
reset_alarm();
report_fire_location();

Object-Oriented Approach:
class detector
attributes: status, location, neighbors
operations: create, sense_status, get_location, find_neighbors
class alarm
attributes: location, status
operations: create, ring_alarm, get_location, reset_alarm
In the object oriented program, an appropriate number of instances of the class detector and alarm
should be created. If the function-oriented and the object-oriented programs are examined, it can be
seen that in the function-oriented program, the system state is centralized and several functions
accessing this central data are defined. In case of the object-oriented program, the state information is
distributed among various sensor and alarm objects.
It is not necessary that an object-oriented design be implemented using an object-oriented
language. However, object-oriented languages such as C++ support the definition of all the
basic mechanisms of class, inheritance, objects, methods, etc. and also support all key object-oriented
concepts that we have just discussed. Thus, an object-oriented language facilitates the implementation
of an OOD. However, an OOD can as well be implemented using a conventional procedural language
– though it may require more effort to implement an OOD using a procedural language as compared
to the effort required for implementing the same design using an object-oriented language.
Even though object-oriented and function-oriented approaches are remarkably different
approaches to software design, yet they do not replace each other but complement each other in some
sense. For example, usually one applies the top-down function-oriented techniques to design the
internal methods of a class, once the classes are identified. In this case, though outwardly the system
appears to have been developed in an object-oriented fashion, but inside each class there may be a
small hierarchy of functions designed in a top-down manner.

Structured Design
The aim of structured design is to transform the results of the structured
analysis (i.e. a DFD representation) into a structure chart. Structured design provides two strategies to
guide the transformation of a DFD into a structure chart.
• Transform analysis
• Transaction analysis
Normally, one starts with the level 1 DFD, transforms it into module representation using
either the transform or the transaction analysis and then proceeds towards the lower-level DFDs. At
each level of transformation, it is important to first determine whether the transform or the transaction
analysis is applicable to a particular DFD.
General Steps Involved in structured design are:
 The type of data flow is established
In this step the nature of the data flowing between processes is defined.
 Determine flow boundaries (switch points)
This includes determining whether the boundary is an input boundary, output boundary, hub
boundary, or action boundary.
 Map the abstract DFD onto a particular program structure
Determine if the program structure is a transformational structure or transactional structure.
 Define a valid control structure
This step is also known as "first-level" factoring. It depends on whether transformational or
transactional models are used.
The control structure is either "Call-and-return" for transformational model or "Call-and-
act" for transactional model.
 Refine (tune) the resulting structure
This step is also known as "second-level factoring". It maps Input/Output flow
bounded parts of DFD.
 Supplement and tune the final architectural structure
Apply basic module independence concepts (i.e. Explode or implode modules
according to coupling/cohesion requirements) to obtain an easier implementation.

Structure Chart
A structure chart represents the software architecture, i.e. the various modules making up the
system, the dependency (which module calls which other modules), and the parameters that are
passed among the different modules. Hence, the structure chart representation can be easily
implemented using some programming language. Since the main focus in a structure chart
representation is on the module structure of the software and the interactions among different
modules, the procedural aspects (e.g. how a particular functionality is achieved) are not represented.

The basic building blocks which are used to design structure charts are the following:
• Rectangular boxes: Represents a module.
• Module invocation arrows: Control is passed from one module to another module in the direction of
the connecting arrow.
• Data flow arrows: Arrows are annotated with data name; named data passes from one module to
another module in the direction of the arrow.
• Library modules: Represented by a rectangle with double edges.
• Selection: Represented by a diamond symbol.
• Repetition: Represented by a loop around the control flow arrow.

Structure Chart vs. Flow Chart


We are all familiar with the flow chart representation of a program. Flow chart is a convenient
technique to represent the flow of control in a program. A structure chart differs from a flow chart in
three principal ways:
• It is usually difficult to identify the different modules of the software from its flow chart
representation.
• Data interchange among different modules is not represented in a flow chart.
• Sequential ordering of tasks inherent in a flow chart is suppressed in a structure chart.
Transform Analysis
Transform analysis identifies the primary functional components (modules) and the high level
inputs and outputs for these components. The first step in transform analysis is to divide the DFD into
3 types of parts:
• Input
• Logical processing
• Output
The input portion of the DFD includes processes that transform input data from physical (e.g.
character from terminal) to logical forms (e.g. internal tables, lists, etc.). Each input portion is called
an afferent branch.
The output portion of a DFD transforms output data from logical to physical form. Each
output portion is called an efferent branch. The remaining portion of a DFD is called the central
transform.
In the next step of transform analysis, the structure chart is derived by drawing one functional
component for the central transform, and the afferent and efferent branches. These are drawn below a
root (coordinate module) module, which would invoke these modules.

Identifying the highest level input and output transforms requires experience and skill. One
possible approach is to trace the inputs until a bubble is found whose output cannot be deduced from
its inputs alone. Processes which validate input or add information to it are not central transforms;
processes which sort input or filter data from it are. The first-level structure chart is produced by
representing each input and output unit as a box and the central transform as a single box.

In the third step of transform analysis, the structure chart is refined by adding sub-functions
required by each of the high-level functional components. Many levels of functional components may
be added. This process of breaking functional components into subcomponents is called factoring.
Factoring includes adding read and write modules, error-handling modules, initialization and
termination process, identifying customer modules, etc. The factoring process is continued until all
bubbles in the DFD are represented in the structure chart.

Example: Structure chart for the RMS software


For this example, the context diagram was drawn earlier.
To draw the level 1 DFD, from a cursory analysis of the problem description, we can see that there are
four basic functions that the system needs to perform – accept the input numbers from the user,
validate the numbers, calculate the root mean square of the input numbers and, then display the result.

Fig: Level 1 DFD


By observing the level 1 DFD, we identify the validate-input as the afferent branch and write-output
as the efferent branch. The remaining portion (i.e. compute-rms) forms the central transform. By
applying the step 2 and step 3 of transform analysis, we get the structure chart shown in fig. given
below.
Fig: Structure Chart for RMS
Transaction Analysis
A transaction allows the user to perform some meaningful piece of work. Transaction
analysis is useful while designing transaction processing programs. In a transaction-driven system,
one of several possible paths through the DFD is traversed depending upon the input data item. This is
in contrast to a transform centered system which is characterized by similar processing steps for each
data item. Each different way in which input data is handled is a transaction. A simple way to identify
a transaction is to check the input data. The number of bubbles on which the input data to the DFD are
incident defines the number of transactions. However, some transaction may not require any input
data. These transactions can be identified from the experience of solving a large number of examples.
For each identified transaction, trace the input data to the output. All the traversed bubbles belong to
the transaction. These bubbles should be mapped to the same module on the structure chart. In the
structure chart, draw a root module and, below it, draw one module for each identified transaction.
Every transaction carries a tag, which identifies its type. Transaction analysis uses this tag to divide
the system into transaction modules and a transaction-center module.

The structure chart for the supermarket prize scheme software is shown in the figure given below:

Fig: Structure Chart for the super market prize scheme


Transform Vs Transaction Flow:
 Incoming flow: In transform flow, the incoming flow paths convert incoming information into
an internal representation. In transaction flow, reception paths convert external information into
a transaction.
 Center: In transform flow, information is processed at a transform center. In transaction flow,
a transaction center (dispatcher) evaluates the transaction and activates one of the emanating
paths.
 Paths: Transform flow has outgoing flow paths; transaction flow has action paths.
 Overall flow: In transform flow, the overall flow of data occurs in a sequential manner and
follows one or more linear paths. In transaction flow, the overall flow of data forms a dispatch
center pattern, where the incoming data flow (via the reception path) is directed to only one of
the action paths by the transaction center.

Object-Oriented Design
Object–Oriented Design (OOD) involves implementation of the conceptual model produced
during object-oriented analysis. In OOD, concepts in the analysis model, which are
technology−independent, are mapped onto implementing classes, constraints are identified and
interfaces are designed, resulting in a model for the solution domain, i.e., a detailed description
of how the system is to be built on concrete technologies.
The implementation details generally include:

 Restructuring the class data (if necessary),


 Implementation of methods, i.e., internal data structures and algorithms,
 Implementation of control, and
 Implementation of associations.

Grady Booch has defined object-oriented design as “a method of design encompassing the
process of object-oriented decomposition and a notation for depicting logical and physical as well as
static and dynamic models of the system under design”.

Some popular object-oriented design principles are:

Divide and Conquer
Trying to deal with something big all at once is normally much harder than dealing with a
series of smaller things:
• Separate people can work on each part.
• An individual software engineer can specialize.
• Each individual component is smaller, and therefore easier to understand.
• Parts can be replaced or changed without having to replace or extensively change other parts.

DRY
“Don’t Repeat Yourself”. Try to avoid any duplicates; instead you put them into a single part
of the system, or a method.
Imagine that you have copied and pasted blocks of code in different parts in your system. What
if you changed any of them? You will need to change and check the logic of every part that has the
same block of code.
Definitely you don’t want to do that. This is an extra cost that you don’t need to pay; all
you need to do is to have a single source of truth in your design, code, documentation, and even in
the database schema.

Expect to Change
The designer should anticipate which requirements or functionalities may be added in the future.
Based on that idea, the design has to be developed so that even if functionalities are added in future, it
shouldn’t be disturbed. For that purpose we have to follow the OO principles called SOLID.

SOLID

S—Single Responsibility Principle


An object should have one and only one responsibility.
You don’t need to have an object that does different or many tasks. An object can have many
behaviors and methods, but all of them are relevant to its single responsibility.
So, whenever there is a change that needs to happen, there will be only one class to be
modified, this class has one primary responsibility.

O—Open/Closed Principle
Software entities (classes, modules, functions, etc.) should be open for extension, but closed
for modification.
Whenever you need to add additional behaviors, or methods, you don’t have to modify the
existing one, instead, you start writing new methods.
Why? If you changed a behavior of an object on which some other parts of the system
depend, you would also need to change every single part of the software that has a dependency
on that object, check the logic, and do some extra testing.
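A hedged C++ sketch of the Open/Closed Principle (the discount example is invented): new
behaviour is added by writing a new class, not by modifying the existing ones.

// Closed for modification: netPrice() never changes when a policy is added.
struct DiscountPolicy {
    virtual ~DiscountPolicy() = default;
    virtual double discount(double amount) const = 0;
};

struct NoDiscount : DiscountPolicy {
    double discount(double) const override { return 0.0; }
};

// Open for extension: added later without touching any existing code.
struct FestiveDiscount : DiscountPolicy {
    double discount(double amount) const override { return amount * 0.10; }
};

double netPrice(double amount, const DiscountPolicy& policy) {
    return amount - policy.discount(amount);
}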

L—Liskov Substitution Principle


A super class can be replaced by any of its inheriting sub classes at any parts of the system
without any change in the code.
It means that the sub classes should extend the functionality of the super class without
overriding it.

I—Interface Segregation Principle


Interfaces should be specific rather than doing many and different things.
That’s because any implementing class will implement only the specific interfaces it needs,
rather than being forced to implement methods that it doesn’t need.
So, large interfaces should be decomposed into smaller, more specific ones.

D-Dependency Inversion Principle


Try to minimize the dependency between objects by using abstraction.
For example, suppose you have an App class that depends on very specialized
classes, Database and Mail (dependencies). Instead, we could have the App object deal
with a Service class, which is more abstract, rather than something very specific. So, now the App class
is not dependent on the concrete classes, but on an abstraction.
And the benefit of that is we are able to replace and extend the functionality of Service class
without changing the App class at all.
Perhaps we can replace the Database and Mail classes, or add additional classes
like Logger and Auth as well.
A common design pattern that applies this principle is called Dependency injection. We’re
going to discuss design patterns in a more detail in the next tutorial.
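A hedged C++ sketch of the App/Service idea above (the class names App, Service, Database, and
Mail follow the text; the method names and the dependency-injection wiring are assumed):

#include <iostream>
#include <memory>

// Abstraction: App depends on this interface, never on concrete classes.
struct Service {
    virtual ~Service() = default;
    virtual void run() = 0;
};

struct Database : Service { void run() override { std::cout << "querying database\n"; } };
struct Mail     : Service { void run() override { std::cout << "sending mail\n"; } };

class App {
public:
    // Dependency injection: the concrete Service is supplied from outside.
    explicit App(std::unique_ptr<Service> s) : service_(std::move(s)) {}
    void execute() { service_->run(); }
private:
    std::unique_ptr<Service> service_;
};

int main() {
    App app(std::make_unique<Database>());  // could swap in Mail, Logger, Auth, ...
    app.execute();
    return 0;
}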

The object model visualizes the elements in a software application in terms of objects. In this
chapter, we will look into the basic concepts and terminologies of object–oriented systems.

1. Objects and Classes


The concepts of objects and classes are intrinsically linked with each other and form the foundation
of object–oriented paradigm.
2. Object
An object is a real-world element in an object–oriented environment that may have a physical or a
conceptual existence. Each object has:
 Identity that distinguishes it from other objects in the system.
 State that determines the characteristic properties of an object as well as the values of the
properties that the object holds.
 Behavior that represents externally visible activities performed by an object in terms of
changes in its state.
Objects can be modelled according to the needs of the application. An object may have a physical
existence, like a customer, a car, etc.; or an intangible conceptual existence, like a project, a process,
etc.
3. Class
A class represents a collection of objects having same characteristic properties that exhibit common
behavior. It gives the blueprint or description of the objects that can be created from it. Creation of
an object as a member of a class is called instantiation. Thus, object is an instance of a class.

The constituents of a class are:

 A set of attributes for the objects that are to be instantiated from the class. Generally,
different objects of a class have some difference in the values of the attributes. Attributes are
often referred as class data.

 A set of operations that portray the behavior of the objects of the class. Operations are also
referred as functions or methods.

Example

Let us consider a simple class, Circle, that represents the geometrical figure circle in a two–
dimensional space. The attributes of this class can be identified as follows:

 x–coord, to denote x–coordinate of the center
 y–coord, to denote y–coordinate of the center
 a, to denote the radius of the circle
Some of its operations can be defined as follows:
 findArea(), method to calculate area
 findCircumference(), method to calculate circumference
 scale(), method to increase or decrease the radius
During instantiation, values are assigned for at least some of the attributes. If we create an object
my_circle, we can assign values like x-coord : 2, y-coord : 3, and a : 4 to depict its state. Now, if the
operation scale() is performed on my_circle with a scaling factor of 2, the value of the variable a will
become 8. This operation brings a change in the state of my_circle, i.e., the object has exhibited
certain behavior.

4. Encapsulation and Data Hiding


 Encapsulation
Encapsulation is the process of binding both attributes and methods together within a class.
Through encapsulation, the internal details of a class can be hidden from outside. It permits the
elements of the class to be accessed from outside only through the interface provided by the class.

 Data Hiding
Typically, a class is designed such that its data (attributes) can be accessed only by its class
methods and insulated from direct outside access. This process of insulating an object’s data is called
data hiding or information hiding.

Example
In the class Circle, data hiding can be incorporated by making attributes invisible from outside
the class and adding two more methods to the class for accessing class data, namely:

 setValues(), method to assign values to x-coord, y-coord, and a


 getValues(), method to retrieve values of x-coord, y-coord, and a
Here the private data of the object my_circle cannot be accessed directly by any method that is
not encapsulated within the class Circle. It should instead be accessed through the methods
setValues() and getValues().
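A minimal C++ sketch of the Circle class with the data hiding described above (attribute and
method names follow the text; the types and the value of π are assumed):

class Circle {
public:
    // Interface methods: the only way to reach the hidden attributes.
    void setValues(double x, double y, double radius) {
        x_coord = x; y_coord = y; a = radius;
    }
    void getValues(double& x, double& y, double& radius) const {
        x = x_coord; y = y_coord; radius = a;
    }
    double findArea() const          { return 3.14159265 * a * a; }
    double findCircumference() const { return 2 * 3.14159265 * a; }
    void scale(double factor)        { a *= factor; }  // changes the state

private:
    double x_coord = 0, y_coord = 0, a = 0;  // insulated from direct access
};

int main() {
    Circle my_circle;
    my_circle.setValues(2, 3, 4);
    my_circle.scale(2);  // the radius a becomes 8, as in the example
    return 0;
}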

5. Message Passing
Any application requires a number of objects interacting in a harmonious manner. Objects in
a system may communicate with each other using message passing. Suppose a system has two
objects: obj1 and obj2. The object obj1 sends a message to object obj2, if obj1 wants obj2 to execute
one of its methods.
The features of message passing are:
 Message passing between two objects is generally unidirectional.
 Message passing enables all interactions between objects.
 Message passing essentially involves invoking class methods.
 Objects in different processes can be involved in message passing.
6. Inheritance
Inheritance is the mechanism that permits new classes to be created out of existing classes by
extending and refining their capabilities. The existing classes are called the base classes/parent
classes/super-classes, and the new classes are called the derived classes/child classes/subclasses. The
subclass can inherit or derive the attributes and methods of the super-class(es) provided that the
super-class allows so. Besides, the subclass may add its own attributes and methods and may modify
any of the super-class methods. Inheritance defines an “is – a” relationship.

Example
From a class Mammal, a number of classes can be derived such as Human, Cat, Dog, Cow, etc.
Humans, cats, dogs, and cows all have the distinct characteristics of mammals. In addition, each has
its own particular characteristics. It can be said that a cow “is – a” mammal.
Types of Inheritance:
 Single Inheritance: A subclass derives from a single super-class.
 Multiple Inheritance: A subclass derives from more than one super-classes.
 Multilevel Inheritance: A subclass derives from a super-class which in turn is derived from
another class and so on.
 Hierarchical Inheritance: A class has a number of subclasses each of which may have
subsequent subclasses, continuing for a number of levels, so as to form a tree structure.
 Hybrid Inheritance: A combination of multiple and multilevel inheritance so as to form a
lattice structure.
The following figure depicts the examples of different types of inheritance.
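A hedged C++ sketch of single inheritance using the Mammal example above (the method names
are assumed): Cow "is – a" Mammal and inherits its behaviour while adding its own.

#include <iostream>

class Mammal {
public:
    void breathe() const { std::cout << "breathing\n"; }
};

class Cow : public Mammal {   // single inheritance: Cow "is-a" Mammal
public:
    void moo() const { std::cout << "moo\n"; }
};

int main() {
    Cow c;
    c.breathe();  // inherited from the super-class
    c.moo();      // added by the subclass
    return 0;
}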
7. Polymorphism
Polymorphism is originally a Greek word that means the ability to take multiple forms. In object-
oriented paradigm, polymorphism implies using operations in different ways, depending upon the
instance they are operating upon. Polymorphism allows objects with different internal structures to
have a common external interface. Polymorphism is particularly effective while implementing
inheritance.

Example
Let us consider two classes, Circle and Square, each with a method findArea(). Though the
name and purpose of the methods in the classes are same, the internal implementation, i.e., the
procedure of calculating area is different for each class. When an object of class Circle invokes its
findArea() method, the operation finds the area of the circle without any conflict with the findArea()
method of the Square class.
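A minimal C++ sketch of the Circle/Square findArea() example above (realized here with virtual
functions; the base class Figure and the data members are assumed): the same external interface
produces different internal behaviour.

#include <iostream>

struct Figure {
    virtual ~Figure() = default;
    virtual double findArea() const = 0;  // common external interface
};

struct Circle : Figure {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double findArea() const override { return 3.14159265 * radius * radius; }
};

struct Square : Figure {
    double side;
    explicit Square(double s) : side(s) {}
    double findArea() const override { return side * side; }
};

int main() {
    Circle c(1.0);
    Square s(2.0);
    const Figure* figures[] = { &c, &s };
    for (const Figure* f : figures)
        std::cout << f->findArea() << '\n';  // same call, different behaviour
    return 0;
}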
8. Generalization and Specialization
Generalization and specialization represent a hierarchy of relationships between classes, where
subclasses inherit from super-classes.

Generalization
In the generalization process, the common characteristics of classes are combined to form a
class in a higher level of hierarchy, i.e., subclasses are combined to form a generalized super-class. It
represents an “is – a – kind – of” relationship. For example, “car is a kind of land vehicle”, or “ship
is a kind of water vehicle”.

Specialization
Specialization is the reverse process of generalization. Here, the distinguishing features of
groups of objects are used to form specialized classes from existing classes. It can be said that the
subclasses are the specialized versions of the super-class.
The following figure shows an example of generalization and specialization.

Relationships:
Links and Association
1. Link
A link represents a connection through which an object collaborates with other objects.
Rumbaugh has defined it as “a physical or conceptual connection between objects”. Through a link,
one object may invoke the methods or navigate through another object. A link depicts the
relationship between two or more objects.

2. Association
Association is a group of links having common structure and common behavior. Association
depicts the relationship between objects of one or more classes. A link can be defined as an instance
of an association.

Degree of an Association
Degree of an association denotes the number of classes involved in a connection. Degree may be
unary, binary, or ternary.
 A unary relationship connects objects of the same class.
 A binary relationship connects objects of two classes.
 A ternary relationship connects objects of three or more classes.

Cardinality Ratios of Associations


Cardinality of a binary association denotes the number of instances participating in an
association. There are three types of cardinality ratios, namely:

 One–to–One: A single object of class A is associated with a single object of class B.


 One–to–Many: A single object of class A is associated with many objects of class B.
 Many–to–Many: An object of class A may be associated with many objects of class B and
conversely an object of class B may be associated with many objects of class A.
Aggregation or Composition
Aggregation or composition is a relationship among classes by which a class can be made up
of any combination of objects of other classes. It allows objects to be placed directly within the body
of other classes. Aggregation is referred as a “part–of” or “has–a” relationship, with the ability to
navigate from the whole to its parts. An aggregate object is an object that is composed of one or
more other objects.

Example
In the relationship, “a car has–a motor”, car is the whole object or the aggregate, and the motor is
a “part–of” the car. Aggregation may denote:
 Physical containment: Example, a computer is composed of monitor, CPU, mouse,
keyboard, and so on.
 Conceptual containment: Example, shareholder has–a share.

1. UML Analysis Model

The Unified Modelling Language (UML) is a graphical language for OOAD that gives a
standard way to write a software system’s blueprint. It helps to visualize, specify, construct, and
document the artifacts of an object-oriented system. It is used to depict the structures and the
relationships in a complex system.

Brief History
It was developed in the 1990s as an amalgamation of several techniques, prominently the OOAD
technique by Grady Booch, OMT (Object Modeling Technique) by James Rumbaugh, and OOSE
(Object Oriented Software Engineering) by Ivar Jacobson. UML attempted to standardize semantic
models, syntactic notations, and diagrams of OOAD.

Systems and Models in UML


System: A set of elements organized to achieve certain objectives form a system. Systems are often
divided into subsystems and described by a set of models.

Model: Model is a simplified, complete, and consistent abstraction of a system, created for better
understanding of the system.

View: A view is a projection of a system’s model from a specific perspective.

2. Conceptual Model of UML


The Conceptual Model of UML encompasses three major elements:
 Basic building blocks
 Rules
 Common mechanisms

Basic Building Blocks


The three building blocks of UML are:

 Things
 Relationships
 Diagrams
(a) Things:
There are four kinds of things in UML, namely:
 Structural Things: These are the nouns of the UML models representing the static elements
that may be either physical or conceptual. The structural things are class, interface,
collaboration, use case, active class, components, and nodes.
 Behavioral Things: These are the verbs of the UML models representing the dynamic
behavior over time and space. The two types of behavioral things are interaction and state
machine.
 Grouping Things: They comprise the organizational parts of the UML models. There is only
one kind of grouping thing, i.e., package.
 Annotational Things: These are the explanations in the UML models representing the
comments applied to describe elements.

(b) Relationships:

Relationships are the connection between things. The four types of relationships that can be
represented in UML are:
 Dependency: This is a semantic relationship between two things such that a change in one
thing brings a change in the other. The former is the independent thing, while the latter is the
dependent thing.
 Association: This is a structural relationship that represents a group of links having common
structure and common behavior.
 Generalization: This represents a generalization/specialization relationship in which
subclasses inherit structure and behavior from super-classes.
 Realization: This is a semantic relationship between two or more classifiers such that one
classifier lays down a contract that the other classifiers ensure to abide by.
(c) Diagrams: A diagram is a graphical representation of a system. It comprises a group of
elements, generally in the form of a graph. UML includes nine diagrams in all, namely:

 Class Diagram
 Object Diagram
 Use Case Diagram
 Sequence Diagram
 Collaboration Diagram
 State Chart Diagram
 Activity Diagram
 Component Diagram
 Deployment Diagram

Rules
UML has a number of rules so that the models are semantically self-consistent and related to
other models in the system harmoniously. UML has semantic rules for the following:

 Names
 Scope
 Visibility
 Integrity
 Execution
Common Mechanisms
UML has four common mechanisms:

 Specifications
 Adornments
 Common Divisions
 Extensibility Mechanisms
Specifications
In UML, behind each graphical notation there is a textual statement denoting the syntax and
semantics. These are the specifications. The specifications provide a semantic backplane that
contains all the parts of a system and the relationships among the different parts.

Adornments
Each element in UML has a unique graphical notation. Besides, there are notations to represent the
important aspects of an element like name, scope, visibility, etc.

Common Divisions
Object-oriented systems can be divided in many ways. The two common ways of division are:

 Division of classes and objects: A class is an abstraction of a group of similar objects. An
object is the concrete instance that has actual existence in the system.
 Division of Interface and Implementation: An interface defines the rules for interaction.
Implementation is the concrete realization of the rules defined in the interface.

Extensibility Mechanisms
UML is an open-ended language. It is possible to extend the capabilities of UML in a controlled
manner to suit the requirements of a system. The extensibility mechanisms are:
 Stereotypes: It extends the vocabulary of the UML, through which new building blocks can
be created out of existing ones.
 Tagged Values: It extends the properties of UML building blocks.
 Constraints: It extends the semantics of UML building blocks.

UML Basic Notations


UML defines specific notations for each of the building blocks.

Class
A class is represented by a rectangle having three sections:

 the top section containing the name of the class


 the middle section containing class attributes
 the bottom section representing operations of the class
The visibility of the attributes and operations can be represented in the following ways:
 Public : A public member is visible from anywhere in the system. In class diagram, it is
prefixed by the symbol ‘+’.
 Private : A private member is visible only from within the class. It cannot be accessed from
outside the class. A private member is prefixed by the symbol ‘−’.
 Protected : A protected member is visible from within the class and from the subclasses
inherited from this class, but not from outside. It is prefixed by the symbol ‘#’.
An abstract class has the class name written in italics.

Example: Let us consider the Circle class introduced earlier. The attributes of Circle are x-coord, y-
coord, and radius. The operations are findArea(), findCircumference(), and scale(). Let us assume
that x-coord and y-coord are private data members, radius is a protected data member, and the
member functions are public. The following figure gives the diagrammatic representation of the
class.
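
To make the notation concrete, here is a minimal C++ sketch of this Circle class. The member spellings (xCoord, yCoord) and the value of pi are illustrative assumptions; the visibility follows the example (x-coord and y-coord private, radius protected, operations public):

// Illustrative sketch of the Circle class from the example above.
// UML visibility markers: '-' = private, '#' = protected, '+' = public.
class Circle {
private:                      // '-' in the class diagram
    double xCoord;
    double yCoord;
protected:                    // '#' in the class diagram
    double radius;
public:                       // '+' in the class diagram
    Circle(double x, double y, double r)
        : xCoord(x), yCoord(y), radius(r) {}
    double findArea() const { return 3.14159265 * radius * radius; }
    double findCircumference() const { return 2 * 3.14159265 * radius; }
    void scale(double factor) { radius *= factor; }
};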
Object
An object is represented as a rectangle with two sections:
 The top section contains the name of the object with the name of the class or package of
which it is an instance of. The name takes the following forms:
o object-name : class-name
o object-name : class-name :: package-name
o : class-name (in the case of anonymous objects)
 The bottom section represents the values of the attributes. It takes the form attribute-name =
value.
 Sometimes objects are represented using rounded rectangles.

Example: Let us consider an object of the class Circle named c1. We assume that the center of c1 is
at (2, 3) and the radius of c1 is 5. The following figure depicts the object.

Component
A component is a physical and replaceable part of the system that conforms to and provides
the realization of a set of interfaces. It represents the physical packaging of elements like classes and
interfaces.

Notation: In UML diagrams, a component is represented by a rectangle with tabs as shown in the
figure below.
Interface
Interface is a collection of methods of a class or component. It specifies the set of services
that may be provided by the class or component.

Notation: Generally, an interface is drawn as a circle together with its name. An interface is almost
always attached to the class or component that realizes it. The following figure gives the notation of
an interface.

Package
A package is an organized group of elements. A package may contain structural things like
classes, components, and other packages in it.

Notation: Graphically, a package is represented by a tabbed folder. A package is generally drawn with only its name. However, it may have additional details about the contents of the package. See the following figures.

Relationship
The notations for the different types of relationships are as follows:
Usually, elements in a relationship play specific roles in the relationship. A role name
signifies the behavior of an element participating in a certain context.

Example: The following figures show examples of different relationships between classes. The first
figure shows an association between two classes, Department and Employee, wherein a department
may have a number of employees working in it. Worker is the role name. The ‘1’ alongside Department and ‘*’ alongside Employee depict that the cardinality ratio is one-to-many. The second figure portrays the aggregation relationship: a University is the “whole” of many Departments.

UML Structural Diagrams


UML structural diagrams are categorized as follows: class diagram, object diagram,
component diagram, and deployment diagram.
1. Class Diagram
A class diagram models the static view of a system. It comprises the classes, interfaces, and collaborations of a system, and the relationships between them.

Class Diagram of a System

Let us consider a simplified Banking System.

A bank has many branches, grouped into zones. In each zone, one branch is designated as the zonal head office that supervises the other branches in that zone. Each branch can have multiple accounts and loans. An
account may be either a savings account or a current account. A customer may open both a savings
account and a current account. However, a customer must not have more than one savings account or
current account. A customer may also procure loans from the bank.

The following figure shows the corresponding class diagram.

Classes in the system:
Bank, Branch, Account, Savings Account, Current Account, Loan, and Customer.
Relationships:
 A Bank “has–a” number of Branches : composition, one–to–many
 A Branch with the role Zonal Head Office supervises other Branches : unary association, one–to–many
 A Branch “has–a” number of Accounts : aggregation, one–to–many
 Savings Account and Current Account inherit from the class Account : generalization
 A Customer can have one Current Account : association, one–to–one
 A Customer can have one Savings Account : association, one–to–one
 A Branch “has–a” number of Loans : aggregation, one–to–many
 A Customer can take many Loans : association, one–to–many

2. Object Diagram
An object diagram models a group of objects and their links at a point in time. It shows the instances of the things in a class diagram. An object diagram is the static part of an interaction diagram.

Example: The following figure shows an object diagram of a portion of the class diagram of the
Banking System.
3. Component Diagram
Component diagrams show the organization and dependencies among a group of components.
Component diagrams comprise:
 Components
 Interfaces
 Relationships
 Packages and Subsystems (optional)
Component diagrams are used for:

 Constructing systems through forward and reverse engineering.
 Modeling configuration management of source code files while developing a system using an object-oriented programming language.
 Representing schemas in modeling databases.
 Modeling behaviors of dynamic systems.
Example

The following figure shows a component diagram to model a system’s source code that is
developed using C++. It shows four source code files, namely, myheader.h, otherheader.h,
priority.cpp, and other.cpp. Two versions of myheader.h are shown, tracing from the recent version
to its ancestor. The file priority.cpp has compilation dependency on other.cpp. The file other.cpp has
compilation dependency on otherheader.h.
4. Deployment Diagram
A deployment diagram puts emphasis on the configuration of runtime processing nodes and the components that live on them. Deployment diagrams commonly comprise nodes and the dependencies or associations between the nodes.

Deployment diagrams are used to:


 Model devices in embedded systems, which typically comprise a software-intensive collection of hardware.
 Represent the topologies of client/server systems.
 Model fully distributed systems.
Example
The following figure shows the topology of a computer system that follows client/server
architecture. The figure illustrates a node stereotyped as server that comprises processors. The figure indicates that four or more servers are deployed in the system. Connected to the server are the
client nodes, where each node represents a terminal device such as workstation, laptop, scanner, or
printer. The nodes are represented using icons that clearly depict the real-world equivalent.
UML Behavioural Diagrams
UML behavioral diagrams visualize, specify, construct, and document the dynamic aspects of
a system. The behavioral diagrams are categorized as follows: use case diagrams, interaction
diagrams, state–chart diagrams, and activity diagrams.

5. Use Case Model


(a) Use case
A use case describes the sequence of actions a system performs yielding visible results. It
shows the interaction of things outside the system with the system itself. Use cases may be applied to
the whole system as well as a part of the system.

(b) Actor
An actor represents the roles that the users of the use cases play. An actor may be a person
(e.g. student, customer), a device (e.g. workstation), or another system (e.g. bank, institution).

The following figure shows the notations of an actor named Student and a use case called
Generate Performance Report.

(c) Use case diagrams


Use case diagrams present an outside view of how the elements in a system behave and how they can be used in context.
Use case diagrams comprise:

 Use cases
 Actors
 Relationships like dependency, generalization, and association
Use case diagrams are used:
 To model the context of a system by enclosing all the activities of the system within a rectangle and focusing on the actors outside the system that interact with it.
 To model the requirements of a system from the outside point of view.
Example

Let us consider an Automated Trading House System. We assume the following features of the
system:
 The trading house has transactions with two types of customers, individual customers and
corporate customers.
 Once the customer places an order, it is processed by the sales department and the customer
is given the bill.
 The system allows the manager to manage customer accounts and answer any queries posted
by the customer.

Interaction Diagrams
Interaction diagrams depict interactions of objects and their relationships. They also include the
messages passed between them. There are two types of interaction diagrams:
 Sequence Diagrams
 Collaboration Diagrams
Interaction diagrams are used for modeling:
 The control flow by time ordering using sequence diagrams.
 The control flow of organization using collaboration diagrams.
6. Sequence Diagrams
Sequence diagrams are interaction diagrams that illustrate the ordering of messages
according to time.
Notations: These diagrams are in the form of two-dimensional charts. The objects that initiate the
interaction are placed on the x–axis. The messages that these objects send and receive are placed
along the y–axis, in the order of increasing time from top to bottom.
Example: A sequence diagram for the Automated Trading House System is shown in the following
figure.
7. Collaboration Diagrams
Collaboration diagrams are interaction diagrams that illustrate the structure of the objects that send
and receive messages.

Notations: In these diagrams, the objects that participate in the interaction are shown using vertices.
The links that connect the objects are used to send and receive messages. The message is shown as a
labelled arrow.

Example: Collaboration diagram for the Automated Trading House System is illustrated in the
figure below.

8. State–Chart Diagrams
A state–chart diagram shows a state machine that depicts the control flow of an object from one state to another. A state machine portrays the sequence of states that an object passes through in response to events, together with its responses to those events.
State–chart diagrams comprise:
 States: simple or composite
 Transitions between states
 Events causing transitions
 Actions due to the events
State-chart diagrams are used for modeling objects which are reactive in nature.

Example

In the Automated Trading House System, let us model Order as an object and trace its
sequence. The following figure shows the corresponding state–chart diagram.

9. Activity Diagrams
An activity diagram depicts the flow of activities which are ongoing non-atomic operations in
a state machine. Activities result in actions which are atomic operations.

Activity diagrams comprise:
 Activity states and action states
 Transitions
 Objects
Activity diagrams are used for modeling:
 Workflows as viewed by actors, interacting with the system.
 Details of operations or computations using flowcharts.
Example

The following figure shows an activity diagram of a portion of the Automated Trading House
System.
UML DIAGRAMS
(Unified Modelling Language)
DEFINITION:
❑ The Unified Modelling Language (UML) is a graphical language for OOAD that gives a standard way to write a
software system’s blueprint. It helps to visualize, specify, construct, and document the artifacts of an object-oriented
system. It is used to depict the structures and the relationships in a complex system.

❑ It was developed in the 1990s as an amalgamation of several techniques, prominently the OOAD technique by Grady Booch, OMT (Object Modeling Technique) by James Rumbaugh, and OOSE (Object Oriented Software Engineering) by Ivar Jacobson. UML attempted to standardize semantic models, syntactic notations, and diagrams of OOAD.

Class Diagram
The class diagram is a central modeling technique that runs through nearly all object-oriented methods. This diagram
describes the types of objects in the system and various kinds of static relationships which exist between them.
Relationships:
There are three principal kinds of relationships which are important:
1. Association - represents relationships between instances of types (a person works for a company, a company has a number of offices).
2. Inheritance - the most obvious addition to ER diagrams for use in OO. It has an immediate correspondence to inheritance in OO design.
3. Aggregation - a form of object composition in object-oriented design.
Object Diagram
▪ An object diagram is a graph of instances, including objects and data values. A static object diagram is an
instance of a class diagram.
▪ It shows a snapshot of the detailed state of a system at a point in time. The difference is that a class
diagram represents an abstract model consisting of classes and their relationships.
▪ An object diagram represents an instance at a particular moment, which is concrete in nature. The use of
object diagrams is fairly limited.

Use Case Diagram
▪ A use-case model describes a system's functional requirements in terms of use cases. It is a model
of the system's intended functionality (use cases) and its environment (actors). Use cases enable
you to relate what you need from a system to how the system delivers on those needs.

▪ Think of a use-case model as a menu, much like the menu you'd find in a restaurant. By looking at
the menu, you know what's available to you, the individual dishes as well as their prices. You also
know what kind of cuisine the restaurant serves: Italian, Mexican, Chinese, and so on. By looking
at the menu, you get an overall impression of the dining experience that awaits you in that
restaurant. The menu, in effect, "models" the restaurant's behavior.
Sequence Diagram
The sequence diagram models the collaboration of objects based on a time sequence. It shows how the objects interact with one another in a particular scenario of a use case. With advanced visual modeling capability, complex sequence diagrams can be created in a few clicks. Besides, some modeling tools such as Visual Paradigm can generate a sequence diagram from the flow of events defined in the use case description.
Collaboration Diagram
Collaboration diagrams (known as communication diagrams in UML 2) are used to show how objects interact to perform the behavior of a particular use case, or a part of a use case. Along with sequence diagrams, collaboration diagrams are used by designers to define and clarify the roles of the objects that perform a particular flow of events of a use case. They are the primary source of information used in determining class responsibilities and interfaces.

Notation:

An object is represented by an object symbol showing the name of the object and its class underlined, separated by a colon:

Object_name : class_name
State Chart Diagram
A state chart diagram describes a state machine. A state machine defines the different states of an object, and these states are controlled by external or internal events.

Purpose of State Chart diagram:

State chart diagram is one of the five UML diagrams used to model the dynamic nature
of a system. They define different states of an object during its lifetime and these states
are changed by events.
Activity Diagram
• Activity diagram is basically a flowchart to represent the flow from one activity to
another activity. The activity can be described as an operation of the system.

• The control flow is drawn from one operation to another. This flow can be sequential, branched, or concurrent. Activity diagrams deal with all types of flow control by using different elements such as fork, join, etc.

• Activity diagrams are not only used for visualizing the dynamic nature of a system, but
they are also used to construct the executable system by using forward and reverse
engineering techniques.
Component Diagram:

In the Unified Modeling Language, a component diagram depicts how components are
wired together to form larger components or software systems.

▪ It illustrates the architecture of the software components and the dependencies between them. These software components include run-time components, executable components, and source code components.
Deployment Diagram
• Deployment diagrams are used to visualize the topology of the physical components of a
system, where the software components are deployed.

• Deployment diagrams are used to describe the static deployment view of a system.
Deployment diagrams consist of nodes and their relationships.

Purpose of Deployment Diagrams:

The term Deployment itself describes the purpose of the diagram. Deployment diagrams
are used for describing the hardware components, where software components are
deployed. Component diagrams and deployment diagrams are closely related.

UNIT – IV

Syllabus:
Implementation: Coding Principles, Coding Process, Code verification, Code documentation
Software Testing: Testing Fundamentals, Test Planning, Black Box Testing, White Box
Testing, Levels of Testing, Usability Testing, Regression testing, Debugging approaches.

Software Implementation
• The software engineer translates the design specifications into source codes in some
programming language.
• The main goal of implementation is to produce quality source codes that can reduce the cost of
testing and maintenance.
• The purpose of coding is to create a set of instructions in a programming language so that
computers execute them to perform certain operations.
• Implementation is the software development phase that affects the testing and maintenance
activities.
• A clear, readable, and understandable source code will make testing, debugging, and maintenance
tasks easier.
• Source codes are written for functional requirements but they also cover some nonfunctional
requirements.
• Unstructured and structured programming tend to produce more complex and tedious code than object-oriented programming, fourth-generation languages, component-based programming, etc.
• A well- documented code helps programmers in understanding the source codes for testing and
maintenance.
• Software engineers are instructed to follow the coding process, principles, standards, and
guidelines for writing source codes.
• Finally, the code is tested to uncover errors and to ensure that the product satisfies the needs of
the customer.

Coding Principles
• Coding principles are closely related to the principles of design and modeling.
• Developed software goes through testing, maintenance, and reengineering.
• Coding principles help programmers in writing an efficient and effective code, which is easier to
test, maintain, and reengineer.

The Coding principles are following


✓ Information hiding
✓ Structured Programming Features
✓ Maximize Cohesion and Minimize Coupling
✓ Code Reusability
✓ KISS (Keep It Simple, Stupid)
✓ Simplicity, Extensibility, and Effortlessness
✓ Code Verification
✓ Code Documentation
✓ Separation of Concern
✓ Follow Coding Standards, Guidelines, and Styles
Information Hiding
• Data encapsulation binds data structures and their operations into a single module.
• The operations declared in a module can access its data structures and allow other
modules to access them via interfaces.
• Other modules can access data structures through access specifiers and interfaces
available in modern programming languages.
• Information hiding is supported by data abstraction, which allows creating multiple
instances of abstract data type.
• Most of object-oriented programming languages such as C++, Java etc., support the
features of information hiding.
• Structured programming languages, such as C, Pascal, FORTRAN, etc., provide
information hiding in a disciplined manner.
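
As a rough sketch of the idea (the Stack example and its fixed capacity are invented for illustration, not from the text), information hiding in a C++ class looks like this: the data structure is private, and other modules reach it only through the public interface.

// Minimal sketch of information hiding: the internal representation of
// Stack is hidden; other modules use only the public interface.
class Stack {
private:
    int items[100];           // hidden data structure
    int top = 0;              // hidden state
public:
    bool push(int value) {    // interface guards the hidden data
        if (top == 100) return false;
        items[top++] = value;
        return true;
    }
    bool pop(int &value) {
        if (top == 0) return false;
        value = items[--top];
        return true;
    }
};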

Structured Programming Features


• Structured programming features linearize the program flow in some sequential way that the
programs follow during their execution.
• The organization of program flow is achieved through the following three basic constructs of
structured programming.
o Sequence: It provides sequential ordering of statements, i.e., S1, S2, and S3 … Sn.
o Selection: It provides branching of statements using if-then-else, switch-case, etc.
o Iteration: A statement can be executed repeatedly using while-do, repeat-until, while, etc.
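
A small illustrative fragment, written in the C-style used later in this unit, showing all three constructs together (the numbers are arbitrary):

#include <stdio.h>

int main(void) {
    int n = 5, sum = 0;              /* sequence: statements in order */
    int i = 1;
    while (i <= n) {                 /* iteration: repeated execution */
        sum = sum + i;
        i = i + 1;
    }
    if (sum > 10)                    /* selection: branching          */
        printf("sum %d is large\n", sum);
    else
        printf("sum %d is small\n", sum);
    return 0;
}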

Maximize Cohesion and Minimize Coupling


– Writing modular programs with the help of functions, code, block, classes, etc., may
increase dependency among modules in the software.
– The main reason is the use of shared and global data items.
– Shared data should be used as little as possible.
– Minimizing dependencies among programs will maximize cohesion within modules; that
is, there will be more use of local data rather than global data items.
– High cohesion and low coupling make a program clear, readable, and maintainable.
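
A hedged sketch of the point above (the tax-rate example is invented for illustration): the first version couples every caller through a global data item, while the second makes the dependency explicit through a parameter and relies only on local data.

/* Coupled version: modules communicate through shared global data. */
int taxRate = 18;                    /* global data increases coupling */
double taxedPriceGlobal(double price) {
    return price * (1 + taxRate / 100.0);
}

/* Decoupled version: the dependency is explicit in the interface,
   so the function relies only on its parameters and local data.   */
double taxedPrice(double price, double ratePercent) {
    return price * (1 + ratePercent / 100.0);
}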
Code Reusability: allows existing code to be used several times.
KISS (Keep It Simple, Stupid)
– Most software is structurally complex, but it can be made simpler by using modularization and other design principles.

Coding Process
• The coding process describes the steps that programmers follow for producing source codes.
• The coding process helps programmers to write bug-free source codes.
• It mainly involves coding and testing phases to generate a reliable code.
• There are two widely used coding processes:
• Traditional Coding Process
• Test-driven Development (TDD)
• The traditional coding process is an iterative and incremental process which follows the “write-compile-debug” cycle.
• TDD was introduced by Extreme Programming (XP) in agile methodologies and follows the “coding with testing” process.

Traditional Coding Process

The flow of the traditional coding process is as follows:
1. Start from the design specifications and write the source code (source file).
2. Compile and link the source file to produce an object file.
3. If there is any compilation error, debug the source code and compile again.
4. Once compilation succeeds, test the program.
5. If testing is not OK, debug and repeat; if testing is OK, the executable program is ready.
Test-Driven Development

– Developed software goes through a repeated maintenance process due to lack of quality
and inability to satisfy the customer needs.
– System functionality is decomposed into several small features.
– Test cases are designed before coding.
– Unit tests are written first for the feature specification and then the small source code is
written according to the specification.
– Source code is run against the test case.
– It is quite possible that the small code written may not meet the requirements, thus it will
fail the test.
– After failure, we need to modify the small code written before to meet the requirements
and run it again.
– If the code passes the test case, the code is correct with respect to that specification. The same process is repeated for another set of requirements specification.
The flow of test-driven development is as follows: feature specifications, then test case design, then writing the source code, then running the code against the test case. If the run is unsuccessful, fix the bug or change the code and run again; when the run is successful, refactor the code and proceed to the next feature, finally yielding the software.
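
A minimal sketch of one TDD cycle, assuming a hypothetical add() feature and using assert() as the unit test; a real project would use a test framework instead.

#include <assert.h>

int add(int a, int b);               /* feature to be implemented       */

void test_add(void) {                /* step 1: the test is written     */
    assert(add(2, 3) == 5);          /* first; it fails (does not even  */
    assert(add(-1, 1) == 0);         /* link) until add() exists        */
}

int add(int a, int b) {              /* step 2: write just enough code  */
    return a + b;                    /* to pass the test, then refactor */
}

int main(void) { test_add(); return 0; }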
Code Verification
• Code verification is the process of identifying errors, failures, and faults in source codes, which
cause the system to fail in performing specified tasks.
• Code verification ensures that functional specifications are implemented correctly using a
programming language.
• There are several techniques in software engineering which are used for code verification.
– Code review
– Static analysis
– Testing
Code review
– It is a traditional method for verification used in the software life cycle. It mainly aims at
discovering and fixing mistakes in source codes.
– Code review is done after a successful compilation of source codes. Experts review codes
by using their expertise in coding.
– The errors found during code verification are debugged.
• Following methods are used for code review:
– Code walkthrough
– Code inspection
– Pair programming
Code walkthrough
– A code walkthrough is a technical and peer review process of finding mistakes in source
codes.
– The walkthrough team consists of a reviewee and a team of reviewers.
– The reviewers examine the code either using a set of test cases or by changing the source
code.
– During the walkthrough meeting, the reviewers discuss their findings to correct mistakes
or improve the code.
– The reviewers may also suggest alternate methods for code improvement.
– The walkthrough session is beneficial for code verification, especially when the code is
not properly documented.
– Sometimes, this technique becomes time consuming and tedious. Therefore, the
walkthrough session is kept short.
Code inspection
– It aims at detecting programming defects in the source code.
– The code inspection team consists of a programmer, a designer, and a tester.
– The inspectors are provided the code and a document of checklists.
– In the inspection process, definite roles are assigned to the team members, who inspect
the code in a more rigorous manner. Also, the checklists help them to catch errors in a
smooth manner.
– Code inspection takes less time as compared to code walkthrough.
– Most of the software companies prefer software inspection process for code review.
Pair programming
– It is an extreme programming practice in which two programmers work together at one
workstation, i.e., one monitor and one keyboard. In the current practice, programmers can
use two keyboards.
– During pair programming, code review is done by the programmers who write the code.
It is possible that they are unable to see their own mistakes.
– With the help of pair programming, the pair works with better concentration.
– They catch simple mistakes such as ambiguous variable and method names easily. The
pair shares knowledge and provides quick solution.
– Pair programming improves the quality of software and promotes knowledge sharing
between the team members.
Static analysis
– In static analysis, source codes are not executed; rather, they are given as input to some tool that reports on program behavior.
– Static analysis is the process of automatically checking computer programs.
– This is performed with program analysis tools.
– Static analysis tools help to identify redundancies in source codes.
– They identify idempotent operations, data declared but not used, dead code, missing data, connections that lead to unreachable code segments, and redundant assignments.
– They also identify errors in interfacing between programs, identify mismatch errors in parameters, and assure compliance with coding standards.
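
For illustration, a fragment (invented for this purpose) containing the kinds of defects a static analysis tool would typically flag without ever running the code:

int classify(int x) {
    int unused = 10;                 /* data declared but never used   */
    if (x > 0)
        return 1;
    else
        return 0;
    return -1;                       /* dead / unreachable code        */
}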
Testing
– Dynamic analysis works with test data by executing test cases.
– Testing is performed before the integration of programs for system testing.
– Also, it is intended to ensure that the software satisfies customer needs.

Code Documentation
• Software development, operation, and maintenance processes include various kinds of
documents.
• Documents act as a communication medium between the different team members of
development.
• They help users in understanding the system operations.
• Documents prepared during development are problem statement, software requirement
specification (SRS) document, design document, documentation in the source codes, and test
document.
• These documents are used by the development and maintenance team members.
• The following categories of documentation are done in the system:
• Internal documentation
• System documentation
• User documentation
• Process documentation
• Daily documentation

Software Testing
✓ Software testing is the process of finding defects in the software so that these can be debugged
and the defect-free software can meet the customer needs and expectations.
✓ Software testing is one of the important phases in software development life cycle.
✓ A quality software can be achieved through testing.
✓ Effective testing reduces the maintenance cost and provides reliable outcomes.
✓ An example of ineffective testing is the Y2K problem.
✓ The intention of software testing process is to produce a defect-free system.
Testing Fundamentals
✓ Error is the discrepancy between the actual value of the output of software and the theoretically correct value of the output for that given input.
✓ An error, also known as a variance, mistake, or problem, is the unintended behavior of software.
✓ Fault is the cause of an error. A fault, also called a defect or bug, is the manifestation of one or more errors. It causes a system to fail in achieving the intended task.
✓ Failure is the deviation of the observed behavior from the specified behavior.
✓ It occurs when faulty code is executed, leading to an incorrect outcome. Thus, the presence of faults may lead to system failure.
✓ A failure is thus the manifestation of a fault in the running system or software.

Test Planning
✓ Testing is a long activity in which several test cases are executed by different members of the test team, possibly in different environments, at different locations, and on different machines.
✓ Test planning specifies the scope, approach, resources, and schedule of the testing activities.
✓ Test planning includes the following activities:
✓ Create test plan.
✓ Design test cases.
✓ Design test stubs and test drivers.
✓ Test case execution.
✓ Defect tracking and statistics.
✓ Prepare test summary report.
Creation of a Test Plan
• A test plan is a document that describes the scope and activities of testing. It is a formal
document for testing software.
• A test plan contains the following attributes:
– Test plan ID
– Purpose
– Test items
– References
– Features to be tested
– Schedule
– Responsibilities
– Test environment
– Test case libraries and standards
– Test strategy
– Test deliverables
– Release criteria
– Expected risk
Design Test Cases
✓ A test case is a set of inputs and expected results under which a program unit is exercised with
the purpose of causing failure and detecting faults.
✓ A good test case is one that has the high probability of detecting defects in the system.
✓ A well-designed test case can be traceable, repeatable, and can be reused in other software
development.
✓ The intention of designing a set of test cases is to show that the program under test is incorrect, i.e., to make it fail.
✓ The main objective of test case selection is to detect errors in the program unit.
✓ A possible way is to exercise all the possible paths and variables to find undiscovered errors. But performing exhaustive testing is difficult because it takes a lot of time and effort. Exhaustive testing includes all possible inputs to the program unit.
✓ Test script: A test script is a procedure that is performed on a system under test to verify that the
system functions as expected. Test case is the baseline to create test scripts using automated tool.
✓ Test suite: A test suite is a collection of test cases. It is the composite of test cases designed for a
system.
✓ Test data: Test data are needed when writing and executing test cases for any kind of test. Test
data is sometimes also known as test mixture.
✓ Test harness: Test harness is the collection of software, tools, input/output data, and
configurations required for test.
✓ Test scenario: Test scenario is the set of test cases in which requirements are tested from end to
end. There can be independent test cases or a series of test cases that follow each other.
✓ A test case includes the following fields:
✓ Test plan ID
✓ Test case ID
✓ Feature to be tested
✓ Preconditions
✓ Test script or test procedure
✓ Test data
✓ Expected results

Example: Test case to issue a book to the student member.


Test Stubs and Test Drivers
✓ A test driver is a simulated module that calls the module under test. The test driver specifies the
parameters to call the module under test.
✓ A test stub is a simulated module that is called by the module under test.
✓ The test stub and test drivers are the dummy modules that are basically written for the purpose of
providing input/output or the interface behavior for testing.
✓ Test stub and test drivers are required at the time of unit testing a module.
✓ Design of test stub takes more effort as compared to test driver.
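
A small sketch, with invented module and function names, of how a driver and a stub surround a module under test:

#include <stdio.h>

double getDiscount(int memberId);    /* belongs to a module not yet built */

/* Module under test: it CALLS getDiscount(). */
double totalPrice(int memberId, double amount) {
    return amount - getDiscount(memberId) * amount;
}

/* Test stub: a dummy implementation of the called module,
   returning a fixed value instead of real logic.           */
double getDiscount(int memberId) { return 0.10; }

/* Test driver: a dummy caller that supplies parameters to the
   module under test and checks the observed result.          */
int main(void) {
    double result = totalPrice(42, 100.0);
    printf("totalPrice = %.2f (expected 90.00)\n", result);
    return 0;
}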

Test Case Execution


✓ Once the test cases, test drivers, and test stubs are designed for a test plan, the next task is to
execute the test cases.
✓ The test environment is set up to perform testing.
✓ Software tester runs the test procedure one by one using valid and invalid data and observes the
results.
✓ On executing test cases, the expected results and their behavior is recorded in a test summary
report.
Test Summary Report
✓ Test summary report is prepared to ensure whether the module under test satisfies the acceptance
criteria or not.
✓ This summary report is directed to the stakeholders to know the status of module.
✓ Test summary report covers the results of the items from the test plan, which were planned at the
beginning to test the module.
✓ It includes the number of test cases executed and the type of errors observed.
Defect Tracking and Statistics
 A project has a lot of defects, which are inspected, retested, and managed in a test log.
 Test team tracks various aspects of the testing progress, such as the location of defective modules
and estimation of progress with respect to the schedule, resources, and completion criteria.
 A record of the ignored defects, unresolved defects, stopped due to extra effort and resource
requirements, etc., are managed.
 The defects whose identification and the cause of occurrence is determined are debugged, fixed,
and verified before closing the testing.
Static Testing vs Dynamic Testing

Static testing:
- A technique for assessing the structural characteristics of code.
- Examines the structure of the code; the code is not executed (no actual execution involved).
- Types: 1. Inspections 2. Structured walkthroughs 3. Technical reviews

Dynamic testing:
- Executes the code on some test data; involves actual execution.
- Types: 1. Black box testing (i. boundary value analysis, ii. equivalence class partitioning, iii. state table-based testing, iv. decision table-based testing, v. cause-effect graph-based testing, vi. error guessing) 2. White box testing
Dynamic Testing I: Black Box Testing

1. The technique considers the functional requirements of the system.
2. It takes no notice of the internal structure of the code.
3. Black-box techniques are used for designing effective test cases.

Different types of BBT:

i. Boundary value analysis
ii. Equivalence class partitioning
iii. State table-based testing
iv. Decision table-based testing
v. Cause-effect graph-based testing
vi. Error guessing
Cause-Effect Graphing

 The main drawback of the equivalence class partitioning and boundary value analysis methods is that they consider only a single input domain at a time.
 Like decision tables, cause-effect graphing is another technique for testing combinations of input conditions.
 Cause-effect graphing technique begins with finding the relationships among input
conditions known as causes and the output conditions known as effects.
 A cause is any condition in the requirement that affects the program output. Similarly,
an effect is the outcome of some input conditions.
 The logical relationships among input and output conditions are expressed in terms of
cause-effect graph.
 Each condition (either cause or effect) is represented as a node in the cause-effect graph.
Each condition has the value whether true or false.

Notations for cause-effect graph

The process of cause-effect graphing testing is as follows:


1. From the requirements, identify causes and effects and assign them a unique identification
number.
2. The relationships among causes and effects are established by combining the causes and
effects; and annotated into the cause-effect graph.
3. Transform the cause-effect graph into the decision table and each column in the decision
table represents a test case.
Example: Perform cause-effect graphing technique to issue a book to the student member
of the library. The membership is provided for a session that can be renewed.
In this example, the causes and the effects identified are as follows:
Causes:
C1: Library membership is valid
C2: Membership expired
C3: Verify book limit
C4: Verify book availability
Effects:
E1: Renew membership
E2: Exceed book limit
E3: Issue book

Figure: Cause-effect graph for issuing book from library

Decision table for the cause-effect graph shown in the above figure:

           TC1   TC2   TC3
Causes:
C1         --    T     T
C2         T     --    --
C3         --    T     F
C4         --    T     --
Effects:
E1         X     --    --
E2         --    --    X
E3         --    X     --
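
To relate the table's columns to code, here is a sketch of the underlying issue-book logic (the function and parameter names are hypothetical). Each effect corresponds to one test-case column above.

const char *issueBook(int membershipValid, int membershipExpired,
                      int withinBookLimit, int bookAvailable) {
    if (membershipExpired)                    /* C2 -> E1 (column TC1)        */
        return "Renew membership";
    if (membershipValid && !withinBookLimit)  /* C1 and not C3 -> E2 (TC3)    */
        return "Exceed book limit";
    if (membershipValid && withinBookLimit && bookAvailable)
        return "Issue book";                  /* C1, C3, C4 -> E3 (TC2)       */
    return "Request rejected";
}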
Error Guessing:

It is the preferred method used when all other previous methods fail. Sometimes it is used to test some special cases. It is a very practical approach wherein the tester uses intuition and makes a guess about where a bug may be. The tester does not have to use any particular testing technique. However, this capability comes with years of experience in a particular field of testing.

Some special cases in the system are as follows:


Ex: Consider the system for calculating the roots of a quadratic equation:
1. What will happen when a=0 in the quadratic equation?
i. If a=0 then the equation is not quadratic
ii. For calculation of roots, division is by zero
2. What will happen if all inputs are negative?
3. What will happen if the input list is empty?
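
A sketch of how the first guessed special case is guarded in code (the function is illustrative, not from the text):

#include <stdio.h>
#include <math.h>

void findRoots(double a, double b, double c) {
    if (a == 0) {                    /* guessed special case: not quadratic, */
        if (b != 0)                  /* and the root formula divides by zero */
            printf("linear root: %f\n", -c / b);
        else
            printf("no variable term: not an equation in x\n");
        return;
    }
    double d = b * b - 4 * a * c;
    if (d < 0) {
        printf("roots are complex\n");
        return;
    }
    printf("roots: %f and %f\n",
           (-b + sqrt(d)) / (2 * a), (-b - sqrt(d)) / (2 * a));
}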
White Box Testing

White-box testing is another effective testing technique in dynamic testing. It is also known as glass-box testing, as everything that is required to implement the software is visible. The entire design, structure, and code of the software have to be studied for this type of testing. It is obvious that the developer is very close to this type of testing. Often, developers use white-box testing techniques to test their own design and code. This testing is also known as structural or development testing. In white-box testing, structure means the logic of the program which has been implemented in the language code. The intention is to test this logic so that the required results or functionalities can be achieved. Thus, white-box testing ensures that the internal parts of the software are adequately tested.

Types of White Box Testing:

1. Logic coverage criteria
2. Basis path testing
3. Data flow testing
4. Mutation testing
5.2 LOGIC COVERAGE CRITERIA
Structural testing considers the program code, and test cases are designed based on the logic of the program such that every element of the logic is covered. Therefore the intention in white-box testing is to cover the whole logic. Discussed below are the basic forms of logic coverage.

Statement Coverage
The first kind of logic coverage can be identified in the form of statements. It is assumed that if all the statements of the module are executed once, every bug will be revealed.
Consider the following code segment shown in Fig. 5.1.

scanf("%d", &x);
scanf("%d", &y);
while (x != y)
{
    if (x > y)
        x = x - y;
    else
        y = y - x;
}
printf("x = %d", x);
printf("y = %d", y);

Figure 5.1 Sample code

If we want to cover every statement in the above code, then the following test cases must be designed:
Test case 1: x = y = n, where n is any number
Test case 2: x = n, y = n′, where n and n′ are different numbers
Test case 1 just skips the while loop, so the loop statements are not executed. With test case 2, the loop is also executed. However, every statement inside the loop is still not executed, so two more cases are designed:
Test case 3: x > y
Test case 4: x < y
These test cases will cover every statement in the code segment; however, statement coverage is a poor criterion for logic coverage. We can see that test cases 3 and 4 are sufficient to execute all the statements in the code. But if we execute only test cases 3 and 4, then the condition and path exercised by test case 1 will never be tested and errors will go undetected. Thus, statement coverage is a necessary but not a sufficient criterion for logic coverage.

Decision or Branch Coverage


Branch coverage states that each decision takes on all possible outcomes (True
or False) at least once. In other words, each branch direction must be traversed
at least once. In the previous sample code shown in Figure 5.1, while and if
statements have two outcomes: True and False. So test cases must be designed
such that both outcomes for while and if statements are tested. The test cases
are designed as:
Test case 1: x = y
Test case 2: x != y
Test case 3: x < y
Test case 4: x > y

Condition Coverage
Condition coverage states that each condition in a decision takes on all possible outcomes at least once. For example, consider the following statement:
while ((I <= 5) && (J < COUNT))
In this loop statement, there are two conditions. So test cases should be designed such that both conditions are tested for True and False outcomes. The following test cases are designed:
Test case 1: I <= 5, J < COUNT
Test case 2: I > 5, J > COUNT

Decision/condition Coverage
Condition coverage in a decision does not mean that the decision has been
covered. If the decision
if (A && B)
is being tested, the condition coverage would allow one to write two test cases:
Test case 1: A is True, B is False.
Test case 2: A is False, B is True.
But these test cases would not cause the THEN clause of the IF to execute
(i.e. execution of decision). The obvious way out of this dilemma is a criterion
called decision/condition coverage. It requires sufficient test cases such that
each condition in a decision takes on all possible outcomes at least once, each
decision takes on all possible outcomes at least once, and each point of entry
is invoked at least once [2].

Multiple condition coverage
In the case of multiple conditions, even decision/condition coverage fails to exercise all outcomes of all conditions. The reason is that we have considered all possible outcomes of each condition in the decision, but we have not taken all combinations of different multiple conditions. Certain conditions mask other conditions. For example, if an AND condition is False, none of the subsequent conditions in the expression will be evaluated. Similarly, if an OR condition is True, none of the subsequent conditions will be evaluated. Thus, condition coverage and decision/condition coverage need not necessarily uncover all the errors.
Therefore, multiple condition coverage requires that we write sufficient test cases such that all possible combinations of condition outcomes in each decision and all points of entry are invoked at least once. Thus, as in decision/condition coverage, all possible combinations of multiple conditions should be considered. For the decision if (A && B), the following test cases can be there:
Test case 1: A = True, B = True
Test case 2: A = True, B = False
Test case 3: A = False, B = True
Test case 4: A = False, B = False
5.6 DATA FLOW TESTING
In path coverage, the stress was on covering a path using statement or branch coverage. However, data and data integrity are as important as code and code integrity of a module. We have checked every possibility of the control flow of a module. But what about the data flow in the module? Has every data object been initialized prior to use? Have all defined data objects been used for something? These questions can be answered if we consider data objects in the control flow of a module.
Data flow testing is a white-box testing technique that can be used to detect improper use of data values due to coding errors. Errors may be unintentionally introduced in a program by programmers. For instance, a programmer might use a variable without defining it. Moreover, he may define a variable, but not initialize it, and then use that variable in a predicate. For example,
int a;
if(a == 67) { }
In this way, data flow testing gives a chance to look out for inappropriate data definition and its use in predicates, computations, and termination. It identifies potential bugs by examining the patterns in which a piece of data is used. For example, if out-of-scope data is being used in a computation, then it is a bug. There may be several patterns like this which indicate data anomalies.
To examine the patterns, the control flow graph of a program is used. This test strategy selects paths in the module's control flow such that various sequences of data objects can be chosen. The major focus is on the points at which the data receives values and the places at which the data so initialized is referenced. Thus, we have to choose enough paths in the control flow to ensure that every data object is initialized before use and that all defined data objects have been used somewhere. Data flow testing closely examines the state of the data in the control flow graph, resulting in a richer test suite than the one obtained from control flow graph based path testing strategies like branch coverage, all-statement coverage, etc.

5.6.1 STATE OF A DATA OBJECT


A data object can be in the following states:
Defined (d) A data object is called defined when it is initialized, i.e. when
it is on the left side of an assignment statement. Defined state can also
be used to mean that a file has been opened, a dynamically allocated
object has been allocated, something is pushed onto the stack, a record
written, and so on [9].
Killed/Undefined/Released (k) When the data has been reinitialized
or the scope of a loop control variable finishes, i.e. exiting the loop or
memory is released dynamically or a file has been closed.
Usage (u) When the data object is on the right side of assignment or
used as a control variable in a loop, or in an expression used to evaluate
the control flow of a case statement, or as a pointer to an object, etc.
In general, we say that the usage is either computational use (c-use) or
predicate use (p-use).
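
These states can be read off a small fragment (invented for illustration):

void demo(void) {
    int x = 10;          /* d: x is defined (initialized)          */
    int y = x + 5;       /* c-use of x; d: y is defined            */
    if (y > 12)          /* p-use of y (predicate use)             */
        x = 0;           /* d: x is redefined                      */
}                        /* k: x and y are killed at end of scope  */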

5.6.2 DATA-FLOW ANOMALIES

Data-flow anomalies represent the patterns of data usage which may lead to an incorrect execution of the code. An anomaly is denoted by a two-character sequence of actions. For example, ‘dk’ means a variable is defined and killed without any use, which is a potential bug. There are nine possible two-character combinations, out of which only four are data anomalies, as shown in Table 5.1.

Table 5.1 Two-character data-flow anomalies

Anomaly   Explanation     Effect of Anomaly
du        Define-use      Allowed. Normal case.
dk        Define-kill     Potential bug. Data is killed without use after definition.
ud        Use-define      Data is used and then redefined. Allowed. Usually not a bug because the language permits reassignment at almost any time.
uk        Use-kill        Allowed. Normal situation.
ku        Kill-use        Serious bug because the data is used after being killed.
kd        Kill-define     Data is killed and then redefined. Allowed.
dd        Define-define   Redefining a variable without using it. Harmless bug, but not allowed.
uu        Use-use         Allowed. Normal case.
kk        Kill-kill       Harmless bug, but not allowed.
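
For illustration, a fragment (invented for this purpose) containing two of the anomalies from Table 5.1:

#include <stdlib.h>

void anomalies(void) {
    int x = 5;                            /* d: define x                  */
    x = 7;                                /* dd: redefined without use    */
    int *p = (int *)malloc(sizeof(int));
    *p = x;                               /* u: use of x; d: define *p    */
    free(p);                              /* k: *p is killed              */
    /* *p = 1;  would be a ku anomaly: use after kill (serious bug) */
}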

It can be observed that not all data-flow anomalies are harmful, but most
of them are suspicious and indicate that an error can occur. In addition to
the above two-character data anomalies, there may be single-character data
anomalies also. To represent these types of anomalies, we take the following
conventions:
~x : indicates all prior actions are not of interest to x.
x~ : indicates all post actions are not of interest to x.
All single-character data anomalies are listed in Table 5.2.
Table 5.2 Single-character data-flow anomalies

Anomaly   Explanation        Effect of Anomaly
~d        First definition   Normal situation. Allowed.
~u        First use          Data is used without defining it. Potential bug.
~k        First kill         Data is killed before defining it. Potential bug.
d~        Define last        Potential bug.
u~        Use last           Normal case. Allowed.
k~        Kill last          Normal case. Allowed.

5.6.3 TERMINOLOGY USED IN DATA FLOW TESTING


In this section, some terminology [9,20], which will help in understanding all
the concepts related to data-flow testing, is being discussed. Suppose P is a
program that has a graph G(P ) and a set of variables V. The graph has a single
entry and exit node.
Definition node Defining a variable means assigning value to a variable for
the very first time in a program. For example, input statements, assignment
statements, loop control statements, procedure calls, etc.
Usage node It means the variable has been used in some statement of the
program. Node n that belongs to G(P ) is a usage node of variable v, if the value
of variable v is used at the statement corresponding to node n. For example,
output statements, assignment statements (right), conditional statements, loop
control statements, etc.
A usage node can be of the following two types:
(i) Predicate usage node: If usage node n is a predicate node, then n is a predicate usage node.
(ii) Computation usage node: If usage node n corresponds to a computation statement in a program other than a predicate, then it is called a computation usage node.
Loop-free path segment It is a path segment for which every node is visited
once at most.
Simple path segment It is a path segment in which at most one node is visited
twice. A simple path segment is either loop-free or if there is a loop, only one
node is involved.
Definition-use path (du-path) A du-path with respect to a variable v is a path
between the definition node and the usage node of that variable. Usage node
can either be a p-usage or a c-usage node.
Definition-clear path (dc-path) A dc-path with respect to a variable v is a path between the definition node and the usage node such that no other node in the path is a defining node of variable v.
The du-paths which are not dc-paths are important from the testing viewpoint, as these are potential problematic spots for testing persons. Those du-paths which are definition-clear are easy to test in comparison to du-paths which are not dc-paths. The application of data flow testing can be extended to debugging, where a testing person finds the problematic areas in code to trace the bug. So the du-paths which are not dc-paths need more attention.

5.6.4 STATIC DATA FLOW TESTING


With static analysis, the source code is analysed without executing it. Let us
consider an example of an application given below.

Example 5.10

Consider the program given below for calculating the gross salary of an
employee in an organization. If his basic salary is less than Rs 1500, then
HRA = 10% of basic salary and DA = 90% of the basic. If his salary is either
equal to or above Rs 1500, then HRA = Rs 500 and DA = 98% of the basic
salary. Calculate his gross salary.
main()
{
1. float bs, gs, da, hra = 0;
2. printf("Enter basic salary");
3. scanf("%f", &bs);
4. if(bs < 1500)
5. {
6. hra = bs * 10/100;
7. da = bs * 90/100;
8. }
9. else
10. {
11. hra = 500;
12. da = bs * 98/100;
13. }
14. gs = bs + hra + da;
15. printf("Gross Salary = Rs. %f", gs);
16. }

Find out the define-use-kill patterns for all the variables in the source code
of this application.
Solution
For variable ‘bs’, the define-use-kill patterns are given below.

Pattern   Line Numbers             Explanation
~d        3                        Normal case. Allowed.
du        3-4                      Normal case. Allowed.
uu        4-6, 6-7, 7-12, 12-14    Normal case. Allowed.
uk        14-16                    Normal case. Allowed.
k~        16                       Normal case. Allowed.

For variable ‘gs’, the define-use-kill patterns are given below.

Pattern   Line Numbers   Explanation
~d        14             Normal case. Allowed.
du        14-15          Normal case. Allowed.
uk        15-16          Normal case. Allowed.
k~        16             Normal case. Allowed.

For variable ‘da’, the define-use-kill patterns are given below.

Pattern   Line Numbers   Explanation
~d        7              Normal case. Allowed.
du        7-14           Normal case. Allowed.
uk        14-16          Normal case. Allowed.
k~        16             Normal case. Allowed.

For variable ‘hra’, the define-use-kill patterns are given below.

Pattern   Line Numbers    Explanation
~d        1               Normal case. Allowed.
dd        1-6 or 1-11     Double definition. Harmless bug, but not allowed.
du        6-14 or 11-14   Normal case. Allowed.
uk        14-16           Normal case. Allowed.
k~        16              Normal case. Allowed.
From the above static analysis, it was observed that static data flow testing for
the variable ‘hra’ discovered one bug of double definition in line number 1.

Static Analysis is not Enough

It is not always possible to determine the state of a data variable by just static analysis of the code. For example, if a data variable is used as an index into an array (a collection of data elements), we cannot determine its state by static analysis. It may also be the case that the index is generated dynamically during execution, so we cannot guarantee the state of the array element referenced by that index. Moreover, static data-flow testing might mark a certain piece of code as anomalous even though it is never executed and hence is not actually anomalous. Thus, not all anomalies can be determined using static analysis, and this problem is provably unsolvable.

5.6.5 DYNAMIC DATA FLOW TESTING

Dynamic data flow testing is performed with the intention of uncovering possible bugs in data usage during the execution of the code. The test cases are designed in such a way that every definition of a data variable is traced to each of its uses, and every use is traced to each of its definitions. Various strategies are employed for the creation of test cases. All these strategies are defined below.
All-du Paths (ADUP) It states that every du-path from every definition of
every variable to every use of that definition should be exercised under some
test. It is the strongest data flow testing strategy, since it is a superset of all other
data flow testing strategies. Moreover, this strategy requires the maximum
number of paths for testing.
All-uses (AU) This states that for every use of the variable, there is a path from
the definition of that variable (nearest to the use in backward direction) to the
use.
All-p-uses/Some-c-uses (APU + C) This strategy states that for every variable
and every definition of that variable, include at least one dc-path from the
definition to every predicate use. If there are definitions of the variable with
no p-use following it, then add computational use (c-use) test cases as required
to cover every definition.
All-c-uses/Some-p-uses (ACU + P) This strategy states that for every variable
and every definition of that variable, include at least one dc-path from the
definition to every computational use. If there are definitions of the variable
with no c-use following them, then add predicate use (p-use) test cases as
required to cover every definition.
All-Predicate-Uses (APU) It is derived from the APU+C strategy and states
that for every variable, there is a path from every definition to every p-use
of that definition. If there is a definition with no p-use following it, then it is
dropped from contention.
All-Computational-Uses (ACU) It is derived from the ACU+P strategy
and states that for every variable, there is a path from every definition to every
c-use of that definition. If there is a definition with no c-use following it, then it
is dropped from contention.
All-Definition (AD) It states that every definition of every variable should be
covered by at least one use of that variable, be that a computational use or a
predicate use.

Example 5.11

Consider the program given below. Draw its control flow graph and data flow
graph for each variable used in the program, and derive data flow testing
paths with all the strategies discussed above.
main()
{
int work;
0. double payment = 0;
1. scanf("%d", &work);
2. if (work > 0) {
3. payment = 40;
4. if (work > 20)
5. {
6. if (work <= 30)
7. payment = payment + (work - 25) * 0.5;
8. else
9. {
10. payment = payment + 50 + (work - 30) * 0.1;
11. if (payment >= 3000)
12. payment = payment * 0.9;
13. }
14. }
15. }
16. printf("Final payment = %f", payment);
17. }
Solution
Figure 5.12 shows the control flow graph for the given program.
[Figure 5.12 DD graph for Example 5.11 — nodes correspond to the numbered
statements 0-17 of the program; diagram not reproduced.]

Figure 5.13 shows the data flow graph for the variable ‘payment’.

[Figure 5.13 Data flow graph for ‘payment’ — node 0: define; node 3: define;
node 7: define & c-use; node 10: define & c-use; node 11: p-use;
node 12: define & c-use; node 16: c-use; diagram not reproduced.]


Figure 5.14 shows the data flow graph for the variable ‘work’.
[Figure 5.14 Data flow graph for variable ‘work’ — node 1: define;
nodes 2, 4, 6: p-use; nodes 7, 10: c-use; diagram not reproduced.]

Prepare a list of all the definition nodes and usage nodes for all the vari-
ables in the program.

Variable    Defined At          Used At
payment     0, 3, 7, 10, 12     7, 10, 11, 12, 16
work        1                   2, 4, 6, 7, 10

Data flow testing paths for each variable are shown in Table 5.3.

Table 5.3 Data flow testing paths

All-uses (AU)
  payment: 3-4-5-6-7; 10-11; 10-11-12; 12-13-14-15-16; 3-4-5-6-8-9-10
  work: 1-2; 1-2-3-4; 1-2-3-4-5-6; 1-2-3-4-5-6-7; 1-2-3-4-5-6-8-9-10

All-p-uses (APU)
  payment: 0-1-2-3-4-5-6-8-9-10-11
  work: 1-2; 1-2-3-4; 1-2-3-4-5-6

All-c-uses (ACU)
  payment: 0-1-2-16; 3-4-5-6-7; 3-4-5-6-8-9-10; 3-4-15-16; 7-14-15-16;
           10-11-12; 10-11-13-14-15-16; 12-13-14-15-16
  work: 1-2-3-4-5-6-7; 1-2-3-4-5-6-8-9-10

All-p-uses/Some-c-uses (APU + C)
  payment: 0-1-2-3-4-5-6-8-9-10-11; 10-11-12; 12-13-14-15-16
  work: 1-2; 1-2-3-4; 1-2-3-4-5-6; 1-2-3-4-5-6-8-9-10

All-c-uses/Some-p-uses (ACU + P)
  payment: 0-1-2-16; 3-4-5-6-7; 3-4-5-6-8-9-10; 3-4-15-16; 7-14-15-16;
           10-11-12; 10-11-13-14-15-16; 12-13-14-15-16; 0-1-2-3-4-5-6-8-9-10-11
  work: 1-2-3-4-5-6-7; 1-2-3-4-5-6-8-9-10; 1-2-3-4-5-6

All-du-paths (ADUP)
  payment: 0-1-2-3-4-5-6-8-9-10-11; 0-1-2-16; 3-4-5-6-7; 3-4-5-6-8-9-10;
           3-4-15-16; 7-14-15-16; 10-11-12; 10-11-13-14-15-16; 12-13-14-15-16
  work: 1-2; 1-2-3-4; 1-2-3-4-5-6; 1-2-3-4-5-6-7; 1-2-3-4-5-6-8-9-10

All-definitions (AD)
  payment: 0-1-2-16; 3-4-5-6-7; 7-14-15-16; 10-11; 12-13-14-15-16
  work: 1-2

5.6.6 ORDERING OF DATA FLOW TESTING STRATEGIES


While selecting a test case, we need to analyse the relative strengths of various
data flow testing strategies. Figure 5.15 depicts the relative strength of the data
flow strategies. In this figure, the relative strength of the testing strategies
reduces along the direction of the arrows; all-du-paths (ADUP) is thus the
strongest criterion for selecting test cases.

[Figure 5.15 Data flow testing strategies — subsumption hierarchy: ADUP subsumes
AU; AU subsumes ACU + P and APU + C; ACU + P subsumes ACU and AD; APU + C
subsumes APU and AD.]


Mutation Testing
➢ Mutation testing is another powerful white-box testing technique. It takes a different approach
from control flow and data flow based testing, which find errors by exercising test cases along
different paths of the program.
➢ Mutation is the act of slightly changing a program.
➢ The changed program is known as the mutated program, or a mutant of the original program.
➢ The process of creating mutants is called mutation.
➢ The mutants are executed with the test cases of the original program to determine whether the
test cases are capable of detecting the change between the original program and its mutants.
➢ In mutation testing, mutants are created by applying mutation operators. A mutation operator is
a rule for making a small syntactic change; the set of applicable operators depends on the
programming language.
➢ Example: The following program is written in C language to calculate the sum of numbers and
their square values.

main()
{
int i, sum=0, sqsum=0;
for (i=1; i<5; i++)
{
sum += i;
sqsum += i*i;
printf("%2d %2d\n", i, i*i);
}
printf("The sum is: %d and square sum is: %d", sum, sqsum);
}

A mutant of this program, with two mutated statements (a higher-order mutant), is shown below:


main()
{
int i, sum=0, sqsum=0;
for (i=1; i<=5; i++)        /* Mutated statement 1 */
{
sum += i;
sqsum = i*i;                /* Mutated statement 2 */
printf("%2d %2d\n", i, i*i);
}
printf("The sum is: %d and square sum is: %d", sum, sqsum);
}
➢ A mutant can be first order or higher order.
➢ If the mutant is created by a single change in the program, it is known as a first-order mutant.
Higher-order mutants are produced by making several changes in a program.
➢ If a test case distinguishes a mutant program from the parent (original) program, the mutant is
said to be dead or killed.
➢ Dead mutants are helpful to detect and fix problems. If no test case distinguishes the mutant
program from the original program, it is called a live mutant.
➢ If a live mutant can never be distinguished from the original program by any test case, the
mutant and the original program are said to be equivalent.
➢ A programmer computes the mutation score after mutation testing.
➢ Let T be a test suite, L the set of live mutants, D the set of dead (killed) mutants, E the set of
equivalent mutants, and N the set of all mutants generated.
➢ The mutation score of test suite T, M(T), is computed as follows:

M(T) = |D| / (|N| - |E|)
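As a worked illustration (our numbers, not from the text): a test that checks the final values of
sum and sqsum kills both mutants above, since the original program yields sum = 1+2+3+4 = 10 and
sqsum = 1+4+9+16 = 30, whereas mutated statement 1 yields sum = 15 and mutated statement 2 leaves
sqsum = 16. With N = 2 mutants generated, D = 2 dead and E = 0 equivalent, the mutation score is
M(T) = 2/(2-0) = 1.0. A minimal sketch of such a killing test:

#include <assert.h>
main()
{
int i, sum = 0, sqsum = 0;
for (i = 1; i < 5; i++)
{
sum += i;
sqsum += i * i;
}
assert(sum == 10);   /* mutant 1 (i <= 5) yields 15 -> killed */
assert(sqsum == 30); /* mutant 2 (sqsum = i*i) yields 16 -> killed */
}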
Levels of Testing
✓ Testing is a defect detection technique that is performed at various levels. Testing begins once a
module is fully constructed.
✓ Although software engineers test the source code after it is written, this alone is not enough
to satisfy the customer’s needs and expectations.
✓ Software is developed through a series of activities, i.e., customer needs, specification, design,
and coding.
✓ Each of these activities has different aims. Therefore, testing is performed at various levels of
development phases to achieve their purpose.

Unit Testing
✓ A unit is a program unit — a module, component, procedure, or subroutine of a system —
developed by the programmer.
✓ The aim of unit testing is to find bugs by isolating an individual module using test stubs and
test drivers, and by executing test cases on it.
✓ During module testing, test stub and test drivers are designed for proper testing in a test
environment.
✓ The unit testing is performed to detect both structural and functional errors in the module.
✓ Therefore, test cases are designed using white-box and black-box testing strategies for unit
testing.
✓ Most of the module errors are captured through white-box testing.
Fig: Unit test environment
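A minimal sketch of such a unit test environment in C (all names here are illustrative, not from
the text): a test driver feeds inputs to the module under test and checks its output, while a test
stub stands in for a lower-level module that is not yet available.

#include <stdio.h>

float get_da(float bs); /* lower-level module, not yet implemented */

/* Module under test: computes gross salary for bs >= 1500 */
float gross(float bs)
{
return bs + 500 + get_da(bs);
}

/* Test stub: simulates the missing lower-level module with a fixed rule */
float get_da(float bs)
{
return bs * 98 / 100;
}

/* Test driver: executes a test case on the isolated module */
main()
{
float g = gross(2000); /* expected: 2000 + 500 + 1960 = 4460 */
printf(g > 4459 && g < 4461 ? "PASS" : "FAIL");
}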
Integration Testing
✓ Integration testing is another level of testing, which is performed after unit testing of modules.
✓ It is carried out keeping in view the design decomposition of the system into subsystems.
✓ The main goal of integration testing is to find interface errors between modules.
✓ There are various approaches in which the modules are combined together for integration testing.
✓ Big-bang approach
✓ Top-down approach
✓ Bottom-up approach
✓ Sandwich approach
Big-bang approach
✓ The big-bang is a simple and straightforward integration testing.
✓ In this approach, all the modules are first tested individually and then these are combined together
and tested as a single system.
✓ This approach works well where there is a small number of modules in a system.
✓ As all modules are integrated at once to form the whole system, chaos may occur: if any defect
is found, it becomes difficult to identify where the defect has occurred.
✓ Therefore, the big-bang approach is generally avoided for large and complex systems.
Top-down approach
✓ Top-down integration testing begins with the main module and moves downwards, integrating and
testing its lower-level modules.
✓ Again the next lower level modules are integrated and tested.
✓ Thus, this incremental integration and testing is continued until all modules up to the concrete
level are integrated and tested.
✓ The top-down integration testing approach is as follows:
main system -> subsystems -> modules at concrete level.
✓ In this approach, the testing of a module may be delayed if its lower-level modules are not
available at that time; test stubs must then simulate them.
✓ Writing test stubs that simulate the actual modules can be a complicated and time-consuming
task.
[Figure 9.14: Top-down integration — main module M integrates subsystems S1, S2 and S3, which in
turn integrate modules M1.1, M1.2, M2.1, M3.1, M3.2 and M3.3.]

Bottom-up approach
✓ As the name implies, bottom-up approach begins with the individual testing of bottom-level
modules in the software hierarchy.
✓ Then the lower-level modules are merged function-wise to form subsystems, and all the
subsystems are integrated to test the main module, covering all modules of the system.
✓ The approach of bottom-up integration is as follows:
concrete level modules -> subsystem –> main module.
✓ The bottom-up approach works opposite to the top-down integration approach.
Sandwich approach
✓ The sandwich testing combines both top-down and bottom-up integration approaches.
✓ During sandwich testing, the top-down part requires the lower-level modules to be available (or
simulated by stubs), and the bottom-up part requires the upper-level modules (or drivers).
✓ Thus, testing a module requires its top- and bottom-level modules.
✓ It is the most preferred approach in testing because the modules are tested as and when they
become available for testing.

System Testing
✓ The unit and integration testing are applied to detect defects in the modules and the system as a
whole. Once all the modules have been tested, system testing is performed to check whether the
system satisfies the requirements (both functional and non-functional).
✓ To test the functional requirements of the system, functional or black-box testing methods are
used with appropriate test cases.
✓ System testing is performed keeping in view the system requirements and system objectives.
✓ The non-functional requirements are tested with a series of tests whose purpose is to check the
computer-based system.
✓ A single test case cannot ensure all the system non-functional requirements.
✓ For specific non-functional requirements, special tests are conducted to ensure the system
functionality.
✓ Some of the non-functional system tests are
✓ Performance testing
✓ Volume testing
✓ Stress testing
✓ Security testing
✓ Recovery testing
✓ Compatibility testing
✓ Configuration testing
✓ Installation testing
✓ Documentation testing
Performance testing:
✓ A performance testing is carried out to check the run time outcomes of the system, such as
efficiency, accuracy, etc.
✓ Each system performs differently in different environments.
✓ During performance testing, both hardware and software are taken into consideration to observe
the system behavior.
✓ For example, a testing tool is tested to check if it tests the specified codes in a defined time
duration.
Volume testing:
✓ It checks the behaviour of the system when a heavy amount of data is to be processed or stored
in the system.
✓ For example, an operating system is checked to ensure that its job queue can handle a large
number of processes entering the computer.
✓ It basically checks the capacity of the data structures.
Stress testing:
✓ In stress testing, the behavior of the system is checked when it is under stress.
✓ The stress may come due to load increases at peak time for a short period of time.
✓ There are several reasons of stress, such as the maximum number of users increased, peak
demand, number of operations extended, etc.
✓ For example, the network server is checked if the number of concurrent users and nodes are
increased to use the network resources in the evening time.
✓ The stress test can be viewed as volume testing carried out at peak time.
Security testing:
✓ Due to the increasing complexity of software and its applications for a variety of users and
technologies, it becomes necessary to provide sufficient security.
✓ It is conducted to ensure the security checks at different levels in the system.
✓ For example, testing of e-payment system is done to ensure that the money transaction is
happening in a secure manner in e-commerce applications.
✓ A lot of confidential data is transferred and used in such systems, and it must be protected
from leakage, alteration, and modification by illegal people.
Recovery testing:
✓ Most systems now have recovery policies in case there is any loss of data.
✓ Therefore, recovery testing is performed to check that it will recover the losses caused by data
error, software error, or hardware problems.
✓ For example, the Windows operating system recovers the currently running files if any
hardware/software problem occurs in the system.
Compatibility testing:
✓ Compatibility testing is performed to ensure that the new system will be able to work with the
existing system.
✓ Sometimes, the data format, report format, process categories, databases, etc., differ from system
to system.
✓ For example, compatibility testing checks whether files created in Word 2007 can be opened in
Word 2003 if it is installed on the system.
Configuration testing:
✓ Configuration testing is performed to check that a system can run on different hardware and
software configurations.
✓ Therefore, system is configured for each of the hardware and software.
✓ For example, if you want to run your program on another machine, you are required to check the
configuration of its hardware and software.
Documentation testing:
✓ Once the system becomes operational, problems may be encountered in the system.
✓ A systematic documentation or manual can help to recover such problems.
✓ The system is verified whether its proper documentation is available.
Installation testing:
✓ Installation testing is conducted to ensure that all modules of the software are installed properly.
✓ The main purpose of installation testing is to find errors that occur during the installation process.
✓ Installation testing covers various issues, such as automatic execution of the CD; files and
libraries being allocated and loaded; appropriate hardware configurations being present; proper
network connectivity; and compatibility with the operating system platform.
✓ The installers must be familiar with the installation technologies and their troubleshooting
mechanisms.
Acceptance Testing
✓ Acceptance testing is a kind of system testing, which is performed before the system is released
into the market.
✓ It is performed with the customer to ensure that the system is acceptable for delivery.
✓ Once all system testing have been exercised, the system is now tested from the customer’s point
of view.
✓ Acceptance testing is conducted because there is a difference between the actual user and the
simulated users considered by the development organization.
✓ The user involvement is important during acceptance testing of the software as it is developed for
the end-users.
✓ Acceptance testing is performed at two levels, i.e.,
✓ Alpha testing
✓ Beta testing.
✓ Alpha testing is a pilot testing in which customers are involved in exercising test cases.
✓ In alpha testing, the customer conducts tests in the development environment. The users
perform the alpha test and try to pinpoint any problems in the system.
✓ The alpha test is conducted in a controlled environment.
✓ After alpha testing, the system is ready to be transported to the customer site for
deployment.
✓ Beta testing is performed by a limited set of friendly customers and end-users.
✓ Beta testing is conducted at the customer site, where the software is to be deployed and used by
the end-users.
✓ The developer may or may not be present during beta testing.
✓ The end-users operate the system under testing mode and note down any problem
observed during system operation.
✓ The defects noted by the end-users are corrected by the developer.
✓ If there are any major changes required, then these changes are sent to the configuration
management team.
✓ The configuration management team decides whether to approve or disapprove the
changes for modification in the system.
Usability Testing
✓ Usability refers to the ease of use and comfort that users have while working with software.
✓ It is also known as user-centric testing. Nowadays, usability has become a wider aspect of
software development and testing.
✓ Usability testing is conducted to check usability of the system which mainly focuses on finding
the differences between quality of developed software and user’s expectations of what it should
perform.
✓ Poor usability may affect the success of the software. If users find the system difficult to
understand and operate, it will ultimately lead to an unsuccessful product.
✓ The usability testing concentrates on the testing of user interface design, such as look and feel of
the user interface, format of reports, screen layouts, hardware and user interactions, etc.
✓ Usability testing is performed by potential end-users in a controlled environment.
✓ The development organization invites selected end-users to test the product in terms of ease of
use, expected functionality, performance, safety and security, and observes the outcomes.
Regression Testing
✓ Regression testing is also known as program revalidation.
✓ Regression testing is performed whenever new functionality is added or the existing functionality
is modified in the program.
✓ Even if the existing system was working correctly, the new version must be re-verified after
making changes, because the code changes may have broken functionality that worked before.
✓ It is required when the new version of a program is obtained by changing the existing version.
✓ Regression testing is also needed when a subsystem is modified to get the new version of the
system.
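A hedged sketch in C (our example, not from the text): the regression suite written for the old
version is re-run unchanged against the modified version, so that any behaviour broken by the
change is caught.

/* Version 1 was: int price(int qty) { return qty * 10; }            */
/* Version 2 adds a bulk discount; the old tests are re-run on it.   */
#include <assert.h>
int price(int qty)
{
return qty >= 100 ? qty * 9 : qty * 10;
}
main()
{
/* regression test cases carried over from version 1 */
assert(price(1) == 10);
assert(price(50) == 500);
/* new test case added for the new functionality */
assert(price(100) == 900);
}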
Introduction to Debugging
Debugging is not a part of the testing domain. Therefore, debugging is not testing. It is a separate process
performed as a consequence of testing. But the testing process is considered a waste if debugging is not
performed after the testing. Testing phase in the SDLC aims to find more and more bugs, remove the errors,
and build confidence in the software quality. In this sense, testing phase can be divided into two parts:
1. Preparation of test cases, executing them, and observing the output. This is known as testing.
2. If output of testing is not successful, then a failure has occurred. Now the goal is to find the bugs that
caused the failure and remove the errors present.
Debugging is the process of identification of the symptoms of failures, tracing the bug, locating the errors
that caused the bug, and correcting these errors. Describing it in more concrete terms, debugging is a two-
part process. It begins with some indication of the existence of an error. It is the activity of:
1. Determining the exact nature of the bug and location of the suspected error within the program.
2. Fixing or repairing the error.
Debugging Techniques
The most popular debugging approaches are as follows:

❑ Debugging by Brute force or Memory Dump

❑ Debugging with Watch Points or Break Points

❑ Debugging by Backtracking

❑ Debugging by induction

❑ Debugging by deduction

❑ Debugging by testing
Debugging by Brute force or Memory Dump
Brute force or Memory Dump

▪ It is the simplest method of debugging but it is inefficient. It uses memory dumps or output statements for
debugging.

▪ The memory dump is a machine level representation of the corresponding variables and statements.

▪ It represents the static structure of the program at a particular snapshot of execution sequence.

▪ The memory dump rarely establishes a clear correspondence between the dumped values and the
error at a particular point of execution.

▪ Also, one should have a good understanding of the dynamics of the program to interpret a dump.

▪ Therefore, instead of using brute force for debugging, a debugger should be used for better results.
Debugging with Watch Points or Break Points
Watch Points or Break Points

▪ Breakpoint debugging is a method of tracing programs with a breakpoint and stopping the program execution at the
breakpoint.

▪ A breakpoint is a kind of signal that tells the debugger to temporarily suspend execution of program at a certain point.

▪ Each breakpoint is associated with a particular instruction of the program.

▪ The program executes up to the breakpoint statement and then suspends. If any error is observed,
its location is marked and then the program execution resumes till the next breakpoint.

▪ This process is continued until all errors are located in the program.

▪ Breakpoint is also performed with watch values.

▪ A watch value is a value of a variable or expression, which is set and shown along with the program execution.

▪ The watch values change as the program executes.

▪ The incorrect or unexpected values can be observed with the watch values.
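For example, with a typical source-level debugger such as gdb (a standard, illustrative session;
'salary.c' is assumed to hold the gross-salary program given earlier):

$ gcc -g salary.c -o salary      # compile with debugging symbols
$ gdb ./salary
(gdb) break 14                   # breakpoint at the statement gs = bs + hra + da
(gdb) run                        # execution suspends at the breakpoint
(gdb) print hra                  # inspect a suspect variable
(gdb) watch da                   # watchpoint: stop whenever da changes
(gdb) continue                   # resume until the next breakpoint/watchpoint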
Debugging by Backtracking
Backtracking

▪ Backtracking is the refinement of brute force method and it is one of the successful methods of
debugging.

▪ Debugging begins from where the bug is discovered, and the source code is traced backward
through different paths until the exact location of the cause of the bug is reached or the cause
of the bug has disappeared.

▪ This process is performed with the program logic in reverse direction of the flow of control.

▪ This method is effective for small size problems.

▪ Backtracking should be used when all other methods of debugging fail to locate the error,
because considerable effort may be wasted in backtracking when no error exists in the traced
portion of the source code.
Debugging by Induction
Debugging by Induction

▪ It is based on pattern matching and a thought process on some clue.

▪ The process begins by collecting all pertinent information about the point where the bug was discovered.

▪ The patterns of successful test cases are observed and data items are organized.

▪ Thereafter, hypothesis is derived by relating the pattern and the error to be debugged.

▪ If the hypothesis holds, the devised theory explains the occurrence of the bug.

▪ Otherwise, more data are collected to derive causes of errors. Finally, causes are removed and errors are
fixed in the program.
Debugging by Deduction
Debugging by Deduction

▪ This is a kind of cause-elimination method.

▪ On the basis of cause hypothesis, lists of possible causes are enumerated for the observed failure.

▪ Now the tests are conducted to eliminate causes to remove errors in the system.

▪ If all the causes are eliminated then errors are fixed. Otherwise, hypothesis is refined to eliminate errors.

▪ Finally, hypothesis is proved to ensure that all causes have been eliminated and the system is bug free.
Debugging by Testing
Debugging by testing

▪ It uses test cases to locate errors.

▪ Test cases designed during testing are used in debugging to collect information to locate the suspected
errors.

▪ Test case in testing focuses on covering many conditions and statements, whereas test case in debugging
focuses on small number of conditions and statements.

▪ Test cases of debugging are the refined test cases of testing.

▪ Test cases during debugging concentrate on locating error situations in a program.


Correcting Bugs
The second phase of the debugging process is to correct the error when it has been uncovered.
But it is not as easy as it seems. The design of the software should not be affected by correcting
the bug or due to new modifications. Before correcting the errors, we should concentrate on the
following points:
a) Evaluate the coupling of the logic and data structures where corrections are to be made.
Correcting a highly coupled module can introduce many other bugs; that is why a low-coupled
module is easier to debug.
b) After recognizing the influence of corrections on other modules or parts, plan the regression
test cases to perform regression testing as discussed earlier.
c) Perform regression testing with every correction in the software to ensure that the
corrections have not introduced bugs in other parts of the software.
UNIT – V

Syllabus:
Software Quality: Software Quality Factors, Verification & Validation, Software Quality
Assurance, The Capability Maturity Model
Software Maintenance: Software maintenance, Maintenance Process Models, Reengineering
activities.

Software Quality
Introduction
• Once software is tested, it is assumed that it is defect free and it will perform according to the
needs of the customer.
• As a software product is to be used for a long time, it is important to measure its quality for better
reliability and durability.
• Measuring the reliability of software products has been a major issue for the developer and the
customer.
• A good quality product satisfies the customer needs, is constructed as per the standards and
norms, has sound internal design, and is developed within an optimized cost and schedule.
• The internal design of software is much more important than the external design.

Software Quality Concept


• The quality of a software product is defined in terms of its characteristics or attributes.
• The values of attributes vary from product to product.
• A product can be of good quality or bad quality. Lower values of attributes define a bad-quality
product whereas higher values of attributes define a good-quality product.
• The aim of software development organizations is to produce a high-quality product.
• The external quality of a product does not always ensure the internal quality.
• It is analogous to the quality of a building.
• A good quality product aims to satisfy the customer requirements, developed under the guidelines
of software quality standards.
• Cost and schedule also play important roles in defining the quality of software.
• The process of development can affect the product quality.
• A standard process provides a systematic approach to development and leads to a high-quality
product.
• Quality of software can be defined in terms of following points.

satisfies customer requirements

possesses higher values of its characteristics

has sound internal design along with external design

is developed within the planned cost and schedule

follows the development standards

Software Quality Factors


• McCall, Richards, and Walters have proposed certain factors that affect the quality of software:
• These factors are classified into the following categories:
– Product operational factors: Correctness, reliability, usability, integrity, and efficiency
– Product revision factors: Maintainability, flexibility, and testability
– Product adaptability factors: Portability, reusability, and interoperability
Classification of Software quality factors


Correctness: A program is correct if it performs according to the specifications of functions it should
provide.

Reliability is the extent to which a program performs its intended functions satisfactorily with
required precision without failure in a specified duration.

Usability is the extent of effort required to learn, operate, and use a product.

Integrity is the extent of effort to control illegal access to data and program by unauthorized people.

Efficiency is the volume of computing resources (e.g., processor time, memory space, bandwidth in
communication devices, etc.) and code required to perform software functions.

Maintainability is the ease to locate and correct errors. Maintainability of a software is measured
through mean time to change.

Flexibility is the cost required to modify an operational program.

Testability is the effort required to test a program to ensure that it performs its intended function.

Portability is the effort required for transferring software products to various hardware and software
environments.

Reusability is the extent to which software or its parts can be reused in the development of some
other software.

Interoperability is the effort required to couple one system to another. Strong coupling and loose
coupling are the approaches used in interoperability.

Verification and Validation (V&V)


• The software product being developed must be checked during development and post-
development phases.
• Verification and Validation (V&V) is the process of checking a software product to ensure that it
meets the specifications and satisfies the intended purpose.
• The V&V activities begin with requirements specification review and continue to design reviews
to code inspection during testing.
• Verification is the process of evaluating work products in software development phases to assess
whether the work product meets the specifications as intended for the purpose.
• IEEE defines, “verification is the act of reviewing, inspecting, testing, checking, auditing, or
otherwise establishing and documenting whether product, process, service, or documents conform
to the specified requirements.”
• Validation is the process of evaluating software during or at the end of the development process
to determine whether it satisfies the specified stated requirements.
• IEEE defines, “validation is the process of evaluating software at the end of the software
development process to determine compliance with the requirements.” Validation is end-to-end
verification.
• Barry Boehm defines and differentiates verification and validation as:
• Verification: “Are we building the product right?”
• Validation: “Are we building the right product?”
• Verification and validation ensure that the software performs no unintended functions and
provides information about its quality and reliability .
• The ultimate goal of verification and validation is to establish confidence that the software system
is “fit-for purpose.”
Cost of Quality
• The cost of quality is the cost related to achieving quality and performing quality-related
activities.
• In 1979, Philip Crosby stated that quality is free. He demonstrated that producing high-quality
products does not cost more than producing low-quality products.
• The cost of quality is divided into the cost of conformance and the cost of nonconformance.
• The cost of conformance is the cost incurred before the delivery of a product, i.e., in identifying
bugs, locating, and correcting bugs, etc.
• The cost of nonconformance comes after the product is released.
• Crosby stated that the cost of conformance is less than the cost of nonconformance: money spent
on conformance is recovered by avoiding the larger cost of nonconformance.
• The cost of quality is categorized into three classes: prevention cost, appraisal cost, and failure
cost

Prevention cost is the cost incurred before performing quality assurance activities. That is, it is the
cost spent on quality planning, formal technical reviews, test equipments, and training.

Appraisal cost is the cost incurred in performing quality assurance activities, viz., inspection, testing,
equipment calibration and maintenance, etc.

Failure cost is the cost incurred when a product fails, whether before delivery (internal failure)
or after delivery at the customer site (external failure).
The Capability Maturity Model (CMM)
• The Capability Maturity Model (CMM) is an industry standard model for defining and measuring
the maturity of the development process and for providing strategy for improving the software
processes toward achieving high-quality products.
• It was established by the Software Engineering Institute (SEI) in 1986 at Carnegie Mellon
University (CMU), Pittsburgh, Pennsylvania, U.S.A., under the direction of the U.S. Department of
Defense.
• The CMM is involved in the process management process to improve the software process
whereas life cycle models are used for the software development process.
• The SEI-CMM is a reference model for appraising the maturity of the software process at
different levels.

• The CMM provides a way to develop and refine an organization's processes.


• A maturity model can be used as a benchmark for assessing different organizations for equivalent
comparison.
• It describes the maturity of an organization on five levels (initial, repeatable, defined,
managed, and optimized), based on the projects the organization is dealing with and its clients.
• Within each of these maturity levels, there are defined key process areas (KPAs) which
characterize that level.
• An organization willing to achieve a level has to demonstrate all the KPAs in the corresponding
level of CMM.

Focus and KPAs for each CMM level

CMM level        Focus                              KPAs

1. Initial       Competent people and heroics       Not applicable

2. Repeatable    Disciplined process                Requirements management;
                                                    project planning and tracking;
                                                    subcontractor management;
                                                    software quality assurance;
                                                    configuration management

3. Defined       Process standardization            Organization process focus;
                                                    organization process definition;
                                                    training program;
                                                    software product engineering;
                                                    integrated software management;
                                                    inter-group coordination;
                                                    peer reviews

4. Managed       Measurable and controlled          Quantitative process management;
                 processes for quality              quality management

5. Optimized     Continuous process improvement     Defect prevention;
                                                    technology change management;
                                                    process change management

Software Maintenance
• Software maintenance is an important activity, which keeps a system remain useful during its
lifetime.
• Maintenance is performed after the system is deployed at the customer site.
• As the customer starts working on the system, defects may surface in the system and its parts.
• New features may be added to or removed from the system as the customer’s needs change.
• Organizational operating environment and policies change from time to time, which causes a
system to be transported or adapted to a new environment.
• Software maintenance is valuable to avoid software aging and to maintain the quality of software
product.
• Without software maintenance and evolution, system becomes complex and unreliable.
• The cost of software change varies from project to project and with the types of changes.
• A survey indicates that the software maintenance or change consumes 60–80% of the total life
cycle cost.
• Major software cost is incurred due to enhancement (75–80%) rather than correction.

What is software maintenance


• IEEE defines, “software maintenance is the process of modifying a software system or component
after delivery to correct faults, improve performances, or other attributes; or adapt to a changed
environment.”
• The main purpose of maintenance is to keep the system up and running after delivery.
• Pigoski states, “Software maintenance is the totality of activities required to provide cost-effective
support to a software system. Activities are performed during the pre-delivery stage as well as the
post-delivery stage. Pre-delivery activities include planning for post-delivery operations,
supportability, and logistics determination. Post-delivery activities include software modification,
training, and operating a helpdesk.”
Categories of Maintenance

Corrective maintenance
o fixing defects or failures
o defect can arise from design errors, logical errors and coding errors

Adaptive maintenance
o changes due to operating environment changes
o deals with the portability aspect of the software

Perfective maintenance
o improve performance or maintainability
o Implementation of better algorithms, rewriting documentations for better
understandability etc.

Preventive maintenance
o update the software in anticipation of any future problems
o aims at increasing the system’s maintainability and improving the structure of
the system.

Emergency maintenance
o repair or replacement of facility components or equipment requiring immediate
attention.

Maintenance Process Models


• The process models organize maintenance into a sequence of related activities, or phases, and
define the order in which these phases are to be executed.
• Some models are:
– Quick-Fix Model
– Osborne’s Model
– Iterative-Enhancement Model
– Full-Reuse Model
– IEEE 1219 model
– ISO-12207 model
Quick-Fix Model
• The quick-fix model is an ad hoc approach to the maintenance
• Its approach is to work on the code first and then make necessary changes to the accompanying
documentation
• The idea is to identify the problem and fix it as early as possible.
• After the code has been changed, it may affect requirement, design, testing; and any other form of
available documents impacted by the modification should be updated.
• Due to time constraint, this model does not pay attention to the long-term effects.
• Changes are often made without proper planning, design, impact analysis, and regression testing.
• Also, repeated changes may demolish the original design, thus making future modifications
progressively more expensive to carry out.

Fig: Quick-Fix Model


Osborne’s Model
• The Osborne’s model is concerned with the reality of the maintenance environment.
• This model assumes that the technical problems that arise during maintenance are due to poor
management communication and control.
• According to the Osborne strategies, maintenance requirements need to be included in the change
specification; a quality assurance program is required to establish quality assurance requirements;
and metrics need to be developed to verify that the maintenance goals have been met.

Iterative-Enhancement Model
• This model considers that making changes in a system throughout its lifetime is an iterative
process.
• The iterative-enhancement model assumes that the requirements of a system cannot be gathered
and fully understood initially.
• The system is to be developed in builds. Each build completes, corrects, and refines the
requirements of the previous builds based on the feedback of users.
• The construction of a build in the iteration (i.e., maintenance) begins with the analysis of the
existing system’s requirements, design, code, and test; and continues with the modification of the
highest-level document affected by changes.
• A key advantage of the iterative-enhancement model is that documentation is kept updated as the
code changes.
• The iterative-enhancement model keeps the system maintainable as compared to the quick-fix
model.
• Also, the maintenance changes are faster in iterative-enhancement.
• This model is observed to be ineffective if there is unavailability of complete documentation.
• The iterative-enhancement model is well suited for systems that have a long life and evolve over
time.

Fig: Iterative-Enhancement Model

Full-Reuse Model
• Here, maintenance is considered as reuse-oriented software development, where reusable
components are used for maintenance and replacements for faulty components.
• It begins with requirements analysis and design of a new system and reuses the appropriate
requirements, design, code, and tests from the earlier versions of the existing system.
• The reuse repository plays an important role in the full-reuse model.
• It also promotes the development of more reusable components.
• The full-reuse model is more suited for the development of lines of related products.
• This model takes some initial cost to institutionalize the reuse environment.
• The full-reuse model is especially important for the maintenance of component-based systems or
reengineering-type projects that are to be migrated onto a component-based platform.

IEEE 1219 Model


• The IEEE standard organizes the maintenance process in seven phases.
• The phases are classification and identification, analysis, design, implementation, system test,
acceptance test, and delivery.
• Initially, modification requests are generated by the user, customer, programmer, or the manager
of the maintenance team.

ISO-12207 Model
• The ISO-12207 standard organizes the maintenance process in six phases.
• The phases are process implementation, problem and modification analysis, modification
implementation, maintenance review/acceptance, migration, and software retirement.
• Process implementation phase includes the tasks for developing plans and procedures.
• Problem and modification analysis is to analyse the maintenance request in terms of size, cost
and time required.
• Modification implementation to ensure the requested modifications are correctly implemented
and original unmodified requirements are not affected.
• Maintenance review is for assessing the integrity of the modified system.

Fig: ISO-12207 model

Reengineering Activities
Reengineering transforms an existing obsolete and incompletely documented procedural system into a
system that is well structured and well documented, according to the predefined quality
requirements. Reengineering involves the following sub-activities:
• Reverse engineering
• Forward engineering
• Program comprehension
• Restructuring
• Design recovery
• Re-documentation

Reverse engineering
• Reverse engineering is the process of recovering the design specifications of an existing business
system from its implementation and representing it at a much higher level of abstraction .
• Reverse engineering techniques can be performed to extract data, architecture, design
information, and content of a procedural system.
• Reverse engineering techniques provide the means for recovering the lost information and
developing alternative representations of a system, such as generation of structure charts,
dataflow diagrams, entity-relationship diagrams, etc.
Forward Engineering
• Once reverse engineering has been performed and all-important artifacts have been recovered,
forward engineering is performed.
• Forward reengineering is the traditional process of moving from high-level abstraction using
logical design to physical implementation.
• Forward engineering moves from a higher-level abstract representations and design details to
implementation level of the system.
• Design details such as object models, use case diagrams, pseudo codes, etc., can be converted to
object-oriented programming languages.

Program Comprehension
• Program comprehension is an essential part of software evolution and software maintenance.
• Software that is not comprehended cannot be changed.
• Frequently in program comprehension the programmer understands domain concepts, but not the
code. The knowledge of domain concepts is based on program use and therefore it is easier to
acquire than knowledge of the code.
• Program comprehension is the root of the reverse engineering process. It is the process of
acquiring or extracting knowledge about the software artifacts such as code, design, document,
etc.

Restructuring
• Restructuring transforms the system from one representation form to another at the same level of
abstraction.
• Restructuring modifies the code and data that are adaptable to future changes. It preserves
semantics and functionality between the new and old representations.
• Mainly, it has two major aspects, namely, code restructuring and data restructuring.
• Code restructuring produces designs with higher quality than the existing one.
• Data restructuring is performed to extract the data items and objects to understand data flow and
the existing data structure.
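An illustrative sketch of code restructuring in C (our example, not from the text): unstructured,
goto-based code is transformed into an equivalent structured form; semantics and functionality are
preserved, only the representation changes.

/* Before restructuring: unstructured control flow */
i = 0;
loop: if (i >= 10) goto done;
total += a[i];
i++;
goto loop;
done: ;

/* After restructuring: same semantics, structured form */
for (i = 0; i < 10; i++)
total += a[i];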

Design Recovery
• Design recovery recreates design abstractions from a combination of code, existing design
documentation, personal experience, and general knowledge about problem and application
domains.
• The recovered design abstractions must include conventional software engineering
representations such as formal specifications, module breakdowns, data abstractions, data flows,
and program description language.
• Design recovery is performed across a spectrum of activities from software development to
maintenance.
• A key objective of design recovery is to develop structures that will help the software engineer to
understand a software system.

Re-documentation
• Re-documentation is the process of creating a semantically equivalent representation at the
corresponding levels of abstraction.
• In this aspect, system documents are updated/rewritten/replaced to document the target system.
• Various documents that may be affected in legacy software include requirement specifications,
design and implementation, design decision report, configuration, data dictionary, user and
reference manuals, and the change document.
• The process of re-documentation is similar to reverse engineering activities.
