SE All Units
• Software is commonly defined as a collection of computer programs that are executed together with data to provide
desired outcomes.
1) Generic: Generic software products are developed for general purpose, regardless of the type of
business.
2) Customized: Customized software products are developed to satisfy the needs of a particular customer or
organization. Here the requirements are specific to the business and are given by the stakeholders of
the organization.
SOFTWARE CLASSIFICATIONS
1) System software: System software manages and controls the computer hardware and provides a platform for running other software.
Ex: Operating systems, device drivers, utilities.
2) Application software: Application software is designed to accomplish certain specific needs of the end user.
Ex: Educational software, video editing software, word processing software, database software.
3) Programming software: It is a class of system software that assists programmers in writing computer programs using
different programming languages in a convenient manner.
4) Artificial intelligence software: AI software is made to mimic human thinking and is therefore useful in solving
complex problems automatically.
Ex: Robotics, expert systems, pattern recognition, game playing, speech recognition, etc.
SOFTWARE CLASSIFICATIONS (Cont.)
5) Embedded software: Embedded software is a type of software that is built into hardware systems. It resides within a product or
system and is used to control, monitor or assist the operation of the equipment. It is an integral part of the system.
Ex: Controllers, communication protocols, etc.
6) Engineering/scientific software: Engineering problems and quantitative analysis are carried out using automated tools.
Scientific software is typically used to solve mathematical functions and calculations.
Ex: CAD/CAM, ESS, etc.
7) Web software: Web software has evolved from a simple website to search engines to web computing. Web applications are
spread over a network. Ex: Internet, Intranet
Web applications are based on client server architecture, where client requests information and the server retrieves information
from the web. Ex: Web 2.0, HTML, PHP, search engines.
8) Product line software: Product line software is a set of software-intensive systems that share a common, managed set of
features to satisfy the specific needs of a particular market segment or mission.
Ex: Multimedia, database software, word processing software, etc.
SOFTWARE CRISIS
• The software crisis started in the late 1960s and early 1970s. Since then, the importance of software and the software
industry has grown rapidly.
• Nowadays the development of programs and software has become complex, with increasing user requirements,
technological advancements and growing computer awareness among people.
• The modern software crisis has some notable symptoms which are
1. Complexity
2. Hardware vs Software cost
3. Lateness
4. Costliness
5. Poor quality
6. Lack of planning
7. Unmanageable nature
8. Immaturity
9. Management practices
SOFTWARE CRISIS (Cont.)
• Complexity: An important cause of the software crisis is the complexity of the software development process.
In the early days of computing, programs were simple: they were specified, written, operated and maintained
by the same person, and the software was used by few people in a very limited domain. As computing matured,
project sizes grew and programs began to be developed by teams of people. User expectations
also increased. At the same time, programs became complex, and the earlier methodologies of software
development were no longer adequate for larger and more complex projects.
• H/w vs S/w cost: As the computing scenario became complex, hardware costs kept decreasing or remained
stable, and the same hardware became capable of handling both existing and new applications. Hardware
needs a one-time design and analysis effort, as it can be manufactured in the classical sense. Software, on the
other hand, must be designed in an engineering manner, following each and every step carefully. Development,
maintenance, change and re-engineering require logical thinking and labor. Ultimately, the cost of software
has become higher than the cost of hardware.
• Lateness: The time required to develop software and its cost began to exceed all estimates. Many
projects were cancelled or challenged because they were running behind schedule and exceeded their budgets.
SOFTWARE CRISIS (Cont.)
• Quality: Quality of the software has become a challenging issue in the software industry. Software must be of high quality with
respect to product operation, product migration and product transition. Product operation includes usability, safety, correctness,
security, reliability and efficiency. Product migration involves maintainability, flexibility, changeability, modifiability, testability
and the ability to reengineer legacy software. Product transition includes interoperability, dependability and reusability
characteristics.
• Planning and Management: The project must be planned with the expected requirement of resources (h/w, s/w, people), cost,
time and effort. The project managers follow the project plan for effective management of the team and the project so that the
software can be delivered on time and within budget.
• Maintenance: Maintenance and changes require a proper understanding of the software. Change in software is a major
problem for software practitioners, and it accounts for around 40% of the total development cost. After changing the code, the
entire software must be tested for reliable functioning of the system. Therefore, systematic approaches are needed for
maintenance and changes.
The software crisis has inspired people to improve their processes. The solution to the software crisis is to introduce systematic
software engineering practices for the systematic development, maintenance, operation, planning and management of software.
SOFTWARE ENGINEERING
“Software Engineering is an engineering, technological and managerial discipline that provides a systematic
approach to the development, operation and maintenance of software.”
Engineering provides a step-by-step procedure for software engineering, i.e., project planning, analysis,
architecture and design, coding, testing and maintenance.
These activities are performed with the help of technological tools that ease their execution,
e.g., project management tools, analysis tools, design tools, coding tools and testing tools.
Management skills are necessary in a project manager to coordinate and communicate information and to
manage these activities. Systematic development of software helps to understand
problems and satisfy the client's needs.
IEEE DEFINITION OF SOFTWARE ENGINEERING
“The systematic approach to the development, operation, maintenance and retirement of software.”
The main goal of software engineering is to understand customer needs and develop software with improved
quality, on time and within budget. The view of Software Engineering is shown in figure:
EVOLUTION OF SOFTWARE ENGINEERING METHODOLOGIES
A software Engineering methodology is a set of procedures followed from the beginning to the completion
of the development process. Software engineering methodologies have evolved with increasing complexities
in programming and advancements in programming technologies.
1. Exploratory methodology:
During the 1950s, most programs were written in assembly language. These programs were
limited to a few hundred lines of assembly code. Every programmer developed programs in his
own individual style, based on his intuition. This type of programming was called Exploratory
Programming. It involves experimentation and exploring the program through step-by-step programming,
where each step depends on the results of the previous ones.
The exploratory style uses unstructured programming, where the main program operates on global data
items. Later, as the size and complexity of programs kept increasing, the exploratory style proved to be
insufficient. It is also difficult to understand and maintain programs written by others.
EVOLUTION OF SOFTWARE ENGINEERING METHODOLOGIES (Cont.)
2. Structure-oriented methodology:
The next significant development, which occurred during the 1960s, was the structure-oriented methodology.
Structured methodology focuses on a procedural approach. It uses the features of unstructured programming
and provides certain improvements. It has three basic elements, namely:
Sequence: The computer runs the code in order, one line at a time, from the top to the bottom of the program.
Selection: This is achieved using if-else conditional statements. Ex: If the condition is met, the
"then" part is executed; otherwise control jumps to the "else" part.
Iteration: When the same lines of code must be executed several times, loops are used.
The structure-oriented methodology uses a variety of notations such as data flow diagrams (DFD), control flow
graphs (CFG), entity-relationship (ER) diagrams, etc.
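The three basic elements above can be sketched in a few lines of illustrative Python (the snippet and its values are not from the original notes):

```python
# Sequence: statements execute one after another, top to bottom.
total = 0
numbers = [4, 8, 15]

# Iteration: the loop body runs once for each element.
for n in numbers:
    # Selection: the if-else chooses between two paths.
    if n % 2 == 0:
        total += n   # "then" part: even numbers are added
    else:
        total -= n   # "else" part: odd numbers are subtracted

print(total)  # 4 + 8 - 15 = -3
```

Any structured program, however large, is built from just these three control elements.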
EVOLUTION OF SOFTWARE ENGINEERING METHODOLOGIES (Cont.)
3. Data structure-oriented methodology:
The data structure-oriented methodology concentrates more on designing the data structures than on
the design of the control structure, since data plays an important role in the execution of a program.
A very popular example of a data structure-oriented methodology is the Jackson Structured Design (JSD)
methodology, developed by Michael Jackson in the 1970s. It expresses how functionality fits in with the real
world. JSD-based development proceeds in two stages: first, the "what" (the specification) is determined, and
second, the "how" (the implementation) is worked out. It is good for real-world scenarios, but it is complex and
difficult to understand.
EVOLUTION OF SOFTWARE ENGINEERING METHODOLOGIES (Cont.)
Component-based methodology:
The component-based methodology has become popular for software development in the 21st century. It is a
system analysis and design methodology that has evolved from the object-oriented methodology, and it is an
approach to software development that relies on software reuse.
A component is an independent executable entity that can be made up of one or more executable
objects. Component fabrication consists of various phases, such as domain analysis, component identification,
component design, integration and testing, acceptance and roll-out, and deployment.
SOFTWARE ENGINEERING CHALLENGES
The software engineering discipline has been facing a number of problems since the software
crisis. The primary focus of software companies is to produce quality software within budget and within a small
cycle time.
Some of the challenges are understanding the user requirements, frequently changing
technology, the increasing market for reuse-based business, platform independence and so on. These
challenges affect the development and maintenance processes. Some of them are briefly discussed below:
SOFTWARE ENGINEERING CHALLENGES (Cont.)
1. Problem Understanding:
It is a difficult task to understand the exact problem and requirements of the customer in the overall software
development and maintenance process. There are several issues involved in problem understanding.
i) Usually customers are from different backgrounds and do not have a clear understanding of their problems
and requirements. Also, customers often lack technical knowledge, especially those living in remote areas.
ii) Similarly, developers do not have knowledge of all application domains or of the detailed requirements of the
problems and expectations of the customers.
iii) The lack of communication between software engineers and customers makes it hard for the software
engineers to clearly understand the customers' needs. Sometimes the customers do not have sufficient time to
explain their problems to the development organization.
SOFTWARE ENGINEERING CHALLENGES (Cont.)
2. Cycle time: The customer always expects faster and cheaper production of software products. Therefore,
software companies put effort into reducing the cycle time of product delivery and minimizing the product cost. Due
to competitive pressures and the needs of the customer, programmers in most cases are under pressure to deliver
the product in a short cycle time. But delivery ahead of calendar time sometimes compromises product
quality.
3. Cost: The cost of a software product is generally the cost of the hardware, software and manpower resources. It is
calculated on the basis of the number of persons engaged in a project and for how long. The cost of the
product also depends on the project size. The higher the cycle time, the higher the product cost.
A systematic engineering approach can reduce both the cycle time and the product cost.
SOFTWARE ENGINEERING CHALLENGES (Cont.)
4. Reliability:
Reliability is one of the most important quality attributes. Reliability is the successful operation of the
software within the specified environment and duration under certain conditions. A quality product can be
achieved by emphasizing the individual development phases, which are analysis, design, coding and testing.
Verification and validation techniques are used to ensure the reliability of the software product. Bug
detection and prevention are prerequisites for high reliability in the product. There are several automated
testing tools for bug detection and removal.
Software becomes unreliable due to logical errors present in the programs of the software. Project
complexity is the major cause of this software unreliability.
SOFTWARE ENGINEERING CHALLENGES (Cont.)
5. Repeatability:
A software engineering process can be repeated in similar projects, which improves productivity and quality.
Repeatability can help to plan project schedules, fix deadlines for product delivery, manage configuration and
identify the locations of bug occurrences. Repeatability promotes process maturity.
6. Estimation and planning:
It is observed that the project failure ratio is greater than the success rate. Most projects fail due to
underestimation of the budget and time needed to complete the project. The effectiveness of a project plan
depends on the accuracy of the estimation and the understanding of the problem.
SOFTWARE PROCESS
What is Software process ?
We all know that software engineering is defined as the systematic approach to the development,
operation, maintenance and retirement of software. Here, the systematic approach is nothing but a
software process.
Definition: A Software process is a set of ordered activities carried out to produce a software product. It
specifies the way to produce software. Each activity has well defined objective, task, and outcome.
An activity is a specified task performed to achieve the process objectives. The outcomes of all activities
are compiled and integrated together to build the software. The development of software is done with
the help of some software process methodologies. Thus a software process provides the method for
developing software.
PROCESS, PROJECT AND PRODUCT
A software process is a complex entity in which each activity is executed with supporting tools and
techniques. The type of a process and its outcome depend on how it is executed.
A software project is a cross-functional entity with a defined start and end. Every project must follow some
systematic process for its successful completion. A successful project is one that conforms to the project plan.
A software product is the outcome of a software project, produced through software processes. A project can
have more than one product, called work products. A work product is an intermediate outcome of the
processes, while the final work product is referred to as the product or software. A product satisfies the needs of
the users.
SOFTWARE PROCESS MODEL
A software process model is a generic representation of a software process instantiated for each specific
project. A process model is a set of activities that have to be accomplished to achieve the process objectives.
Process models are basically idealizations of processes. They are very difficult to execute exactly in the real
world, but idealized process models can reduce the chaos of software development. A process model can be made
practical by accounting for the concepts, technologies, implementation environment, process constraints and so on.
Process models specify the activities, work products, relationships, milestones etc. Some examples of process
models are data flow model, life cycle model, quality model etc.
A generic view of the software process model is shown in the figure:
SOFTWARE PROCESS MODEL (Cont.)
The generic process model has three phases that are coordinated and supported by umbrella activities. The phases in process
model are:
Definition phase: This phase concentrates on understanding the problem and planning for the process model. The activities may
include problem formulation, problem analysis, system engineering, and project planning for the process.
Development phase: This phase focuses on determining the solution of the problem with the help of umbrella activities. The main
activities of this phase are designing the architecture and algorithms of the system, writing code, and testing the software.
Implementation phase: Deployment, change management, defect removal, and maintenance activities are performed in this
phase. Reengineering may be undertaken due to changes in technology and business.
The umbrella activities are responsible for ensuring the proper execution of definition, development and implementation phases.
CHARACTERISTICS OF A SOFTWARE PROCESS
There are certain common characteristics of a software process, which are discussed below:
Understandability: A software process must be explicitly defined, i.e., it should be comprehensible to its
users. This is the prerequisite for performing any task. The process specification must be easy to understand, easy
to learn and easy to apply.
Effectiveness: A process must deliver the required deliverables, meet customer expectations and follow
the specified procedure. The produced product should adhere to the schedule and quality constraints.
However, the effectiveness of a process depends on programmers' skills, fund availability, etc.
Predictability: It is about forecasting the outcomes before the completion of a process. It is the basis through
which the cost, quality and resource requirements are specified in a project.
CHARACTERISTICS OF A SOFTWARE PROCESS (Cont.)
Maintainability: It is the flexibility to maintain software through changing requirements, defect detection and correction,
and adapting it to new operating environments. Maintenance is a lifelong process, and sometimes its cost exceeds the actual
software development cost.
Reliability: It refers to the capability of performing the intended tasks. Rigorous testing procedures are carried out before
applying a process in any production setting. An unreliable process causes product failures and wastes time and money.
Changeability: It is the acceptability of changes made to the software. A change has some effect on the software, which is the
difference in outcome before and after the change occurred. Changeability is classified into robustness, modifiability and
scalability.
Improvement: It concentrates on identifying and prototyping the possibilities (strengths and weaknesses) for improvement in
the process itself. Improvements in a process help to enhance the quality of the delivered products, providing more satisfactory
service to the users.
CHARACTERISTICS OF A SOFTWARE PROCESS (Cont.)
Monitoring and tracking: Monitoring and tracking a process in a project can help to determine predictability and
productivity. It helps to monitor and track the progress of the project based upon past experience of the process.
Rapidity: Rapidity is the speed of a process to produce the products under specifications for its timely completion.
Understandability and tracking of the process can accelerate the production process.
Repeatability: It measures the consistency of a process so that it can be used in various similar projects. A process is
said to be repeatable if it is able to produce an artifact a number of times without loss of quality attributes. There
may be variations in operation, cost and time, but the quality of the artifacts remains the same.
There are various other desirable features of a software process, such as quality, adaptability, acceptability, visibility,
supportability and so on.
Software Development Life Cycle
Software or product development is a complex and long-running process whose aim is to produce quality
software products. Therefore, product development is carried out as a series of activities for software
production. Each activity in the process is also referred to as a phase. Generally, the activities include feasibility
study, analysis, design, coding, testing, implementation, and maintenance. Collectively, these activities are
called the software development life cycle (SDLC).
Software development organizations follow some life cycle for each project for developing a software product.
The proposed life cycle model for a project is generally accepted by both parties (i.e., customer and
developer) because it helps in deciding, managing and controlling the various activities of a project. The
software development life cycle with various activities is pictorially represented in the below figure:
Software Development Life Cycle (Cont.)
1) Project Initiation:
This is the first activity of the SDLC, and it mainly involves three steps:
(i) Preliminary investigation (PI) (ii) Feasibility study (iii) Project plan
(i) Preliminary investigation: It is the initial step that gives a clear picture of what the physical system actually is. It goes
through problem identification, the background of the physical system, and the system proposal for a candidate system.
(ii) Feasibility study: The purpose of the feasibility study is to determine whether the implementation of the proposed system will
support the mission and objectives of the organization. The feasibility study ensures that the candidate system is able to satisfy the
client's needs. There are various types of feasibility study to be performed, such as technical, economic, operational and so
on. Based on these, a feasibility report is prepared and submitted to the top-level management. A positive report leads to
project initiation.
(iii) Project plan: A high-level plan is designed to cover the schedule, cost, scope, objectives, resources, etc. It is the job of the
project manager to secure the required resources, set up project teams and prepare a detailed project plan to initiate the
project.
Software Development Life Cycle (Cont.)
2) Requirements Analysis:
Requirements analysis is the process of collecting factual data, defining the problem, and producing a
document for software development. The analysis phase consists of three main activities:
requirements elicitation, requirements specification, and requirements verification and validation.
Requirements elicitation is about understanding the problem. Once the problem has been understood, it is
described in the requirements specification document, referred to as the software requirements
specification (SRS). This document describes the product to be delivered, not the process of how it is to be
developed. Requirements verification and validation ascertain that the correct requirements are stated (validation)
and that these requirements are stated correctly (verification).
Software Development Life Cycle (Cont.)
3) Software Design:
Software design focuses on the solution domain of the project on the basis of the requirements document
prepared during the analysis phase; it emphasizes how to develop the product. The goal of the design
phase is to transform the collected requirements into a structure that is suitable for implementation in
programming languages. The software designer begins by making architectures, outlining the hierarchical
structure and writing algorithms for each component in the system.
The design phase has two aspects: physical design and logical design. Physical design concentrates on
identifying the different modules or components in a system that interact with each other to create the
architecture of the system. In logical design, the internal logic of a module or component is described in
pseudocode or in an algorithmic manner.
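As an illustration of logical design, the internal logic of one hypothetical component can be written out algorithmically, much as pseudocode would; the module name and its logic are invented for this sketch:

```python
# Hypothetical component "grade report": physical design would identify
# this module and its interface; logical design spells out its internal
# logic step by step, as below.

def average_marks(marks):
    """Return the average of a non-empty list of marks."""
    total = 0
    for m in marks:            # step 1: accumulate the total
        total += m
    return total / len(marks)  # step 2: divide by the count

print(average_marks([70, 80, 90]))  # 80.0
```

The same logic could equally be expressed in plain pseudocode; the point is that logical design fixes the algorithm before any production code is written.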
Software Development Life Cycle (Cont.)
4) Coding:
The coding phase is concerned with the development of the source code that will implement the design. This
code is written in a formal language called a programming language, such as assembly language, C++, Java,
etc. Although the major constraints and decisions are already specified during design, utmost care must still be
taken while programming. Good coding effort can reduce testing and maintenance tasks.
The programs written during the coding phase must be easy to read and understand. If necessary, the source
code must be documented for future reference. Proper guidelines and standards must be followed while
programming the design. Rules must be followed for the declaration of data structures, variables, header
files, function calls and so on.
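A small hypothetical fragment sketches the kind of conventions described above: named constants, meaningful identifiers and documented functions (illustrative Python, not from the notes):

```python
# Named constants instead of "magic numbers" scattered through the code.
MIN_LEN = 3
MAX_LEN = 20

def is_valid_username(name: str) -> bool:
    """Return True if name is MIN_LEN..MAX_LEN alphanumeric characters."""
    # A meaningful function name and a docstring make the code
    # readable without external documentation.
    return name.isalnum() and MIN_LEN <= len(name) <= MAX_LEN

print(is_valid_username("alice"))  # True
print(is_valid_username("a!"))     # False
```

Compare this with an undocumented one-liner using bare numbers: both behave the same, but only the version above is easy to review, test and maintain.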
Software Development Life Cycle (Cont.)
5) Testing:
Before the deployment of software, testing is performed to remove the defects in the developed system. After
the coding phase, a test plan of the system is developed and run on the specified test data.
Testing uncovers various errors introduced in the requirements, design and coding phases. Requirements errors may arise
due to an improper understanding of the customer's needs. Design errors occur if the algorithms are not
implemented properly. Coding errors are mainly logical and syntactical errors.
Testing is performed at different levels: unit testing, integration testing, system testing and acceptance testing.
Unit testing is carried out on individual modules at the code level. After each module is tested, the interfaces among
the various modules are checked with integration testing. System testing ensures that the system satisfies the
requirements specified by the customer. Acceptance testing is done for customer satisfaction.
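A minimal sketch of unit testing at the code level, using Python's standard unittest module; the `add` function stands in for a real module under test:

```python
import unittest

def add(a, b):
    """The unit (module) under test."""
    return a + b

class TestAdd(unittest.TestCase):
    # Unit tests exercise a single module in isolation, at code level;
    # integration and system tests would come later in the hierarchy.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_crossing_zero(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()
```

Each test method checks one behavior, so a failure points directly at the defective unit before integration begins.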
Software Development Life Cycle (Cont.)
6) Deployment:
After acceptance by the customer during the testing phase, deployment of the software begins. The purpose of
software deployment is to make the software available for operational use, and the release of the software starts.
During deployment, all the program files are loaded onto the users' computers. After installation of all the modules of the
system, user training starts. Documentation is also an important activity in software development, as it describes
the system from the user's point of view, detailing how to use or operate the system.
7) Maintenance:
The maintenance phase comes after the software product is released and put into operation through the deployment
process. Software maintenance is performed to adapt the software to changes in a new environment, correct bugs, and
enhance performance by adding new features. The software will age in the near future and enter the retirement
stage. In extreme cases, the software will be reengineered onto a different platform.
SOFTWARE PROCESS MODELS
Software development organizations follow some development process models when developing a software
product. Each process model has a life cycle of software production. The general activities of software life cycle
models are feasibility study, analysis, design, coding, testing, deployment and maintenance. Each life cycle
model has certain advantages, applications and limitations.
Various software development process models have been proposed due to varying nature of software
applications. These models can be differentiated by the feedback and control methods employed during
development. Some of the models are listed below:
• Classical waterfall model
• Iterative waterfall model
• Prototyping model
• Incremental model
• Spiral Model
• Agile process model
• RUP (Rational Unified Process) model.
CLASSICAL WATERFALL MODEL
CLASSICAL WATERFALL MODEL (Cont.)
▪ The Waterfall Model was the first process model to be introduced.
▪ It is also referred to as a linear-sequential life cycle model.
▪ It is very simple to understand and use.
▪ In a waterfall model, each phase must be completed fully before the next phase can begin.
▪ This type of software development model is basically used for projects which are small and
have no uncertain requirements. At the end of each phase, a review takes place to
determine whether the project is on the right path and whether to continue or discard the
project.
▪ In this model software testing starts only after the development is complete.
▪ In waterfall model phases do not overlap.
PROTOTYPING MODEL
▪ The basic idea of the Prototype model is that, instead of freezing the requirements before design
or coding can proceed, a throwaway prototype is built to understand the requirements.
▪ This prototype is developed based on the currently known requirements.
▪ The Prototype model is a software development model. By using the prototype, the client can get
an "actual feel" of the system, since interaction with the prototype enables the client to
better understand the requirements of the desired system.
▪ Prototyping is an attractive idea for complicated and large systems for which there is no
manual process or existing system to help determine the requirements.
▪ The prototypes are usually not complete systems and many of the details are not built in the
prototype. The goal is to provide a system with overall functionality.
SPIRAL MODEL
▪ The Spiral model of software development is shown in the figure given above.
▪ The diagrammatic representation of this model appears like a spiral with many
loops.
▪ The exact number of loops in the spiral is not fixed.
▪ Each loop of the spiral represents a phase of the software process.
▪ For example, the innermost loop might be concerned with feasibility study, the next
loop with requirements specification, the next one with design, and so on.
▪ Each phase in this model is split into four sectors (or quadrants) as shown in the
figure.
▪ The following activities are carried out during each phase of a spiral model.
SPIRAL MODEL (Cont.)
First quadrant (Objective Setting)
During the first quadrant, the objectives of the phase are identified, and the risks associated with
these objectives are examined. A detailed analysis is carried out for each identified project risk.
Second Quadrant (Risk Assessment and Reduction)
Steps are taken to reduce the risks. For example, if there is a risk that the requirements are
inappropriate, a prototype system may be developed.
Third Quadrant (Development and Validation)
Develop and validate the next level of the product after resolving the identified risks.
Fourth Quadrant (Review and Planning)
Review the results achieved so far with the customer and plan the next iteration around the
spiral.
A progressively more complete version of the software gets built with each iteration around the
spiral.
SPIRAL MODEL (Cont.)
The spiral model is called a meta-model since it encompasses all other life cycle
models. Risk handling is inherently built into this model. The spiral model is suitable
for the development of technically challenging software products that are prone to several
kinds of risks. However, this model is much more complex than the other models, which is
probably a factor deterring its use in ordinary projects.
AGILE MODEL
AGILE MODEL (Cont.)
Extreme Programming:
XP is a lightweight, efficient, low-risk, flexible, predictable, scientific, and fun way to
develop software. eXtreme Programming (XP) was conceived and developed to address the
specific needs of software development by small teams in the face of vague and changing
requirements. Extreme Programming is one of the Agile software development
methodologies. It provides values and principles to guide the team behavior. The team is
expected to self-organize. Extreme Programming provides specific core practices, where each
practice is simple and self-complete. A combination of practices produces more complex
and emergent behavior.
AGILE MODEL (Cont.)
Sprint review meeting: At the end of each sprint, the team demonstrates the completed functionality at a sprint
review meeting, during which the team shows what they accomplished during the sprint. Typically, this takes the
form of a demonstration of the new features, but in an informal way; for example, PowerPoint slides are not
allowed. The meeting must not become a task in itself nor a distraction from the process.
Sprint retrospective: Also at the end of each sprint, the team conducts a sprint retrospective, which is a meeting
during which the team (including its Scrum Master and product owner) reflect on how well Scrum is working for
them and what changes they may wish to make for it to work even better.
Scrum is an iterative framework to help teams manage and progress through a complex project. It is most
commonly used in Software Development by teams that implement the Agile Software Development
methodology. However, it is not limited to those groups. Even if your team does not implement Agile Software
Development, you can still benefit from holding regular scrums with your team.
AGILE MODEL (Cont.)
Scrum participants fall into two categories: they are either pigs or they are chickens. Participants at a
scrum are either fully committed to the project or simply participants. Let’s look at who these various roles really
are.
Pig Roles
Actual Team Members: These would be the developers, artists or product managers that comprise the core of
the team. These are the people who are actually doing the daily work to bring the project to fruition. These
members are fully committed to the project.
Scrum Master: The scrum master may or may not be one of the team members. It is important to call
this person out separately here, though, because the scrum master has the primary role of ensuring that the scrum
moves forward without problems and is effective for the team.
Project Owner: This may be a Product Manager who is also part of the team, or it may not. Again, it is
important to call this person’s role out here, as this person represents the voice of the end customer. This person
needs to ensure that the product achieves its product goals and provides the necessary end product to the
customers.
AGILE MODEL (Cont.)
Chicken Roles
Managers: At first glance you might think that managers are naturally pigs. However, in the scrum context,
managers are generally more concerned about the people involved in a project and their well-being. They
are not as focused on the product and its customer-oriented goals. For this reason they are considered
chickens in the scrum context.
Stakeholders: Stakeholders are individuals who will benefit from or have a vested interest in the project, but who do
not necessarily have the authority to dictate direction or to be held accountable for the product. They can be
consulted for opinions and insight; however, the product owner retains final rights in the decision-making
process.
Why are the roles important?
The chicken and pig roles are vital to scrum because they dictate who in the scrum should be an active participant.
Chickens should not be active participants in a scrum meeting. They may attend, but they should be there as
guests only and are not required to share their current statuses. Pigs, on the other hand, need to share their current
progress and any blockers that they are encountering.
The reason that chickens should not be active participants is that they will too easily take over the direction of
the scrum and lead it away from the goals of the entire team. It is the scrum master’s job to ensure that the scrum
stays on target and covers the topics that need to be covered. If someone goes off topic (chicken or pig), it is the
scrum master’s job to bring the group back to the topic at hand.
AGILE MODEL (Cont.)
Advantages of Agile model:
▪ Customer satisfaction by rapid, continuous delivery of useful software.
▪ People and interactions are emphasized rather than process and tools. Customers, developers and testers
constantly interact with each other.
▪ Working software is delivered frequently (weeks rather than months). Face-to-face conversation is the best
form of communication.
▪ Close, daily cooperation between business people and developers. Continuous attention to technical excellence
and good design.
▪ Regular adaptation to changing circumstances.
▪ Even late changes in requirements are welcomed.
The RUP development methodology provides a structured way for companies to envision and create software
programs. Since it provides a specific plan for each step of the development process, it helps prevent resources
from being wasted and reduces unexpected development costs.
RUP PROCESS MODEL (Cont.)
Introduction
• The requirements of a system are the descriptions of the features or services that the
system exhibits within the specified constraints.
• The requirements collected from the customer are organized in some systematic manner
and presented in the formal document called software requirements specification (SRS)
document.
1. Business Requirements:
• Business requirements define the project goal and the expected business benefits for doing
the project.
• The enterprise mission, values, priorities, and strategies must be known to understand the
business requirements that cover higher level data models and scope of the models.
• The business analyst is well versed in understanding the concept of business flow as
well as the process being followed in the organization.
• The business analyst guides the client through the complex process that elicits the requirements
of their business.
2. User Requirements
• User requirements are the high-level abstract statements supplied by the customer, end
users, or other stakeholders.
• These requirements are translated into system requirements keeping in mind user’s views.
• These requirements are generally represented in some natural language with pictorial
representations or tables to understand the requirements.
• For example, in an ATM, user requirements include allowing users to withdraw and deposit cash.
3. System Requirements
• System requirements are the detailed and technical functionalities written in a systematic
manner that are implemented in the business process to achieve the goal of user
requirements.
• These are considered as a contract between the client and the development organization.
• The system requirements consider customer ID, account type, bank name, consortium, PIN,
communication link, hardware, and software. Also, an ATM will service one customer at a
time.
4. Functional Requirements
• Functional requirements are the behavior or functions that the system must support.
• These are the attributes that characterize what the software does to fulfil the needs of the
customer.
5. Non-functional Requirements
• Non-functional requirements specify how a system must behave. These are qualities,
standards, constraints upon the systems services that are specified with respect to a
product, organization, and external environment.
• Non-functional requirements are related to functional requirements, i.e., how efficiently,
by how much volume, how fast, at what quality, how safely, etc., a function is performed
by a particular system.
Business requirements:
A business requirement for an ATM can be stated as to design a computerized banking network
that will enable customers to avail simple bank account services through ATM’s that may be at
remote locations from the bank campus and that need not be owned and operated by the
customer’s bank.
User requirements: allow users to withdraw and deposit cash, check the account balance, obtain a
mini statement, etc.
System requirements consider customer ID, account type, bank name, PIN, communication link,
hardware and software.
1. Requirements Elicitation
• The goals of requirement elicitation are to identify the different parties involved in the
project as sources of requirements, gather requirements from the different parties, write the
requirements in their original form as collected from the parties, and integrate these
requirements.
• The original requirements may be inconsistent, ambiguous, incomplete, and infeasible for
the business.
• Therefore, the system analyst involves domain experts, software engineers, clients, end
users, sponsors, managers, vendors and suppliers, and other stakeholders and follows
standards and guidelines to elicit requirements.
System analyst
• The role of the system analyst is multifunctional, fascinating, and challenging as compared to
other people in the organization.
• A system analyst is the person who interacts with different people, understands the business
needs, and has knowledge of computing.
• The skills of the system analyst include programming experience, problem solving,
interpersonal savvy, IT expertise, and political savvy. He acts as a broker and needs to be a
team player.
• He should also have good communication and decision-making skills.
• He should be able to motivate others and should have sound business awareness.
• Stakeholders are generally unable to express the complete requirement at a time and in an
appropriate language.
• There may be conflicts in the views of stakeholders.
• It may happen that the analyst is not aware of the problem domain and the business
scenario.
• To face these challenges, the analyst follows some techniques to gather the
requirements, which are called fact-finding techniques.
Fact-Finding Techniques
• Interviews
– They are conducted to gather facts, verify and clarify facts, identify requirements,
and determine ideas and opinions.
• Questionnaires
– Questionnaires are used to collect and record large amounts of qualitative as well as
quantitative data from a number of people.
• Joint application development (JAD)
– It is a structured group meeting, just like a workshop, where the customer, designer, and
other experts meet together for the purpose of identifying and understanding the
problems to define requirements.
– The JAD session has a predefined agenda and purpose.
• Observation (site visits)
– The system analyst personally visits the client site organization, observes the
functioning of the system, understands the flow of documents and the users of the
system, etc.
– It helps the system analyst to gain insight into the working of the system instead of
relying on documentation alone.
• Prototyping
– An initial version of the final product, called a prototype, is developed that can give
clues to the client to provide and think about additional requirements. The initial
version is changed and a new prototype is developed.
• Viewpoints
– The collected viewpoints are classified and evaluated under the system’s
circumstances and finally integrated into the normal requirement engineering
process.
• Review records
– Reviewing the existing documents is a good way of gathering requirements.
– The information related to the system is found in the documents such as manual of the
working process, newspapers, magazines, journals, etc.
– The existing forms, reports, and procedures help to understand the guidelines of a
process to identify the business rules, discrepancies, and redundancies.
– This method is beneficial to new employees or consultants working on the project.
2. Requirements Analysis
• In requirement analysis, we analyze stakeholders’ needs, constraints, assumptions,
qualifications, and other information relevant to the proposed system and organize them in a
systematic manner.
• The following analysis techniques are generally used for the modeling of requirements:
1. Structured analysis
2. Data-oriented analysis
3. Object-oriented analysis
4. Prototyping analysis
1. Structured Analysis
• The aim of structured analysis is to understand the work flow of the system that the
user performs in the existing system of the organization.
Structured analysis uses a graphical tool called data flow diagrams (DFD), which represent the
system behavior
• A DFD is a graphical tool that describes the flow of data through a system and the
functions performed by the system.
• It shows the processes that receive input, perform a series of transformations, and
produce the desired outcomes.
• It does not show the control information (time) at which processes are executed.
• DFD is also called a bubble chart or process model or information flow model.
A DFD has four different symbols:
• Process: A process transforms input data into output data. It is represented by a circle
(bubble) with the name of the process written inside it.
• Data flow: represents the movement of data, i.e., leaving one process and entering into
another process. Data flows are represented by arrows, connecting one data
transformation to another.
• Data store: Data store is the data at rest. It is represented by parallel lines.
• Actor: It is the external entity that represents the source or sink (destination of data). It is
represented by a rectangle.
Constructing DFD
• The construction of the DFD starts with the high-level functionality of the system, which
incorporates external inputs and outputs.
• This abstract DFD is further decomposed into smaller functions with the same inputs and
outputs.
• The decomposed DFD is the elaborated and nested DFD with more concrete functionalities.
• Decomposition of DFD at various levels to design a nested DFD is called leveling of DFD.
• Sometimes, dotted lines are used to represent the control flow information. Control flow
helps to decide the sequencing of the operations in the system.
Conventions in constructing DFD
• Data flow diagrams at each level must be numbered for reference purposes, for example,
level 0, level 1 etc.
• Multiple data flows can be shown on a single data flow line. A bidirectional arrow can be
used for input and output flows (if the same data is used), or separate lines can be used for
input and output.
• External agents/actors and data flow are represented using nouns; for example, stock, pin,
university, transporter, etc.
• Processes should be represented with verbs followed by nouns. Longer names must be
connected with underscores (“_”), and names should be short but meaningful, e.g.,
sales_detail.
• Each process and data store must have at least one incoming data flow into it and one
outgoing data flow leaving it.
• DFDs represent the flow of data through the system while flowcharts represent the flow of
control in a program.
• DFDs do not have branching and iteration of data whereas flowcharts have conditional and
repetitive representation of processes.
• Flowcharts do not show the input, output, and storage of data in the system.
• The DFD can be used to show the parallel processes executing in the system while
flowcharts have only one process active at a time.
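As a sketch of these conventions (all process, store, and flow names below are hypothetical illustrations, not from the text), a small DFD can be held as data and checked against the rule that every process and data store has at least one incoming and one outgoing flow:

```python
# Minimal sketch of a level-1 DFD as data; names are hypothetical.
actors = {"customer"}
data_stores = {"account_store"}
processes = {"verify_account", "process_transaction"}

# Each data flow is (source, data_name, destination), drawn as an arrow in a DFD.
flows = [
    ("customer", "pin", "verify_account"),
    ("verify_account", "customer_id", "account_store"),
    ("account_store", "account_detail", "process_transaction"),
    ("process_transaction", "receipt", "customer"),
]

# Convention check: each process and data store must have at least one
# incoming and one outgoing data flow.
for node in processes | data_stores:
    incoming = [f for f in flows if f[2] == node]
    outgoing = [f for f in flows if f[0] == node]
    assert incoming and outgoing, f"{node} violates the DFD convention"
```

A balanced (leveled) DFD keeps these flows matched when a process is decomposed into a lower-level diagram.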
Data Dictionary
• Data flows are used by programmers in designing data structures, and also by
testers to design test cases.
• Such data structures may be primitive or composite. Composite data structures may consist of
several primitive data structures.
• Longer composite data structures are difficult to write on the line of data flow in DFD.
Therefore, data flow and data structures are described in the data dictionary.
• Data dictionary is metadata that describe composite data structures defined in the DFD.
• Data dictionary is written using special symbols, such as “+” for composition, “*”
for repetition, and “|” for selection.
• Structured analysis uses data flow diagrams and data dictionary for requirement analysis.
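As an illustrative sketch of the notation above (all entry names are hypothetical), a few data dictionary entries might look like:

```python
# Hypothetical data dictionary entries using the symbols described above:
# "+" for composition, "*" (with braces) for repetition, "|" for selection.
data_dictionary = {
    "payment_mode": "[cash | card | cheque]",
    "item_line": "item_code + quantity + price",
    "sales_detail": "invoice_no + date + {item_line}* + payment_mode",
}

# The dictionary is metadata: each composite structure is defined in terms
# of other entries or primitive data items.
for name, definition in data_dictionary.items():
    print(f"{name} = {definition}")
```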
– Structured analysis also marks the boundary of the system to know what can be
automated or what will be manual, and prepares the data dictionary of the system.
1. The context diagram (or level 0 DFD) is the high-level representation of the
proposed system after studying the physical DFD of the existing system.
2. The entire system is treated as a single process called bubble, with all its external
entities.
3. That is, the system is represented with main input and output data, process, and
external entities.
5. Logical DFD describes logical rather than physical entities. Processes might
be implemented as programs, data might be considered as file or database, etc.
FIG: CONTEXT DIAGRAM FOR CHEQUES IN BANK
• Level 1 logical DFD includes the main functions supported by the proposed system.
• The equivalent logical DFD for the existing physical DFD of the system is designed
with additional services required by the customer.
• The system will provide equivalent resources to the customer as these are in the non-
automated or existing system.
3. Decomposition of Level 1 DFD
• It follows a top-down analysis and functional decomposition to refine the level 1 DFD into
smaller functional units.
• Functional decomposition of each function is also called exploding the DFD or factoring.
• Each successive level of DFD provides a more detailed view of the system.
• The goal of decomposition is to develop a balanced or leveled DFD. That is, data flows,
data stores, external entities, and processes are matched between levels from the context
diagram to the lowest level.
• For example, the level 2 DFDs for the decomposed process of verification of account:
After constructing the final DFD, boundary conditions are identified, which ensures what
will be automated and what can be done manually or by another machine. For example, in an ATM
system, a user will select options, cancel an operation, and collect the cash and receipt. All these tasks
are done by the user manually.
5. Prepare Data Dictionary and Process Descriptions
• The data flows and processes are defined in the data dictionary with proper structures and
formats.
Limitations of Structured Analysis
– It is difficult to understand the final DFD, and it does not reveal the sequence in which
processes are performed in the system.
– Although the step-by-step approach is suitable for the waterfall model, the system
requirements and user requirements must be frozen early in the life cycle.
– A complete DFD can be constructed only if all the requirements are available at the
beginning of the structured analysis.
2. Data-Oriented Analysis
• A data model is the abstraction of the data structures required by a database rather than the
operations on those data structures.
• Data models ensure that all data objects required by the database are completely and
accurately represented.
• Data models are composed of data entities, associations among different entities, and the
rules which govern operations on the data.
• Data models are accompanied by functional models that describe how the data will be
processed in the system.
• Thus, producing data models and functional models together is called conceptual database
design.
• Entity-relationship modeling (ERM) is represented by the E-R diagram, which represents
data and organizes it in a graphical manner that helps to design the final database.
• An entity or entity class is analogous to the class in object orientation, which represents a
collection of similar objects.
• An independent entity or strong entity is one that does not rely on another for identification.
• A dependent entity or weak entity is one that relies on another for identification.
• Attributes are the properties or descriptors of an entity. e.g., the entity course contains ID,
name, credits, and faculty attributes. Attributes are represented by ellipses.
• Derived attributes are the attributes whose values are derived from other attributes. They
are indicated by a dotted ellipse.
ERM Relationships
• A relationship represents the association between two or more entities.
• It is represented by a diamond box and two connecting lines with the name of relationship
between the entities.
• The number of entities associated with a relationship is called the degree of relationship. It
can be recursive, binary, ternary, or n-ary.
• A recursive relationship occurs when an entity is related to itself.
• A binary relationship associates two entities.
• A ternary relationship involves three entities
• An n-ary relationship involves many entities in it.
ERM Cardinality
• Cardinality defines the number of occurrences of entities which are related to each other.
• It can be one-to-one (1:1), one-to-many (1: M), or many-to-many (N: M).
• Direction of a relationship indicates the originating entity of a binary relationship. The entity
from which a relationship originates is called the parent entity and the entity where the
relationship terminates is called the child entity.
• The figures given below will illustrate recursive, binary, and ternary relationships
with cardinalities.
Fig: Relationships with Cardinalities
• Participation denotes whether the existence of an entity instance is dependent upon the
existence of another related entity instance.
• Specialization represents the “is-a” relationship. It designates entities in two levels, viz., a
high-level entity set and low-level entity set.
• Generalization is the relationship between an entity and one or more refined sets of it. It
combines entity sets that share common features into a higher-level entity set.
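As a hedged illustration of a 1:M binary relationship (the entity and attribute names below are hypothetical, not from the text), the parent and child entities can be sketched in code:

```python
from dataclasses import dataclass

# Hypothetical 1:M relationship "faculty teaches course":
# Faculty is the parent entity (the relationship originates there) and
# Course is the child entity (the relationship terminates there).
@dataclass
class Faculty:
    faculty_id: int
    name: str

@dataclass
class Course:
    course_id: int
    name: str
    credits: int
    taught_by: Faculty  # each course occurrence relates to exactly one faculty

smith = Faculty(1, "Smith")
courses = [
    Course(101, "Software Engineering", 4, taught_by=smith),
    Course(102, "Databases", 3, taught_by=smith),
]
# One Faculty occurrence relates to many Course occurrences (1:M).
```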
6. Perform normalization
3. Object-Oriented Analysis
• Object-oriented approach also combines both data and processes into single entities called
objects.
• The object-oriented approach has two aspects, object-oriented analysis (OOA) and object-
oriented design (OOD).
• The idea behind OOA is to consider the whole system as a single complex entity called
object, breaking down the system into its various objects, and combining the data and
operations in objects.
• OOA increases the understanding of problem domains, promotes a smooth transition from
the analysis phase to the design phase, and provides a more natural way of organizing
specifications.
• There exist various object-oriented approaches for OOA and OOD, e.g. Object
Modeling Technique (OMT).
• Objects are the real-world entities that can be uniquely identified and distinguished
from other objects.
• Each object has certain attributes (state) and operations (behavior).
• Similar objects are grouped together to form a class.
• An instance or individual object has identity and can be distinguished from other objects in
a class.
• In OMT, a class is represented through class diagram, which is indicated by a rectangle with
its three parts: first part for class name, second part for its attributes, and third part for
operations
• Objects are shown through object diagrams, which are also represented with rounded
rectangles.
• For example, the class diagram for an employee is shown in the Figure given below:
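Since the figure itself is not reproduced here, a minimal sketch of such a class, with hypothetical attribute and operation names, mirrors the three parts of an OMT class diagram:

```python
# Hypothetical sketch of an Employee class, mirroring the three parts of a
# class diagram: class name, attributes, and operations.
class Employee:
    def __init__(self, emp_id: int, name: str, salary: float):
        # attributes (second part of the class diagram)
        self.emp_id = emp_id
        self.name = name
        self.salary = salary

    # operations (third part of the class diagram)
    def annual_salary(self) -> float:
        return 12 * self.salary

    def details(self) -> str:
        return f"{self.emp_id}: {self.name}"

# Each instance (individual object) has its own identity:
e1 = Employee(7, "Asha", 50000.0)
e2 = Employee(8, "Ravi", 52000.0)
```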
• Multiplicity defines how many instances of a class may relate to a single instance of another
class. It can be one-to-one, one-to-many, or many-to-many.
• An association can also have attributes and they are represented by a box connected to the
association with a loop.
• Role names are also attached at the end of the association line.
• Sometimes, a qualifier can be attached to the association at the many side of the association
line.
Aggregation, Generalization, and Specialization
• The most general class is at the top, with the more specific object types shown as the sub-
class.
• Generalization and specialization are helpful to describe the systems that should be
implemented using inheritance in an object-oriented language.
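As a hedged sketch of how specialization maps to inheritance (the Account/SavingsAccount names below are hypothetical, not from the text):

```python
# Specialization ("is-a") implemented with inheritance; names are hypothetical.
class Account:                  # high-level (general) entity set
    def __init__(self, number: int, balance: float):
        self.number = number
        self.balance = balance

class SavingsAccount(Account):  # low-level entity set: "is-a" Account
    def __init__(self, number: int, balance: float, rate: float):
        super().__init__(number, balance)
        self.rate = rate

    def add_interest(self) -> None:
        # adds interest at the given rate
        self.balance *= 1 + self.rate

s = SavingsAccount(42, 1000.0, 0.05)
s.add_interest()
```

The subclass inherits the general attributes and adds the specific ones, exactly as the specialization hierarchy places the most general class at the top.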
• OOA in the OMT approach starts with the problem statement of the real world situation
expressed by the customer.
• Based on the problem statement, following three kinds of modeling are performed to
produce the object-oriented analysis model:
– Object modeling
– Dynamic modeling
– Functional modeling
• The object model describes the structural aspects of a system;
• The dynamic model represents the behavioral aspects; and
• The functional model covers the transformation aspects of the system.
I. Object Modeling
• Object modeling begins with the problem statement of the application domain.
• The problem statement describes the services that will be performed by the system.
• The object model captures the static structure of object in the system.
• It describes the identity, attributes, relationships to other objects, and the operations
performed on the objects.
• Object models are constructed using class diagrams after the analysis of the application
domain.
II. Dynamic Modeling
• Once the structure of an object is found, its dynamic behavior and relationships over time in
the system are modeled in a dynamic model.
• During dynamic modeling, state diagrams are modeled, which consist of states and
transitions caused by events.
• The state diagram is constructed after analyzing the system behavior using the event
sequence diagram.
• An event sequence diagram is composed of participating objects drawn as vertical lines and
events passing from one object to another drawn as horizontal lines between the object lines.
• The sequence diagrams as well as state chart diagrams for the ATM system is shown below
diagrams.
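Since the diagrams are not reproduced here, a minimal sketch of such a state model, with hypothetical states and events, can show how transitions caused by events drive the ATM:

```python
# Hypothetical sketch of an ATM state diagram: states and event-driven
# transitions, as modeled during dynamic modeling.
transitions = {
    ("idle", "card_inserted"): "awaiting_pin",
    ("awaiting_pin", "pin_ok"): "menu",
    ("awaiting_pin", "pin_bad"): "idle",
    ("menu", "withdraw_selected"): "dispensing",
    ("dispensing", "cash_taken"): "idle",
}

def run(events, state="idle"):
    # Follow the event sequence through the state diagram.
    for event in events:
        state = transitions[(state, event)]
    return state

final = run(["card_inserted", "pin_ok", "withdraw_selected", "cash_taken"])
# final is "idle": the machine returns to its initial state after a withdrawal.
```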
III. Functional Modeling
• The functional model describes what actions are performed, without specifying how or
when they are performed.
• A functional model involves inputs, transformations, and outcomes. Outputs are generated
on some input after applying certain transformations to it.
• The functional model is represented through data flow diagrams (DFD).
4. Prototyping Analysis
• Prototyping is more suitable where requirements are not known in advance, rapid delivery
of the product is required, and the customer involvement is necessary in software
development.
• It is an iterative approach that begins with the partial development of an executable model
of the system. It is then demonstrated to the customer for the collection of feedback.
• The development of prototype is repeated until it satisfies all the needs and until it is
considered the final system.
• Prototype can be developed either using automated tools, such as Visual Basic, PHP
(Hypertext Pre-processor), 4GL (fourth generation languages), or paper sketching.
• There are two types of prototyping approaches widely used for elicitation, analysis, and
requirement validation:
o Throwaway prototyping
o Evolutionary prototyping
• In throwaway prototyping, various versions of the prototype are developed from the
customer requirements until the customer is satisfied; the prototype is then discarded.
• In evolutionary prototyping, a prototype is built with the focus that the working prototype
will be considered the final system.
• The process begins with the customer requirements.
• The prototypes are produced in several iterations.
• They are shown to the customers for their acceptance and customer suggestions are
incorporated until the final prototype as the final product is constructed.
• This type of prototyping uses the rapid application development (RAD) approach in which
automated tools and CASE tools are used for prototype development.
• The main focus of the problem analysis approaches is to understand the internal behavior of
the software.
• Software requirement specification (SRS) document is a formal document that provides the
complete description of the proposed software, i.e., what the software will do without
describing how it will do so.
• Customers and users rely more on and better understand a written formal document
than some technical specification.
• It provides basis for later stages of software development, viz., design, coding, testing,
standard compliances, delivery, and maintenance.
• It acts as the reference document for the validation and verification of the work
products and final software.
• A high quality SRS reduces development effort (schedule, cost, and resources) because
unclear requirements always lead to unsuccessful projects.
• The main focus for specifying requirements is to cover all the specific levels of details that
will be required for the subsequent phases of software development and these are agreed
upon by the client.
• The specific aspects that the requirement document deals with are as follows:
• Functional requirements
• Performance requirements
• Design constraints (hardware and software)
• External interface requirements
Functional Requirements
– They specify the functions that will accept inputs, perform processing on these inputs,
and produce outputs.
– Functions should include descriptions of the validity checks on the input and output
data, parameters affected by the operation and formulas, and other operations that
must be used to transform the inputs into corresponding outputs.
– For example, an ATM machine should not process transaction if the input amount is
greater than the available balance. Thus, each functional requirement is specified
with valid/invalid inputs and outputs and their influences.
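A minimal sketch of this functional requirement (the function and status names below are hypothetical illustrations):

```python
# Sketch of the ATM functional requirement described above: do not process a
# transaction when the requested amount exceeds the available balance.
def process_withdrawal(balance: float, amount: float):
    if amount <= 0:
        return balance, "invalid amount"        # validity check on input
    if amount > balance:
        return balance, "insufficient funds"    # transaction not processed
    return balance - amount, "dispensed"        # output for a valid input

balance, status = process_withdrawal(500.0, 700.0)
# status is "insufficient funds" and balance is unchanged at 500.0
```

Each functional requirement is specified in this spirit: valid and invalid inputs paired with their expected outputs.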
Performance Requirements
• Static requirements: These are fixed and do not impose constraints on the execution of the
system, e.g., the number of terminals or users to be supported.
• Dynamic requirements: These specify constraints on the execution behaviour of the system,
such as response time and throughput.
• There are certain design constraints that can be imposed by other standards, hardware
limitations, and software constraints at the client’s environment.
• These constraints may be related to the standards compliances and policies, hardware
constraints (e.g., resource limits, operating environment, etc.), and software constraints.
• The hardware constraints limit the execution of software on which they operate.
• Software constraints impose restrictions on software operation and maintenance.
External Interface Requirements
• The external interface specification covers all the interactions of the software with people,
hardware, and other software.
• The working environment of the user and the interface features in the software must be
specified in the SRS.
• The external interface requirement should specify the interface with other software.
• It includes the interface with the operating system and other applications.
Structure of an SRS
• The structure of the SRS describes the organization of the software requirement document.
• The specific requirement subsection of the SRS should contain all of the software
requirements to a level of detail sufficient to enable designers to design a system to satisfy
those requirements, and testers to test that the system satisfies those requirements.
• The SRS document may contain errors and unclear requirements, which may be caused by
human errors.
• The most common errors that occur in the SRS documents are omission, inconsistency,
incorrect fact, and ambiguity.
• Requirements review is one of the most widely used and successful techniques to detect
errors in requirements.
• Requirements review helps to address problems at early stages of software development.
• Although requirement reviews take time, they pay back by minimizing the changes and
alterations in the software.
➢ Requirement Inspection
• It is an effective way for requirement validation that detects defects at early stages.
• The purpose of this technique is to ensure that requirements are good enough for the
product and planning activities.
• It removes the defects before a project starts and during the project execution and
development.
• In this technique, the product manager and the tester select and review the high-priority
requirements, and the least-priority requirements are discarded from the initial specification.
• Writing test cases early may result in some rework if the requirements change, but this
rework cost will be lower than finding and fixing defects in the later stages.
➢ Reading
• Reading is a technique of reviewing requirements in which the reader applies his knowledge
to find the defects.
• There are various reading techniques, such as ad-hoc based, checklist based, etc.
• Detection of the defect depends upon the knowledge and experience of the reviewer.
• Checklist-based reading is one of the commonly used techniques in which a set of questions
is given to the reviewer as a checklist.
➢ Prototyping
• In requirements validation, prototyping helps to identify the errors like missing, incomplete,
and incorrect requirements from the requirements document.
• Prototyping works with developing an executable model of the proposed system. This is
developed in an incremental manner.
• In each increment, defects are identified and discussed with the stakeholders and corrective
actions are taken accordingly.
5. Requirements Management
• Customer requirements may remain unclear even at the final stage of system development, which is
one of the important causes of project failures.
• Therefore, it becomes necessary for project managers to monitor and manage any changes that
may be necessary as the project work advances.
Definition
Software design is the process by which an agent creates a specification of
a software artifact, intended to accomplish goals, using a set of primitive components and subject to
constraints.
A software product is considered a collection of software modules. A module is a part of a
software product which has data and functions together and an interface with other modules to
produce some outcome.
For Example: A banking software system consists of various modules like ATM interface, online
transaction, loan management, deposit, and so on. Each of these modules interacts with the others
to accomplish the banking activities.
Software design is the first step in the SDLC (Software Development Life Cycle) that moves the
concentration from the problem domain to the solution domain. It tries to specify how to fulfil the
requirements mentioned in the SRS.
Architectural Design - The architectural design is the highest abstract version of the system.
It identifies the software as a system with many components interacting with each other. At
this level, the designers get the idea of proposed solution domain. The external design
considers the architectural aspects related to business, technology, major data stores, and
structure of the product.
High Level Design (Physical Design) - The high-level design breaks the ‘single entity-
multiple component’ concept of architectural design(conceptual view) into less-abstracted
view of sub-systems and modules and depicts their interaction with each other. High-level
design focuses on how the system along with all of its components can be implemented in
forms of modules. It recognizes modular structure of each sub-system and their relation and
interaction among each other.
Detailed Design- Detailed design deals with the implementation part of what is seen as a
system and its sub-systems in the previous two designs. It is more detailed towards modules
and their implementations. It defines logical structure of each module and their interfaces to
communicate with other modules.
Characteristics of good software design:
The quality of a software design is characterized by the application domain. For example, real-time
software will focus more on efficiency and reliability issues, whereas academic automation software
will concentrate on understandability and usability issues. A designer always tries to produce a good
design. The desirable characteristics that a good software design should have are as follows:
1. Correctness: A design is said to be correct if it is correctly produced according to the stated
requirements of customers in the SRS. It should fulfil all the functional features, satisfy
constraints, and follow the guidelines. A correct design is more likely to produce accurate
outcomes.
2. Efficiency: It is concerned with performance-related issues; for example, optimal utilization
of resources. The design should consume less memory and processor time. The software design
and its implementation should be as fast as required by the user.
3. Understandability: It should be easy to understand what the module is, how it is connected to
other modules, what data structure is used, and its flow of information. Documentation of a
design can also make it more understandable. An understandable design will make the
maintenance and implementation tasks easier.
4. Maintainability: A difficult and complex design takes a longer time to be understood and
modified. Therefore, the design should be easy to modify, should allow new features to be
added, should not have unnecessary parts, and should be easy to migrate onto another platform.
5. Simplicity: A simple design will improve understandability and maintainability. Achieving a
simple design is not easy, because a design must follow certain steps and criteria. Still, designers
should always aim to “keep it simple” rather than “make it complex”.
6. Completeness: It means that the design includes all the specifications of the SRS. A complete
design may not necessarily be correct, but a correct design must be complete.
7. Verifiability: The design should be able to be verified against the requirements documents
and programs. Interfaces between the modules are necessary for integration and function
prototyping.
8. Portability: The external design mainly focuses on the interface, business, and technology
architectures. These architectures must be able to move a design to another environment. This
may be required when the system is to be migrated onto different platforms.
9. Modularity: A modular design will be easy to understand and modify. Once a modular system
is designed, it allows easy development and repairing of required modules independently.
10. Reliability: This factor depends on the measurement of completeness, consistency, and
robustness in the software design. Nowadays, most people rely on software to always work
and yield correct results. Any unreliable part of the software can cause major dangers.
11. Reusability: The software design should be standard and generic so that it can be used for
mass production of quality products with small cycle time and reduced cost. The object code,
classes, design patterns, packages, etc., are the reusable parts of software.
Design Principles:
Every software process is characterized by basic concepts along with certain practices or methods.
Methods represent the manner through which the concepts are applied. As new technology replaces
older technology, many changes occur in the methods that are used to apply the concepts for the
development of software. However, the fundamental concepts underlying the software design process
remain the same, some of which are described here.
1. Abstraction
Abstraction refers to a powerful design tool, which allows software designers to consider
components at an abstract level, while neglecting the implementation details of the
components. IEEE defines abstraction as 'a view of a problem that extracts the
essential information relevant to a particular purpose and ignores the remainder of the
information.' The concept of abstraction can be used in two ways: as a process and as an entity. As
a process, it refers to a mechanism of hiding irrelevant details and representing only the essential
features of an item so that one can focus on important things at a time. As an entity, it refers to a
model or view of an item.
Each step in the software process is accomplished through various levels of abstraction. At
the highest level, an outline of the solution to the problem is presented whereas at the lower levels, the
solution to the problem is presented in detail. For example, in the requirements analysis phase, a
solution to the problem is presented using the language of problem environment and as we proceed
through the software process, the abstraction level reduces and at the lowest level, source code of the
software is produced.
There are three commonly used abstraction mechanisms in software design namely,
functional abstraction, data abstraction and control abstraction. All these mechanisms allow us to
control the complexity of the design process by proceeding from the abstract design model to concrete
design model in a systematic manner.
1. Functional abstraction: This involves the use of parameterized subprograms. Functional abstraction
can be generalized as collections of subprograms referred to as 'groups'. Within these groups there
exist routines which may be visible or hidden. Visible routines can be used within the containing
groups as well as within other groups, whereas hidden routines are hidden from other groups and can
be used within the containing group only.
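As a sketch, functional abstraction might look like this in Python: a module acts as the "group", the parameterized subprogram is visible, and a helper routine stays hidden by naming convention. All names here are illustrative, not taken from the text.

```python
# A "group" (module) with one visible parameterized routine and one hidden one.

def _round_to_cents(amount):
    """Hidden routine: usable only within the containing group (by convention)."""
    return round(amount, 2)

def apply_discount(price, rate):
    """Visible routine: callers supply parameters; the rounding detail
    stays abstracted away inside the group."""
    return _round_to_cents(price * (1 - rate))

print(apply_discount(100.0, 0.15))   # the caller never touches _round_to_cents
```

Callers depend only on the visible routine's parameters, so the hidden helper can change freely.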
2. Data abstraction: This involves specifying data that describes a data object. For example, the data
object window encompasses a set of attributes (window type, window dimension) that describe the
window object clearly. In this abstraction mechanism, representation and manipulation details are
ignored.
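A minimal sketch of the window example above: clients see the describing attributes, but the representation and manipulation details are hidden. The internal dictionary is an illustrative storage choice, not from the text.

```python
# Data abstraction: the window object exposes its describing attributes
# while hiding how they are actually represented.

class Window:
    def __init__(self, window_type, width, height):
        # representation detail (a private dict) is hidden from clients
        self._state = {"type": window_type, "width": width, "height": height}

    def window_type(self):
        return self._state["type"]

    def dimension(self):
        """Clients get the dimension without knowing the internal layout."""
        return (self._state["width"], self._state["height"])

w = Window("dialog", 640, 480)
print(w.window_type(), w.dimension())
```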
3. Control abstraction: This states the desired effect, without stating the exact mechanism of control.
For example, if and while statements in programming languages (like C and C++) are abstractions of
machine code implementations, which involve conditional instructions. In the architectural design
level, this abstraction mechanism permits specifications of sequential subprogram and exception
handlers without the concern for exact details of implementation.
2. Architecture
Software architecture refers to the structure of the system, which is composed of various
components of a program/ system, the attributes (properties) of those components and the relationship
amongst them. The software architecture enables the software engineers to analyze the software
design efficiently. In addition, it also helps them in decision-making and handling risks. The software
architecture does the following.
Provides an insight to all the interested stakeholders that enable them to communicate with each
other
Highlights early design decisions, which have great impact on the software engineering activities
(like coding and testing) that follow the design phase
Creates intellectual models of how the system is organized into components and how these
components interact with each other.
Information hiding is of immense use when modifications are required during the testing and
maintenance phase. Some of the advantages associated with information hiding are listed below.
1. Leads to low coupling
2. Emphasizes communication through controlled interfaces
3. Decreases the probability of adverse effects
4. Restricts the effects of changes in one component on others
5. Results in higher quality software.
5. Stepwise Refinement
Stepwise refinement is a top-down design strategy used for decomposing a system from a
high level of abstraction into a more detailed level (lower level) of abstraction. At the highest level of
abstraction, function or information is defined conceptually without providing any information about
the internal workings of the function or internal structure of the data. As we proceed towards the
lower levels of abstraction, more and more details are available.
Software designers start the stepwise refinement process by creating a sequence of
compositions for the system being designed. Each composition is more detailed than the previous one
and contains more components and interactions. The earlier compositions represent the significant
interactions within the system, while the later compositions show in detail how these interactions are
achieved.
To have a clear understanding of the concept, let us consider an example of stepwise
refinement. Every computer program comprises input, process, and output.
1. INPUT
2. PROCESS
3. OUTPUT
This is the first step in refinement. The input phase can be refined further as given here.
1. INPUT
o Get user's name through a prompt.
o Get user's grade through a prompt.
o While (invalid grade)
Ask again:
2. PROCESS
3. OUTPUT
Note: Stepwise refinement can also be performed for PROCESS and OUTPUT phase.
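The refined INPUT phase above can be written out as a runnable sketch. Here get_input is a hypothetical stand-in for an interactive prompt (so the example runs without a keyboard), and the set of valid grades is an assumption.

```python
VALID_GRADES = {"A", "B", "C", "D", "F"}   # assumed grading scheme

def read_user(get_input):
    name = get_input("Enter name: ")       # Get user's name through a prompt.
    grade = get_input("Enter grade: ")     # Get user's grade through a prompt.
    while grade not in VALID_GRADES:       # While (invalid grade)
        grade = get_input("Ask again: ")   #     Ask again.
    return name, grade

# Simulated user answers: the first grade is invalid, the second is valid.
answers = iter(["Alice", "Z", "B"])
print(read_user(lambda prompt: next(answers)))   # ('Alice', 'B')
```

The PROCESS and OUTPUT phases would be refined into similar small steps.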
6. Refactoring
Refactoring is an important design activity that reduces the complexity of module design
keeping its behaviour or function unchanged. Refactoring can be defined as a process of modifying a
software system to improve the internal structure of design without changing its external behavior.
During the refactoring process, the existing design is checked for any type of flaws like redundancy,
poorly constructed algorithms and data structures, etc., in order to improve the design. For example, a
design model might yield a component which exhibits low cohesion (like a component performs four
functions that have a limited relationship with one another). Software designers may decide to refactor
the component into four different components, each exhibiting high cohesion. This leads to easier
integration, testing, and maintenance of the software components.
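A small sketch of such a refactoring, with illustrative names: one component that mixes barely related responsibilities is split into cohesive functions, while the external behaviour (the result) stays unchanged.

```python
# Before: one component with two unrelated responsibilities (low cohesion).
def report_util(order):
    total = sum(p * q for p, q in order["items"])   # pricing
    label = order["name"].strip().title()           # formatting
    return f"{label}: {total}"

# After: each responsibility lives in its own highly cohesive function.
def order_total(items):
    return sum(p * q for p, q in items)

def customer_label(name):
    return name.strip().title()

def report(order):
    return f"{customer_label(order['name'])}: {order_total(order['items'])}"

order = {"name": " ada lovelace ", "items": [(10, 2), (5, 1)]}
assert report_util(order) == report(order)   # external behaviour preserved
print(report(order))
```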
7. Structural Partitioning
When the architectural style of a design follows a hierarchical nature, the structure of the
program can be partitioned either horizontally or vertically. In horizontal partitioning, the control
modules are used to communicate between functions and to execute the functions. Structural
partitioning yields software that is easier to test, maintain, and extend.
Modular Design:
A modular design focuses on minimizing the interconnection b/w modules. In a modular
design, several independent and executable modules are composed together to construct an executable
application program. The programming language support, interfaces, and the information hiding
principles ensure modular system design. There are various modularization criteria to measure the
modularity of a system. The most common criteria are functional independence, levels of
abstraction, information hiding, functional diagrams (such as DFDs), modular programming languages,
coupling, and cohesion. An effective modular system has low coupling and high cohesion, so
coupling and cohesion are the most popular criteria used to measure the modularity of a system.
1. Coupling:
Coupling between two modules is a measure of the degree of interdependence or interaction
between the two modules. A module having high cohesion and low coupling is said to be functionally
independent of other modules. If two modules interchange large amounts of data, then they are highly
interdependent. The degree of coupling between two modules depends on their interface complexity.
The interface complexity is basically determined by the number and types of parameters that are
interchanged while invoking the functions of the module. Modules can be tightly or loosely
coupled based on these dependencies.
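A tiny illustration of the coupling scale, with illustrative names: a function that reads shared global state is tightly coupled to that state, while a function that receives everything through its parameter interface is loosely coupled.

```python
shared_rate = 0.25                     # global state shared across functions

def taxed_tight(amount):
    # tightly coupled: depends on (and is broken by changes to) shared_rate
    return amount * (1 + shared_rate)

def taxed_loose(amount, rate):
    # loosely coupled: everything it needs arrives via its simple interface
    return amount * (1 + rate)

print(taxed_tight(100), taxed_loose(100, 0.25))
```

The loosely coupled version can be understood, tested, and reused without knowing any global state.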
Function-Oriented Approach:
/* Global data (system state) accessible by various functions */
BOOL detector_status[MAX_ROOMS];
int detector_locs[MAX_ROOMS];
BOOL alarm_status[MAX_ROOMS]; /* alarm activated when status is set */
int alarm_locs[MAX_ROOMS]; /* room number where alarm is located */
int neighbor_alarm[MAX_ROOMS][10]; /* each detector has at most 10 neighboring locations */
Object-Oriented Approach:
class detector
attributes:
status, location, neighbours
operations: create, sense_status, get_location, find_neighbors
class alarm
attributes: location, status
operations: create, ring_alarm, get_location, reset_alarm
In the object-oriented program, an appropriate number of instances of the classes detector and alarm
should be created. If the function-oriented and the object-oriented programs are examined, it can be
seen that in the function-oriented program, the system state is centralized and several functions
accessing this central data are defined. In case of the object-oriented program, the state information is
distributed among various sensor and alarm objects.
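The pseudocode classes above can be fleshed out as a runnable sketch. Attribute and operation names follow the text; how an alarm rings or resets is an assumed detail.

```python
class Alarm:
    def __init__(self, location):            # create
        self.location = location
        self.status = False                  # alarm activated when status is set

    def ring_alarm(self):
        self.status = True

    def reset_alarm(self):
        self.status = False

    def get_location(self):
        return self.location

class Detector:
    def __init__(self, location, neighbours):   # create
        self.location = location
        self.neighbours = neighbours            # nearby Alarm objects
        self.status = False

    def sense_status(self, smoke_detected):
        # state lives inside each object, not in central global tables
        self.status = smoke_detected
        if smoke_detected:
            for alarm in self.neighbours:
                alarm.ring_alarm()

a1, a2 = Alarm(101), Alarm(102)
d = Detector(101, [a1, a2])
d.sense_status(True)
print(a1.status, a2.status)    # both neighbouring alarms now ring
```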
It is not necessary that an object-oriented design be implemented using an object-oriented
language only. However, object-oriented languages such as C++ support the definition of all the
basic mechanisms of class, inheritance, objects, methods, etc., and also support all the key
object-oriented concepts that we have just discussed. Thus, an object-oriented language facilitates
the implementation of an OOD. However, an OOD can as well be implemented using a conventional
procedural language, though it may require more effort to implement an OOD using a procedural
language as compared to the effort required for implementing the same design using an
object-oriented language.
Even though object-oriented and function-oriented approaches are remarkably different
approaches to software design, yet they do not replace each other but complement each other in some
sense. For example, usually one applies the top-down function-oriented techniques to design the
internal methods of a class, once the classes are identified. In this case, though outwardly the system
appears to have been developed in an object-oriented fashion, but inside each class there may be a
small hierarchy of functions designed in a top-down manner.
Structured Design
The aim of structured design is to transform the results of the structured
analysis (i.e. a DFD representation) into a structure chart. Structured design provides two strategies to
guide the transformation of a DFD into a structure chart.
• Transform analysis
• Transaction analysis
Normally, one starts with the level 1 DFD, transforms it into module representation using
either the transform or the transaction analysis and then proceeds towards the lower-level DFDs. At
each level of transformation, it is important to first determine whether the transform or the transaction
analysis is applicable to a particular DFD.
General Steps Involved in structured design are:
The type of data flow is established
In this step the nature of the data flowing between processes is defined.
Determine flow boundaries (switch points)
This includes determining whether the boundary is an input boundary, an output boundary, a
hub boundary, or an action boundary of a process.
Map the abstract DFD onto a particular program structure
Determine if the program structure is a transformational structure or transactional structure.
Define a valid control structure
This step is also known as "first-level" factoring. It depends on whether transformational or
transactional models are used.
The control structure is either "Call-and-return" for transformational model or "Call-and-
act" for transactional model.
Refine (tune) the resulting structure
This step is also known as "second-level factoring". It maps Input/Output flow
bounded parts of DFD.
Supplement and tune the final architectural structure
Apply basic module independence concepts (i.e. Explode or implode modules
according to coupling/cohesion requirements) to obtain an easier implementation.
Structure Chart
A structure chart represents the software architecture, i.e. the various modules making up the
system, the dependency (which module calls which other modules), and the parameters that are
passed among the different modules. Hence, the structure chart representation can be easily
implemented using some programming language. Since the main focus in a structure chart
representation is on the module structure of the software and the interactions among different
modules, the procedural aspects (e.g. how a particular functionality is achieved) are not represented.
The basic building blocks which are used to design structure charts are the following:
• Rectangular boxes: Represents a module.
• Module invocation arrows: Control is passed from one module to another module in the direction of
the connecting arrow.
• Data flow arrows: Arrows are annotated with data name; named data passes from one module to
another module in the direction of the arrow.
• Library modules: Represented by a rectangle with double edges.
• Selection: Represented by a diamond symbol.
• Repetition: Represented by a loop around the control flow arrow.
Identifying the highest level input and output transforms requires experience and skill. One
possible approach is to trace the inputs until a bubble is found whose output cannot be deduced from
its inputs alone. Processes which validate input or add information to it are not central transforms,
nor are processes which sort input or filter data from it. The first-level structure chart is produced by
representing each input and output unit as a box and each central transform as a single box.
In the third step of transform analysis, the structure chart is refined by adding sub-functions
required by each of the high-level functional components. Many levels of functional components may
be added. This process of breaking functional components into subcomponents is called factoring.
Factoring includes adding read and write modules, error-handling modules, initialization and
termination process, identifying customer modules, etc. The factoring process is continued until all
bubbles in the DFD are represented in the structure chart.
The structure chart for the supermarket prize scheme software is shown in the figure given below:
Object-Oriented Design
Object–Oriented Design (OOD) involves implementation of the conceptual model produced
during object-oriented analysis. In OOD, concepts in the analysis model, which are
technology−independent, are mapped onto implementing classes, constraints are identified and
interfaces are designed, resulting in a model for the solution domain, i.e., a detailed description
of how the system is to be built on concrete technologies.
The implementation details generally include:
Grady Booch has defined object-oriented design as “a method of design encompassing the
process of object-oriented decomposition and a notation for depicting logical and physical as well as
static and dynamic models of the system under design”.
DRY
“Don’t Repeat Yourself”. Try to avoid any duplicates; instead you put them into a single part
of the system, or a method.
Imagine that you have copied and pasted blocks of code in different parts in your system. What
if you changed any of them? You will need to change and check the logic of every part that has the
same block of code.
Definitely you don’t want to do that. This is an extra cost that you don’t need to pay; all
you need is a single source of truth in your design, code, documentation, and even in
the database schema.
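A small sketch of DRY in code, with illustrative names: the duplicated rule on top must be fixed in every copy, while the DRY version keeps one single source of truth.

```python
# Duplicated: the same normalization rule pasted into two places.
def register(email):
    return email.strip().lower()

def login(email):
    return email.strip().lower()     # a copy -- changing one copy is not enough

# DRY: one single source of truth for the rule.
def normalize_email(email):
    return email.strip().lower()

def register_dry(email):
    return normalize_email(email)

def login_dry(email):
    return normalize_email(email)

print(register_dry("  Ada@Example.COM "))
```

If the rule changes (say, trimming internal spaces too), only normalize_email needs editing.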
Expect to Change
A designer should be able to anticipate which requirements or functionalities may be added in the
future. Based on that idea, the design has to be developed so that even if functionalities are added
later, the existing design is not disturbed. For that purpose we follow the OO principles called SOLID.
SOLID
O—Open/Closed Principle
Software entities (classes, modules, functions, etc.) should be open for extension, but closed
for modification.
Whenever you need to add additional behaviors, or methods, you don’t have to modify the
existing one, instead, you start writing new methods.
Because, what if you changed the behavior of an object that some other parts of the system
depend on? You would then need to change every single part of the software that has a dependency
on that object, re-check the logic, and do some extra testing.
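A sketch of the Open/Closed Principle with illustrative names: when a new export format is required, we add a new subclass instead of modifying the existing classes or the code that uses them.

```python
class Exporter:
    def export(self, rows):
        raise NotImplementedError

class CsvExporter(Exporter):
    def export(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)

# New requirement later: tab-separated output. We EXTEND with a new class;
# CsvExporter and run_export stay untouched -- closed for modification.
class TsvExporter(Exporter):
    def export(self, rows):
        return "\n".join("\t".join(map(str, r)) for r in rows)

def run_export(exporter, rows):
    return exporter.export(rows)   # works for every current and future subclass

print(run_export(TsvExporter(), [(1, 2), (3, 4)]))
```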
The object model visualizes the elements in a software application in terms of objects. In this
chapter, we will look into the basic concepts and terminologies of object–oriented systems.
A set of attributes for the objects that are to be instantiated from the class. Generally,
different objects of a class have some difference in the values of their attributes. Attributes are
often referred to as class data.
A set of operations that portray the behavior of the objects of the class. Operations are also
referred to as functions or methods.
Example
Let us consider a simple class, Circle, that represents the geometrical figure circle in a two–
dimensional space. The attributes of this class can be identified as x-coord and y-coord (the
coordinates of the center) and radius.
Data Hiding
Typically, a class is designed such that its data (attributes) can be accessed only by its class
methods and insulated from direct outside access. This process of insulating an object’s data is called
data hiding or information hiding.
Example
In the class Circle, data hiding can be incorporated by making attributes invisible from outside
the class and adding two more methods to the class for accessing class data, namely:
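One way such data hiding might look in code, assuming illustrative accessor names (getRadius/setRadius are my assumption, not taken from the text): the attributes become invisible from outside, and access goes only through class methods.

```python
class Circle:
    def __init__(self, x, y, radius):
        self.__x = x              # double underscore: name-mangled,
        self.__y = y              # insulated from direct outside access
        self.__radius = radius

    def getRadius(self):
        return self.__radius

    def setRadius(self, radius):
        if radius > 0:            # the class guards its own data
            self.__radius = radius

c = Circle(2, 3, 5)
c.setRadius(-1)                   # rejected: the object's data stays consistent
print(c.getRadius())              # 5
```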
5. Message Passing
Any application requires a number of objects interacting in a harmonious manner. Objects in
a system may communicate with each other using message passing. Suppose a system has two
objects: obj1 and obj2. The object obj1 sends a message to object obj2, if obj1 wants obj2 to execute
one of its methods.
The features of message passing are:
Message passing between two objects is generally unidirectional.
Message passing enables all interactions between objects.
Message passing essentially involves invoking class methods.
Objects in different processes can be involved in message passing.
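The obj1/obj2 scenario above can be sketched as follows; the class and method names are illustrative.

```python
class Printer:                 # plays the role of obj2
    def show(self, text):
        return f"[printed] {text}"

class Document:                # plays the role of obj1
    def __init__(self, printer):
        self.printer = printer

    def publish(self, text):
        # obj1 sends a message to obj2 by invoking one of obj2's methods
        return self.printer.show(text)

doc = Document(Printer())      # the objects collaborate via message passing
print(doc.publish("hello"))    # [printed] hello
```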
6. Inheritance
Inheritance is the mechanism that permits new classes to be created out of existing classes by
extending and refining its capabilities. The existing classes are called the base classes/parent
classes/super-classes, and the new classes are called the derived classes/child classes/subclasses. The
subclass can inherit or derive the attributes and methods of the super-class(es) provided that the
super-class allows so. Besides, the subclass may add its own attributes and methods and may modify
any of the super-class methods. Inheritance defines an “is – a” relationship.
Example
From a class Mammal, a number of classes can be derived such as Human, Cat, Dog, Cow, etc.
Humans, cats, dogs, and cows all have the distinct characteristics of mammals. In addition, each has
its own particular characteristics. It can be said that a cow “is – a” mammal.
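The Mammal example can be sketched in code: Cat derives from Mammal, inherits its behaviour, and adds its own. The method names are illustrative.

```python
class Mammal:
    def is_warm_blooded(self):
        return True                 # common characteristic of all mammals

class Cat(Mammal):                  # Cat "is-a" Mammal (single inheritance)
    def sound(self):
        return "meow"               # particular characteristic of cats

c = Cat()
print(c.is_warm_blooded(), c.sound())   # inherited behaviour + its own
```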
Types of Inheritance:
Single Inheritance: A subclass derives from a single super-class.
Multiple Inheritance: A subclass derives from more than one super-classes.
Multilevel Inheritance: A subclass derives from a super-class which in turn is derived from
another class and so on.
Hierarchical Inheritance: A class has a number of subclasses each of which may have
subsequent subclasses, continuing for a number of levels, so as to form a tree structure.
Hybrid Inheritance: A combination of multiple and multilevel inheritance so as to form a
lattice structure.
The following figure depicts the examples of different types of inheritance.
7. Polymorphism
Polymorphism is originally a Greek word that means the ability to take multiple forms. In object-
oriented paradigm, polymorphism implies using operations in different ways, depending upon the
instance they are operating upon. Polymorphism allows objects with different internal structures to
have a common external interface. Polymorphism is particularly effective while implementing
inheritance.
Example
Let us consider two classes, Circle and Square, each with a method findArea(). Though the
name and purpose of the methods in the classes are same, the internal implementation, i.e., the
procedure of calculating area is different for each class. When an object of class Circle invokes its
findArea() method, the operation finds the area of the circle without any conflict with the findArea()
method of the Square class.
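The Circle/Square example above, written out: both classes offer findArea() through the same external interface, while each internal implementation differs.

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def findArea(self):
        return math.pi * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def findArea(self):
        return self.side ** 2

# The same call works on either object: the appropriate method is selected
# by the object's class, with no conflict between the two findArea()s.
for shape in (Circle(1), Square(3)):
    print(shape.findArea())
```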
8. Generalization and Specialization
Generalization and specialization represent a hierarchy of relationships between classes, where
subclasses inherit from super-classes.
Generalization
In the generalization process, the common characteristics of classes are combined to form a
class in a higher level of hierarchy, i.e., subclasses are combined to form a generalized super-class. It
represents an “is – a – kind – of” relationship. For example, “car is a kind of land vehicle”, or “ship
is a kind of water vehicle”.
Specialization
Specialization is the reverse process of generalization. Here, the distinguishing features of
groups of objects are used to form specialized classes from existing classes. It can be said that the
subclasses are the specialized versions of the super-class.
The following figure shows an example of generalization and specialization.
Relationships:
Links and Association
1. Link
A link represents a connection through which an object collaborates with other objects.
Rumbaugh has defined it as “a physical or conceptual connection between objects”. Through a link,
one object may invoke the methods or navigate through another object. A link depicts the
relationship between two or more objects.
2. Association
Association is a group of links having common structure and common behavior. Association
depicts the relationship between objects of one or more classes. A link can be defined as an instance
of an association.
Degree of an Association
Degree of an association denotes the number of classes involved in a connection. Degree may be
unary, binary, or ternary.
A unary relationship connects objects of the same class.
A binary relationship connects objects of two classes.
A ternary relationship connects objects of three or more classes.
Example
In the relationship, “a car has–a motor”, car is the whole object or the aggregate, and the motor is
a “part–of” the car. Aggregation may denote:
Physical containment: Example, a computer is composed of monitor, CPU, mouse,
keyboard, and so on.
Conceptual containment: Example, shareholder has–a share.
The Unified Modelling Language (UML) is a graphical language for OOAD that gives a
standard way to write a software system’s blueprint. It helps to visualize, specify, construct, and
document the artifacts of an object-oriented system. It is used to depict the structures and the
relationships in a complex system.
Brief History
It was developed in 1990s as an amalgamation of several techniques, prominently OOAD
technique by Grady Booch, OMT (Object Modeling Technique) by James Rumbaugh, and OOSE
(Object Oriented Software Engineering) by Ivar Jacobson. UML attempted to standardize semantic
models, syntactic notations, and diagrams of OOAD.
Model: Model is a simplified, complete, and consistent abstraction of a system, created for better
understanding of the system.
Things
Relationships
Diagrams
(a) Things:
There are four kinds of things in UML, namely:
Structural Things: These are the nouns of the UML models representing the static elements
that may be either physical or conceptual. The structural things are class, interface,
collaboration, use case, active class, components, and nodes.
Behavioral Things: These are the verbs of the UML models representing the dynamic
behavior over time and space. The two types of behavioral things are interaction and state
machine.
Grouping Things: They comprise the organizational parts of the UML models. There is only
one kind of grouping thing, i.e., package.
Annotational Things: These are the explanations in the UML models representing the
comments applied to describe elements.
(b) Relationships:
Relationships are the connection between things. The four types of relationships that can be
represented in UML are:
Dependency: This is a semantic relationship between two things such that a change in one
thing brings a change in the other. The former is the independent thing, while the latter is the
dependent thing.
Association: This is a structural relationship that represents a group of links having common
structure and common behavior.
Generalization: This represents a generalization/specialization relationship in which
subclasses inherit structure and behavior from super-classes.
Realization: This is a semantic relationship between two or more classifiers such that one
classifier lays down a contract that the other classifiers ensure to abide by.
(c) Diagrams: A diagram is a graphical representation of a system. It comprises a group of
elements, generally in the form of a graph. UML includes nine diagrams in all, namely:
Class Diagram
Object Diagram
Use Case Diagram
Sequence Diagram
Collaboration Diagram
State Chart Diagram
Activity Diagram
Component Diagram
Deployment Diagram
Rules
UML has a number of rules so that the models are semantically self-consistent and related to
other models in the system harmoniously. UML has semantic rules for the following:
Names
Scope
Visibility
Integrity
Execution
Common Mechanisms
UML has four common mechanisms:
Specifications
Adornments
Common Divisions
Extensibility Mechanisms
Specifications
In UML, behind each graphical notation, there is a textual statement denoting the syntax and
semantics. These are the specifications. The specifications provide a semantic backplane that
contains all the parts of a system and the relationship among the different paths.
Adornments
Each element in UML has a unique graphical notation. Besides, there are notations to represent the
important aspects of an element like name, scope, visibility, etc.
Common Divisions
Object-oriented systems can be divided in many ways. The two common ways of division are the
division into classifiers and instances, and the division into interface and implementation.
Extensibility Mechanisms
UML is an open-ended language. It is possible to extend the capabilities of UML in a controlled
manner to suit the requirements of a system. The extensibility mechanisms are:
Stereotypes: It extends the vocabulary of the UML, through which new building blocks can
be created out of existing ones.
Tagged Values: It extends the properties of UML building blocks.
Constraints: It extends the semantics of UML building blocks.
Class
A class is represented by a rectangle having three sections: the top section contains the name of the class, the middle section contains the class attributes, and the bottom section contains the operations of the class.
Example: Let us consider the Circle class introduced earlier. The attributes of Circle are x-coord, y-
coord, and radius. The operations are findArea(), findCircumference(), and scale(). Let us assume
that x-coord and y-coord are private data members, radius is a protected data member, and the
member functions are public. The following figure gives the diagrammatic representation of the
class.
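The Circle class described above can be sketched in C++. This is a hedged rendering: the attribute and operation names follow the text, but the method bodies are assumptions, since the diagram only names the operations.

```cpp
#include <cmath>

// Sketch of the Circle class from the class-diagram example.
// Visibility follows the text: x-coord and y-coord are private,
// radius is protected, and the member functions are public.
class Circle {
private:
    double xCoord;   // x-coord in the diagram
    double yCoord;   // y-coord in the diagram
protected:
    double radius;
public:
    Circle(double x, double y, double r) : xCoord(x), yCoord(y), radius(r) {}
    double findArea() const { return 3.141592653589793 * radius * radius; }
    double findCircumference() const { return 2 * 3.141592653589793 * radius; }
    void scale(double factor) { radius *= factor; }   // resizes the circle
};
```

Note how the three sections of the class rectangle (name, attributes, operations) map directly onto the class name, data members, and member functions.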
Object
An object is represented as a rectangle with two sections:
The top section contains the name of the object with the name of the class or package of
which it is an instance of. The name takes the following forms:
o object-name : class-name
o object-name : class-name :: package-name
o class-name : in case of anonymous objects
The bottom section represents the values of the attributes. It takes the form attribute-name =
value.
Sometimes objects are represented using rounded rectangles.
Example: Let us consider an object of the class Circle named c1. We assume that the center of c1 is
at (2, 3) and the radius of c1 is 5. The following figure depicts the object.
Component
A component is a physical and replaceable part of the system that conforms to and provides
the realization of a set of interfaces. It represents the physical packaging of elements like classes and
interfaces.
Notation: In UML diagrams, a component is represented by a rectangle with tabs as shown in the
figure below.
Interface
An interface is a collection of methods of a class or component. It specifies the set of services
that may be provided by the class or component.
Notation: Generally, an interface is drawn as a circle together with its name. An interface is almost
always attached to the class or component that realizes it. The following figure gives the notation of
an interface.
Package
A package is an organized group of elements. A package may contain structural things like
classes, components, and other packages in it.
Relationship
The notations for the different types of relationships are as follows:
Usually, elements in a relationship play specific roles in the relationship. A role name
signifies the behavior of an element participating in a certain context.
Example: The following figures show examples of different relationships between classes. The first
figure shows an association between two classes, Department and Employee, wherein a department
may have a number of employees working in it. Worker is the role name. The ‘1’ alongside
Department and ‘*’ alongside Employee depict that the cardinality ratio is one–to–many. The second
figure portrays the aggregation relationship: a University is the “whole” comprising many Departments.
A bank has many branches. In each zone, one branch is designated as the zonal head office that
supervises the other branches in that zone. Each branch can have multiple accounts and loans. An
account may be either a savings account or a current account. A customer may open both a savings
account and a current account. However, a customer must not have more than one savings account or
current account. A customer may also procure loans from the bank.
From the class Account, two classes inherit, namely, Savings Account and Current Account.
A Customer can have one Current Account : association, one–to–one
A Customer can have one Savings Account : association, one–to–one
A Branch “has–a” number of Loans : aggregation, one–to–many
A Customer can take many loans : association, one–to–many
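The generalization and association relationships listed above can be hedged into code. The class and member names below are illustrative, not taken from the original figure:

```cpp
#include <memory>
#include <vector>

// Generalization: Savings Account and Current Account inherit from Account.
class Account {
public:
    virtual ~Account() = default;
};
class SavingsAccount : public Account {};
class CurrentAccount : public Account {};

class Loan {
public:
    explicit Loan(double amt) : amount(amt) {}
    double amount;
};

// Associations: at most one savings and one current account per customer
// (one-to-one), and any number of loans (one-to-many).
class Customer {
public:
    std::unique_ptr<SavingsAccount> savings;   // 0..1 savings account
    std::unique_ptr<CurrentAccount> current;   // 0..1 current account
    std::vector<Loan> loans;                   // 0..* loans
};
```

The one-to-one constraints from the text are modeled here as single pointers, while the one-to-many loan association becomes a container.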
2. Object Diagram
An object diagram models a group of objects and their links at a point of time. It shows the
instances of the things in a class diagram. Object diagram is the static part of an interaction diagram.
Example: The following figure shows an object diagram of a portion of the class diagram of the
Banking System.
3. Component Diagram
Component diagrams show the organization and dependencies among a group of components.
Component diagrams comprise:
Components
Interfaces
Relationships
Packages and Subsystems (optional)
Component diagrams are used for:
The following figure shows a component diagram to model a system’s source code that is
developed using C++. It shows four source code files, namely, myheader.h, otherheader.h,
priority.cpp, and other.cpp. Two versions of myheader.h are shown, tracing from the recent version
to its ancestor. The file priority.cpp has compilation dependency on other.cpp. The file other.cpp has
compilation dependency on otherheader.h.
4. Deployment Diagram
A deployment diagram emphasizes the configuration of runtime processing nodes and the
components that live on them. Deployment diagrams are commonly composed of nodes and the
dependencies or associations between the nodes.
(b) Actor
An actor represents the roles that the users of the use cases play. An actor may be a person
(e.g. student, customer), a device (e.g. workstation), or another system (e.g. bank, institution).
The following figure shows the notations of an actor named Student and a use case called
Generate Performance Report.
Use cases
Actors
Relationships like dependency, generalization, and association
Use case diagrams are used:
To model the context of a system by enclosing all the activities of the system within a rectangle
and focusing on the actors outside the system that interact with it.
To model the requirements of a system from the outside point of view.
Example
Let us consider an Automated Trading House System. We assume the following features of the
system:
The trading house has transactions with two types of customers, individual customers and
corporate customers.
Once the customer places an order, it is processed by the sales department and the customer
is given the bill.
The system allows the manager to manage customer accounts and answer any queries posted
by the customer.
Interaction Diagrams
Interaction diagrams depict interactions of objects and their relationships. They also include the
messages passed between them. There are two types of interaction diagrams:
Sequence Diagrams
Collaboration Diagrams
Interaction diagrams are used for modeling:
The control flow by time ordering using sequence diagrams.
The control flow of organization using collaboration diagrams.
6. Sequence Diagrams
Sequence diagrams are interaction diagrams that illustrate the ordering of messages
according to time.
Notations: These diagrams are in the form of two-dimensional charts. The objects that initiate the
interaction are placed on the x–axis. The messages that these objects send and receive are placed
along the y–axis, in the order of increasing time from top to bottom.
Example: A sequence diagram for the Automated Trading House System is shown in the following
figure.
7. Collaboration Diagrams
Collaboration diagrams are interaction diagrams that illustrate the structure of the objects that send
and receive messages.
Notations: In these diagrams, the objects that participate in the interaction are shown using vertices.
The links that connect the objects are used to send and receive messages. The message is shown as a
labelled arrow.
Example: Collaboration diagram for the Automated Trading House System is illustrated in the
figure below.
8. State–Chart Diagrams
A state–chart diagram shows a state machine that depicts the control flow of an object from
one state to another. A state machine portrays the sequences of states which an object undergoes due
to events and their responses to events.
State–chart diagrams comprise states, transitions between them, and the events that trigger the transitions.
Example
In the Automated Trading House System, let us model Order as an object and trace its
sequence. The following figure shows the corresponding state–chart diagram.
9. Activity Diagrams
An activity diagram depicts the flow of activities which are ongoing non-atomic operations in
a state machine. Activities result in actions which are atomic operations.
The following figure shows an activity diagram of a portion of the Automated Trading House
System.
UML DIAGRAMS
(Unified Modelling Language)
DEFINITION:
❑ The Unified Modelling Language (UML) is a graphical language for OOAD that gives a standard way to write a
software system’s blueprint. It helps to visualize, specify, construct, and document the artifacts of an object-oriented
system. It is used to depict the structures and the relationships in a complex system.
❑ It was developed in the 1990s as an amalgamation of several techniques, prominently the OOAD technique by Grady
Booch, OMT (Object Modeling Technique) by James Rumbaugh, and OOSE (Object Oriented Software Engineering) by Ivar
Jacobson. UML attempted to standardize the semantic models, syntactic notations, and diagrams of OOAD.
a) Things
b) Relationships
c) Diagrams
(a) Things:
Structural Things: These are the nouns of the UML models representing the static elements that may be either physical or
conceptual. The structural things are class, interface, collaboration, use case, active class, components, and nodes.
Behavioral Things: These are the verbs of the UML models representing the dynamic behavior over time and space. The
two types of behavioral things are interaction and state machine.
Grouping Things: They comprise the organizational parts of the UML models. There is only one kind of grouping thing,
i.e., package.
Annotational Things: These are the explanations in the UML models representing the comments applied to describe
elements.
(b) Relationships:
Relationships are the connection between things. The four types of relationships that can be represented in UML are:
Dependency: This is a semantic relationship between two things such that a change in one thing brings a change in the
other. The former is the independent thing, while the latter is the dependent thing.
Association: This is a structural relationship that represents a group of links having common structure and common
behavior.
Generalization: This represents a generalization/specialization relationship in which subclasses inherit structure and
behavior from super-classes.
Realization: This is a semantic relationship between two or more classifiers such that one classifier lays down a contract
that the other classifiers ensure to abide by.
(c) Diagrams: A diagram is a graphical representation of a system. It comprises of a group of elements generally in the
form of a graph. UML includes nine diagrams in all, namely:
▪ Class Diagram
▪ Object Diagram
▪ Use Case Diagram
▪ Sequence Diagram
▪ Collaboration Diagram
▪ State Chart Diagram
▪ Activity Diagram
▪ Component Diagram
▪ Deployment Diagram
Class Diagram
The class diagram is a central modeling technique that runs through nearly all object-oriented methods. This diagram
describes the types of objects in the system and various kinds of static relationships which exist between them.
Relationships:
There are three principal kinds of relationships which are important:
1. Association - represents relationships between instances of types (a person works for a company, a company has a
number of offices).
2. Inheritance - the most obvious addition to ER diagrams for use in OO. It has an immediate correspondence to
inheritance in OO design.
3. Aggregation - a form of whole–part object composition in object-oriented design.
A class is represented by a rectangle having three sections:
▪ the top section containing the name of the class
▪ the middle section containing class attributes
▪ the bottom section representing operations of the class
Object Diagram
▪ An object diagram is a graph of instances, including objects and data values. A static object diagram is an
instance of a class diagram.
▪ It shows a snapshot of the detailed state of a system at a point in time. The difference is that a class
diagram represents an abstract model consisting of classes and their relationships.
▪ An object diagram represents an instance at a particular moment, which is concrete in nature. The use of
object diagrams is fairly limited.
An object is represented as a rectangle with two sections:
▪ The top section contains the name of the object with the name of the class or package of which it is an
instance of. The name takes the following forms:
object-name : class-name
object-name : class-name :: package-name
class-name : in case of anonymous objects
▪ The bottom section represents the values of the attributes. It takes the form attribute-name = value.
▪ Sometimes objects are represented using rounded rectangles.
▪ Example: Let us consider an object of the class Circle named c1. We assume that the center of c1 is at (2, 3) and the radius of c1 is 5.
▪ Think of a use-case model as a menu, much like the menu you'd find in a restaurant. By looking at
the menu, you know what's available to you, the individual dishes as well as their prices. You also
know what kind of cuisine the restaurant serves: Italian, Mexican, Chinese, and so on. By looking
at the menu, you get an overall impression of the dining experience that awaits you in that
restaurant. The menu, in effect, "models" the restaurant's behavior.
Sequence Diagram
The sequence diagram models the collaboration of objects based on a time sequence. It shows how the
objects interact with each other in a particular scenario of a use case. With advanced visual modeling
capability, complex sequence diagrams can be created in a few clicks. Besides, some modeling tools such as
Visual Paradigm can generate a sequence diagram from the flow of events defined in the use case
description.
Collaboration Diagram
Collaboration diagrams (known as communication diagrams in UML 2) are used to show how objects interact
to perform the behavior of a particular use case, or a part of a use case. Along with sequence diagrams,
collaboration diagrams are used by designers to define and clarify the roles of the objects that perform a particular flow of
events of a use case. They are the primary source of information used in determining class responsibilities and
interfaces.
Notation:
An object is represented by an object symbol showing the name of the object and its class underlined, separated
by a colon:
Object_name : class_name
State Chart Diagram
A state chart diagram describes a state machine. A state machine can be defined as a
machine which defines the different states of an object, where these states are controlled by
external or internal events.
State chart diagram is one of the five UML diagrams used to model the dynamic nature
of a system. They define different states of an object during its lifetime and these states
are changed by events.
Activity Diagram
• Activity diagram is basically a flowchart to represent the flow from one activity to
another activity. The activity can be described as an operation of the system.
• The control flow is drawn from one operation to another. This flow can be sequential,
branched, or concurrent. Activity diagrams deal with all types of flow control by using
different elements such as fork, join, etc.
• Activity diagrams are not only used for visualizing the dynamic nature of a system, but
they are also used to construct the executable system by using forward and reverse
engineering techniques.
Component Diagram:
In the Unified Modeling Language, a component diagram depicts how components are
wired together to form larger components or software systems.
• Deployment diagrams are used to describe the static deployment view of a system.
Deployment diagrams consist of nodes and their relationships.
The term Deployment itself describes the purpose of the diagram. Deployment diagrams
are used for describing the hardware components, where software components are
deployed. Component diagrams and deployment diagrams are closely related.
Dept. of CSE Software Engineering
UNIT – IV
Syllabus:
Implementation: Coding Principles, Coding Process, Code verification, Code documentation
Software Testing: Testing Fundamentals, Test Planning, Black Box Testing, White Box
Testing, Levels of Testing, Usability Testing, Regression testing, Debugging approaches.
Software Implementation
• The software engineer translates the design specifications into source codes in some
programming language.
• The main goal of implementation is to produce quality source codes that can reduce the cost of
testing and maintenance.
• The purpose of coding is to create a set of instructions in a programming language so that
computers execute them to perform certain operations.
• Implementation is the software development phase that affects the testing and maintenance
activities.
• A clear, readable, and understandable source code will make testing, debugging, and maintenance
tasks easier.
• Source codes are written for functional requirements but they also cover some nonfunctional
requirements.
• Unstructured and structured programming produce more complex and tedious code than object-
oriented, fourth-generation, and component-based programming.
• A well- documented code helps programmers in understanding the source codes for testing and
maintenance.
• Software engineers are instructed to follow the coding process, principles, standards, and
guidelines for writing source codes.
• Finally, the code is tested to uncover errors and to ensure that the product satisfies the needs of
the customer.
Coding Principles
• Coding principles are closely related to the principles of design and modeling.
• Developed software goes through testing, maintenance, and reengineering.
• Coding principles help programmers in writing an efficient and effective code, which is easier to
test, maintain, and reengineer.
Coding Process
• The coding process describes the steps that programmers follow for producing source codes.
• The coding process allows programmers to write bug-free source codes.
• It involves mainly coding and testing phases to generate a reliable code.
• Two widely used coding processes.
• Traditional Coding Process
• Test-driven Development (TDD)
• The traditional programming process is an iterative and incremental process which follows the
“write-compile-debug” process.
• TDD was introduced by Extreme Programming (XP) in agile methodologies that follow the
“coding with testing” process.
[Figure: Traditional coding process. Design specifications are turned into a source file, which is compiled and linked; if there is any compilation error, the source file is corrected and recompiled. After a successful compilation the program is tested; if testing is not OK, control loops back to the source file, and when testing is OK the executable program is produced.]
Test-Driven Development
– Developed software goes through a repeated maintenance process due to lack of quality
and inability to satisfy the customer needs.
– System functionality is decomposed into several small features.
– Test cases are designed before coding.
– Unit tests are written first for the feature specification and then the small source code is
written according to the specification.
– Source code is run against the test case.
– It is quite possible that the small code written may not meet the requirements, thus it will
fail the test.
– After failure, we need to modify the small code written before to meet the requirements
and run it again.
– If the code passes the test case implies the code is correct. The same process is repeated
for another set of requirements specification.
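The TDD cycle described above can be sketched with a plain assert standing in for a unit-testing framework. The feature here (an integer add function) is invented purely for illustration:

```cpp
#include <cassert>

// Hypothetical feature: integer addition. In TDD the unit test exists
// before the implementation it exercises.
void testAdd();   // the test is declared (written) first

int add(int a, int b) {
    // A first stub such as `return 0;` would fail testAdd(); after the
    // failure the code is changed and rerun until the test passes,
    // and the passing code is then refactored.
    return a + b;
}

void testAdd() {
    assert(add(2, 3) == 5);    // unit test for the feature specification
    assert(add(-1, 1) == 0);
}
```

One small feature is driven to completion this way, and the same process is then repeated for the next feature specification.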
[Figure: Test-driven development process. Feature specifications lead to test cases, which are run against the code; an unsuccessful run triggers bug fixing and a code change, followed by another run; a successful run is followed by refactoring, yielding the software.]
Code Verification
• Code verification is the process of identifying errors, failures, and faults in source codes, which
cause the system to fail in performing specified tasks.
• Code verification ensures that functional specifications are implemented correctly using a
programming language.
• There are several techniques in software engineering which are used for code verification.
– Code review
– Static analysis
– Testing
Code review
– It is a traditional method for verification used in the software life cycle. It mainly aims at
discovering and fixing mistakes in source codes.
– Code review is done after a successful compilation of source codes. Experts review codes
by using their expertise in coding.
– The errors found during code verification are debugged.
• Following methods are used for code review:
– Code walkthrough
– Code inspection
– Pair programming
Code walkthrough
– A code walkthrough is a technical and peer review process of finding mistakes in source
codes.
– The walkthrough team consists of a reviewee and a team of reviewers.
– The reviewers examine the code either using a set of test cases or by changing the source
code.
– During the walkthrough meeting, the reviewers discuss their findings to correct mistakes
or improve the code.
– The reviewers may also suggest alternate methods for code improvement.
– The walkthrough session is beneficial for code verification, especially when the code is
not properly documented.
– Sometimes, this technique becomes time consuming and tedious. Therefore, the
walkthrough session is kept short.
Code inspection
– It aims at detecting programming defects in the source code.
– The code inspection team consists of a programmer, a designer, and a tester.
– The inspectors are provided the code and a document of checklists.
– In the inspection process, definite roles are assigned to the team members, who inspect
the code in a more rigorous manner. Also, the checklists help them to catch errors in a
smooth manner.
– Code inspection takes less time as compared to code walkthrough.
– Most of the software companies prefer software inspection process for code review.
Pair programming
– It is an extreme programming practice in which two programmers work together at one
workstation, i.e., one monitor and one keyboard. In the current practice, programmers can
use two keyboards.
– During pair programming, code review is done by the programmers who write the code.
It is possible that they are unable to see their own mistakes.
– With the help of pair programming, the pair works with better concentration.
– They catch simple mistakes such as ambiguous variable and method names easily. The
pair shares knowledge and provides quick solution.
– Pair programming improves the quality of software and promotes knowledge sharing
between the team members.
Static analysis
– Source code is not executed; rather, it is given as input to a tool that analyzes
program behavior.
– Static analysis is the process of automatically checking computer programs.
– This is performed with program analysis tools.
– Static analysis tools help to identify redundancies in source codes.
– They identify idempotent operations, data declared but not used, dead code, missing
data, connections that lead to unreachable code segments, and redundant assignments.
– They also identify the errors in interfacing between programs. They identify mismatch
errors in parameters used by the team and assure compliance to coding standards.
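A small fragment showing the kinds of defects listed above that a static analyzer would report without ever executing the code. The function itself is illustrative:

```cpp
// Each comment names the defect category a static analyzer would flag.
int absValue(int n) {
    int unused = 7;     // data declared but not used
    n = n;              // idempotent operation / redundant assignment
    if (n >= 0)
        return n;
    else
        return -n;
    n = n + 1;          // dead code: this statement is unreachable
}
```

A compiler run with warnings enabled (or a dedicated analysis tool) reports each of these lines; the function still computes the absolute value correctly at runtime, which is exactly why such defects survive testing and need static analysis.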
Testing
– Dynamic analysis works with test data by executing test cases.
– Testing is performed before the integration of programs for system testing.
– Also, it is intended to ensure that the software satisfies the customer needs.
Code Documentation
• Software development, operation, and maintenance processes include various kinds of
documents.
• Documents act as a communication medium between the different team members of
development.
• They help users in understanding the system operations.
• Documents prepared during development are problem statement, software requirement
specification (SRS) document, design document, documentation in the source codes, and test
document.
• These documents are used by the development and maintenance team members.
• There are following categories of documentation done in the system
• Internal documentation
• System documentation
• User documentation
• Process documentation
• Daily documentation
Software Testing
✓ Software testing is the process of finding defects in the software so that these can be debugged
and the defect-free software can meet the customer needs and expectations.
✓ Software testing is one of the important phases in software development life cycle.
✓ A quality software can be achieved through testing.
✓ Effective testing reduces the maintenance cost and provides reliable outcomes.
✓ Example of ineffective testing: the Y2K problem.
✓ The intention of software testing process is to produce a defect-free system.
Testing Fundamentals
✓ Error is the discrepancy between the actual value of the output of software and the theoretically
correct value of the output for that given input.
✓ An error, also known as a variance, mistake, or problem, is the unintended behavior of software.
✓ A fault is the cause of an error. A fault, also called a defect or bug, is the manifestation of one or
more errors. It causes a system to fail in achieving the intended task.
✓ Failure is the deviation of the observed behavior from the specified behavior.
✓ It occurs when the faulty code is executed leading to an incorrect outcome. Thus, the presence of
faults may lead to system failure.
✓ A failure is the manifestation of an error in the system or software.
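The fault/failure distinction can be made concrete with a tiny invented function: the fault is present in every run, but a failure is observed only for inputs that exercise the faulty path.

```cpp
// Fault (defect in the code): the comparison should be '>=' but was coded '>'.
bool isNonNegative(int x) {
    return x > 0;   // faulty statement
}
// For inputs such as 5 or -3 the observed behavior matches the specification,
// so no failure occurs even though the fault is present. For input 0 the
// faulty code produces a wrong outcome: the fault manifests as a failure.
```

This is why testing must deliberately select inputs (here, the boundary value 0) that force faults to surface as failures.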
Test Planning
✓ Testing is a long activity in which several test cases are executed by different members of the test
team, possibly in different environments, at different locations, and on different machines.
✓ Test planning specifies the scope, approach, resources, and schedule of the testing activities.
✓ Test planning includes the following activities:
✓ Create test plan.
✓ Design test cases.
✓ Design test stubs and test drivers.
✓ Test case execution.
✓ Defect tracking and statistics.
✓ Prepare test summary report.
Creation of a Test Plan
• A test plan is a document that describes the scope and activities of testing. It is a formal
document for testing software.
• A test plan contains the following attributes:
– Test plan ID
– Purpose
– Test items
– References
– Features to be tested
– Schedule
– Responsibilities
– Test environment
– Test case libraries and standards
– Test strategy
– Test deliverables
– Release criteria
– Expected risk
Design Test Cases
✓ A test case is a set of inputs and expected results under which a program unit is exercised with
the purpose of causing failure and detecting faults.
✓ A good test case is one that has the high probability of detecting defects in the system.
✓ A well-designed test case can be traceable, repeatable, and can be reused in other software
development.
✓ The intention of designing a set of test cases for testing is to prove that the program under test is
incorrect.
✓ The main objective of test case selection is to detect errors in the program unit.
✓ A possible way is to exercise all the possible paths and variables to find undiscovered errors. But
performing exhaustive testing is difficult because it takes a lot of time and effort. Exhaustive
testing includes all possible inputs to the program unit.
✓ Test script: A test script is a procedure that is performed on a system under test to verify that the
system functions as expected. A test case is the baseline for creating test scripts using an automated tool.
✓ Test suite: A test suite is a collection of test cases. It is the composite of test cases designed for a
system.
✓ Test data: Test data are needed when writing and executing test cases for any kind of test. Test
data is sometimes also known as test mixture.
✓ Test harness: Test harness is the collection of software, tools, input/output data, and
configurations required for test.
✓ Test scenario: Test scenario is the set of test cases in which requirements are tested from end to
end. There can be independent test cases or a series of test cases that follow each other.
✓ A test case includes the following fields:
✓ Test plan ID
✓ Test case ID
✓ Feature to be tested
✓ Preconditions
✓ Test script or test procedure
✓ Test data
✓ Expected results
The main drawback of the equivalence class partitioning and boundary value analysis
methods is that they consider each input condition in isolation.
Like decision tables, cause-effect graphing is another technique for testing combinations of
input conditions.
Cause-effect graphing technique begins with finding the relationships among input
conditions known as causes and the output conditions known as effects.
A cause is any condition in the requirement that affects the program output. Similarly,
an effect is the outcome of some input conditions.
The logical relationships among input and output conditions are expressed in terms of
cause-effect graph.
Each condition (either cause or effect) is represented as a node in the cause-effect graph.
Each condition has the value whether true or false.
          Rule 1   Rule 2   Rule 3
C2        T        --       --
C3        --       T        F
C4        --       T        --
Effects:
E1        X
E2                 X
E3                          X
Error Guessing:
It is the preferred method used when all other previous methods fail. Sometimes it is used to
test some special cases. It is a very practical approach wherein the tester uses intuition and makes
a guess about where the bug can be. The tester does not have to use any particular testing
technique. However, this capability comes with years of experience in a particular field of
testing.
White-box testing is another effective testing technique in dynamic testing. It is also known as glass-
box testing, as everything that is required to implement the software is visible. The entire design,
structure, and code of the software have to be studied for this type of testing. It is obvious that the
developer is very close to this type of testing. Often, developers use white-box testing techniques to
test their own design and code. This testing is also known as structural or development testing. In
white-box testing, structure means the logic of the program which has been implemented in the
language code. The intention is to test this logic so that required results or functionalities can be
achieved. Thus, white-box testing ensures that the internal parts of the software are adequately
tested.
4. Mutation Testing
5.2 LOGIC COVERAGE CRITERIA
Structural testing considers the program code, and test cases are designed
based on the logic of the program such that every element of the logic is
covered. Therefore, the intention in white-box testing is to cover the whole logic.
Discussed below are the basic forms of logic coverage.
Statement Coverage
The first kind of logic coverage can be identified in the form of statements. It is
assumed that if all the statements of the module are executed once, every bug
will be noticed.
Consider the following code segment shown in Fig. 5.1.
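Figure 5.1 is not reproduced here. Given the test cases discussed below (x = y skips the loop; x > y and x < y exercise its two branches), the segment is consistent with a subtraction-based GCD routine, so the following is a hedged reconstruction, not the original code:

```cpp
// Plausible reconstruction of the Fig. 5.1 code segment (the original
// figure is unavailable): repeated subtraction computes gcd(x, y).
int gcd(int x, int y) {
    while (x != y) {       // skipped entirely when x == y (test case 1)
        if (x > y)         // branch exercised by test case 3
            x = x - y;
        else               // branch exercised by test case 4
            y = y - x;
    }
    return x;
}
```

Whatever the exact original code, the coverage argument that follows depends only on this shape: a loop guarded by x ≠ y with two branches inside.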
If we want to cover every statement in the above code, then the following
test cases must be designed:
Test case 1: x = y = n, where n is any number
Test case 2: x = n, y = n′, where n and n′ are different numbers.
Test case 1 just skips the while loop, so none of the loop statements are
executed. With test case 2, the loop is executed, but not every
statement inside the loop is executed. So two more cases are designed:
Test case 3: x > y
Test case 4: x < y
These test cases will cover every statement in the code segment; however,
statement coverage is a poor criterion for logic coverage. We can see that test
cases 3 and 4 are sufficient to execute all the statements in the code. But if
we execute only test cases 3 and 4, then the conditions and paths exercised by
test case 1 will never be tested and errors will go undetected. Thus, statement
coverage is a necessary but not a sufficient criterion for logic coverage.
Condition Coverage
Condition coverage states that each condition in a decision takes on all pos-
sible outcomes at least once. For example, consider the following statement:
while ((I <= 5) && (J < COUNT))
In this loop statement, there are two conditions. So test cases should be
designed such that both the conditions are tested for true and false outcomes.
The following test cases are designed:
Test case 1: I <= 5, J < COUNT
Test case 2: I > 5, J >= COUNT
Decision/condition Coverage
Condition coverage in a decision does not mean that the decision has been
covered. If the decision
if (A && B)
is being tested, the condition coverage would allow one to write two test cases:
Test case 1: A is True, B is False.
Test case 2: A is False, B is True.
But these test cases would not cause the THEN clause of the IF to execute
(i.e. execution of decision). The obvious way out of this dilemma is a criterion
called decision/condition coverage. It requires sufficient test cases such that
each condition in a decision takes on all possible outcomes at least once, each
decision takes on all possible outcomes at least once, and each point of entry
is invoked at least once [2].
It can be observed that not all data-flow anomalies are harmful, but most
of them are suspicious and indicate that an error can occur. In addition to
the above two-character data anomalies, there may be single-character data
anomalies also. To represent these types of anomalies, we take the following
conventions:
~x : indicates all prior actions are not of interest to x.
x~ : indicates all post actions are not of interest to x.
All single-character data anomalies are listed in Table 5.2.
Table 5.2 Single-character data-flow anomalies
Anomaly Explanation Effect of Anomaly
~d First definition Normal situation. Allowed.
~u First Use Data is used without defining it. Potential bug.
~k First Kill Data is killed before defining it. Potential bug.
d~ Define last Data is defined but never used. Potential bug.
u~ Use last Normal case. Allowed.
k~ Kill last Normal case. Allowed.
Example 5.10
Consider the program given below for calculating the gross salary of an
employee in an organization. If his basic salary is less than Rs 1500, then
HRA = 10% of basic salary and DA = 90% of the basic. If his salary is either
equal to or above Rs 1500, then HRA = Rs 500 and DA = 98% of the basic
salary. Calculate his gross salary.
main()
{
1. float bs, gs, da, hra = 0;
2. printf("Enter basic salary");
3. scanf("%f", &bs);
4. if(bs < 1500)
5. {
6. hra = bs * 10/100;
7. da = bs * 90/100;
8. }
9. else
10. {
11. hra = 500;
12. da = bs * 98/100;
13. }
14. gs = bs + hra + da;
15. printf("Gross Salary = Rs. %f", gs);
16. }
Find out the define-use-kill patterns for all the variables in the source code
of this application.
Solution
For variable ‘bs’, the define-use pattern is as follows: bs is defined at
statement 3 (the scanf), p-used in the predicate at statement 4, and c-used at
statements 6, 7, 12, and 14. This is the normal ~d … u~ pattern, with no anomaly.
Example 5.11
Consider the program given below. Draw its control flow graph and data flow
graph for each variable used in the program, and derive data flow testing
paths with all the strategies discussed above.
main()
{
int work;
0. double payment =0;
1. scanf("%d", &work);
2. if (work > 0) {
3. payment = 40;
4. if (work > 20)
5. {
6. if(work <= 30)
7. payment = payment + (work - 25) * 0.5;
8. else
9. {
10. payment = payment + 50 + (work - 30) * 0.1;
11. if (payment >= 3000)
12. payment = payment * 0.9;
13. }
14. }
15. }
16. printf("Final payment = %f", payment);
17. }
Solution
Figure 5.12 shows the control flow graph for the given program.
[Figure 5.12: control flow graph of the program, with nodes 0–17 corresponding
to the numbered statements.]
Figure 5.13 shows the data flow graph for the variable ‘payment’.
[Figure 5.13: data flow graph for ‘payment’. Node 0: Define; node 3: Define;
nodes 7, 10, and 12: Define & c-use; node 11: p-use; node 16: c-use.]
[Figure (data flow graph for ‘work’): nodes 2, 4, and 6: p-use; nodes 7 and
10: c-use.]
Prepare a list of all the definition nodes and usage nodes for all the vari-
ables in the program.
Data flow testing paths for each variable are shown in Table 5.3.
[Figure: subsumption hierarchy of the data flow testing strategies. ADUP (all
du-paths) subsumes AU (all uses), which subsumes ACU+P (all c-uses/some p-uses)
and APU+C (all p-uses/some c-uses); these in turn subsume ACU (all c-uses), AD
(all defs), and APU (all p-uses).]
main()
{
int i, sum=0, sqsum=0;
for (i=1;i<5;i++)
{
sum+=i;
sqsum+=i*i;
printf("%2d %2d ", i, i*i);
}
printf("The sum is: %d and square sum is: %d", sum, sqsum);
}
Unit Testing
✓ A unit is a program unit: a module, component, procedure, or subroutine of a system developed
by the programmer.
✓ The aim of unit testing is to find bugs by isolating an individual module using test stubs and test
drivers and by executing test cases on it.
✓ During module testing, test stubs and test drivers are designed for proper testing in a test
environment.
✓ The unit testing is performed to detect both structural and functional errors in the module.
✓ Therefore, test cases are designed using white-box and black-box testing strategies for unit
testing.
✓ Most of the module errors are captured through white-box testing.
[Figure: unit test environment, showing the module under test isolated by test stubs and a test driver.]
Integration Testing
✓ Integration testing is another level of testing, which is performed after unit testing of modules.
✓ It is carried out keeping in view the design decomposition of the system into subsystems.
✓ The main goal of integration testing is to find interface errors between modules.
✓ There are various approaches in which the modules are combined together for integration testing.
✓ Big-bang approach
✓ Top-down approach
✓ Bottom-up approach
✓ Sandwich approach
Big-bang approach
✓ Big-bang is a simple and straightforward integration testing approach.
✓ In this approach, all the modules are first tested individually and then these are combined together
and tested as a single system.
✓ This approach works well when the number of modules in a system is small.
✓ Because all modules are integrated at once to form the whole system, chaos may occur: if a defect
is found, it becomes difficult to identify where it has occurred.
✓ Therefore, the big-bang approach is generally avoided for large and complex systems.
Top-down approach
✓ Top-down integration testing begins with the main module and moves downward, integrating and
testing its lower level modules.
✓ Again the next lower level modules are integrated and tested.
✓ Thus, this incremental integration and testing is continued until all modules up to the concrete
level are integrated and tested.
✓ The top-down integration testing approach is as follows:
main system -> subsystems -> modules at concrete level.
✓ In this approach, testing of a module may be delayed if its lower level modules are not yet
available; test stubs must then be written to simulate them.
✓ Writing test stubs and making them act like the actual modules can be a complicated and
time-consuming task.
[Figure: main module M integrated with subsystems S1, S2, and S3.]
Bottom-up approach
✓ As the name implies, bottom-up approach begins with the individual testing of bottom-level
modules in the software hierarchy.
✓ Lower level modules are then merged, function-wise, to form subsystems, and the subsystems are
integrated to test the main module, covering all modules of the system.
✓ The approach of bottom-up integration is as follows:
concrete level modules -> subsystem –> main module.
✓ The bottom-up approach works opposite to the top-down integration approach.
Sandwich approach
✓ The sandwich testing combines both top-down and bottom-up integration approaches.
✓ During sandwich testing, the top-down part requires the lower level modules to be available,
while the bottom-up part requires the upper level modules.
✓ Thus, testing a module requires its top and bottom level modules.
✓ It is the most preferred approach in testing because the modules are tested as and when these are
available for testing.
System Testing
✓ The unit and integration testing are applied to detect defects in the modules and the system as a
whole. Once all the modules have been tested, system testing is performed to check whether the
system satisfies the requirements (both functional and non-functional).
✓ To test the functional requirements of the system, functional or black-box testing methods are
used with appropriate test cases.
✓ System testing is performed keeping in view the system requirements and system objectives.
✓ The non-functional requirements are tested with a series of tests that exercise the complete
computer-based system.
✓ A single test case cannot verify all the non-functional requirements of the system.
✓ For each specific non-functional requirement, special tests are conducted to ensure that the
system meets it.
✓ Some of the non-functional system tests are
✓ Performance testing
✓ Volume testing
✓ Stress testing
✓ Security testing
✓ Recovery testing
✓ Compatibility testing
✓ Configuration testing
✓ Installation testing
✓ Documentation testing
Performance testing:
✓ A performance testing is carried out to check the run time outcomes of the system, such as
efficiency, accuracy, etc.
✓ Each system performs differently in different environments.
✓ During performance testing, both hardware and software are taken into consideration to observe
the system behavior.
✓ For example, a testing tool is tested to check if it tests the specified codes in a defined time
duration.
Volume testing:
✓ It checks the behavior of the system when heavy amounts of data are processed or stored in the
system.
✓ For example, an operating system is checked to ensure that the job queue can handle a large
number of processes entering the computer.
✓ It basically checks the capacity of the data structures.
Stress testing:
✓ In stress testing, the behavior of the system is checked when it is under stress.
✓ Stress may arise when the load increases sharply at peak time for a short period.
✓ There are several sources of stress, such as an increase in the maximum number of users, peak
demand, an extended number of operations, etc.
✓ For example, a network server is checked when the number of concurrent users and nodes
increases, such as during peak evening hours.
✓ Stress testing can be viewed as volume testing at peak load.
Security testing:
✓ Due to the increasing complexity of software and the variety of its users and technologies, it has
become necessary to provide sufficient security.
✓ It is conducted to ensure the security checks at different levels in the system.
✓ For example, testing of e-payment system is done to ensure that the money transaction is
happening in a secure manner in e-commerce applications.
✓ A lot of confidential data is transferred and used in such systems; it must be protected from
leakage, alteration, and modification by unauthorized people.
Recovery testing:
✓ Most systems now have recovery policies in case there is any loss of data.
✓ Therefore, recovery testing is performed to check that the system can recover from losses caused
by data errors, software errors, or hardware problems.
✓ For example, the Windows operating system recovers the currently running files if any
hardware/software problem occurs in the system.
Compatibility testing:
✓ Compatibility testing is performed to ensure that the new system will be able to work with the
existing system.
✓ Sometimes, the data format, report format, process categories, databases, etc., differ from system
to system.
✓ For example, compatibility testing checks whether Word 2007 files can be opened in Word 2003
if it is installed on the system.
Configuration testing:
✓ Configuration testing is performed to check that a system can run on different hardware and
software configurations.
✓ Therefore, system is configured for each of the hardware and software.
✓ For example, if you want to run your program on another machine, you must check the
configuration of its hardware and software.
Documentation testing:
✓ Once the system becomes operational, problems may be encountered in the system.
✓ Systematic documentation or a manual can help resolve such problems.
✓ The system is verified to check whether proper documentation is available.
Installation testing:
✓ Installation testing is conducted to ensure that all modules of the software are installed properly.
✓ The main purpose of installation testing is to find errors that occur during the installation process.
✓ Installation testing covers various issues, such as automatic execution of the installation CD;
correct allocation and loading of files and libraries; presence of the appropriate hardware
configuration; proper network connectivity; and compatibility with the operating system platform.
✓ The installers must be familiar with the installation technologies and their troubleshooting
mechanisms.
Acceptance Testing
✓ Acceptance testing is a kind of system testing, which is performed before the system is released
into the market.
✓ It is performed with the customer to ensure that the system is acceptable for delivery.
✓ Once all system tests have been exercised, the system is tested from the customer’s point of
view.
✓ Acceptance testing is conducted because there is a difference between the actual user and the
simulated users considered by the development organization.
✓ The user involvement is important during acceptance testing of the software as it is developed for
the end-users.
✓ Acceptance testing is performed at two levels, i.e.,
✓ Alpha testing
✓ Beta testing.
✓ Alpha testing is a pilot testing in which customers are involved in exercising test cases.
✓ In alpha testing, the customer conducts tests in the development environment. The users
perform the alpha test and try to pinpoint any problem in the system.
✓ The alpha test is conducted in a controlled environment.
✓ After alpha testing, the system is transported to the customer site for
deployment.
✓ Beta testing is performed by a limited number of friendly customers and end-users.
✓ Beta testing is conducted at the customer site, where the software is to be deployed and used by
the end-users.
✓ The developer may or may not be present during beta testing.
✓ The end-users operate the system under testing mode and note down any problem
observed during system operation.
✓ The defects noted by the end-users are corrected by the developer.
✓ If there are any major changes required, then these changes are sent to the configuration
management team.
✓ The configuration management team decides whether to approve or disapprove the
changes for modification in the system.
Usability Testing
✓ Usability refers to the ease of use and comfort that users have while working with software.
✓ It is also known as user-centric testing. Nowadays, usability has become a wider aspect of
software development and testing.
✓ Usability testing is conducted to check usability of the system which mainly focuses on finding
the differences between quality of developed software and user’s expectations of what it should
perform.
✓ Poor usability may affect the success of the software. If users find the system difficult to
understand and operate, it will ultimately lead to an unsuccessful product.
✓ The usability testing concentrates on the testing of user interface design, such as look and feel of
the user interface, format of reports, screen layouts, hardware and user interactions, etc.
✓ Usability testing is performed by potential end-users in a controlled environment.
✓ The development organization invites selected end-users to test the product in terms of ease of
use, expected functionality, performance, and safety and security, and observes the outcomes.
Regression Testing
✓ Regression testing is also known as program revalidation.
✓ Regression testing is performed whenever new functionality is added or the existing functionality
is modified in the program.
✓ Because changing the code may break functionality that previously worked, regression testing
re-executes existing tests to confirm that the modified system still works correctly.
✓ It is required when the new version of a program is obtained by changing the existing version.
✓ Regression testing is also needed when a subsystem is modified to get the new version of the
system.
Introduction to Debugging
Debugging is not a part of the testing domain. Therefore, debugging is not testing. It is a separate process
performed as a consequence of testing. But the testing process is considered a waste if debugging is not
performed after the testing. Testing phase in the SDLC aims to find more and more bugs, remove the errors,
and build confidence in the software quality. In this sense, testing phase can be divided into two parts:
1. Preparation of test cases, executing them, and observing the output. This is known as testing.
2. If output of testing is not successful, then a failure has occurred. Now the goal is to find the bugs that
caused the failure and remove the errors present.
Debugging is the process of identification of the symptoms of failures, tracing the bug, locating the errors
that caused the bug, and correcting these errors. Describing it in more concrete terms, debugging is a two-
part process. It begins with some indication of the existence of an error. It is the activity of:
1. Determining the exact nature of the bug and location of the suspected error within the program.
2. Fixing or repairing the error.
Debugging Techniques
The most popular debugging approaches are as follows:
❑ Debugging by brute force or memory dump
❑ Debugging with watch points or break points
❑ Debugging by backtracking
❑ Debugging by induction
❑ Debugging by deduction
❑ Debugging by testing
Debugging by Brute force or Memory Dump
▪ It is the simplest method of debugging but it is inefficient. It uses memory dumps or output statements for
debugging.
▪ The memory dump is a machine level representation of the corresponding variables and statements.
▪ It represents the static structure of the program at a particular snapshot of execution sequence.
▪ The memory dump rarely establishes correspondence to show errors at a particular time.
▪ Therefore, instead of using brute force for debugging, a debugger should be used for better results.
Debugging with Watch Points or Break Points
▪ Breakpoint debugging is a method of tracing programs with a breakpoint and stopping the program execution at the
breakpoint.
▪ A breakpoint is a kind of signal that tells the debugger to temporarily suspend execution of program at a certain point.
▪ Program execution proceeds up to the breakpoint statement. If any error is reported, its location is marked and then
the program execution resumes till the next breakpoint.
▪ This process is continued until all errors are located in the program.
▪ A watch value is a value of a variable or expression, which is set and shown along with the program execution.
▪ The incorrect or unexpected values can be observed with the watch values.
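As a concrete illustration (assuming the GNU debugger gdb and a hypothetical executable salary built from salary.c; the file name and line number are assumptions), a breakpoint-and-watch session looks like this:

```
# hypothetical gdb session for the salary program
gdb ./salary
(gdb) break salary.c:14    # set a breakpoint at the gross-salary statement
(gdb) run                  # execute until the breakpoint suspends the program
(gdb) print bs             # inspect a suspected variable
(gdb) watch gs             # watch value: report whenever gs changes
(gdb) continue             # resume until the next breakpoint or watch hit
```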
Debugging by Backtracking
▪ Backtracking is the refinement of brute force method and it is one of the successful methods of
debugging.
▪ Debugging begins from where the bug is discovered, and the source code is traced backward through
different paths until the exact location of the cause of the bug is reached or the cause of the bug has disappeared.
▪ This process is performed with the program logic in reverse direction of the flow of control.
▪ Backtracking should be used when all other methods of debugging fail to locate errors, because of
the effort wasted in backtracking if no error exists in the source code.
Debugging by Induction
▪ The process begins with collecting pertinent data about where the bug was discovered.
▪ The patterns of successful test cases are observed and data items are organized.
▪ Thereafter, hypothesis is derived by relating the pattern and the error to be debugged.
▪ Otherwise, more data are collected to derive causes of errors. Finally, causes are removed and errors are
fixed in the program.
Debugging by Deduction
▪ On the basis of the cause hypothesis, lists of possible causes are enumerated for the observed failure.
▪ Now the tests are conducted to eliminate causes to remove errors in the system.
▪ If all the causes are eliminated then errors are fixed. Otherwise, hypothesis is refined to eliminate errors.
▪ Finally, hypothesis is proved to ensure that all causes have been eliminated and the system is bug free.
Debugging by Testing
▪ Test cases designed during testing are used in debugging to collect information to locate the suspected
errors.
▪ A test case in testing focuses on covering many conditions and statements, whereas a test case in
debugging focuses on a small number of conditions and statements.
Syllabus:
Software Quality: Software Quality Factors, Verification & Validation, Software Quality
Assurance, The Capability Maturity Model
Software Maintenance: Software maintenance, Maintenance Process Models, Reengineering
activities.
Software Quality
Introduction
• Once software is tested, it is assumed that it is defect free and it will perform according to the
needs of the customer.
• As a software product is to be used for a long time, it is important to measure its quality for better
reliability and durability.
• Measuring the reliability of software products has been a major issue for the developer and the
customer.
• A good quality product satisfies the customer needs, is constructed as per the standards and
norms, has sound internal design, and is developed within an optimized cost and schedule.
• The internal design of software is much more important than the external design.
➢ Correctness: A program is correct if it performs according to the specifications of the functions it
should provide.
➢ Reliability is the extent to which a program performs its intended functions satisfactorily, with
required precision and without failure, in a specified duration.
➢ Usability is the extent of effort required to learn, operate, and use a product.
➢ Integrity is the extent of effort to control illegal access to data and programs by unauthorized
people.
➢ Efficiency is the volume of computing resources (e.g., processor time, memory space, bandwidth in
communication devices, etc.) and code required to perform software functions.
➢ Maintainability is the ease of locating and correcting errors. Maintainability of software is
measured through mean time to change.
➢ Flexibility is the cost required to modify an operational program.
➢ Testability is the effort required to test a program to ensure that it performs its intended function.
➢ Portability is the effort required to transfer software products to various hardware and software
environments.
➢ Reusability is the extent to which software or its parts can be reused in the development of some
other software.
➢ Interoperability is the effort required to couple one system to another. Strong coupling and loose
coupling are the approaches used in interoperability.
Software Maintenance
• Software maintenance is an important activity, which keeps a system remain useful during its
lifetime.
• Maintenance is performed after the system is deployed at the customer site.
• As the customer starts working on the system, defects may surface in the system and its parts.
• New features may be added to or removed from the system as the customer’s needs change.
• Organizational operating environment and policies change from time to time, which causes a
system to be transported or adapted to a new environment.
• Software maintenance is valuable to avoid software aging and to maintain the quality of software
product.
• Without software maintenance and evolution, system becomes complex and unreliable.
• The cost of software change varies from project to project and with the types of changes.
• A survey indicates that the software maintenance or change consumes 60–80% of the total life
cycle cost.
• Major software cost is incurred due to enhancement (75–80%) rather than correction.
Iterative-Enhancement Model
• This model considers that making changes in a system throughout its lifetime is an iterative
process.
• The iterative-enhancement model assumes that the requirements of a system cannot be gathered
and fully understood initially.
• The system is to be developed in builds. Each build completes, corrects, and refines the
requirements of the previous builds based on the feedback of users.
• The construction of a build in the iteration (i.e., maintenance) begins with the analysis of the
existing system’s requirements, design, code, and test; and continues with the modification of the
highest-level document affected by changes.
• A key advantage of the iterative-enhancement model is that documentation is kept updated as the
code changes.
• The iterative-enhancement model keeps the system maintainable as compared to the quick-fix
model.
• Also, the maintenance changes are faster in iterative-enhancement.
• This model is observed to be ineffective if there is unavailability of complete documentation.
• The iterative-enhancement model is well suited for systems that have a long life and evolve over
time.
Full-Reuse Model
• Here, maintenance is considered as reuse-oriented software development, where reusable
components are used for maintenance and replacements for faulty components.
• It begins with requirements analysis and design of a new system and reuses the appropriate
requirements, design, code, and tests from the earlier versions of the existing system.
• The reuse repository plays an important role in the full-reuse model.
• It also promotes the development of more reusable components.
• The full-reuse model is more suited for the development of lines of related products.
• This model takes some initial cost to institutionalize the reuse environment.
• The full-reuse model is especially important for the maintenance of component-based systems or
reengineering-type projects that are to be migrated onto a component-based platform.
ISO-12207 Model
• The ISO-12207 standard organizes the maintenance process in six phases.
• The phases are process implementation, problem and modification analysis, modification
implementation, maintenance review/acceptance, migration, and software retirement.
• Process implementation phase includes the tasks for developing plans and procedures.
• Problem and modification analysis analyses each maintenance request in terms of the size, cost,
and time required.
• Modification implementation ensures that the requested modifications are correctly implemented
and that the original, unmodified requirements are not affected.
• Maintenance review is for assessing the integrity of the modified system.
Reengineering Activities
Reengineering converts an existing obsolete and incompletely documented procedural system into a
system that is well structured and well documented according to predefined quality requirements.
Reengineering involves the following sub-activities.
• Reverse engineering
• Forward engineering
• Program comprehension
• Restructuring
• Design recovery
• Re-documentation
Reverse engineering
• Reverse engineering is the process of recovering the design specifications of an existing business
system from its implementation and representing it at a much higher level of abstraction.
• Reverse engineering techniques can be performed to extract data, architecture, design
information, and content of a procedural system.
• Reverse engineering techniques provide the means for recovering the lost information and
developing alternative representations of a system, such as generation of structure charts,
dataflow diagrams, entity-relationship diagrams, etc.
Forward Engineering
• Once reverse engineering has been performed and all-important artifacts have been recovered,
forward engineering is performed.
• Forward engineering is the traditional process of moving from a high-level abstraction and
logical design to the physical implementation.
• It moves from higher-level abstract representations and design details to the implementation
level of the system.
• Design details such as object models, use case diagrams, pseudo codes, etc., can be converted to
object-oriented programming languages.
Program Comprehension
• Program comprehension is an essential part of software evolution and software maintenance.
• Software that is not comprehended cannot be changed.
• Frequently in program comprehension the programmer understands domain concepts, but not the
code. The knowledge of domain concepts is based on program use and therefore it is easier to
acquire than knowledge of the code.
• Program comprehension is the root of the reverse engineering process. It is the process of
acquiring or extracting knowledge about the software artifacts such as code, design, document,
etc.
Restructuring
• Restructuring transforms the system from one representation form to another at the same level of
abstraction.
• Restructuring modifies the code and data that are adaptable to future changes. It preserves
semantics and functionality between the new and old representations.
• Mainly, it has two major aspects, namely, code restructuring and data restructuring.
• Code restructuring produces designs with higher quality than the existing one.
• Data restructuring is performed to extract the data items and objects to understand data flow and
the existing data structure.
Design Recovery
• Design recovery recreates design abstractions from a combination of code, existing design
documentation, personal experience, and general knowledge about problem and application
domains.
• The recovered design abstractions must include conventional software engineering
representations such as formal specifications, module breakdowns, data abstractions, data flows,
and program description language.
• Design recovery is performed across a spectrum of activities from software development to
maintenance.
• A key objective of design recovery is to develop structures that will help the software engineer to
understand a software system.
Re-documentation
• Re-documentation is the process of creating a semantically equivalent representation at the
corresponding levels of abstraction.
• In this aspect, system documents are updated/rewritten/replaced to document the target system.
• Various documents that may be affected in legacy software include requirement specifications,
design and implementation, design decision report, configuration, data dictionary, user and
reference manuals, and the change document.
• The process of re-documentation is similar to reverse engineering activities.