
Software Quality Engineering

Prepared by: Sehr Andleeb

Shewhart cycle (PDCA: Plan, Do, Check, Act)

• Software quality can be expressed in terms of quality factors and criteria
• A quality factor represents a behavioral characteristic of a system
• Examples: correctness, reliability, efficiency, and testability
• A quality criterion is an attribute of a quality factor that is related to software development
• Example: modularity is an attribute of software architecture

Key elements of TQC (Total Quality Control) management

• Quality comes first, not short-term profits.
• The customer comes first, not the producer.
• Decisions are based on facts and data.
• Management is participatory and respectful of all employees.
• Management is driven by cross-functional committees covering product planning, product design, purchasing, manufacturing, sales, marketing, and distribution.

Quality Perspectives
1. Transcendental View: It envisages quality as something that can be recognized but is difficult to define. The transcendental view is not specific to software quality alone; it has been applied in other complex areas of everyday life.
2. User View: It perceives quality as fitness for purpose: "Does the product satisfy user needs and expectations?"
3. Manufacturing View: It sees quality as conformance to the specification. The quality level of a product is determined by the extent to which the product meets its specifications.
4. Product View: A product's inherent characteristics, that is, its internal qualities, determine its external qualities.
5. Value-Based View: It measures quality as the amount a customer is willing to pay for the product.

Activities for software quality assessment
The activities for software quality assessment can be divided into two
broad categories, namely, static analysis and dynamic analysis.

• Static Analysis: As the term "static" suggests, it is based on the examination of a number of documents, namely requirements documents, software models, design documents, and source code. Traditional static analysis includes code review, inspection, walk-through, algorithm analysis, and proof of correctness. It does not involve actual execution of the code under development. Instead, it examines the code and reasons over all possible behaviors that might arise during run time. Compiler optimizations are a standard example of static analysis.
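
For instance, here is a minimal sketch (the function and the finding are illustrative, not from the source) of the kind of defect a static analyzer such as pylint can report without ever running the code:

```python
# A possibly-unbound variable: on the path where the loop body never
# executes, 'result' is used before assignment. A static analyzer
# (e.g., pylint) finds this by reasoning over all paths, with no execution.
def find_first_negative(numbers):
    for n in numbers:
        if n < 0:
            result = n
            break
    return result  # flagged statically: may be used before assignment
```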

• Dynamic Analysis: Dynamic analysis of a software system involves actual program execution in order to expose possible program failures. The behavioral and performance properties of the program are also observed. Programs are executed with both typical and carefully chosen input values. The input set of a program is often impractically large, so for practical purposes a finite subset of it is selected. In testing, therefore, we observe some representative program behaviors and reach a conclusion about the quality of the system. Careful selection of a finite test set is crucial to reaching a reliable conclusion.
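
As a minimal sketch (the function and the chosen values are assumptions for illustration), dynamic analysis actually executes the unit with a small, representative subset of its input domain:

```python
import unittest

def absolute_value(x):
    # Hypothetical unit under test.
    return x if x >= 0 else -x

class AbsoluteValueTest(unittest.TestCase):
    def test_representative_inputs(self):
        # Typical and carefully chosen inputs stand in for the whole
        # (impractically large) integer domain.
        for given, expected in [(5, 5), (-5, 5), (0, 0), (-2**31, 2**31)]:
            self.assertEqual(absolute_value(given), expected)

if __name__ == "__main__":
    unittest.main()
```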

• Software quality assessment divides into two categories:

• Static analysis
• It examines the code and reasons over all behaviors that might arise during run time
• Examples: code review, inspection, and algorithm analysis

• Dynamic analysis
• Actual program execution to expose possible program failures
• One observes some representative program behaviors and reaches a conclusion about the quality of the system

• Static and dynamic analysis are complementary in nature
• The focus is to combine the strengths of both approaches

Verification vs. Validation
• Verification
• Evaluation of a software system that helps in determining whether the product of a given development phase satisfies the requirements established before the start of that phase
• Building the product correctly

• Validation
• Evaluation of a software system that helps in determining whether the product meets its intended use
• Building the correct product

Failure, Error, Fault and Defect
• Failure
• A failure is said to occur whenever the external behavior of a system does not
conform to that prescribed in the system specification
• Error
• An error is a state of the system.
• An error state could lead to a failure in the absence of any corrective action by the
system
• Fault
• A fault is the adjudged cause of an error
• Defect
• It is synonymous with fault
• It is also known as a bug
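
A minimal sketch, with a hypothetical function, of how these terms relate:

```python
def discounted_price(price, percent):
    # FAULT (defect, bug): the divisor should be 100, not 10.
    discount = price * percent / 10
    # ERROR: from here on, the program state holds a wrong discount value.
    return price - discount

# FAILURE: the externally visible behavior deviates from the specification,
# which prescribes that a 10% discount on 200.0 yields 180.0.
assert discounted_price(200.0, 10) == 180.0  # raises AssertionError: returns 0.0
```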

The Objectives of Testing
• It does work
• The objective is to show that a unit of code, or the system as a whole, works
• It does not work
• Once it works, the next objective is to find faults in the unit or system, that is, to make it fail
• Reduce the risk of failures
• Objective: to bring the risk of failure down to an acceptable level by iteratively testing and removing faults
• Reduce the cost of testing
• The number of test cases is directly proportional to the cost of testing
• Objective: to produce low-risk software with fewer test cases

What is a Test Case?
• A test case is a simple pair of
<input, expected outcome>
• State-less systems: a compiler is a stateless system
• Test cases are very simple
• The outcome depends solely on the current input
• State-oriented systems: an ATM is a state-oriented system
• Test cases are not that simple; a test case may consist of a sequence of <input, expected outcome> pairs
• The outcome depends both on the current state of the system and the current input
• ATM example:
• < check balance, $500.00 >,
• < withdraw, “amount?” >,
• < $200.00, “$200.00” >,
• < check balance, $300.00 >
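
The two kinds of test cases can be written down directly; a minimal sketch (the systems and the `apply` method are hypothetical):

```python
# State-less system: a single <input, expected outcome> pair suffices.
compiler_test = ("int x = 5;", "compiles OK")

# State-oriented system: a test case is a sequence of such pairs, as in
# the ATM example above; each outcome depends on the inputs before it.
atm_test = [
    ("check balance", "$500.00"),
    ("withdraw", "amount?"),
    ("$200.00", "$200.00"),
    ("check balance", "$300.00"),
]

def run_test(system, test_case):
    # Feed each input to the (stateful) system and compare outcomes.
    for given, expected in test_case:
        assert system.apply(given) == expected
```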

Expected outcome
An outcome of program execution may include:
• A value produced by the program
• A state change
• A sequence of values that must be interpreted together for the outcome to be valid

The Concept of Complete Testing
• Complete or exhaustive testing means
"There are no undisclosed faults at the end of the test phase"
• Complete testing is near impossible for most systems
• The domain of possible inputs of a program is too large
• Valid inputs
• Invalid inputs
• The design issues may be too complex to test completely
• It may not be possible to create all possible execution environments of the system
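
A back-of-the-envelope calculation shows why the input domain alone rules out exhaustive testing (the execution rate is an assumed, optimistic figure):

```python
# A function of just two 32-bit integer arguments has 2**64 valid inputs.
inputs = 2 ** 64
tests_per_second = 10 ** 9              # assumed, optimistic rate
seconds_per_year = 60 * 60 * 24 * 365
years = inputs / (tests_per_second * seconds_per_year)
print(f"exhaustive testing would take about {years:.0f} years")  # ~585
```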
Testing activities

Figure 1.6: Different activities in the testing process


• Identify the objective to be tested
• Select inputs
• Compute the expected outcome
• Set up the execution environment of the program
• Execute the program
• Analyze the test results
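
A minimal sketch of one pass through these activities, for a hypothetical command-line program (all names, paths, and values are assumptions):

```python
import subprocess

objective = "sorting works for an unsorted list"      # identify the objective
selected_input = "3 1 2"                              # select inputs
expected_outcome = "1 2 3"                            # compute the expected outcome
env = {"LC_ALL": "C"}                                 # set up the execution environment
result = subprocess.run(["./sort_numbers"], input=selected_input,
                        capture_output=True, text=True, env=env)  # execute the program
passed = result.stdout.strip() == expected_outcome    # analyze the test result
print("PASS" if passed else "FAIL", "-", objective)
```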
Testing levels
• Unit testing
• Individual program units, such as procedures and methods, are tested in isolation
• Integration testing
• Modules are assembled to construct larger subsystems and tested
• System testing
• Includes a wide spectrum of testing, such as functionality and load
• Acceptance testing
• Measures the system against the customer's expectations
• Two types of acceptance testing: UAT and BAT
• UAT (user acceptance testing): the system satisfies the contractual acceptance criteria
• BAT (business acceptance testing): the system will eventually pass the user acceptance test

Figure 1.7: Development and testing phases in the V model
Regression Testing

Figure 1.8: Regression testing at different software testing levels

• New test cases are not designed
• Tests are selected, prioritized, and executed
• The goal is to ensure that nothing is broken in the new version of the software

Sources of information for test case selection
• Requirement and Functional Specifications
• Source Code
• Input and Output Domains
• Operational Profile
• Fault Model
• Error Guessing
• Fault Seeding
• Mutation Analysis
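
Taking the last item as an example, here is a minimal sketch of mutation analysis (the functions are hypothetical): a small syntactic change is seeded into the code, and a good test suite must detect, or "kill", the mutant:

```python
def max_original(a, b):
    return a if a > b else b

def max_mutant(a, b):
    return a if a < b else b   # seeded mutant: '>' changed to '<'

def suite_kills(mutant):
    # The mutant is "killed" if at least one test case distinguishes
    # it from the original; surviving mutants expose weak test suites.
    tests = [(2, 1), (1, 2), (2, 2)]
    return any(max_original(a, b) != mutant(a, b) for a, b in tests)

assert suite_kills(max_mutant)   # (2, 1): original gives 2, mutant gives 1
```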

White-box and Black-box Testing
• White-box testing, a.k.a. structural testing
• Examines source code with a focus on control flow and data flow
• Control flow refers to the flow of control from one instruction to another
• Data flow refers to the propagation of values from one variable or constant to another variable
• It is applied to individual units of a program
• Software developers perform structural testing on the individual program units they write

• Black-box testing, a.k.a. functional testing
• Examines the program as it is accessible from outside
• Applies inputs to a program and observes the externally visible outcome
• It is applied both to an entire program and to individual program units
• It is performed at the external interface level of a system
• It is conducted by a separate software quality assurance group
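
A minimal sketch of the two views on one hypothetical unit:

```python
def classify(age):
    # Hypothetical unit under test.
    if age < 0:
        raise ValueError("negative age")
    return "minor" if age < 18 else "adult"

# White-box view: inputs are chosen by inspecting the control flow so
# that every branch in the source is exercised.
structural_inputs = [-1, 0, 17, 18, 19]

# Black-box view: inputs come from the specification alone; the code is
# not consulted, and only the externally visible outcome is checked.
assert classify(30) == "adult"
assert classify(5) == "minor"
```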

Test Planning and Design
• The purpose is to get ready and organized for test execution
• A test plan provides a:
• Framework
• A set of ideas, facts, or circumstances within which the tests will be conducted
• Scope
• The domain or extent of the test activities
• Details of the resources needed
• Effort required
• Schedule of activities
• Budget
• Test objectives are identified from different sources
• Each test case is designed as a combination of modular test components called test steps
• Test steps are combined to create more complex tests, as sketched below
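
A minimal sketch of modular test steps combined into a larger test (the session object and its methods are hypothetical):

```python
def step_login(session):
    assert session.login("user", "secret") == "OK"

def step_deposit(session, amount):
    assert session.deposit(amount) == "OK"

def step_check_balance(session, expected):
    assert session.balance() == expected

def test_deposit_updates_balance(session):
    # A more complex test assembled from reusable, modular test steps.
    step_login(session)
    step_check_balance(session, 0)
    step_deposit(session, 100)
    step_check_balance(session, 100)
```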
Monitoring and Measuring Test Execution
• Metrics for monitoring test execution
• Metrics for monitoring defects
• Test case effectiveness metrics
• Measure the "defect revealing ability" of the test suite
• Use the metric to improve the test design process
• Test-effort effectiveness metrics
• Number of defects found by the customers that were not found by the test engineers
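
One plausible way to quantify these two metrics (the formulas below are common formulations, assumed here rather than taken from the source):

```python
def test_case_effectiveness(defects_found_by_tests, total_defects):
    # "Defect revealing ability" of the suite, as a percentage.
    return 100.0 * defects_found_by_tests / total_defects

def test_effort_leakage(defects_found_by_customers, total_defects):
    # Share of defects that escaped testing and reached customers.
    return 100.0 * defects_found_by_customers / total_defects

print(test_case_effectiveness(45, 50))   # 90.0
print(test_effort_leakage(5, 50))        # 10.0
```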

Test Tools and Automation
• Benefits of test automation:
• Increased productivity of the testers
• Better coverage of regression testing
• Reduced durations of the testing phases
• Reduced cost of software maintenance
• Increased effectiveness of test cases

• Prerequisites for successful test automation:
• The test cases to be automated are well defined
• Test tools and an infrastructure are in place
• The test automation professionals have prior successful experience in automation
• An adequate budget has been allocated for the procurement of software tools
Test Team Organization and Management

Figure 16.1: Structure of test groups

• Hiring and retaining test engineers is a challenging task
• The interview is the primary mechanism for evaluating applicants
• Interviewing is a skill that improves with practice
• To retain test engineers, management must recognize the importance of testing effort on par with development effort
References
Book
• K. Naik and P. Tripathy, Software Testing and Quality Assurance: Theory and Practice (Indian edition)
