Real World Software Testing
Agenda
• Software Testing
– Importance
• Challenges and Opportunity
Questions?
• Quality
– QA
– QC
What is Quality?
• Prevention
• Detection
• Cure
QA and QC Summary
• QA: Assurance, Process-oriented, Preventive, Quality Audit
• QC: Control, Product-oriented, Detective, Testing
Quality... it’s all about the
End-User
• Testing
– What is testing
– Objectives of Testing
What is Testing?
• Identify defects
– when the software doesn’t work
• Verify that it satisfies specified requirements
– verify that the software works
Mature view of software testing
[Diagram: software development activities (functional specs, system design, coding, build) and the progressive distortion of the original requirements across them]
Relative cost to fix software defects
[Chart: the relative cost to fix a defect rises roughly tenfold with each later development activity, from 1 at specs through 10, 100, and 1000 at build]
Verification & Validation (V&V) Activities
What is Verification?
• Verification is the process of confirming that software meets its specifications.
• It is the process of reviewing/inspecting deliverables throughout the life cycle.
• Inspections, walkthroughs, and reviews are examples of verification techniques.
• Verification is the process of examining a product to discover its defects.
• Verification is usually performed by STATIC testing, i.e. inspecting without execution on a computer.
• Verification examines product requirements, specifications, design, and code for large fundamental problems: oversights and omissions.
• Verification is a "human" examination or review of the work product.
• Verification
  – determines whether a phase was completed correctly
  • "Are we building the product right?"
What is Validation?
• Validation is the process of evaluating software during or at the end of development to determine whether it satisfies user needs: "Are we building the right product?"
Verification Techniques
• Reviews
• Walkthrough
• Inspection
Verification “What to Look For?”
• White-box testing
• Black-box testing
White-box testing
[Diagram: white-box test cases are derived from the program's internal structure]
Black-box testing
[Diagram: inputs and events drive the system; outputs are checked against the requirements, without reference to internal code]
Levels of Testing
• Low-level testing
– unit (module) testing
– integration testing
• High-level testing
– function testing
– system testing
– acceptance testing
Unit Testing
• Why retest?
– Because any software product that is actively used and
supported must be changed from time to time, and every new
version of a product should be retested
Progressive testing and regressive testing
Equivalence Partitioning
• Divide the input domain into classes of data from which test cases can be generated.
• Attempts to uncover classes of errors.
• Based on equivalence classes for input conditions.
• An equivalence class represents a set of valid or invalid states.
• An input condition is either a specific numeric value, a range of values, a set of related values, or a Boolean condition.
• Equivalence classes can be defined as follows:
  – If an input condition specifies a range or a specific value, one valid and two invalid equivalence classes are defined.
  – If an input condition specifies a Boolean or a member of a set, one valid and one invalid equivalence class are defined.
• Test cases for each input-domain data item are developed and executed.
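The range rule above (one valid and two invalid classes) can be sketched in Python. The age field and its bounds are illustrative assumptions, not taken from the source:

```python
# Hypothetical input rule: "age" must be an integer from 1 to 120.
def accept_age(age: int) -> bool:
    """Return True if the age falls in the valid range 1..120."""
    return 1 <= age <= 120

# One representative test value per equivalence class:
# valid class [1..120], invalid class below it, invalid class above it.
assert accept_age(35) is True      # valid class
assert accept_age(0) is False      # invalid class: below range
assert accept_age(150) is False    # invalid class: above range
```

One test value per class is enough under the equivalence assumption: any other member of the same class should behave identically.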
Boundary value analysis
• Example: loan application form
  – Term: 1 to 30 years
  – Repayment: minimum £10
  – Interest rate and total paid back: calculated outputs
• Example: account number field
  – First character: valid if non-zero, invalid if zero
  – Number of digits: 5 (invalid), 6 (valid), 7 (invalid)
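The loan-term rule from the example can be exercised with boundary values. This is a sketch; the validator function is assumed, only the 1-to-30-year range comes from the slide:

```python
def accept_term(years: int) -> bool:
    """Valid loan term: 1 to 30 years (per the slide's loan example)."""
    return 1 <= years <= 30

# Boundary value analysis: probe on and just outside each boundary,
# rather than picking arbitrary mid-range values.
for years, expected in [(0, False), (1, True), (2, True),
                        (29, True), (30, True), (31, False)]:
    assert accept_term(years) is expected
```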
Error guessing
• Based on the theory that test cases can be developed from the intuition and experience of the test engineer.
• For example, where one of the inputs is a date, a test engineer might try February 29, 2000 or 9/9/99.
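Error-guessing probes like the dates mentioned above can be captured as quick checks. This is a sketch; the MM/DD/YYYY format and the validator are assumptions:

```python
from datetime import datetime

def is_valid_date(text: str) -> bool:
    """Return True if the text parses as a real calendar date (MM/DD/YYYY)."""
    try:
        datetime.strptime(text, "%m/%d/%Y")
        return True
    except ValueError:
        return False

# Error-guessing probes: dates that have historically tripped up software.
assert is_valid_date("02/29/2000") is True    # 2000 was a leap year
assert is_valid_date("02/29/1900") is False   # 1900 was not a leap year
assert is_valid_date("09/09/1999") is True    # old "end of data" sentinel
```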
• The basis of a test is the source material (of the product under test) that provides the stimulus for the test. In other words, it is the area targeted as the potential source of an error:
• Requirements-based tests are based on the requirements document.
• Function-based tests are based on the functional design specification.
• Internal-based tests are based on the internal design specification or code.
• Function-based and internal-based tests will fail to detect situations where requirements are not met. Internal-based tests will fail to detect errors in functionality.
Test
• Definition of result
– A necessary part of a test case is a definition of the expected
output or result.
• Repeatability
Test Case
• The structure of test cases is one of the things that stays remarkably the
same regardless of the technology being tested. The conditions to be tested
may differ greatly from one technology to the next, but you still need to know
three basic things about what you plan to test:
• ID #: This is a unique identifier for the test case. The identifier does not imply
a sequential order of test execution in most cases. The test case ID can also
be intelligent. For example, the test case ID of ORD001 could indicate a test
case for the ordering process on the first web page.
• Condition: This is an event that should produce an observable result. For
example, in an e-commerce application, if the user selects an overnight
shipping option, the correct charge should be added to the total of the
transaction. A test designer would want to test all shipping options, with each
option giving a different amount added to the transaction total.
Test Case Components
• Procedure: This is the process a tester needs to perform to invoke the condition
and observe the results. A test case procedure should be limited to the steps
needed to perform a single test case.
• Expected Result: This is the observable result from invoking a test condition. If
you can’t observe a result, you can’t determine if a test passes or fails. In the
previous example of an e-commerce shipping option, the expected results
would be specifically defined according to the type of shipping the user selects.
• Pass/Fail: This is where the tester indicates the outcome of the test case. For
the purpose of space, I typically use the same column to indicate both "pass"
(P) and "fail" (F). In some situations, such as a regulated environment, simply
indicating pass or fail is not enough information about the outcome of a test
case to provide adequate documentation. For this reason, some people choose
to also add a column for "Observed Results."
• Defect Number Cross-reference: If you identify a defect in the execution of a test
case, this component of the test case gives you a way to link the test case to a
specific defect report.
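The test case components described above can be kept as a simple record. The field names mirror the components on these slides; the class itself is an illustrative sketch, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A test case record with the components described in the slides."""
    case_id: str          # unique identifier, e.g. "ORD001"
    condition: str        # event that should produce an observable result
    procedure: str        # steps needed to invoke the condition
    expected_result: str  # observable outcome that decides pass/fail
    outcome: str = ""     # "P" or "F", filled in at execution time
    defect_refs: list = field(default_factory=list)  # linked defect reports

tc = TestCase(
    case_id="ORD001",
    condition="User selects overnight shipping",
    procedure="Add an item to the cart; choose 'Overnight' at checkout",
    expected_result="Overnight charge is added to the order total",
)
assert tc.outcome == ""  # not yet executed
```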
• Sample Business Rule
• A customer may select one of the following options for shipping when
ordering products. The shipping cost will be based on product price
before sales tax and the method of shipment according to the table
below.
• If no shipping method is selected, the customer receives an error
message, "Please select a shipping option." The ordering process
cannot continue until the shipping option has been selected and
confirmed.
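The business rule above might be implemented roughly as follows. The rate table itself is not reproduced in the source, so the methods and rates below are placeholders for illustration only; the error message comes from the rule:

```python
from typing import Optional

# Placeholder rates: the source refers to a table that is not reproduced here.
SHIPPING_RATES = {"ground": 0.05, "two_day": 0.10, "overnight": 0.20}

def shipping_cost(pre_tax_price: float, method: Optional[str]) -> float:
    """Shipping cost based on pre-tax product price and shipment method."""
    if method is None:
        raise ValueError("Please select a shipping option.")
    return round(pre_tax_price * SHIPPING_RATES[method], 2)

assert shipping_cost(100.0, "overnight") == 20.0
try:
    shipping_cost(100.0, None)   # the rule requires an error here
except ValueError as e:
    assert "select a shipping option" in str(e)
```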
Characteristics of a good test
• Testing
• Defect
• Defect report
– Summary
– Description
– Severity
– Priority
• Defect tracking
How many testers does it take to
change a light bulb?
• The best tester isn’t the one who finds the most bugs
or who embarrasses the most programmers. The best
tester is the one who gets the most bugs fixed.
- Testing Computer Software
What Do You Do When You Find a Defect?
• Report a defect, including:
  – Summary
  – Date reported
  – Detailed description
  – Assigned to
  – Severity
  – Detected in version
  – Priority
  – System info
  – Status
  – Reproducible
  – Detected by
  – Screen prints, logs, etc.
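A defect report with the fields listed above can be modeled as a record. The field names follow the slide; the class and default values are an illustrative sketch:

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    """Defect report with the fields listed on the slide."""
    summary: str              # one-line subject read by the whole team
    detailed_description: str
    severity: str             # impact on the system, e.g. "Major"
    priority: str             # urgency of the fix, e.g. "High"
    detected_in_version: str
    detected_by: str
    assigned_to: str = ""
    status: str = "Open"      # assumed initial status
    reproducible: bool = True

bug = DefectReport(
    summary="Opening a file replaces unsaved text in the editor",
    detailed_description="Type text, then open a file on top; typed text is lost.",
    severity="Major", priority="High",
    detected_in_version="1.2", detected_by="QA",
)
assert bug.status == "Open"
```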
Blank defect report
What is the most important part of
the defect report?
Who reads our defect reports?
• Project Manager
• Executives
• Development
• Customer Support
• Marketing
• Quality Assurance
• Any member of the Project Team
Example 1
Example 2
Downstream effects of a poorly written subject line
• Development
• Project Manager
• Customer Support
• Quality Assurance
• Any member of the Project Team
Example 1
Example 2
What’s missing in these
descriptions?
• "Data corruption": corruption of what? under what conditions?
• "Misspelling": which word? on which screen?
An example of a bug:
• What if you are typing text in and then you open a file
on top? Does your work get replaced with the file
that was just opened?
What is the bug?
Defect life cycle
[Diagram: a change request is opened, assigned to an engineer, fixed, and then verified; a request may instead be rejected or closed as a duplicate. Bug outcomes shown: serious bug, minor bug, fixed bug]
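The life cycle of a change request (open, assigned, fixed, verified, with reject and duplicate as exits) can be modeled as a small state machine. The transition table below is an illustrative reading of the slide, not a prescribed workflow:

```python
# Allowed transitions for a change request, following the slide's life cycle.
TRANSITIONS = {
    "open": {"assigned", "rejected", "duplicate"},
    "assigned": {"fixed"},
    "fixed": {"verified"},
}

def advance(state: str, new_state: str) -> str:
    """Move a change request to a new state if the transition is allowed."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = advance("open", "assigned")   # change request is assigned to engineer
state = advance(state, "fixed")       # engineer fixes it
state = advance(state, "verified")    # tester verifies the fix
assert state == "verified"
```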
Test Plan Objectives
1. Test-plan identifier
   – Specify a unique identifier.
2. Introduction
   – Objectives, background, scope, and references are included.
   – Summarize the software items and software features to be tested. The need for each item and its history may be included.
Test Plan Details
3. Test items
   – Identify the test items, including their version/revision level.
   – Supply references to the following item documentation, e.g. requirements specification, design specification, user guide, operations guide, installation guide.
   – Reference any incident reports relating to the test items.
Test Plan Details
4. Features to be tested
   – Identify all software features and combinations of software features to be tested.
   – Identify the test-design specification associated with each feature.
6. Approach
   – Describe the overall approach to testing.
   – Specify the major activities, techniques, and tools used to test the designated group of features.
   – The approach should be described in sufficient detail to permit identification of the major testing tasks and estimation of the time required to do each one.
   – Specify any additional completion criteria and constraints.
Test Plan Details
9. Test deliverables
   – Identify the deliverable documents, such as:
     - test plan
     - test design specifications
     - test case specifications
     - test procedure specifications
     - test logs
     - test incident reports
     - test summary reports
     - test input data and test output data
     - test tools (e.g. modules, drivers, and stubs)
Test Plan Details
12. Responsibilities
   – Identify the groups responsible for managing, designing, preparing, executing, checking, and resolving.
   – These groups may include testers, developers, operations staff, user representatives, and the technical support team.
Test Plan Details
14. Schedule
   – Estimate the time required to do each testing task.
   – Specify the schedule for each testing task.
   – For each resource (i.e. facilities, tools, and staff), specify its period of use.
Test Plan Details
16. Approvals
   – Specify the names and titles of all persons who must approve the plan.
Example – Payroll system
• Recovery testing
• Performance testing
– Performance will be evaluated against the performance
requirements by measuring the run times of several jobs
using production data volumes.
• Regression
– Regression test will be done on a new version to test the
impacts of the modifications
Item pass/fail criteria
Test Reporting
• Requirements tracing
• Function test matrix
  – This maps each system function to the test cases that validate that function.
  – The function test matrix shows which tests must be performed in order to validate the functions. The matrix will be used to determine what tests are needed and the sequencing of tests. It will also be used to determine the status of testing.
• In a large testing project, it is easy to lose track of what has been tested and
what should be tested. The question often arises, "Is the testing comprehensive
enough?"
• A simple way of determining what to test is to go through the source documents
(Business Requirements, Functional Specifications, System Design Document,
etc.) paragraph by paragraph and extract each requirement. A simple matrix is
built, with the following format:
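A matrix like the one described above can be kept as a simple mapping from functions to test cases. The function and test-case names below are invented for illustration:

```python
# Map each function (row) to the test cases that validate it (columns).
matrix = {
    "create_order": ["TC001", "TC002"],
    "view_cart": ["TC003"],
    "delete_order": [],          # no coverage yet
}

# Functions with no associated test cases are coverage gaps.
untested = [fn for fn, cases in matrix.items() if not cases]
assert untested == ["delete_order"]
```

Scanning the matrix for empty rows answers the question "is the testing comprehensive enough?" mechanically rather than by memory.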
Test Assets
• Objective: The purpose of this report is to present which functions have been fully tested, which functions have been tested but contain errors, and which functions have not been tested. The report will include 100 percent of the functions to be tested in accordance with the test plan.
• Example: A sample of this test report, showing that 50 percent of the functions tested have errors, 40 percent are fully tested, and 10 percent of the functions have not been tested, is illustrated in the graph.
• How to Interpret the Report: The report is designed to show status. It is intended for the test
manager and/or customer of the software system. The interpretation will depend heavily on
the point in the test process at which the report is prepared. As the implementation date
approaches, a high number of functions tested with uncorrected errors, plus functions not
tested, would raise concerns about meeting the implementation date.
Functional Testing Status Report
[Bar chart: percent of functions by status; roughly 50% tested with errors, 40% fully tested, 10% not tested]
Functions Working Timeline
• Objective: The purpose of this report is to show the status of testing and the probability that the development and test groups will have the system ready on the projected implementation date.
• Example: The functions working timeline (figure) shows the normal projection for having functions working. This report assumes a September implementation date and shows, from January through September, the percent of functions that should be working correctly at any point in time. The actual line shows that the project is doing better than projected.
• How to Interpret the Report: If the actual line is ahead of the planned line, the probability of meeting the implementation date is high. On the other hand, if the actual percent of functions working is less than planned, both the test manager and the development team should be concerned, and may want to extend the implementation date or add resources to testing and/or development.
Functions Working Timeline
[Line chart: planned vs. actual percent of functions working, January through May]
Defects Uncovered versus
Corrected Gap Timeline
• Objective: The purpose of this report is to show the backlog of detected but uncorrected defects. It merely requires recording defects when they are detected, and recording them again when they have been successfully corrected.
• Example: The graph shows a project beginning in January with a projected September implementation. One line shows the cumulative number of defects uncovered in test; the second line shows the cumulative number of defects corrected by the development team and retested to demonstrate their correctness. The gap between them represents the number of uncovered but uncorrected defects at any point in time.
• How to interpret the Report: The ideal project would have a very small gap between these
two timelines. If the gap becomes large, it is indicative that the backlog of uncorrected
defects is growing and the probability of the development team correcting them prior to
implementation date is decreasing. The development team needs to manage this gap to
ensure that it remains minimal.
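The gap described above is just the difference between the two cumulative series. The per-build counts below are invented for illustration:

```python
# Cumulative counts per build (illustrative numbers, not from the source).
detected  = [10, 25, 45, 60, 70]   # cumulative defects uncovered in test
corrected = [ 5, 18, 30, 50, 66]   # cumulative defects corrected and retested

# Gap = defects detected but not yet corrected at each point in time.
gap = [d - c for d, c in zip(detected, corrected)]
assert gap == [5, 7, 15, 10, 4]    # backlog grows, then shrinks
```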
Defects Uncovered versus Corrected Gap Timeline
[Line chart: cumulative detected vs. corrected defects across builds 1 through 5; the gap between the lines is the uncorrected backlog]
Average Age of Uncorrected Defects by Type
[Bar chart: average age in days of uncorrected defects, by severity: critical, major, minor]
Defect Distribution Report
• Objective: The purpose of this report is to show how defects are distributed among the
modules/units being tested. It shows the total cumulative defects uncovered for each
module being tested at any point in time.
• Example: The defect distribution report example shows eight units under test and the number of defects uncovered in each of those units to date. The report could be enhanced to show the extent of testing that has occurred on each module. For example, it might be color-coded by the number of tests, or the number of tests might be incorporated into the bar as a number, such as a 6 for a unit that has undergone six tests at the time the report was prepared.
• How to Interpret this Report: This report can help identify modules that have an excessive defect rate. A variation of the report could show the cumulative defects by test. For example, the defects uncovered in test 1, the cumulative defects uncovered by the end of test 2, the cumulative defects uncovered by test 3, and so forth. Modules with abnormally high defect rates frequently have ineffective architectures, and are candidates for rewrite rather than additional testing.
Defect Distribution Report
[Bar chart: number of defects uncovered to date in each of the eight units under test, A through H]
Questions?
Why not just "test everything"?
• Example: an application with an average of 4 menus and 3 options per menu already multiplies into a large number of paths, before input values are even considered.
[Chart: as the amount of testing increases, the cost of testing rises while the number of missed bugs falls; the optimal amount of testing lies between under-testing and over-testing]
How much testing?
• It depends on RISK
– risk of missing important faults
– risk of incurring failure costs
– risk of releasing untested or under-tested software
– risk of losing credibility and market share
– risk of missing a market window
– risk of over-testing, ineffective testing
Testing addresses risk
[Risk matrix: impact, from harmless to annoying and beyond, plotted against likelihood]
Risk Based Testing
• Each piece of the system is assigned impact and likelihood values.
• Pieces with high impact and high likelihood need the most testing.
• Each piece with high impact needs attention, even if the likelihood value is low.
• Next we test the high-likelihood pieces.
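The impact-and-likelihood scoring can be sketched as a simple ranking. The system pieces and their scores below are invented for illustration:

```python
# Score each piece of the system as (impact, likelihood), 1 = low, 3 = high.
pieces = {"payment": (3, 3), "search": (2, 3), "help_page": (1, 1)}

# Rank pieces by risk = impact x likelihood; test the riskiest first.
ranked = sorted(pieces, key=lambda p: pieces[p][0] * pieces[p][1], reverse=True)
assert ranked[0] == "payment"     # high impact, high likelihood: most testing
assert ranked[-1] == "help_page"  # low impact, low likelihood: least testing
```

A real scheme would also bump high-impact pieces up the list even when their likelihood score is low, as the slide notes.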
• Testing
• Defect
• Defect report
– Summary
– Description
– Severity
– Priority
• Defect tracking
How does a client/server
environment affect testing?
• Concurrency
• Configuration
• Portability
• Performance
– Load
• Peak load testing
– Stress
• Database
– Integrity
– Validity
– Volume testing
How can World Wide Web sites be
tested?
• Quality/Content
– Broken Links
• Error 404
– Missing Components
• Navigation support
• Browser compatibility
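Checking for broken links (the Error 404 case above) can be sketched with the standard-library HTML parser. The page and the status lookup are stubs; a real checker would issue HTTP requests instead:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href"]

def broken_links(html, fetch_status):
    """Return links for which fetch_status(url) reports 404."""
    parser = LinkExtractor()
    parser.feed(html)
    return [url for url in parser.links if fetch_status(url) == 404]

# Stubbed status lookup stands in for a real HTTP request per link.
page = '<a href="/ok">ok</a> <a href="/gone">gone</a>'
status = {"/ok": 200, "/gone": 404}
assert broken_links(page, status.get) == ["/gone"]
```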
Common Web Site Testing
Objectives
General page layout
• frames
• images
• tables
Functional testing
• of each transaction
• using different sets of valid data
• across different browsers
Regression testing
• hardware and software upgrades
• web site enhancements
General Page Layout
[Screenshot: a page combining frames, images, and tables]
Functional Testing
(for each transaction, with valid and invalid data)
[Diagram: e-commerce transactions to test: create an order, view shopping cart, delete an order, create new account]
Functional Testing
(verify the web site works in different browsers)
[Diagram: the same transactions (create an order, view shopping cart, delete an order) repeated for each supported browser]
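Repeating the same transactions across browsers is a cross product, which a test harness can generate rather than hand-write. The browser names and the `run` stub are assumptions; a real suite would drive the site with a browser-automation tool such as Selenium:

```python
# The same transactions repeated across browsers, as the diagram suggests.
BROWSERS = ["Chrome", "Firefox", "Edge"]
TRANSACTIONS = ["create_order", "view_shopping_cart", "delete_order"]

def run(browser, transaction):
    """Stub for driving the site; returns a result label for illustration."""
    return f"{transaction} on {browser}"

# Generate every browser/transaction combination.
results = [run(b, t) for b in BROWSERS for t in TRANSACTIONS]
assert len(results) == len(BROWSERS) * len(TRANSACTIONS)  # 9 combinations
```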
Testing Tools
• Improvisation required? Poor candidate for automation.
• Test automation is based on a simple economic proposition: it pays off only when the cost of automating a test is lower than the cumulative cost of running it manually over its lifetime.