
Software Testing

Defective Software

• We develop programs that contain defects


• How many? What kind?

• It is hard to predict the future; however, it is highly likely that the software we develop in the future will not be significantly better.
Sources of Problems
• Requirements Definition: Erroneous, incomplete, or inconsistent requirements.

• Design: Fundamental design flaws in the software.

• Implementation: Mistakes in chip fabrication, wiring, programming faults, malicious code.

• Support Systems: Poor programming languages, faulty compilers and debuggers, misleading development tools.


Software Testing

Testing is a set of activities that can be planned in advance and conducted systematically. A number of software testing strategies have been proposed. All provide you with a template for testing, and all have the following generic characteristics:

• To perform effective testing, you should conduct effective technical reviews. By doing this, many errors will be eliminated before testing commences.

• Testing begins at the component level and works toward the integration of the entire computer-based system.
Software Testing

• Different testing techniques are appropriate for different software engineering approaches and at different points in time.

• Testing is conducted by the developer of the software and by an independent test group.

• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

• A strategy for software testing must accommodate low-level tests that verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements.
Verification and Validation:

Software testing is one element of a broader topic that is often referred to as verification and validation (V&V). Verification refers to the set of tasks that ensure that software correctly implements a specific function. Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements. Boehm states this another way:

• Verification: “Are we building the product right?”

• Validation: “Are we building the right product?”


Verification and Validation:

The definition of V&V encompasses many software quality assurance (SQA) activities, including technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility studies, documentation review, database review, algorithm analysis, development testing, usability testing, qualification testing, acceptance testing, and installation testing.


Targets of testing:

• Error - An error is an actual mistake made by developers, such as a coding mistake. A difference between the output of the software and the desired output is also considered an error.

• Fault - A fault, also known as a bug, is the result of an error and can cause the system to fail.

• Failure - A failure is the inability of the system to perform a desired task. A failure occurs when a fault in the system is exercised.
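A minimal Python sketch of this chain, with a function invented for illustration:

    # A developer's error leaves a fault (bug) in the code:
    def average(values):
        # Fault: divides by (len(values) - 1) instead of len(values).
        return sum(values) / (len(values) - 1)

    # Exercising the fault produces a failure:
    result = average([2, 4, 6])   # desired output: 4.0
    print(result)                 # actual output: 6.0 -> failure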


Manual Vs. Automated Testing

• Testing can be done either manually or using an automated testing tool.

• Manual - This testing is performed without the help of automated testing tools. The software tester prepares test cases for different sections and levels of the code, executes the tests, and reports the results to the manager.

• Manual testing is time- and resource-consuming. The tester needs to confirm whether or not the right test cases are used. A major portion of testing involves manual testing.
Manual Vs. Automated Testing

• Automated - This testing is a procedure performed with the aid of automated testing tools. The limitations of manual testing can be overcome using automated test tools.

• For example, a test may need to check whether a webpage can be opened in Internet Explorer. This can easily be done with manual testing. But checking whether the web server can take the load of 1 million users is practically impossible to test manually.


Test cases are specifications of the inputs to the test and the expected output from the system, plus a statement of what is being tested. Test data are the inputs that have been devised to test a system. Test data can sometimes be generated automatically, but automatic test case generation is impossible, because people who understand what the system is supposed to do must be involved to specify the expected test results.
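As a sketch, a test case might be represented as a record of input, expected output, and a statement of what is being tested (the names and runner here are illustrative, not from any specific tool):

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        description: str   # statement of what is being tested
        test_data: list    # inputs devised to test the system
        expected: object   # expected output from the system

    def run(case, function):
        actual = function(*case.test_data)
        print(f"{case.description}: "
              f"{'pass' if actual == case.expected else 'fail'}")

    # A person who understands the system specifies the expected result:
    run(TestCase("absolute value of a negative number", [-5], 5), abs)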
Testing Approaches

Tests can be conducted based on two approaches:

• Functionality testing

• Implementation testing

• When functionality is tested without taking the actual implementation into account, it is known as black-box testing. The other side is known as white-box testing, where not only functionality is tested but the way it is implemented is also analyzed.

• Exhaustive testing is the ideal method for perfect testing: every single possible value in the range of the input and output values is tested. However, it is not possible to test each and every value in a real-world scenario if the range of values is large.
Black-box testing

• It is carried out to test the functionality of the program and is also called ‘behavioral’ testing. The tester in this case has a set of input values and the respective desired results. On providing an input, if the output matches the desired result, the program is tested ‘ok’; otherwise it is problematic.

• In this testing method, the design and structure of the code are not known to the tester; testing engineers and end users conduct this test on the software.
Black-box testing techniques:

• Equivalence class - The input is divided into classes of similar values. If one element of a class passes the test, the whole class is assumed to pass (a sketch of this and of boundary-value selection follows this list).

• Boundary values - The input is divided into higher- and lower-end values. If these values pass the test, it is assumed that all values in between may pass too.

• Cause-effect graphing - In both previous methods, only one input value is tested at a time. Cause (input) - effect (output) graphing is a testing technique in which combinations of input values are tested in a systematic way.

• Pair-wise testing - The behavior of software depends on multiple parameters. In pairwise testing, the multiple parameters are tested pair-wise for their different values.

• State-based testing - The system changes state on provision of input. These systems are tested based on their states and the inputs that trigger state transitions.
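A small sketch of how equivalence classes and boundary values might translate into concrete test inputs, for a hypothetical validator that accepts ages 18 to 60:

    # Hypothetical function under test: accepts ages 18..60 inclusive.
    def is_valid_age(age):
        return 18 <= age <= 60

    # Equivalence classes: one representative stands for each class
    # (below the range, in the range, above the range).
    assert [is_valid_age(a) for a in (10, 35, 70)] == [False, True, False]

    # Boundary values: test at and just outside the edges of the range.
    assert [is_valid_age(a) for a in (17, 18, 60, 61)] == [False, True, True, False]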
White-box testing

• It is conducted to test the program and its implementation, in order to improve code efficiency or structure. It is also known as ‘structural’ testing.

• In this testing method, the design and structure of the code are known to the tester. Programmers of the code conduct this test on the code.

Below are some white-box testing techniques:

• Control-flow testing - The purpose of control-flow testing is to set up test cases that cover all statements and branch conditions. The branch conditions are tested for being both true and false, so that all statements can be covered (see the sketch after this list).

• Data-flow testing - This testing technique emphasizes covering all the data variables included in the program. It tests where the variables were declared and defined and where they were used or changed.
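As an illustration of control-flow testing, the test cases below exercise each branch of an invented function as both true and false:

    def classify(n):
        if n < 0:            # branch 1
            return "negative"
        if n % 2 == 0:       # branch 2
            return "even"
        return "odd"

    # Each branch is taken both true and false across the cases:
    assert classify(-3) == "negative"   # branch 1 true
    assert classify(4) == "even"        # branch 1 false, branch 2 true
    assert classify(7) == "odd"         # branch 1 false, branch 2 false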
Testing Levels

• Testing itself may be defined at various levels of the SDLC. The testing process runs parallel to software development: before jumping to the next stage, a stage is tested, validated, and verified.

• Testing separately at each level is done to make sure that there are no hidden bugs or issues left in the software. Software is tested at the following levels:
Unit Testing

• While coding, the programmer performs some tests on that unit of the program to know whether it is error-free. Testing is performed under the white-box testing approach. Unit testing helps developers decide whether individual units of the program are working as per requirement.
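A minimal unit-test sketch using Python's standard unittest module (the unit under test is invented for the example):

    import unittest

    # Unit under test: a small, self-contained function.
    def discount(price, percent):
        return round(price * (1 - percent / 100), 2)

    class DiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(discount(200.0, 10), 180.0)

        def test_zero_percent_changes_nothing(self):
            self.assertEqual(discount(99.99, 0), 99.99)

    if __name__ == "__main__":
        unittest.main()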
Integration Testing

• Even if the units of software are working fine individually, there is a need to find out whether the units, when integrated together, would also work without errors, for example in argument passing and data updates. A sketch of such a check follows.
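A small sketch of an integration check: two illustrative units that work individually are exercised together to verify that data passes correctly between them:

    # Unit A: formats a record for storage.
    def make_record(name, score):
        return {"name": name.strip(), "score": score}

    # Unit B: updates a store with a record produced by unit A.
    def update_store(store, record):
        store[record["name"]] = record["score"]

    # Integration test: verify the data flows correctly from A into B.
    store = {}
    update_store(store, make_record("  alice ", 92))
    assert store == {"alice": 92}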
System Testing

• The software is compiled as a product and then tested as a whole. This can be accomplished using one or more of the following tests:

• Functionality testing - Tests all functionalities of the software against the requirements.

• Performance testing - This test proves how efficient the software is. It tests the effectiveness and the average time taken by the software to do a desired task. Performance testing is done by means of load testing and stress testing, where the software is put under high user and data load under various environment conditions.

• Security & portability testing - These tests are done when the software is meant to work on various platforms and to be accessed by a number of persons.
Acceptance Testing

• When the software is ready to be handed over to the customer, it has to go through the last phase of testing, where it is tested for user interaction and response. This is important because even if the software matches all user requirements, it may be rejected if the user does not like the way it appears or works.

• Alpha testing - The team of developers themselves performs alpha testing by using the system as if it were being used in the work environment. They try to find out how a user would react to some action in the software and how the system should respond to inputs.

• Beta testing - After the software is tested internally, it is handed over to the users to use it under their production environment, only for testing purposes. This is not yet the delivered product; developers expect users at this stage to report subtle problems that earlier testing missed.
Regression Testing

• Whenever a software product is updated with new code, features, or functionality, it is tested thoroughly to detect whether the added code has any negative impact on what was already working. This is known as regression testing; a sketch follows.
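In practice this often means re-running an existing suite after every change; a minimal sketch (the function and cases are illustrative):

    # Existing behavior, protected by a small regression suite:
    def tax(amount):
        return amount * 0.2

    regression_suite = [(100, 20.0), (0, 0.0), (50, 10.0)]

    # After new code is added anywhere, the old cases are re-run to
    # detect any negative impact on existing behavior.
    for amount, expected in regression_suite:
        assert tax(amount) == expected, f"regression for input {amount}"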
• Validation testing provides final assurance that the software meets all informational, functional, behavioral, and performance requirements.

• System testing: The last high-order testing step falls outside the boundary of software engineering and into the broader context of computer system engineering. Software, once validated, must be combined with other system elements (e.g., hardware, people, databases).

• System testing verifies that all elements mesh properly and that overall system function/performance is achieved.
System Testing

• System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that system elements have been properly integrated and perform allocated functions.

• Recovery testing: Many computer-based systems must recover from faults and resume processing with little or no downtime. In some cases, a system must be fault tolerant; that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur. Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
• Security testing: Any computer-based system that manages sensitive information or causes actions that can improperly harm (or benefit) individuals is a target for improper or illegal penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport, disgruntled employees who attempt to penetrate for revenge, and dishonest individuals who attempt to penetrate for illicit personal gain. Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.
• Stress testing: Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example:

(1) special tests may be designed that generate ten interrupts per second, when one or two is the average rate;

(2) input data rates may be increased by an order of magnitude to determine how input functions will respond;

(3) test cases that require maximum memory or other resources are executed;

(4) test cases that may cause thrashing in a virtual operating system are designed;

(5) test cases that may cause excessive hunting for disk-resident data are created.

Essentially, the tester attempts to break the program.
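A hedged sketch of the idea, driving an invented handler at far above its normal rate using threads (the workload and numbers are made up):

    from concurrent.futures import ThreadPoolExecutor

    # Invented target: a stand-in for a real request handler.
    def handle_request(i):
        return sum(range(1000))

    # Demand resources in abnormal quantity and frequency: many more
    # concurrent requests than the average expected load.
    with ThreadPoolExecutor(max_workers=100) as pool:
        results = list(pool.map(handle_request, range(10_000)))

    assert len(results) == 10_000   # the system survived the burst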
• Performance testing: Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained. Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation; that is, it is often necessary to measure resource utilization (e.g., processor cycles) in an exacting fashion.
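A small sketch of software instrumentation using only the Python standard library (the workload is illustrative; real performance tests would add hardware instrumentation as well):

    import time
    import tracemalloc

    def workload():
        return sorted(range(100_000), reverse=True)

    tracemalloc.start()
    start = time.perf_counter()
    workload()
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    print(f"elapsed: {elapsed:.4f} s, peak memory: {peak / 1024:.1f} KiB")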
• Deployment testing: In many cases, software must execute on a variety of platforms and under more than one operating system environment. Deployment testing, sometimes called configuration testing, exercises the software in each environment in which it is to operate. In addition, deployment testing examines all installation procedures and specialized installation software (e.g., “installers”) that will be used by customers, and all documentation that will be used to introduce the software to end users.
