
SOFTWARE TESTING

BLACK BOX TESTING:-

1) FUNCTIONAL TESTING:- In this type of testing, the software is tested against its functional requirements. Tests are written to check whether the application behaves as expected. Although functional testing is often done toward the end of the development cycle, it can, and should, be started much earlier. Individual components and processes can be tested early on, even before it is possible to do functional testing on the entire system. Functional testing covers how well the system executes the functions it is supposed to execute, including user commands, data manipulation, searches, business processes, user screens, and integrations. It covers the obvious surface-level functions as well as back-end operations (such as security and how upgrades affect the system).
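
As a minimal sketch of what "tests written to check that the application behaves as expected" can look like, the following uses Python's standard unittest module against a small hypothetical function (apply_discount is invented here for illustration; it is not from the text above):

```python
import unittest

# Hypothetical function under test: applies a percentage discount to a price.
def apply_discount(price, percent):
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountFunctionalTest(unittest.TestCase):
    def test_normal_discount(self):
        # Requirement: a 25% discount on 80.00 yields 60.00.
        self.assertEqual(apply_discount(80.00, 25), 60.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.00, 150)

if __name__ == "__main__":
    unittest.main()
```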

2) STRESS TESTING:- The application is tested against heavy load, such as complex numerical values, a large number of inputs, or a large number of queries, to check how much stress/load the application can withstand. Stress testing deals with the quality of the application in its environment. The idea is to create an environment more demanding of the application than it would experience under normal workloads. This is the hardest and most complex category of testing to accomplish, and it requires a joint effort from all teams. A test environment is established with many testing stations. At each station, a script exercises the system. These scripts are usually based on the regression suite. More and more stations are added, all simultaneously hammering on the system, until the system breaks. The system is repaired and the stress test is repeated until a level of stress is reached that is higher than expected at a customer site. Race conditions and memory leaks are often found under stress testing. A race condition is a conflict between at least two tests: each test works correctly when run in isolation, but when the two are run in parallel, one or both fail, usually due to an incorrectly managed lock. A memory leak happens when a test leaves allocated memory behind and does not correctly return it to the memory allocation scheme. The test seems to run correctly, but after being exercised several times, available memory is reduced until the system fails.
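
The lock problem described above can be sketched in a few lines of Python (this is an illustrative toy, not from the text): several workers perform an unsynchronized read-modify-write on a shared counter, so under parallel load some updates can be lost, while the locked variant stays correct.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times, use_lock):
    global counter
    for _ in range(times):
        if use_lock:
            with lock:               # correctly managed lock
                counter += 1
        else:
            value = counter          # read ...
            counter = value + 1      # ... then write: not atomic

def run(use_lock):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(100_000, use_lock))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(False))  # may print less than 400000 (lost updates)
print("with lock:   ", run(True))   # always 400000
```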

3) LOAD TESTING:- The application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the web site/application fails or at what point its performance degrades. Load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly. Note that load testing does not aim to break the system by overwhelming it, but instead tries to keep the system constantly humming like a well-oiled machine. In the context of load testing, great importance should be given to having large datasets available for testing. Some bugs simply do not surface unless you deal with very large entities: thousands of users in repositories such as LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on file systems, and so on. Testers obviously need automated tools to generate these large datasets, and fortunately any scripting language worth its salt will do the job.
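
In the spirit of the "any scripting language will do" remark, here is one possible sketch of test-data generation: writing thousands of synthetic user records to a CSV that a load tool or import job could consume. The field names and record count are illustrative assumptions.

```python
import csv
import random
import string

def random_word(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length))

def generate_users(path, count=10_000):
    # Write `count` synthetic user records for use as load-test input.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "email", "department"])
        for i in range(count):
            name = f"{random_word()}{i}"
            writer.writerow([name, f"{name}@example.com",
                             random.choice(["sales", "support", "engineering"])])

generate_users("users.csv", count=10_000)
```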

4) AD-HOC TESTING:- This type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of the other kinds of testing, and it also helps testers learn the application before starting any other testing. It is the least formal method of testing. One of the best uses of ad-hoc testing is for discovery. Reading the requirements or specifications (if they exist) rarely gives you a good sense of how a program actually behaves. Even the user documentation may not capture the "look and feel" of a program. Ad-hoc testing can find holes in your test strategy and can expose relationships between subsystems that would otherwise not be apparent. In this way, it serves as a tool for checking the completeness of your testing. Missing cases can be found and added to your testing arsenal. Finding new tests in this way can also be a sign that you should perform root cause analysis. Ask yourself or your test team, "What other tests of this class should we be running?" Defects found while doing ad-hoc testing are often examples of entire classes of forgotten test cases. Another use for ad-hoc testing is to determine the priorities for your other testing activities. For example, suppose an example program, Panorama, allows the user to sort the photographs being displayed. If ad-hoc testing shows this to work well, formal testing of the feature might be deferred until the problematic areas are completed. On the other hand, if ad-hoc testing of the photograph-sorting feature uncovers problems, then its formal testing might receive a higher priority.

5) EXPLORATORY TESTING:- This testing is similar to ad-hoc testing and is done in order to learn/explore the application. Exploratory software testing is a powerful and fun approach to testing. In some situations, it can be orders of magnitude more productive than scripted testing. All testers perform exploratory testing at one time or another, at least unconsciously, yet it doesn't get much respect in our field. It can be considered "scientific thinking" in real time.

6) USABILITY TESTING:- This testing is also called "testing for user-friendliness". It is done when the user interface of the application is an important consideration and needs to be tailored to a specific type of user. Usability testing is the process of working with end users, directly and indirectly, to assess how the user perceives the software package and how they interact with it. This process will uncover areas of difficulty for users as well as areas of strength. The goal of usability testing should be to limit and remove difficulties for users and to leverage areas of strength for maximum usability. This testing should ideally involve direct user feedback, indirect feedback (observed behavior), and, when possible, computer-supported feedback. Computer-supported feedback is often (if not always) left out of this process. It can be as simple as a timer on a dialog to monitor how long it takes users to use the dialog, and counters to determine how often certain conditions occur (e.g., error messages, help messages). Often this involves trivial modifications to existing software, but it can result in a tremendous return on investment. Ultimately, usability testing should result in changes to the delivered product in line with the discoveries made regarding usability. These changes should be directly related to real-world usability by average users. As much as possible, documentation should be written supporting the changes so that similar situations can be handled with ease in the future.
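
A rough sketch of the "timer on a dialog plus counters" idea follows. The dialog here is a stand-in; in a real UI you would hook the open/close and message events of your toolkit, so everything below is an illustrative assumption.

```python
import time
from collections import Counter

event_counts = Counter()

class DialogTimer:
    """Context manager that records how long a user spends in a dialog."""
    def __init__(self, dialog_name, log):
        self.dialog_name = dialog_name
        self.log = log

    def __enter__(self):
        self.start = time.monotonic()
        return self

    def __exit__(self, exc_type, exc, tb):
        elapsed = time.monotonic() - self.start
        self.log.append((self.dialog_name, elapsed))

def record_event(kind):
    # Count how often a condition (error message, help message, ...) occurs.
    event_counts[kind] += 1

timings = []
with DialogTimer("save-as", timings):
    record_event("help_message")   # user opened help while in the dialog
    time.sleep(0.1)                # stand-in for actual user interaction

print(timings)       # e.g. [('save-as', 0.10...)]
print(event_counts)  # Counter({'help_message': 1})
```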

7) SMOKE TESTING:- This type of testing is also called sanity testing and is done in order to check whether the application is ready for further major testing and works properly, at least up to the minimum expected level, without failing. The name comes from a test of new or repaired equipment performed by turning it on: if it smokes, it doesn't work. The term also refers to testing the basic functions of software, and was originally coined in the manufacture of containers and pipes, where smoke was introduced to determine whether there were any leaks. A common practice at Microsoft and some other shrink-wrap software companies is the "daily build and smoke test" process: every file is compiled, linked, and combined into an executable program every day, and the program is then put through a "smoke test," a relatively simple check to see whether the product "smokes" when it runs.
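
A minimal smoke-test script in that spirit might look like the following. The checks are deliberately trivial (can the product's module be imported, does its most basic operation respond at all), and the module names are hypothetical stand-ins.

```python
import importlib
import sys

CHECKS = []

def smoke(fn):
    CHECKS.append(fn)
    return fn

@smoke
def imports_cleanly():
    importlib.import_module("json")  # stand-in for the product's top module

@smoke
def basic_operation_works():
    import json
    assert json.loads('{"ok": true}')["ok"] is True

if __name__ == "__main__":
    failures = 0
    for check in CHECKS:
        try:
            check()
            print(f"PASS {check.__name__}")
        except Exception as e:
            failures += 1
            print(f"FAIL {check.__name__}: {e}")
    sys.exit(1 if failures else 0)  # non-zero exit flags a "smoking" build
```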

8) RECOVERY TESTING:- Recovery testing is basically done in order to check how quickly and how well the application can recover from any type of crash, hardware failure, or similar event. The type or extent of recovery is specified in the requirement specifications. It is essentially testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
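
One way to sketch such a test, under the assumption that the application checkpoints its state to disk, is to simulate a crash mid-run and assert that a fresh instance resumes from the last checkpoint rather than from zero. The CheckpointingCounter "application" below is invented for illustration.

```python
import json
import os
import tempfile

class CheckpointingCounter:
    def __init__(self, path):
        self.path = path
        self.value = 0
        if os.path.exists(path):           # recover prior state on startup
            with open(path) as f:
                self.value = json.load(f)["value"]

    def do_work(self):
        self.value += 1
        with open(self.path, "w") as f:    # checkpoint after each step
            json.dump({"value": self.value}, f)

path = os.path.join(tempfile.mkdtemp(), "checkpoint.json")
app = CheckpointingCounter(path)
for _ in range(5):
    app.do_work()
del app                                    # simulated crash: instance is gone

recovered = CheckpointingCounter(path)     # restart
assert recovered.value == 5, "state was not recovered after the crash"
print("recovered value:", recovered.value)
```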

9) VOLUME TESTING:- Volume testing targets the efficiency of the application. A huge amount of data is processed through the application under test in order to check the extreme limits of the system.

Volume testing, as its name implies, purposely subjects a system (both hardware and software) to a series of tests in which the volume of data being processed is the subject of the test. Such systems can be transaction processing systems capturing real-time sales, or systems performing database updates and/or data retrieval.

Volume testing seeks to verify the physical and logical limits of a system's capacity and to ascertain whether those limits are acceptable to meet the projected capacity of the organization's business processing.
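
A small sketch of the idea, using an in-memory SQLite table: load increasing row counts, verify correctness at each volume, and observe how the timing grows. The row counts here are deliberately modest; a real volume test would scale them up to the projected business volume.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")

for volume in (10_000, 100_000, 1_000_000):
    conn.execute("DELETE FROM sales")
    start = time.monotonic()
    conn.executemany("INSERT INTO sales (amount) VALUES (?)",
                     ((i % 100 + 0.5,) for i in range(volume)))
    conn.commit()
    (count,) = conn.execute("SELECT COUNT(*) FROM sales").fetchone()
    assert count == volume                  # correctness at this volume
    elapsed = time.monotonic() - start
    print(f"{volume:>9} rows loaded and verified in {elapsed:.2f}s")
```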

10) DOMAIN TESTING:- Domain testing is the most frequently described test technique. Some authors write only about domain testing when they write about test design. The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset.
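
For instance, the domain of a hypothetical age field valid from 18 to 65 might be partitioned as in this sketch: one representative from each equivalence class, plus explicit tests at the boundaries.

```python
def accepts_age(age):
    # Hypothetical validation rule: valid domain is 18..65 inclusive.
    return 18 <= age <= 65

partitions = {
    "below valid range": [-5, 0, 17],   # expected: rejected
    "valid range":       [18, 40, 65],  # expected: accepted
    "above valid range": [66, 120],     # expected: rejected
}

for name, samples in partitions.items():
    results = {age: accepts_age(age) for age in samples}
    print(name, results)

# A representative stands in for its whole class: testing 40 covers every
# age between 19 and 64; the boundaries 17/18 and 65/66 get explicit tests.
assert not accepts_age(17) and accepts_age(18)
assert accepts_age(65) and not accepts_age(66)
```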

11) SCENARIO TESTING:- Scenario tests are realistic, credible, and motivating to stakeholders, challenging for the program, and easy to evaluate for the tester. They provide meaningful combinations of functions and variables, rather than the more artificial combinations you get with domain testing or combinatorial test design.
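
A sketch of what such a meaningful combination can look like: instead of exercising one function with one input, the test walks through a realistic end-to-end story. The shopping-cart API below is hypothetical; the point is the sequence of steps.

```python
class Cart:
    def __init__(self):
        self.items = {}

    def add(self, sku, qty):
        self.items[sku] = self.items.get(sku, 0) + qty

    def remove(self, sku):
        self.items.pop(sku, None)

    def total(self, prices):
        return sum(prices[sku] * qty for sku, qty in self.items.items())

# Scenario: a customer adds two products, changes their mind about one,
# then checks out. Each step must leave the system in a sensible state.
prices = {"book": 12.50, "pen": 1.25}
cart = Cart()
cart.add("book", 1)
cart.add("pen", 4)
cart.remove("pen")                 # customer removes the pens
assert cart.total(prices) == 12.50 # checkout total reflects the final cart
print("scenario passed, total:", cart.total(prices))
```
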
12) REGRESSION TESTING:- Regression testing is a style of testing that focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests (the regression tests). In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Traditional regression tests are often partially automated. These notes focus on traditional regression testing.

Regression testing attempts to mitigate two risks:

o A change that was intended to fix a bug failed to fix it.

o Some change had a side effect, unfixing an old bug or introducing a new bug.

Regression testing approaches differ in their focus. Common examples include:

Bug regression: We retest a specific bug that has allegedly been fixed.

Old fix regression testing: We retest several old bugs that were fixed, to see if they are
back. (This is the classical notion of regression: the program has regressed to a bad state.)

General functional regression: We retest the product broadly, including areas that worked
before, to see whether more recent changes have destabilized working code. (This is the
typical scope of automated regression testing.)
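
A bug-regression test is typically pinned to a specific defect report and stays in the suite forever, so the old bug cannot quietly return. Here is a sketch; the bug number and parse_quantity function are hypothetical.

```python
import unittest

def parse_quantity(text):
    # Bug #1042 (hypothetical): leading/trailing whitespace used to crash
    # the parser. The fix strips the input before converting.
    return int(text.strip())

class Bug1042Regression(unittest.TestCase):
    def test_whitespace_padded_input_still_parses(self):
        # Exact reproduction steps from the original defect report.
        self.assertEqual(parse_quantity("  42 "), 42)

if __name__ == "__main__":
    unittest.main()
```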

13) USER ACCEPTANCE TESTING:- In this type of testing, the software is handed over to the user in order to find out whether it meets user expectations and works as expected. In software development, user acceptance testing (UAT), also called beta testing, application testing, or end-user testing, is a phase of software development in which the software is tested in the "real world" by the intended audience. UAT can be done by in-house testing, in which volunteers or paid test subjects use the software, or, more typically for widely distributed software, by making the test version available for download and free trial over the web. The experiences of the early users are forwarded back to the developers, who make final changes before releasing the software commercially.

14) ALPHA TESTING:- In this type of testing, users are invited to the development center, where they use the application while the developers note every particular input or action carried out by the user. Any abnormal behavior of the system is noted and rectified by the developers.

15) BETA TESTING:- In this type of testing, the software is distributed as a beta version to users, who test the application at their own sites. As the users explore the software, any exceptions or defects that occur are reported to the developers. Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

16) CONVERSION OR PORT TESTING:- The program is ported to a new platform and a subset of the regression test suite is run to determine whether the port was successful. (Here, the main changes of interest might be in the new platform, rather than in the modified old code.)

17) CONFIGURATION TESTING:- The program is run with a new device, on a new version of the operating system, or in conjunction with a new application. This is like port testing, except that the underlying code hasn't been changed; only the external components that the software under test must interact with are new.

18) LOCALIZATION TESTING:- The program is modified to present its user interface in a different language and/or following a different set of cultural rules. Localization testing may involve several old tests (some of which have been modified to take into account the new language) along with several new (non-regression) tests.
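
One simple localization check that can be automated is catalog completeness: every message key in the base (English) catalog must also exist in each translated catalog, so no screen falls back to the wrong language. The catalog contents below are illustrative stand-ins for real resource files.

```python
catalogs = {
    "en": {"greeting": "Hello", "farewell": "Goodbye", "save": "Save"},
    "de": {"greeting": "Hallo", "farewell": "Auf Wiedersehen", "save": "Speichern"},
    "fr": {"greeting": "Bonjour", "farewell": "Au revoir"},  # "save" missing
}

base_keys = set(catalogs["en"])
for locale, catalog in catalogs.items():
    missing = base_keys - set(catalog)
    if missing:
        print(f"{locale}: missing translations for {sorted(missing)}")
    else:
        print(f"{locale}: complete")
```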

WHITE BOX TESTING


19) UNIT TESTING
20) STATIC & DYNAMIC ANALYSIS
21) STATEMENT COVERAGE
22) BRANCH COVERAGE
23) SECURITY TESTING
24) MUTATION TESTING
