Test Automation
Test automation is the use of software to control the execution of tests, the comparison of actual
outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test
reporting functions.[1] Commonly, test automation involves automating a manual process already in
place that uses a formalized testing process.
Contents
• 1 Overview
• 2 Code-driven testing
• 3 Graphical User Interface (GUI) testing
• 4 What to test
• 5 Framework approach in automation
• 6 Defining boundaries between automation framework and a testing tool
• 7 Notable test automation tools
• 8 See also
• 9 References
Overview
Although manual testing may find many defects in a software application, it is a laborious and
time-consuming process. In addition, it may not be effective in finding certain classes of defects. Test
automation is the process of writing a computer program to do testing that would otherwise need to be
done manually. Once tests have been automated, they can be run quickly and repeatedly. This is often
the most cost-effective method for software products that have a long maintenance life, because even
minor patches over the lifetime of the application can cause features to break which were working at
an earlier point in time.
There are two general approaches to test automation:
• Code-driven testing. The (usually public) interfaces to classes, modules, or libraries are tested
with a variety of input arguments to validate that the results returned are correct (a brief sketch
follows this list).
• Graphical user interface testing. A testing framework generates user interface events such as
keystrokes and mouse clicks, and observes the changes that result in the user interface, to
validate that the observable behavior of the program is correct.
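As a rough illustration of the first approach, the sketch below uses Python's built-in unittest
module; the add function is a hypothetical stand-in for an interface under test, not part of any
particular product.

    import unittest

    def add(a, b):
        # Hypothetical function under test (stand-in for a real interface).
        return a + b

    class AddTests(unittest.TestCase):
        # Drive the public interface with a variety of inputs and
        # validate that the results returned are correct.
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()

For GUI testing, a framework such as Selenium WebDriver can generate keystrokes and mouse clicks
and observe the resulting changes in the interface. The sketch below assumes a browser-based
application; the URL and element names are placeholders invented for illustration.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()  # launch a browser session
    try:
        driver.get("http://example.com/login")                        # placeholder URL
        driver.find_element(By.NAME, "username").send_keys("tester")  # keystrokes
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.NAME, "submit").click()                # mouse click
        # Observe the resulting change in the user interface and validate it.
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()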
Test automation tools can be expensive, and automation is usually employed in combination with
manual testing. Test automation can be made cost-effective in the longer term, especially when used
repeatedly in regression testing.
One way to generate test cases automatically is model-based testing, in which a model of the system
is used to generate test cases; research continues into a variety of alternative methodologies for
doing so.[citation needed]
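As a toy sketch of the model-based idea (the login model below is invented for illustration, not
drawn from any real tool), a simple state-transition model can be walked mechanically to enumerate
test sequences together with their expected outcomes:

    from itertools import product

    # Toy state-transition model of a hypothetical login feature.
    MODEL = {
        ("logged_out", "login_ok"):  "logged_in",
        ("logged_out", "login_bad"): "logged_out",
        ("logged_in",  "logout"):    "logged_out",
    }
    EVENTS = ["login_ok", "login_bad", "logout"]

    def generate_tests(start, length):
        # Yield event sequences the model accepts, with expected final states.
        for events in product(EVENTS, repeat=length):
            state = start
            for e in events:
                state = MODEL.get((state, e))
                if state is None:
                    break
            else:
                yield events, state

    for seq, expected in generate_tests("logged_out", 2):
        print(seq, "->", expected)

Each generated sequence can then be replayed against the real system, with the model's final state
serving as the predicted outcome.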
What to automate, when to automate, or even whether one really needs automation are crucial
decisions which the testing (or development) team must make. Selecting the correct features of the
product for automation largely determines the success of the automation. Automating unstable features
or features that are undergoing changes should be avoided.[2]
A Fable
I’ve seen lots of different problems beset test automation efforts. I’ve worked at many software
companies, big and small. And I’ve talked to people from many other companies. This paper will
present ways to avoid these problems. But first we need to understand them. Let me illustrate with a
fable.
Once upon a time, there is a software project that needs test automation. Everyone on the team agrees
that this is the thing to do. The manager of this project is Anita Delegate. She reviews the different test
tools available, selects one and purchases several copies. She assigns one of her staff, Jerry
Overworked, the job of automating the tests. Jerry has many other responsibilities, but between these,
he tries out the new tool. He has trouble getting it to work with their product. The tool is complicated
and hard to configure. He has to make several calls to the customer support line. He eventually realizes
that they need an expert to set it up right and figure out what the problem is. After more phone calls,
they finally send an expert. He arrives, figures out the problem and gets things working. Excellent. But
many months have passed, and they still have no automation. Jerry refuses to work on the project any
further, fearing that it will never be anything but a time sink.
Anita reassigns the project to Kevin Shorttimer, who has recently been hired to test the software. Kevin
has a recent degree in computer science and is hoping to use this job as a step up to something more
challenging and rewarding. Anita sends him to tool training so that he won't give up in frustration the
way Jerry did. Kevin is very excited. The testing is repetitive and boring, so he is glad to be automating
instead. After a major release ships, he is allowed to work full time on test automation. He is eager for a
chance to prove that he can write sophisticated code. He builds a testing library and designs some
clever techniques that will support lots of tests. It takes longer than planned, but he gets it working. He
uses the test suite on new builds and is actually able to find bugs with it. Then Kevin gets an
opportunity for a development position and moves on, leaving his automation behind.
Ahmed Hardluck gets the job of running Kevin's test suite. The sparse documentation he finds doesn’t
help much. It takes a while for Ahmed to figure out how to run the tests. He gets a lot of failures and
isn't sure if he ran it right or not. The error messages aren't very helpful. He digs deeper. Some of the
tests look like they were never finished. Others have special setup requirements. He updates the setup
documentation. He plugs away with it. He finds that a couple failures are actually due to regression
bugs. Everyone is happy that the test suite caught these. He identifies things in the test suites that he'd
like to change to make it more reliable, but there never seems to be the time. The next release of the
product has some major changes planned. Ahmed soon realizes that the product changes break the
automation. Most of the tests fail. Ahmed works on this for a while and then gets some help from
others. They realize that it’s going to take some major work to get the tests to run with the new product
interface. But eventually they do it. The tests pass, and they ship the product. And the customers start
calling right away: the software doesn't work. The team comes to realize that in reworking the tests,
they introduced a programming error that caused failure messages to be ignored. Tests that had
actually failed were reported as passing. The product is a failure.
That's my fable. Perhaps parts of the story sound familiar to you. But I hope you haven't seen a similar
ending. This paper will suggest some ways to avoid the same fate. (James Bach has recounted similar
stories of test automation projects [Bach 1996].)
The Problems
This fable illustrates several problems that plague test automation projects:
Spare time test automation. People are allowed to work on test automation on their own time or as a
back burner project when the test schedule allows. This keeps it from getting the time and focus it
needs.
Lack of clear goals. There are many good reasons for doing test automation. It can save time, make
testing easier and improve the testing coverage. It can also help keep testers motivated. But it's not
likely to do all these things at the same time. Different parties typically have different hopes. These
need to be stated, or else disappointment is likely.
Lack of experience. Junior programmers trying to test their limits often tackle test automation projects.
The results are often difficult to maintain.
High turnover. Test automation can take a while to learn. But when the turnover is high, you lose this
experience.
Reaction to desperation. Problems are usually lurking in the software long before testing begins. But
testing brings them to light. Testing is difficult enough in itself. When testing is followed by testing and
retesting of the repaired software, people can get worn down. Will the testing ever end? This
desperation can become particularly acute when the schedule has dictated that the software should be
ready now. If only it weren't for all the testing! In this environment, test automation may be a ready
answer, but it may not be the best. It can be more of a wish than a realistic proposal.
Reluctance to think about testing. Many find automating a product more interesting than testing it.
Some automation projects provide convenient cover stories for why their contributors aren't more
involved in the testing. Rarely does the outcome contribute much to the test effort.
Technology focus. How the software can be automated is a technologically interesting problem. But a
focus on the technology can mean losing sight of whether the result meets the testing need.
• Ensure consistency
• Define the testing process and reduce dependence on the few who know it