This document provides an overview of software testing. It discusses the goals of testing to demonstrate that software meets requirements and to discover defects. There are two main types of testing: validation testing uses correct inputs to generate expected outputs, while defect testing aims to find incorrect outputs by using incorrect inputs. Development testing occurs during development and includes unit, component, and system testing. Unit testing focuses on individual program units or classes to test functionality and ensure all features and states are tested. Automated unit testing uses a framework to write and run tests that initialize inputs, call the component, and assert expected outputs. Choosing representative test cases is important and techniques like equivalence partitioning help select inputs that cover normal and boundary conditions.

Uploaded by

pp1797804
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
Download as pdf or txt
0% found this document useful (0 votes)
9 views32 pages

Module 5

This document provides an overview of software testing. It discusses the goals of testing to demonstrate that software meets requirements and to discover defects. There are two main types of testing: validation testing uses correct inputs to generate expected outputs, while defect testing aims to find incorrect outputs by using incorrect inputs. Development testing occurs during development and includes unit, component, and system testing. Unit testing focuses on individual program units or classes to test functionality and ensure all features and states are tested. Automated unit testing uses a framework to write and run tests that initialize inputs, call the component, and assert expected outputs. Choosing representative test cases is important and techniques like equivalence partitioning help select inputs that cover normal and boundary conditions.

Uploaded by

pp1797804
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
Download as pdf or txt
Download as pdf or txt
You are on page 1/ 32

Module – 5

Chapter 1: Software Testing

Introduction:-
• Testing is intended to show that a program does what it is intended to do and to
discover program defects before it is put into use.
• The testing process has two distinct goals:
1. To demonstrate to the developer and the customer that the software meets its
requirements.
For custom software, this means that there should be at least one test for every
requirement in the requirements document.
For generic software products, it means that there should be tests for all of the
system features, plus combinations of these features, that will be incorporated in the
product release.
2. To discover situations in which the behavior of the software is incorrect, undesirable,
or does not conform to its specification. Defect testing is concerned with rooting out
undesirable system behavior such as system crashes, unwanted interactions with
other systems, incorrect computations, and data corruption.

The diagram shown in Figure 3.1 explains the differences between validation testing and
defect testing. Think of the system being tested as a black box. The system accepts inputs
from some input set I and generates outputs in an output set O. Some of the outputs will be
erroneous. These are the outputs in set Oe that are generated by the system in response to
inputs in the set Ie. The priority in defect testing is to find those inputs in the set Ie because
these reveal problems with the system. Validation testing involves testing with correct inputs
that are outside Ie. These stimulate the system to generate the expected correct outputs.

3.1 Development Testing
• Development testing includes all testing activities that are carried out by the team
developing the system.
• During development, testing may be carried out at three levels of granularity:
1. Unit testing, where individual program units or object classes are tested. Unit testing
should focus on testing the functionality of objects or methods.
2. Component testing, where several individual units are integrated to create composite
components. Component testing should focus on testing component interfaces.
3. System testing, where some or all of the components in a system are integrated and
the system is tested as a whole. System testing should focus on testing component
interactions.
• Development testing is primarily a defect testing process, where the aim of testing is
to discover bugs in the software. It is therefore usually interleaved with debugging—
the process of locating problems with the code and changing the program to fix these
problems.

3.1.1 Unit Testing


• Unit testing is the process of testing program components, such as methods or object
classes. Individual functions or methods are the simplest type of component.
• Your tests should be calls to these routines with different input parameters. This
approach can be used to design the function or method tests.
• When you are testing object classes, you should design your tests to provide coverage
of all of the features of the object. This means that you should:
• test all operations associated with the object;
• set and check the value of all attributes associated with the object;
• put the object into all possible states. This means that you should
simulate all events that cause a state change.
• Units may be:
▪ Individual functions or methods within an object
▪ Object classes with several attributes and methods
▪ Composite components with defined interfaces used to access their
functionality.

The weather station object Interface:-

• To test the states of the weather station, we use a state model. Using this model, you
can identify sequences of state transitions that have to be tested and define event
sequences to force these transitions.
• In principle, you should test every possible state transition sequence, although in
practice this may be too expensive.
• Examples of state sequences that should be tested in the weather station include:
o Shutdown → Running → Shutdown
o Configuring → Running → Testing → Transmitting → Running
o Running → Collecting → Running → Summarizing → Transmitting → Running
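These event-driven sequences can be exercised by driving a state model and checking the states visited. A minimal Python sketch, in which the transition table and event names are invented for illustration (the text does not show the weather station's actual events):

```python
class WeatherStationStateModel:
    """Hypothetical state model for the weather station.

    The (state, event) -> next-state table below is an assumption made
    for illustration; only the state names come from the text.
    """
    TRANSITIONS = {
        ("Shutdown", "restart"): "Running",
        ("Running", "shutdown"): "Shutdown",
        ("Configuring", "reconfigured"): "Running",
        ("Running", "test"): "Testing",
        ("Testing", "test_complete"): "Transmitting",
        ("Transmitting", "done"): "Running",
        ("Running", "clock"): "Collecting",
        ("Collecting", "collection_done"): "Running",
        ("Running", "report"): "Summarizing",
        ("Summarizing", "summary_complete"): "Transmitting",
    }

    def __init__(self, state="Shutdown"):
        self.state = state

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state

def drive(machine, events):
    """Apply an event sequence and return the sequence of states visited."""
    return [machine.handle(e) for e in events]
```

A test for the first sequence above would start the model in Shutdown, apply the events, and assert that the visited states are Running then Shutdown.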
Automated Testing
• Whenever possible, unit testing should be automated so that tests are run and checked
without manual intervention.
• In automated unit testing, you make use of a test automation framework (such as
JUnit) to write and run your program tests.
• Unit testing frameworks provide generic test classes that you extend to create specific
test cases. They can then run all of the tests that you have implemented and report,
often through some GUI, on the success or otherwise of the tests.
• An automated test has three parts:
1. A setup part, where you initialize the system with the test case, namely the
inputs and expected outputs.
2. A call part, where you call the object or method to be tested.
3. An assertion part where you compare the result of the call with the expected
result. If the assertion evaluates to true, the test has been successful; if false, then it
has failed.
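The three parts are visible in a framework-based test. A sketch using Python's unittest (an xUnit-style framework analogous to JUnit); the WeatherStation class is a stand-in invented for illustration:

```python
import unittest

class WeatherStation:
    """Stand-in for the component under test (invented for illustration)."""
    def __init__(self, identifier):
        self._id = identifier

    def get_identifier(self):
        return self._id

class WeatherStationTest(unittest.TestCase):
    def setUp(self):
        # 1. Setup part: initialize the system with the test case inputs.
        self.ws = WeatherStation("WS-17")

    def test_identifier(self):
        # 2. Call part: call the object or method to be tested.
        result = self.ws.get_identifier()
        # 3. Assertion part: compare the result with the expected output.
        self.assertEqual(result, "WS-17")
```

The framework discovers the test methods, runs them, and reports which passed and which failed, with no manual checking of outputs.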

3.1.2 Choosing Unit Test Cases
• The test cases should show that, when used as expected, the component that you are
testing does what it is supposed to do.
• If there are defects in the component, these should be revealed by test cases.
• This leads to two types of unit test case:
o The first of these should reflect normal operation of a program and should
show that the component works as expected.
o The other kind of test case should be based on testing experience of where
common problems arise. It should use abnormal inputs to check that these
are properly processed and do not crash the component.
• Two possible strategies can be effective in helping you choose test cases. These are:
1. Partition testing, where you identify groups of inputs that have common
characteristics and should be processed in the same way. You should choose tests from
within each of these groups.
2. Guideline-based testing, where you use testing guidelines to choose test cases.
These guidelines reflect previous experience of the kinds of errors that programmers often
make when developing components.

Equivalence-Class Partitioning
• The input data and output results of a program often fall into a number of different
classes with common characteristics.
• Examples of these classes are positive numbers, negative numbers, and menu
selections.
• Programs normally behave in a comparable way for all members of a class. That is, if
you test a program that does a computation and requires two positive numbers, then
you would expect the program to behave in the same way for all positive numbers.
• Because of this equivalent behavior, these classes are sometimes called equivalence
partitions or domains (Beizer, 1990).
• In Figure 3.2, the large shaded ellipse on the left represents the set of all possible
inputs to the program that is being tested.
• The smaller unshaded ellipses represent equivalence partitions.

Fig 3.2: Equivalence Partitioning
• A program being tested should process all of the members of an input equivalence
partition in the same way.
• Output equivalence partitions are partitions within which all of the outputs have
something in common.
• The shaded area in the left ellipse represents inputs that are invalid. The shaded area
in the right ellipse represents exceptions that may occur (i.e., responses to invalid
inputs).
• Once you have identified a set of partitions, you choose test cases from each of these
partitions.
• A good rule of thumb for test case selection is to choose test cases on the boundaries
of the partitions, plus cases close to the midpoint of the partition. The reason for this
is that designers and programmers tend to consider typical values of inputs when
developing a system.
• For example, say a program specification states that the program accepts 4 to 8 inputs
which are five-digit integers greater than 10,000. You use this information to identify
the input partitions and possible test input values. These are shown in Figure 3.3.

Fig 3.3: Equivalence Partitions
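The partitions from this specification map directly onto boundary-value test cases. A sketch, assuming a hypothetical accept_inputs validator for the stated rules (4 to 8 inputs, each a five-digit integer, i.e. in the range 10,000–99,999):

```python
def accept_inputs(values):
    """Hypothetical validator for the specification in the text:
    4 to 8 inputs, each a five-digit integer (10000..99999)."""
    if not 4 <= len(values) <= 8:
        return False
    return all(isinstance(v, int) and 10000 <= v <= 99999 for v in values)
```

Test cases are chosen on the partition boundaries (3, 4, 8, and 9 inputs; values 9,999, 10,000, 99,999, and 100,000) plus a midpoint value such as 50,000.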

Testing Guidelines (Sequences)
• Test software with sequences which have only a single value.
• Use sequences of different sizes in different tests.
• Derive tests so that the first, middle and last elements of the sequence are accessed.
• Test with sequences of zero length.
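Applied to a simple component, these guidelines yield tests like the following sketch, where largest is a hypothetical routine under test:

```python
def largest(seq):
    """Hypothetical component under test: return the largest element
    of a sequence, rejecting the empty sequence explicitly."""
    if not seq:
        raise ValueError("empty sequence")
    biggest = seq[0]
    for v in seq[1:]:
        if v > biggest:
            biggest = v
    return biggest
```

The tests cover a single-value sequence, sequences of different sizes, cases where the largest element is first, middle, or last, and the zero-length sequence.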
General Testing Guidelines:-
• Choose inputs that force the system to generate all error messages
• Design inputs that cause input buffers to overflow
• Repeat the same input or series of inputs numerous times
• Force invalid outputs to be generated
• Force computation results to be too large or too small.

3.1.3 Component Testing


• Software components are often composite components that are made up of several
interacting objects.
▪ For example, in the weather station system, the reconfiguration component
includes objects that deal with each aspect of the reconfiguration.
• You access the functionality of these objects through the defined component interface.
• Testing composite components should therefore focus on showing that the component
interface behaves according to its specification.
▪ You can assume that unit tests on the individual objects within the component
have been completed.
• Figure 3.4 illustrates the idea of component interface testing. Assume that
components A, B, and C have been integrated to create a larger component or
subsystem.

Fig 3.4: Interface Testing

• The test cases are not applied to the individual components but rather to the interface
of the composite component created by combining these components.
• Interface errors in the composite component may not be detectable by testing the
individual objects because these errors result from interactions between the objects in
the component.

Interface Types:
There are different types of interface between program components and, consequently,
different types of interface error that can occur:
1. Parameter interfaces: These are interfaces in which data or sometimes function references
are passed from one component to another. Methods in an object have a parameter interface.
2. Shared memory interfaces: These are interfaces in which a block of memory is shared
between components. Data is placed in the memory by one subsystem and retrieved from
there by other sub-systems. This type of interface is often used in embedded systems, where
sensors create data that is retrieved and processed by other system components.
3. Procedural interfaces: These are interfaces in which one component encapsulates a set of
procedures that can be called by other components. Objects and reusable components have
this form of interface.
4. Message passing interfaces: These are interfaces in which one component requests a
service from another component by passing a message to it. A return message includes the
results of executing the service. Some object-oriented systems have this form of interface, as
do client–server systems.

Interface errors:
These errors fall into three classes:
1. Interface misuse:- A calling component calls some other component and makes an
error in the use of its interface. This type of error is common with parameter
interfaces where parameters may be of the wrong type or be passed in the wrong
order, or the wrong number of parameters may be passed.
2. Interface misunderstanding: - A calling component misunderstands the specification
of the interface of the called component and makes assumptions about its behavior.
The called component does not behave as expected which then causes unexpected
behavior in the calling component.

For example, a binary search method may be called with a parameter that is an
unordered array. The search would then fail.
3. Timing errors:- These occur in real-time systems that use a shared memory or a
message-passing interface. The producer of data and the consumer of data may
operate at different speeds. Unless particular care is taken in the interface design, the
consumer can access out-of-date information because the producer of the information
has not updated the shared interface information.

Some general guidelines for interface testing are:


1. Examine the code to be tested and explicitly list each call to an external component.
Design a set of tests in which the values of the parameters to the external components are at
the extreme ends of their ranges. These extreme values are most likely to reveal interface
inconsistencies.
2. Where pointers are passed across an interface, always test the interface with null pointer
parameters.
3. Where a component is called through a procedural interface, design tests that deliberately
cause the component to fail. Differing failure assumptions are one of the most common
specification misunderstandings.
4. Use stress testing in message passing systems. This means that you should design tests that
generate many more messages than are likely to occur in practice. This is an effective way of
revealing timing problems.
5. Where several components interact through shared memory, design tests that vary the
order in which these components are activated. These tests may reveal implicit assumptions
made by the programmer about the order in which the shared data is produced and consumed.
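Guidelines 1 and 2 above can be sketched as follows; lookup is a hypothetical component behind a procedural interface, with Python's None standing in for a null pointer:

```python
def lookup(table, key):
    """Hypothetical component under test: return the value stored
    under key, rejecting null (None) parameters explicitly so that
    interface misuse is reported rather than silently mishandled."""
    if table is None or key is None:
        raise ValueError("null parameter passed across the interface")
    return table[key]
```

The interface tests exercise parameter values at the extreme ends of their ranges (guideline 1, here an empty-string key) and always include null parameters (guideline 2).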

3.1.4 System Testing:


• System testing during development involves integrating components to create a
version of the system and then testing the integrated system.
• System testing checks that components are compatible, interact correctly and transfer
the right data at the right time across their interfaces. It obviously overlaps with
component testing but there are two important differences:
1. During system testing, reusable components that have been separately developed and
off-the-shelf systems may be integrated with newly developed components. The
complete system is then tested.

2. Components developed by different team members or groups may be integrated at this
stage. System testing is a collective rather than an individual process.

Wilderness weather station system:


• The weather station is asked to report summarized weather data to a remote computer.
Figure 3.5 shows the sequence of operations in the weather station when it responds
to a request to collect data for the mapping system.

Fig 3.5: Collect weather data sequence chart

• You can use this diagram to identify operations that will be tested and to help design the
test cases to execute the tests. Therefore, issuing a request for a report will result in the
execution of the following thread of methods:
SatComms:request → WeatherStation:reportWeather → Commslink:Get(summary)
→ WeatherData:summarize

• The sequence diagram helps you design the specific test cases that you need as it shows
what inputs are required and what outputs are created:
1. An input of a request for a report should have an associated acknowledgment. A report
should ultimately be returned from the request. During testing, you should create summarized
data that can be used to check that the report is correctly organized.
2. An input request for a report to WeatherStation results in a summarized report being
generated. You can test this in isolation by creating raw data corresponding to the summary

that you have prepared for the test of SatComms and checking that the WeatherStation object
correctly produces this summary. This raw data is also used to test the WeatherData object.
• For most systems, it is difficult to know how much system testing is essential and when
you should stop testing.
• Exhaustive testing, where every possible program execution sequence is tested, is
impossible. Testing, therefore, has to be based on a subset of possible test cases.
For example:
1. All system functions that are accessed through menus should be tested.
2. Combinations of functions (e.g., text formatting) that are accessed through the same
menu must be tested.
3. Where user input is provided, all functions must be tested with both correct and
incorrect input.
• Automated system testing is usually more difficult than automated unit or component
testing. Automated unit testing relies on predicting the outputs and then encoding these
predictions in a program. The prediction is then compared with the result.

3.2 Test-driven development


• Test-driven development (TDD) is an approach to program development in which you
interleave testing and code development (Beck, 2002; Jeffries and Melnik, 2007).
• Essentially, you develop the code incrementally, along with a test for that increment.
You don’t move on to the next increment until the code that you have developed
passes its test.
• The fundamental TDD process is shown in Figure 3.6.

Fig 3.6: Test-Driven Development

• The steps/activities in the process are as follows:


1. You start by identifying the increment of functionality that is required. This should
normally be small and implementable in a few lines of code.

2. You write a test for this functionality and implement this as an automated test. This
means that the test can be executed and will report whether or not it has passed or failed.
3. You then run the test, along with all other tests that have been implemented. Initially,
you have not implemented the functionality so the new test will fail. This is deliberate as it
shows that the test adds something to the test set.
4. You then implement the functionality and re-run the test. This may involve refactoring
existing code to improve it and add new code to what’s already there.
5. Once all tests run successfully, you move on to implementing the next chunk of
functionality.
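One pass through this cycle can be sketched as follows; the to_celsius increment is invented for illustration:

```python
# Step 2: write the automated test for the small increment first.
def test_to_celsius():
    assert to_celsius(32) == 0
    assert to_celsius(212) == 100

# Step 3: running test_to_celsius() at this point would fail with a
# NameError, because to_celsius does not yet exist. This is deliberate:
# it shows that the test adds something to the test set.

# Step 4: implement just enough functionality to make the test pass,
# then re-run this test along with all previously implemented tests.
def to_celsius(fahrenheit):
    """Convert a Fahrenheit temperature to Celsius."""
    return (fahrenheit - 32) * 5 / 9
```

Once the test passes, you move on to the next small increment (step 5), repeating the write-test, fail, implement, pass cycle.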

• An automated testing environment, such as the JUnit environment that supports Java
program testing (Massol and Husted, 2003), is essential for TDD.
• As the code is developed in very small increments, you have to be able to run every
test each time that you add functionality or refactor the program. Therefore, the tests
are embedded in a separate program that runs the tests and invokes the system that is
being tested.
• A strong argument for test-driven development is that it helps programmers clarify
their ideas of what a code segment is actually supposed to do.
• For example, if your computation involves division, you should check that you are not
dividing the numbers by zero. If you forget to write a test for this, then the code to
check will never be included in the program.

Benefits of test-driven development are:


1. Code coverage: - In principle, every code segment that you write should have at
least one associated test. Therefore, you can be confident that all of the code in the system
has actually been executed. Code is tested as it is written so defects are discovered early in
the development process.
2. Regression testing: - A test suite is developed incrementally as a program is
developed. You can always run regression tests to check that changes to the program have not
introduced new bugs.
3. Simplified debugging:- When a test fails, it should be obvious where the problem
lies. The newly written code needs to be checked and modified. You do not need to use
debugging tools to locate the problem.

4. System documentation: - The tests themselves act as a form of documentation that
describe what the code should be doing. Reading the tests can make it easier to understand
the code.
• One of the most important benefits of test-driven development is that it reduces the
costs of regression testing. Regression testing involves running test sets that have
successfully executed after changes have been made to a system.
• The regression test checks that these changes have not introduced new bugs into the
system and that the new code interacts as expected with the existing code.
• Regression testing is very expensive and often impractical when a system is manually
tested, as the costs in time and effort are very high.
• Test-driven development is of most use in new software development where the
functionality is either implemented in new code or by using well-tested standard
libraries.
• Test-driven development has proved to be a successful approach for small and
medium-sized projects. Generally, programmers who have adopted this approach are
happy with it and find it a more productive way to develop software (Jeffries and
Melnik, 2007).
3.3 Release testing
• Release testing is the process of testing a particular release of a system that is
intended for use outside of the development team.
• Normally, the system release is for customers and users. In a complex project,
however, the release could be for other teams that are developing related systems.
• For software products, the release could be for product management who then prepare
it for sale.
There are two important distinctions between release testing and system testing
during the development process:

1. A separate team that has not been involved in the system development should be
responsible for release testing.
2. System testing by the development team should focus on discovering bugs in the
system (defect testing). The objective of release testing is to check that the system meets its
requirements and is good enough for external use (validation testing).

• The primary goal of the release testing process is to convince the supplier of the
system that it is good enough for use. If so, it can be released as a product or
delivered to the customer.
• Release testing is usually a black-box testing process where tests are derived from
the system specification. The system is treated as a black box whose behavior can
only be determined by studying its inputs and the related outputs. Another name
for this is ‘functional testing’, so-called because the tester is only concerned with
functionality and not the implementation of the software.

3.3.1 Requirements-based testing


• Requirements-based testing is a systematic approach to test case design
where you consider each requirement and derive a set of tests for it.
• Requirements-based testing is validation rather than defect testing—you are trying to
demonstrate that the system has properly implemented its requirements.
• For example, consider related requirements for the MHC-PMS (introduced in Chapter
1), which are concerned with checking for drug allergies:
If a patient is known to be allergic to any particular medication, then prescription
of that medication shall result in a warning message being issued to the
system user.
If a prescriber chooses to ignore an allergy warning, they shall provide a
reason why this has been ignored.
To check if these requirements have been satisfied, you may need to develop several related
tests:
1. Set up a patient record with no known allergies. Prescribe medication for allergies that are
known to exist. Check that a warning message is not issued by the system.
2. Set up a patient record with a known allergy. Prescribe the medication that the patient is
allergic to, and check that the warning is issued by the system.
3. Set up a patient record in which allergies to two or more drugs are recorded. Prescribe both
of these drugs separately and check that the correct warning for each drug is issued.
4. Prescribe two drugs that the patient is allergic to. Check that two warnings are correctly
issued.
5. Prescribe a drug that issues a warning and overrule that warning. Check that the system
requires the user to provide information explaining why the warning was overruled.
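Tests 1–3 in this list can be sketched as follows; PatientRecord and prescribe are hypothetical names standing in for the MHC-PMS interfaces, which the text does not show:

```python
class PatientRecord:
    """Hypothetical patient record holding a set of known drug allergies."""
    def __init__(self, allergies=()):
        self.allergies = set(allergies)

def prescribe(record, drug):
    """Hypothetical prescription operation: return a warning message if
    the patient is known to be allergic to the drug, otherwise None."""
    if drug in record.allergies:
        return f"WARNING: patient is allergic to {drug}"
    return None
```

Each requirements-based test sets up a record, performs a prescription, and checks whether the expected warning is (or is not) issued.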

3.3.2 Scenario testing
• Scenario testing is an approach to release testing where you devise typical scenarios
of use and use these to develop test cases for the system.
• A scenario is a story that describes one way in which the system might be used.
• Scenarios should be realistic and real system users should be able to relate to them.
• As an example of a possible scenario from the MHC-PMS, Figure 3.7 describes one
way that the system may be used on a home visit.

Fig 3.7: A usage scenario for MHC-PMS

It tests a number of features of the MHC-PMS:


1. Authentication by logging on to the system.
2. Downloading and uploading of specified patient records to a laptop.
3. Home visit scheduling.
4. Encryption and decryption of patient records on a mobile device.
5. Record retrieval and modification.
6. Links with the drugs database that maintains side-effect information.
7. The system for call prompting.
• If you are a release tester, you run through this scenario, playing the role of Kate and
observing how the system behaves in response to different inputs.
• As ‘Kate’, you may make deliberate mistakes, such as inputting the wrong key phrase
to decode records. This checks the response of the system to errors. You should
carefully note any problems that arise, including performance problems.

• If a system is too slow, this will change the way that it is used. For example, if it
takes too long to encrypt a record, then users who are short of time may skip this
stage.
• If they then lose their laptop, an unauthorized person could then view the patient
records.
• When you use a scenario-based approach, you are normally testing several
requirements within the same scenario.

3.3.3 Performance testing

• Performance tests have to be designed to ensure that the system can process its
intended load.
• This usually involves running a series of tests where you increase the load until the
system performance becomes unacceptable.
• Performance testing is concerned both with demonstrating that the system meets its
requirements and discovering problems and defects in the system. To test whether
performance requirements are being achieved, you may have to construct an
operational profile.
• An operational profile is a set of tests that reflect the actual mix of work that will be
handled by the system.
• Therefore, if 90% of the transactions in a system are of type A; 5% of type B; and the
remainder of types C, D, and E, then you have to design the operational profile so that
the vast majority of tests are of type A. Otherwise, you will not get an accurate test of
the operational performance of the system.
• This approach, of course, is not necessarily the best approach for defect testing.
• Stress testing is particularly relevant to distributed systems based on a network of
processors. These systems often exhibit severe degradation when they are heavily
loaded. The network becomes swamped with coordination data that the different
processes must exchange. The processes become slower and slower as they wait for
the required data from other processes.
• Stress testing helps you discover when the degradation begins so that you can add
checks to the system to reject transactions beyond this point.
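Generating a test load that matches an operational profile can be sketched as follows; the 90%/5%/remainder mix comes from the text, while the transaction names and the generator itself are invented for illustration:

```python
import random

# Operational profile from the text: 90% of transactions are of type A,
# 5% of type B, and the remainder of types C, D, and E (split here is
# an assumption for illustration).
PROFILE = {"A": 0.90, "B": 0.05, "C": 0.02, "D": 0.02, "E": 0.01}

def generate_workload(n, profile=PROFILE, seed=42):
    """Return a list of n transaction types drawn according to the profile."""
    rng = random.Random(seed)        # fixed seed keeps the test load repeatable
    types = list(profile)
    weights = [profile[t] for t in types]
    return rng.choices(types, weights=weights, k=n)
```

A performance test then replays such a workload against the system, so that the vast majority of the tests are of type A, as the profile requires.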

3.4 User testing
• User or customer testing is a stage in the testing process in which users or customers
provide input and advice on system testing.
• This may involve formally testing a system that has been commissioned from an
external supplier, or could be an informal process where users experiment with a new
software product to see if they like it and that it does what they need.
• There are three different types of user testing:
1. Alpha testing, where users of the software work with the development team to test the
software at the developer’s site.
2. Beta testing, where a release of the software is made available to users to allow them
to experiment and to raise problems that they discover with the system developers.
3. Acceptance testing, where customers test a system to decide whether or not it is ready
to be accepted from the system developers and deployed in the customer environment.

1. Alpha testing:-
• users and developers work together to test a system as it is being developed.
This means that the users can identify problems and issues that are not readily
apparent to the development testing team.
• Developers can only really work from the requirements but these often do not
reflect other factors that affect the practical use of the software
• Alpha testing is often used when developing software products that are sold as
shrink-wrapped systems.
• It also reduces the risk that unanticipated changes to the software will have
disruptive effects on their business.
• Alpha testing may also be used when custom software is being developed.
2. Beta testing:-
• takes place when an early, sometimes unfinished, release of a software system
is made available to customers and users for evaluation.
• Beta testers may be a selected group of customers who are early adopters of
the system. Alternatively, the software may be made publicly available for
use by anyone who is interested in it.
• Beta testing is mostly used for software products that are used in many
different environments.

• Beta testing is therefore essential to discover interaction problems between the
software and features of the environment where it is used.
3. Acceptance testing:-
• is an inherent part of custom systems development. It takes place after release
testing.
• It involves a customer formally testing a system to decide whether or not it
should be accepted from the system developer. Acceptance implies that
payment should be made for the system.
• There are six stages in the acceptance testing process, as shown in Figure
3.8. They are:

1. Define acceptance criteria This stage should, ideally, take place early in the
process before the contract for the system is signed. The acceptance criteria
should be part of the system contract and be agreed between the customer
and the developer. In practice, however, it can be difficult to define criteria
so early in the process. Detailed requirements may not be available and
there may be significant requirements change during the development
process.
2. Plan acceptance testing This involves deciding on the resources, time, and
budget for acceptance testing and establishing a testing schedule. The
acceptance test plan should also discuss the required coverage of the
requirements and the order in which system features are tested. It should
define risks to the testing process, such as system crashes and inadequate
performance, and discuss how these risks can be mitigated.
3. Derive acceptance tests Once acceptance criteria have been established, tests
have to be designed to check whether or not a system is acceptable.
Acceptance tests should aim to test both the functional and non-functional

characteristics (e.g., performance) of the system. They should, ideally,
provide complete coverage of the system requirements.
4. Run acceptance tests The agreed acceptance tests are executed on the system.
Ideally, this should take place in the actual environment where the system
will be used, but this may be disruptive and impractical. Therefore, a user
testing environment may have to be set up to run these tests. It is difficult
to automate this process as part of the acceptance tests may involve testing
the interactions between end-users and the system. Some training of end-
users may be required.
5. Negotiate test results It is very unlikely that all of the defined acceptance tests
will pass and that there will be no problems with the system. If this is the
case, then acceptance testing is complete and the system can be handed
over. More commonly, some problems will be discovered. In such cases,
the developer and the customer have to negotiate to decide if the system is
good enough to be put into use. They must also agree on the developer’s
response to identified problems.
6. Reject/accept system This stage involves a meeting between the developers
and the customer to decide on whether or not the system should be
accepted. If the system is not good enough for use, then further
development is required to fix the identified problems. Once complete, the
acceptance testing phase is repeated.
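Where parts of the acceptance criteria can be expressed as executable checks, stages 3 and 4 can be partially automated. The sketch below is illustrative only: the order-lookup function, the data, and the performance threshold are invented assumptions, standing in for calls to the real deployed system.

```python
import time

# Hypothetical system under test; a real acceptance test would call
# the deployed system, not a local stub.
def lookup_order(order_id):
    orders = {42: {"status": "shipped"}}
    return orders.get(order_id)

def acceptance_test_functional():
    # Functional acceptance criterion: known orders are found,
    # unknown orders are reported as missing.
    assert lookup_order(42)["status"] == "shipped"
    assert lookup_order(999) is None

def acceptance_test_performance():
    # Non-functional acceptance criterion (assumed threshold):
    # 1000 lookups must complete within 1 second.
    start = time.perf_counter()
    for _ in range(1000):
        lookup_order(42)
    assert time.perf_counter() - start < 1.0

acceptance_test_functional()
acceptance_test_performance()
print("acceptance criteria met")
```

Tests that involve end-user interaction, as the notes point out, cannot be automated this way and still need a user testing environment.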

Chapter 2
Software Evolution

Introduction
Software development does not stop when a system is delivered but continues throughout the
lifetime of the system. After a system has been deployed, it inevitably has to change if it is to
remain useful.
Causes of system change

 Business changes and changes to user expectations generate new requirements for the
existing software.
 Parts of the software may have to be modified to correct errors that are found in
operation.
 To adapt it for changes to its hardware and software platform.
 To improve its performance or other non-functional characteristics.

An alternative view of the software evolution life cycle is shown in the following figure.

Fig: Evolution and Servicing

 This model is depicted as having four phases – initial development, evolution,
servicing and phase-out.
 Evolution is the phase in which significant changes to the software architecture and
functionality may be made.
 During evolution, the software is used successfully and there is a constant stream of
proposed requirements changes.
 At some stage in the life cycle, the software reaches a transition point where
significant changes, implementing new requirements, become less and less cost
effective.
 At that stage, the software moves from evolution to servicing.
 During servicing, the only changes that are made are relatively small, essential
changes. However, the software is still useful.
 In the final stage, phase-out, the software may still be used but no further changes are
being implemented.

Evolution processes
Change identification and evolution process
System change proposals cause system evolution in all organizations.

Change proposals may come from:
 existing requirements that have not been implemented in the released system
 requests for new requirements
 bug reports from system stakeholders
 new ideas for software improvement from the system development team
The processes of change identification and system evolution are cyclic and continue
throughout the lifetime of a system. This process is as shown in the following figure

Fig: Change identification and evolution process

The software evolution process

Fig: The software evolution process

 An overview of the evolution process is as shown in the above figure.
 The process includes the fundamental activities of change analysis, release
planning, system implementation, and releasing a system to customers.
 In the impact analysis stage, the cost and impact of proposed changes are assessed to see
 how much of the system is affected by the change,
 how much it might cost to implement the change.
 If the proposed changes are accepted, a new release of the system is planned.

 During release planning, all proposed changes - fault repair, adaptation, and
new functionality - are considered.
 A decision is then made on which changes to implement in the next version of the
system.
 The changes are implemented and validated, and a new version of the system is
released.
 The process then iterates with a new set of changes proposed for the next release.
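The impact-analysis and release-planning steps above can be sketched as a simple triage over change proposals. Everything concrete here is an illustrative assumption: the cost units, the release budget, and the rule of taking fault repairs before adaptations and new functionality.

```python
# Minimal sketch of release planning over proposed changes.
# Costs, the budget, and the priority ordering are invented for
# illustration; they are not prescribed by the evolution process itself.
proposals = [
    {"id": 1, "kind": "fault repair", "cost": 2},
    {"id": 2, "kind": "adaptation", "cost": 8},
    {"id": 3, "kind": "new functionality", "cost": 20},
]

BUDGET = 15  # assumed effort available for the next release

def plan_release(proposals, budget):
    """Accept changes for the next release until the budget is
    exhausted, considering fault repairs first, then adaptations,
    then new functionality."""
    order = {"fault repair": 0, "adaptation": 1, "new functionality": 2}
    accepted, spent = [], 0
    for p in sorted(proposals, key=lambda p: order[p["kind"]]):
        if spent + p["cost"] <= budget:
            accepted.append(p["id"])
            spent += p["cost"]
    return accepted

print(plan_release(proposals, BUDGET))  # the new-functionality change is deferred
```

In this sketch the expensive new-functionality proposal does not fit the budget and is deferred to a later release, which is exactly the iteration the process describes.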

The change implementation process

Fig: Change Implementation

 Change implementation is an iteration of the development process, where the revisions to
the system are designed, implemented, and tested.
 The change implementation process is as shown in the above figure.
 New requirements that reflect the system changes are proposed, analysed, and validated.
 System components are redesigned and implemented and the system is retested.
 If appropriate, prototyping of the proposed changes may be carried out as part of the
change analysis process.

The emergency repair process

Fig: The emergency repair process

 Change requests sometimes relate to system problems that have to be tackled urgently.
 These urgent changes can arise for three reasons:
1. If a serious system fault occurs that has to be repaired to allow normal operation to
continue.
2. If changes to the system's operating environment have unexpected effects that disrupt
normal operation.
3. If there are unanticipated changes to the business running the system, such as the
emergence of new competitors or the introduction of new legislation that affects the
system.
 The emergency repair process is required to quickly fix the above problems.
 The source code is analyzed and modified directly, rather than modifying the
requirements and design
 The disadvantages of the emergency repair process are as follows:
o the requirements, the software design, and the code become inconsistent
o the process of software aging is accelerated, since a quick workable solution is
chosen for the fix rather than the best possible solution
o future changes become more difficult and maintenance costs increase

Program evolution dynamics
 Program evolution dynamics is the study of system change.
 Lehman’s laws concerning system change are as shown below

Continuing change
 The first law states that system maintenance is an inevitable process.
 As the system’s environment changes, new requirements emerge and the system must be
modified.
Increasing complexity
 The second law states that, as a system is changed, its structure is degraded.
 To avoid this, invest in preventative maintenance.
 Time is spent improving the software structure without adding to its functionality.
 This means additional costs, more than those of implementing required system changes.
Large program evolution
 It suggests that large systems have a dynamic of their own
 This law is a consequence of structural factors that influence and constrain system
change, and organizational factors that affect the evolution process.
 Structural factors:
 These factors come from complexity of large systems.
 As you change and extend a program, its structure tends to degrade.

 Making large changes to a program may introduce new faults and then inhibit further
program changes.
 Organisational factors:
 These are produced by large organizations.
 Companies have to make decisions on the risks and value of the changes and the costs
involved. Such decisions take time to make.
 The speed of the organization’s decision-making processes therefore governs the rate of
change of the system.
Organizational stability
 In most large programming projects a change to resources or staffing has imperceptible
(slight) effects on the long-term evolution of the system.

Conservation of familiarity
 Adding new functionality to a system inevitably introduces new system faults.
 The more functionality added in each release, the more faults there will be.
 Therefore, relatively little new functionality should be included in each release.
 This law suggests that you should not budget for large functionality increments in each
release without taking into account the need for fault repair.

Continuing growth
 The functionality offered by systems has to increase continually to maintain user satisfaction.
 The users of software will become increasingly unhappy with it unless it is maintained
and new functionality is added to it.

Declining quality

 The quality of systems will decline unless they are modified to reflect changes in their
operational environment.

Feedback system

 Evolution processes must incorporate feedback systems to achieve significant product
improvement.

Software maintenance
 It is the general process of changing a system after it has been delivered.
 There are three different types of software maintenance:
o Fault repairs
o Environmental adaptations
o Functionality addition
Fault repairs
 Coding errors are usually relatively cheap to correct
 Design errors are more expensive as they may involve rewriting several program
components.

 Requirements errors are the most expensive to repair because of the extensive system
redesign which may be necessary.
Environmental adaptation
 This type of maintenance is required when some aspect of the system’s environment such
as the hardware, the platform operating system, or other support software changes.
 The application system must be modified to adapt it to cope with these environmental
changes.
Functionality addition
 This type of maintenance is necessary when the system requirements change in response
to organizational or business change.

Maintenance effort distribution

Fig: Maintenance effort distribution

 Software maintenance takes up a higher proportion of IT budgets than new development.


 Also, most of the maintenance budget and effort is spent on implementing new
requirements rather than on fixing bugs. This is shown in the above figure.

Development and maintenance costs

Fig: Development and maintenance costs

 The above figure shows that overall lifetime costs may decrease as more effort is
expended during system development to produce a maintainable system.
 In system 1, more development cost has resulted in lower overall lifetime costs when
compared to system 2.
 It is usually more expensive to add functionality after a system is in operation than it is to
implement the same functionality during development. The reasons for this are:

1. Team stability
 The new team or the individuals responsible for system maintenance are usually not the
same as the people involved in development
 They do not understand the system or the background to system design decisions.
 They need to spend time understanding the existing system before implementing changes
to it.
2. Poor development practice
 The contract to maintain a system is usually separate from the system development
contract.
 There is no incentive for a development team to write maintainable software.
 The development team may not write maintainable software to save effort.
 This means that the software is more difficult to change in the future.
3. Staff skills
 Maintenance is seen as a less-skilled process than system development.
 It is often allocated to the most junior staff.
 Also, old systems may be written in obsolete programming languages.
 The maintenance staff may not have much experience of development in these languages
and must learn these languages to maintain the system.
4. Program age and structure
 As changes are made to programs, their structure tends to degrade.
 As programs age, they become harder to understand and change.
 System documentation may be lost or inconsistent.
 Old systems may not have been subject to stringent configuration management so time is
often wasted finding the right versions of system components to change.

Maintenance prediction

 It is important to try to predict what system changes might be proposed and what parts of the
system are likely to be the most difficult to maintain.
 It is also important to estimate the overall maintenance costs for a system in a given
time period.

 The following figure shows these predictions and associated questions

Fig: Maintenance prediction

 Predicting the number of change requests for a system requires an understanding of the
relationship between the system and its external environment.

 Therefore, to evaluate the relationship between a system and its environment, the
following assessments should be made:
1. The number and complexity of system interfaces The larger the number of interfaces
and the more complex these interfaces, the more likely it is that interface changes will be
required as new requirements are proposed.
2. The number of inherently volatile system requirements The requirements that reflect
organizational policies and procedures are likely to be more volatile than requirements
that are based on stable domain characteristics.
3. The business processes in which the system is used As business processes evolve, they
generate system change requests. The more business processes that use a system, the
more the demands for system change.

 After a system has been put into service, the process data may be used to help predict
maintainability.
 The process metrics that can be used for assessing maintainability are as follows:

1. Number of requests for corrective maintenance An increase in the number of bug and
failure reports may indicate that more errors are being introduced into the program than
are being repaired during the maintenance process. This may indicate a decline in
maintainability.
2. Average time required for impact analysis This reflects the number of program components
that are affected by the change request. If this time increases, it implies that more and more
components are affected and maintainability is decreasing.
3. Average time taken to implement a change request This is the amount of time needed
to modify the system and its documentation. An increase in the time needed to implement
a change may indicate a decline in maintainability.
4. Number of outstanding change requests An increase in this number over time may
imply a decline in maintainability.

Software reengineering

 Reengineering is done to improve the structure and understandability of legacy
software systems.
 Reengineering makes legacy software systems easier to maintain
 Reengineering may involve redocumenting the system, refactoring the system
architecture, translating programs to a modern programming language, and modifying
and updating the structure and values of the system’s data.
 The functionality of the software is not changed by reengineering.

Benefits of reengineering

Reduced risk Reengineering avoids the high risk of redeveloping business-critical software.
Errors may be made in the system specification or there may be development problems.
Delays in introducing the new software may mean that business is lost and extra costs are
incurred.
Reduced cost The cost of reengineering may be significantly less than the cost of developing
new software.

The reengineering process

Fig: The reengineering process


The activities involved in the reengineering process are as follows:
1. Source code translation
 Using a translation tool, the program is converted from an old programming language
to a more modern version of the same language or to a different language.

2. Reverse engineering
 The program is analyzed and information extracted from it.
 This helps to document its organization and functionality.
 This process is usually completely automated.

3. Program structure improvement


 The control structure of the program is analyzed and modified to make it easier to
read and understand.
 This can be partially automated but some manual intervention is usually required.

4. Program modularization
 Related parts of the program are grouped together.
 Where appropriate, redundancy is removed.
 This is a manual process.

5. Data reengineering
 The data processed by the program is changed to reflect program changes.
 This may mean redefining database schemas, converting existing databases to the new
structure, cleaning up the data, finding and correcting mistakes, removing duplicate
records, etc.
 Tools are available to support data reengineering.
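Data reengineering can be sketched as a small migration: legacy records are parsed, cleaned, de-duplicated, and written into a new, typed schema. The record layout, field names, and cleaning rules below are invented purely for illustration.

```python
# Sketch of data reengineering on an invented legacy record format:
# comma-separated text with padded fields, inconsistent case, and
# duplicate records.
legacy_rows = [
    "0001,SMITH ,  ACTIVE",
    "0002,JONES , INACTIVE",
    "0001,SMITH ,  ACTIVE",   # duplicate record in the old data
]

def migrate(rows):
    seen, migrated = set(), []
    for row in rows:
        cust_id, name, status = (field.strip() for field in row.split(","))
        if cust_id in seen:          # remove duplicate records
            continue
        seen.add(cust_id)
        migrated.append({
            "id": int(cust_id),       # new schema: typed identifier
            "name": name.title(),     # clean up inconsistent casing
            "active": status == "ACTIVE",  # text status -> boolean
        })
    return migrated

print(migrate(legacy_rows))
```

In practice this kind of migration is tool-supported, as the notes say, and would also log records it cannot parse so mistakes can be found and corrected rather than silently dropped.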

Reengineering approaches

 The costs of reengineering depend on the extent of the work that is carried out.
 The following figure shows a spectrum of possible approaches to reengineering

Fig: Reengineering approaches

 Costs increase from left to right.


 Source code translation is the cheapest option.
 Reengineering as part of architectural migration is the most expensive.

Disadvantages of reengineering

 There are limits to how much you can improve a system by reengineering.
 It isn’t possible to convert a system written using a functional approach to an object-
oriented system.
 Major architectural changes of the system data management cannot be carried out
automatically.
 The reengineered system will probably not be as maintainable as a new system developed
using modern software engineering methods.

Preventative maintenance by refactoring

 Refactoring is the process of making improvements to a program to slow down
degradation through change.
 It means modifying a program to improve its structure, to reduce its complexity, or to
make it easier to understand.
 Refactoring a program does not add new functionality.
 Refactoring is considered as ‘preventative maintenance’ that reduces the problems of
future change.

Difference between reengineering and refactoring


 Reengineering takes place after a system has been maintained for some time and
maintenance costs are increasing.
 Refactoring is a continuous process of improvement throughout the development and
evolution process. It is intended to avoid the structure and code degradation that increases
the costs and difficulties of maintaining a system.

 There are situations (bad smells) in which the code of a program can be improved or
refactored. They are as follows

1. Duplicate code The same or very similar code may be included at different places in a
program. This can be removed and implemented as a single method or function that is called
as required.

2. Long methods If a method is too long, it should be redesigned as a number of shorter
methods.
3. Switch (case) statements These often involve duplication, where the switch depends on
the type of some value. The switch statements may be scattered around a program. In object-
oriented languages, you can often use polymorphism to achieve the same thing.
4. Data clumping Data clumps occur when the same group of data items (fields in classes,
parameters in methods) reoccur in several places in a program. These can often be replaced
with an object encapsulating all of the data.
5. Speculative generality This occurs when developers include generality in a program in
case it is required in the future. This can often simply be removed.
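Smell 3 above (switch statements dispatching on a type code) is the easiest to show in code. A minimal sketch follows; the shapes and their classes are an invented example of the before/after refactoring.

```python
# Before: type-based branching. This dispatch tends to be duplicated
# wherever an area is needed, and every new shape means editing it.
def area_switch(shape):
    if shape["kind"] == "circle":
        return 3.14159 * shape["r"] ** 2
    elif shape["kind"] == "square":
        return shape["side"] ** 2

# After: polymorphism. Each class knows its own area, so the switch
# disappears and adding a shape touches only the new class.
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

shapes = [Circle(1), Square(2)]
print([round(s.area(), 2) for s in shapes])  # [3.14, 4]
```

The same move also helps with smell 4: a data clump like `(kind, r, side)` parameters scattered through the program becomes a single object that carries its own behavior.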

Legacy system management

 Organizations have a limited budget for maintaining and upgrading legacy systems.
 They have to decide how to get the best return on their investment.
 This involves making a realistic assessment of their legacy systems and then deciding on
the most appropriate strategy for evolving these systems.

 There are four strategic options:


1. Scrap the system completely This option should be chosen when the system is not
making an effective contribution to business processes.
2. Leave the system unchanged and continue with regular maintenance This option
should be chosen when the system is still required but is fairly stable and the system users
make relatively few change requests.
3. Reengineer the system to improve its maintainability This option should be chosen
when the system quality has been degraded by change and new changes to the system are
still being proposed.
4. Replace all or part of the system with a new system This option should be chosen when
factors, such as new hardware, mean that the old system cannot continue in operation or
where off-the-shelf systems would allow the new system to be developed at a reasonable
cost.

Legacy system assessment

Legacy systems can be assessed from two perspectives

 Business perspective is to decide whether or not the business really needs the system.
 Technical perspective is to assess the quality of the application software and the
system’s support software and hardware.

There are four clusters of systems

1. Low quality, low business value Keeping these systems in operation will be expensive
and the rate of return to the business will be fairly small. These systems should be
scrapped.
2. Low quality, high business value These systems are making an important business
contribution so they cannot be scrapped. However, their low quality means that it is
expensive to maintain them. These systems should be reengineered to improve their quality.
They may be replaced, if a suitable off-the-shelf system is available.

3. High quality, low business value These are systems that don’t contribute much to the
business but which may not be very expensive to maintain. It is not worth replacing these
systems so normal system maintenance may be continued if expensive changes are not
required and the system hardware remains in use. If expensive changes become necessary,
the software should be scrapped.
4. High quality, high business value These systems have to be kept in operation. However,
their high quality means that you don’t have to invest in transformation or system
replacement. Normal system maintenance should be continued.
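The four clusters above amount to a simple decision rule over the two assessment axes. This sketch only encodes the text; in practice "quality" and "business_value" would come from the business and technical assessments described in this section, not from two hand-picked labels.

```python
# Sketch of the four-cluster legacy strategy decision described above.
def legacy_strategy(quality, business_value):
    if quality == "low" and business_value == "low":
        return "scrap"
    if quality == "low" and business_value == "high":
        return "reengineer (or replace with off-the-shelf)"
    if quality == "high" and business_value == "low":
        return "continue normal maintenance while changes stay cheap"
    # high quality, high business value
    return "keep in operation, normal maintenance"

print(legacy_strategy("low", "high"))
```

Real systems rarely fall cleanly into one quadrant, so the assessments are better treated as scores on a spectrum, with borderline systems examined individually.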

Business perspective

The four basic issues that have to be discussed with system stakeholders to assess business
value of the system are as follows

1. The use of the system If systems are only used occasionally or by a small number of
people, they may have a low business value. However, there may be occasional but important
use of systems. For example, in a university, a student registration system may only be used
at the beginning of each academic year. However, it is an essential system with a high
business value.
2. The business processes that are supported When a system is introduced, business
processes are designed to exploit the system’s capabilities. However, as the environment
changes, the original business processes may become obsolete. Therefore, a system may have
a low business value because it forces the use of inefficient business processes.
3. The system dependability If a system is not dependable and the problems directly affect
the business customers or mean that people in the business are diverted from other tasks to
solve these problems, the system has a low business value.
4. The system outputs If the business depends on the system outputs, then the system has a
high business value. Conversely, if these outputs can be easily generated in some other way
or if the system produces outputs that are rarely used, then its business value may be low.

Technical perspective

To assess a software system from a technical perspective, you need to consider both the
application system itself and the environment in which the system operates.

Factors used in environment assessment

Factors used in application assessment

Data can be collected to assess the quality of the system. The data that can be collected are
1. The number of system change requests System changes usually corrupt the system
structure and make further changes more difficult. The higher this value, the lower the quality
of the system.
2. The number of user interfaces The more interfaces, the more likely that there will be
inconsistencies and redundancies in these interfaces, hence reducing system quality.

3. The volume of data used by the system The higher the volume of data (number of files,
size of database, etc.), the more likely that it is that there will be data inconsistencies that
reduce the system quality.
