II. SLC (Software Life Cycle) Software Development Models


Defects in code can lead to failures (but not all defects do so).


Failures can also be caused by external influences.
How much testing is enough depends on the level of risk, time, and budget.

Dynamic testing – shows failures that are caused by defects


Static testing
Debugging – a development activity that finds, analyzes, and removes the cause of the failure
Re-testing – ensures the fix indeed resolves the failure

Designing tests early can help to prevent defects from being introduced into code.
Reviews of documents, with identification and resolution of issues, prevent defects from appearing in code.

Development testing: (objective is to cause failures so that defects are identified)


Component
Integration
System
Acceptance testing: (objective is to check that the system works as expected – meets requirements)
Maintenance testing: testing that no new defects have been introduced during development of changes
Operational testing: objective is to assess system characteristics such as reliability or availability

Testing shows the presence of defects. (it cannot prove there are none)


Exhaustive testing is impossible. (it is best to use risk analysis and priorities)
Pesticide paradox (testing paradox) – repeatedly running the same test cases will eventually find no new defects; it is best to update the test cases.

Test plan – time spent on planning tests, designing test cases, preparing for execution and evaluating
results.
Test process:
- test planning and control (activity of defining the objectives and specifications of testing)
- test analysis and design (objectives are transformed into test conditions and test cases)
- test implementation and execution (finalize test cases, check test conditions are met, run tests, build test suites)
- evaluating exit criteria and reporting (comparing test results with objectives)
- test closure activities (collect data from completed tests to enhance future ones)

Error guessing (by an independent tester) – testing by a person skilled in finding defects, anticipating where they are likely to occur


Software Development Models

Test activities are related to software development activities.


1. V-model – may have more, fewer, or different levels of development and testing, depending on the project.
-Component testing (component requirements, detailed design, code)
-Integration testing (software/system design, architecture, workflows, use cases)
-System testing (system and software requirement specifications, use cases, functional specification, risk
analysis report)
-Acceptance testing (user requirements, system requirements, use cases, business processes, risk
analysis reports)
2. Iterative-incremental development model – the process of establishing requirements, designing,
building, and testing a system in a series of short development cycles. (e.g., Agile)
3.Testing with a life cycle model – characteristics of good testing :
For every dev activity there is a corresponding testing activity
Each test level has test objectives specific to that level
The analysis and design of tests for a given test level should begin during the corresponding dev activity
Testers should be involved in reviewing documents as soon as drafts are available in the dev life cycle

Test Levels

Component testing – searches for defects in, and verifies the functioning of, software modules that are
separately testable. Stubs, drivers, and simulators may be used. (is functional and non-functional)
It is usually performed by the developer, and defects are typically fixed as soon as they are found, without
formally managing them. Component testing is a strong candidate for automation.
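A minimal sketch of a stub and driver in component testing (hypothetical Python names): the component is exercised in isolation, with its real dependency replaced by a stub that returns a fixed value.

```python
# Hypothetical component under test: adds tax obtained from a collaborator.
def compute_total(net_price, tax_service):
    return net_price + net_price * tax_service.rate_for("default")

class TaxServiceStub:
    """Stub standing in for the real tax service, returning a fixed rate."""
    def rate_for(self, region):
        return 0.20

def test_compute_total():
    # The driver: calls the component directly, with the stub in place.
    assert compute_total(100.0, TaxServiceStub()) == 120.0

test_compute_total()
print("component test passed")
```

Because the stub removes the external dependency, the test is fast and repeatable, which is what makes automation at this level practical.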
Integration testing – tests interfaces between components and interactions with different parts of the
system (even hardware). There may be more than one level of integration testing:
- Component integration, done after component testing
- System integration testing, tests the interaction between different systems, may be done after
system testing.
( is functional and non-functional )
System testing – is concerned with the behavior of the whole system, may include tests based on risks,
requirements specifications, business processes, use cases, interactions with the operating system.
( is functional and non-functional )
Acceptance testing – establishes confidence in the system. Finding defects is not the main objective. Typical
forms of acceptance testing include:
- User acceptance testing – verifies the fitness for use of the system by business users
- Operational(acceptance) testing (recovery, maintenance, security…)
- Contract and regulation acceptance testing
- Alpha and beta testing (alpha – performed at the developer's location / beta – testing performed by
potential customers at their own locations)

Test Types

A test type is a group of test activities aimed at verifying the software system (or a part of it):
-Functional testing ( black box)
-Non functional testing ( uses black box techniques )
-Structural testing ( white box)
-Re-testing & regression testing

Functional testing – is based on functions and features: the functions that a system, subsystem, or
component is to perform, i.e., 'what' the system does. (external behavior)
Functional testing is also black-box testing – specification-based.
It includes security testing and interoperability testing.
Non-functional testing – includes performance testing, load testing, stress testing, usability testing,
maintainability testing, reliability testing, and portability testing – 'how' the system works.
Describes the tests required to measure characteristics of systems and software that can be quantified on a
varying scale. (external behavior)
Structural testing – best used after specification based testing (black box). Used in all test levels but
especially in component testing and component integration, where we use tools to measure code
coverage of elements such as statements or decisions. Can also be applied at system, system integration
or acceptance testing levels.
Re-testing – after a defect is detected and fixed and a re-check of the defect is performed.
(confirmation)
Regression testing – is the repeated testing of an already tested program after modification, to discover any
defects introduced as a result of the change. It is also performed when the environment is changed. Regression
testing is a strong candidate for automation, may be performed at all test levels, and includes functional,
non-functional, and structural testing.
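The idea can be sketched as a tiny automated regression suite (hypothetical function and data): each case pairs an input with its previously verified output, so the whole suite is cheap to re-run after every change.

```python
# Hypothetical function under regression test.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Inputs paired with outputs that were verified in earlier test runs.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Trim Me  ", "trim-me"),
    ("already-slug", "already-slug"),
]

def run_regression():
    # Return every case whose current output differs from the recorded one.
    return [(inp, exp, slugify(inp)) for inp, exp in REGRESSION_CASES
            if slugify(inp) != exp]

assert run_regression() == []  # empty list: no regressions introduced
```

After a code change, a non-empty result from `run_regression` flags exactly which previously working behavior broke.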

Maintenance Testing
Once deployed, a software system is often in service for years. During this time the system, its configuration
data, and its environment are often corrected or changed, for example to integrate hot fixes, future
releases, emergency changes, database updates, and so on.
Determining how the existing system may be affected by changes is called impact analysis and is used to
help decide how much regression testing to do. (regression test suite)

III. Static Techniques

The objectives identify what you will be able to do following the completion of each module.

Dynamic testing – requires the execution of software


Static testing – relies on the manual examination (reviews) and automated analysis of the code or other
project documentation, without execution of the code. (finds causes of failures – defects)

Review process
Note: reviews are a way of testing software work products (including code) and can be performed well
before dynamic test execution. The main manual activity is to examine a work product and make
comments about it. Any software product can be reviewed – design specifications, code, test plans, test
specifications, test cases, test scripts, user guides.
Benefit of review : early defect detection
The way a review is carried out depends on the agreed objectives of the review.

Types of review :
1. Informal review – characterized by no written instructions for reviewers; reviews range from this up
to systematic reviews, characterized by team participation, documented results of the review, and
documented procedures for conducting the review.
- Main purpose: an inexpensive way to get some benefit
2. Walkthrough
- Meeting led by author
- Main purposes: learning, gaining understanding, finding defects
3. Formal review (technical review)
-planning (select personnel, allocating roles, entry criteria, what to review)
-kick off (distributing documents, explaining objectives process and documents to participants)
-individual preparation
-review meeting /examination evaluation recording of results (logging with documented results,
record issues)
-rework (fixing defects, updating status)
-follow up (checking defects, metrics, exit criteria)
4. Inspection
- Main purpose : finding defects

Success factors for reviews :


- Clear predefined objectives
- Testers are valued reviewers
- Defects are welcomed
- Atmosphere of trust

Static Analysis by Tools

The objective of static analysis is to find defects in software source code and software models. It is performed
without actually executing the software being examined by the tool; dynamic testing does execute the
software code. Static analysis finds defects rather than failures.
It is good for:
-early detection of defects
-warnings about aspects of the code or design
-finding defects not found by dynamic testing
-checking for unreachable code
-security vulnerabilities
They are typically used by developers before and during component testing.
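As an illustration of what such a tool does (a simplified sketch, not a real product), the unreachable-code check can be performed on the source text alone, without ever running it:

```python
# Sketch of a static check: find statements that follow a `return`
# in the same block (unreachable code), using Python's ast module.
import ast

SOURCE = """
def f(x):
    return x * 2
    print("never runs")   # unreachable
"""

def find_unreachable(source):
    findings = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        # Any statement directly after a return in the same block is dead.
        for stmt, nxt in zip(body, body[1:]):
            if isinstance(stmt, ast.Return):
                findings.append(nxt.lineno)
    return findings

print(find_unreachable(SOURCE))  # [4] – line of the unreachable statement
```

The program is parsed, never executed, which is exactly the static/dynamic distinction described above.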

IV. TEST DESIGN TECHNIQUES

The test development process

Test development can range from informal to very formal. During test implementation, the test cases are
developed, implemented, prioritized, and organized in the test procedure specification (IEEE Std 829-1998).

Categories of test design techniques

The purpose of test design techniques is to identify test conditions, test cases, and test data.
black box – specification-based techniques; do not use information about the internal structure of
the system/component
- Test cases can be built from the specification of the problem to be solved
white box – structure-based techniques; based on an analysis of the structure of the system/component
- Test cases constructed based on code and detailed design information
experience-based – based on the tester's experience and knowledge
Specification based (black box)
- Equivalence partitioning – inputs are divided into groups expected to exhibit similar behavior
- Boundary value analysis – tests the behavior of the system at the edges of the groups
- Decision table testing – creates combinations of conditions (a table of true/false conditions)
- State transition testing – shows the relationship between states and inputs; used to cover every
state, exercise every transition, exercise specific sequences of transitions, or test invalid
transitions.
- Use case testing – test cases derived from use cases are most useful in uncovering defects in the
process flows during real-world use of the system. Useful for designing acceptance tests; they
also help uncover integration defects.
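The first two techniques can be sketched for a hypothetical rule "ages 18 to 65 inclusive are accepted": one representative value per equivalence partition, plus the boundary values and their immediate neighbors.

```python
# Hypothetical rule under test.
def is_eligible(age):
    return 18 <= age <= 65

# Equivalence partitioning: one representative per partition.
partitions = {"below": 10, "valid": 40, "above": 80}
assert not is_eligible(partitions["below"])
assert is_eligible(partitions["valid"])
assert not is_eligible(partitions["above"])

# Boundary value analysis: the edges of the valid partition and neighbors.
boundaries = [(17, False), (18, True), (65, True), (66, False)]
for age, expected in boundaries:
    assert is_eligible(age) == expected
print("all partition and boundary checks passed")
```

A typical off-by-one defect (e.g. writing `<` instead of `<=`) is caught by the boundary cases even though all three partition representatives still pass.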
Structure based (white box)
- Statement testing and coverage
- Decision testing and coverage (if statements)
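The difference between the two coverage measures can be shown on a hypothetical function: one test can execute every statement yet still miss a decision outcome.

```python
# Hypothetical function with a single decision.
def apply_discount(price, is_member):
    if is_member:
        price = price * 0.9
    return price

# This single case reaches 100% statement coverage
# (the if-branch, the assignment, and the return all execute)...
assert apply_discount(100, True) == 90.0

# ...but decision coverage also requires the False outcome of the `if`:
assert apply_discount(100, False) == 100
```

This is why decision coverage is the stronger criterion: 100% decision coverage implies 100% statement coverage, but not the other way around.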
Experienced based
- error guessing/ fault attack
- exploratory testing – when there are few specifications

V. Test Management

Test organization
The benefits of independent testing are that independent testers see other and different defects and are unbiased.
Drawbacks are that they are isolated from the development team and can be blamed for delays in releases.
The test leader plans, monitors, and controls the testing activities and tasks; this role may be performed
by a project manager, a development manager, a QA manager, or the manager of a test group.
Tasks may include:
- Analyze, review, and assess user requirements, specifications, and models for testability
- Set up the test environment
- Prepare and acquire test data
- Implement tests on all test levels
- Automate tests
- Review tests developed by others
Test planning
Test planning is a continuous activity and is performed in all life cycle processes and activities.
Test planning activities :
- Determining the scope and risks, and identifying the objectives of testing
- Integrating and coordinating the testing activities
- Scheduling test analysis implementation execution
- Assigning resources
- Selecting metrics
Entry criteria
Define when to start testing, such as at the beginning of a test level or when a set of tests is ready for
execution.
- Test env availability
- Test tool readiness
- Testable code availability
- Test data availability

Exit criteria
Define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific
goal.
- Measure of coverage of code, functionality or risk
- Estimate defect density
- Cost
- Residual risk
Test estimation
There are two approaches for estimating test effort:
- The metrics-based approach
- The expert-based approach
Test strategy/Test approach
The test approach is the implementation of the test strategy for a specific project. It is defined and refined
in the test plans and test designs.

Test progress monitoring and control


The purpose of test monitoring is to provide feedback and visibility about test activities.
Metrics should be collected during and at the end of a test level.
Test control describes any guiding or corrective actions taken as a result of information and metrics
gathered and reported.

Configuration management
The purpose is to establish and maintain the integrity of the products of the software or system through
the project and product life cycle. The configuration management approach should be chosen, documented,
and implemented during test planning.

Risk and testing


A risk is the chance of an event, hazard, threat, or situation occurring and resulting in undesirable
consequences or a potential problem. There are several kinds of risk:
- Project risk – the risks that surround the project's capability to deliver its objectives
(organizational risks, technical issues, supplier issues)
- Product risk – involves the risk to the quality of the product
Risks are used to decide where to start testing and where to test more.

Incident management
One objective of testing is to find defects. Discrepancies between actual and expected outcomes need to
be logged as incidents. Incidents may turn out to be defects after investigation.
Incidents may be raised during development, review, testing or use of the product.
Incident reports may include:
- Date, organization, author
- Expected and actual result
- Environment
- Severity, urgency
- Status of the incident
- Global issues, such as other areas that may be affected by a change
- References, including the identity of the test case specification that revealed the problem
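These fields can be sketched as a simple incident record (an illustrative structure, not any specific tool's schema; all names are hypothetical):

```python
# Illustrative incident record covering the fields listed above.
from dataclasses import dataclass
from datetime import date

@dataclass
class IncidentReport:
    raised_on: date
    author: str
    expected_result: str
    actual_result: str
    environment: str
    severity: str            # impact on the system
    urgency: str             # how soon a fix is needed
    status: str = "open"     # e.g. open -> investigated -> fixed -> closed
    test_case_ref: str = ""  # test case specification that revealed it

incident = IncidentReport(
    raised_on=date(2024, 1, 15),
    author="tester",
    expected_result="total is 120.00",
    actual_result="total is 119.99",
    environment="staging, build 1.4.2",
    severity="minor",
    urgency="low",
    test_case_ref="TC-042",
)
print(incident.status)  # "open"
```

Capturing both the expected and the actual result in the record is what lets an investigator later decide whether the incident is a real defect or, say, a test data problem.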

VI. Tools support for testing

1.Type of test tools


Test execution tools, test data generation tools, comparison tools, tools that help manage tests, test
result data, requirements, incidents, defects, monitoring of test execution, etc.
Tools support testing and can have multiple purposes, they are best used to improve the efficiency of
the test activities by automating.

Test tools classifications


Some tools support one activity and others support more than one.
Tool support for management of testing and tests
Management tools apply to all test activities over the entire SDLC and are classified as:
- Test management tools (HP ALM)
- Requirements management tools (requirements)
- Incident management tools (defect tracking)
- Configuration management tools (storage and version management of testware)
Test tools for static testing
A cost-effective way of finding more defects at an earlier stage in the development process:
- Review tools – assist with review processes, checklists, guidelines
- Static analysis tools (D) – help by providing metrics of the code
- Modeling tools (D)
Tool support for test specification
Test design tools
These tools are used to generate test inputs or executable tests from requirements, graphical user interfaces,
design models, or code.
Test data preparation tools
These tools manipulate database files or data transmissions to set up test data to be used during the execution of tests.
Tool support for test execution and logging
Test execution tools
These tools enable tests to be executed automatically or semi-automatically, using stored inputs and
expected outcomes, through the use of a scripting language, and provide a test log for each run.
Test harness/unit test
Test comparators – determine differences between files, databases, or test results.
Coverage measurement tools (D)
Security testing tools
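What a test comparator does can be sketched with Python's standard difflib (hypothetical expected/actual data): report only the lines where the actual result diverges from the expected one.

```python
# Minimal test comparator sketch: diff expected vs actual output lines.
import difflib

expected = ["id,amount", "1,10.00", "2,25.50"]
actual   = ["id,amount", "1,10.00", "2,25.49"]

# Keep only the +/- change lines, dropping the diff header lines.
diff = [line for line in difflib.unified_diff(expected, actual, lineterm="")
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
print(diff)  # ['-2,25.50', '+2,25.49']
```

An empty `diff` list would mean the actual results match the expected baseline; anything else pinpoints the mismatching records.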
Tool support for performance and monitoring
Dynamic analysis tools (D) – find defects that are evident only when software is executed
Performance testing/load testing/stress testing tools – show how a system behaves under usage conditions
Monitoring tools
Tool support for specific testing needs

2. Effective use of tools : benefits and risks

Each type of tool requires additional effort to achieve real and lasting benefits.
Benefits :
- Repetitive work is reduced (e.g., re-entering the same test data)
- Consistency and repeatability (e.g., order of execution)
- Ease of access to information
Risks :
- Unrealistic expectations
- Underestimating the time cost and effort for the initial introduction of the tool/training
- Over reliance on the tool
- Poor tool support from vendor, risk of suspension of open source product

3. Introducing a tool into an organization


Main considerations when choosing a tool :
- Organizational maturity, strengths, and weaknesses, and identification of opportunities for an
improved test process supported by tools
- Evaluation against clear requirements and objective criteria
- Proof of concept

