TMMi Framework
Release 1.2
Copyright Notice
Unlimited distribution subject to Copyright
Copyright © TMMi Foundation, Ireland.
The TMMi Foundation makes no warranties of any kind, either expressed or implied, as to any matter including, but
not limited to, warranty of fitness for purpose or merchantability, exclusivity, or results obtained from use of the
material. The TMMi Foundation does not make any warranty of any kind with respect to freedom from patent,
trademark or copyright infringement.
Use of any trademarks in this document is not intended in any way to infringe on the rights of the trademark holder.
Permission to reproduce this document and to prepare derivative works from this document for internal use is
granted, provided the copyright and “No Warranty” statements are included with all reproductions and derivative
works.
Requests for permission to reproduce this document or prepare derivative works of this document for external and
commercial use should be addressed to the TMMi Foundation.
The following registered trademarks and service marks are used in the TMMi Foundation documentation: CMMI®,
TMM℠, TMMi®, IDEAL℠, SCAMPI℠, TMap®, TPI® and TPI-Next®.
CMMI is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University.
TMap, TPI and TPI-Next are registered trademarks of Sogeti, The Netherlands.
Contributors
Doug Ashworth (UK)
Stuart Baker (UK)
Clive Bates (UK)
Jan Jaap Cannegieter (The Netherlands)
Laura Casci (UK)
Vicky Chen (Canada)
Jerry E Durant (USA)
Akhila E. K (India)
Attila Fekete (Sweden)
Thomas George (India)
Andrew Goslin (UK)
Murali Krishnan (India)
Adrian Howes (UK)
Klaus Olsen (Denmark)
Fran O’Hara (Ireland)
Simon Lamers (Germany)
Hareton Leung (Hong Kong)
Robert Magnussion (Sweden)
Nico van Mourik (The Netherlands)
Bill McGirr (USA)
Judy McKay (USA)
Mac Miller (UK)
Sandhya Nagaraj (India)
Viswanathan Narayana Iyer (India)
Adewunmi Okupe (USA)
Piotr Piotrowski (Poland)
Meile Posthuma (The Netherlands)
Meeta Prakash (India)
Alec Puype (Belgium)
Matthias Rasking (Germany)
Howard Roberts (UK)
Geoff Thompson (UK)
Greg Spindler (USA)
Tiruvallur Thattai Srivatsan (India)
Narayanamoorthy Subramanian (India)
David Tracey (UK)
Erik van Veenendaal (Bonaire, Caribbean Netherlands)
Nathan Weller (UK)
Brian Wells (UK)
Revisions
This section summarizes the revisions between release 1.0 and release 1.2 of this document.
This section is provided for information only.
Contents
1 Test Maturity Model Integration (TMMi) ................................................................................................................6
1.1 Introduction ......................................................................................................................................................6
1.2 Background and History ..................................................................................................................................6
1.3 Sources ............................................................................................................................................................6
1.4 Scope of the TMMi ...........................................................................................................................................7
2 TMMi Maturity Levels ............................................................................................................................................9
2.1 Overview ..........................................................................................................................................................9
2.2 Level 1 Initial ..................................................................................................................................................10
2.3 Level 2 Managed ...........................................................................................................................................10
2.4 Level 3 Defined ..............................................................................................................................................10
2.5 Level 4 Management and Measurement .......................................................................................................11
2.6 Level 5 Optimization ......................................................................................................................................11
3 Structure of the TMMi ..........................................................................................................................................13
3.1 Required, Expected and Informative Components ........................................................................................13
3.2 Components of the TMMi ..............................................................................................................................13
3.3 Generic Goals and Generic Practices ...........................................................................................................15
3.4 Supporting process areas for generic practices ............................................................................................18
3.5 Supporting CMMI process areas for TMMi ....................................................................................................19
TMMi Level 2: Managed .............................................................................................................................................23
PA 2.1 Test Policy and Strategy ...........................................................................................................................24
PA 2.2 Test Planning.............................................................................................................................................32
PA 2.3 Test Monitoring and Control ......................................................................................................................47
PA 2.4 Test Design and Execution .......................................................................................................................58
PA 2.5 Test Environment ......................................................................................................................................69
TMMi Level 3: Defined ................................................................................................................................................77
PA 3.1 Test Organization ......................................................................................................................................78
PA 3.2 Test Training Program ...............................................................................................................................91
PA 3.3 Test Lifecycle and Integration .................................................................................................................100
PA 3.4 Non-functional Testing ............................................................................................................................115
PA 3.5 Peer Reviews ..........................................................................................................................................126
TMMi Level 4: Measured ..........................................................................................................................................134
PA 4.1 Test Measurement...................................................................................................................................135
PA 4.2 Product Quality Evaluation ......................................................................................................................144
PA 4.3 Advanced Reviews ..................................................................................................................................154
TMMi Level 5: Optimizing .........................................................................................................................................161
PA 5.1 Defect Prevention ....................................................................................................................................162
PA 5.2 Quality Control .........................................................................................................................................174
PA 5.3 Test Process Optimization ......................................................................................................................186
Glossary ....................................................................................................................................................................203
References ................................................................................................................................................................224
1.3 Sources
The development of the TMMi has used the TMM framework as developed by the Illinois Institute of Technology as
one of its major sources [Burnstein]. In addition to the TMM, it was largely guided by the work done on the Capability
Maturity Model Integration (CMMI), a process improvement model that has widespread support in the IT industry.
The CMMI has both a staged and continuous representation. Within the staged representation the CMMI architecture
prescribes the stages that an organization must proceed through in an orderly fashion to improve its development
process. Within the continuous representation there is no fixed set of levels or stages to proceed through. An
organization applying the continuous representation can select areas for improvement from many different
categories.
The TMMi has been developed as a staged model. The staged model uses predefined sets of process areas to
define an improvement path for an organization. This improvement path is described by a model component called
a maturity level. A maturity level is a well-defined evolutionary plateau towards achieving improved organizational
processes. At a later stage a continuous representation of the TMMi may become available. This will most likely not
influence the content of the TMMi. It will ‘only’ provide a different structure and representation.
Other sources for the TMMi development include Gelperin and Hetzel's Evolution of Testing Model [Gelperin and
Hetzel], which describes the evolution of the testing process over a 40-year period, Beizer’s testing model, which
describes the evolution of the individual tester’s thinking [Beizer], research on the TMM carried out in the EU-funded
MB-TMM project [V2M2], and international testing standards, e.g., IEEE 829 Standard for Software Test
Documentation [IEEE 829]. The testing terminology used in the TMMi is derived from the ISTQB Standard Glossary
of terms used in Software Testing [ISTQB].
As stated, in defining the maturity levels the evolutionary testing model of Gelperin and Hetzel has served as a
foundation for historical-level differentiation in the TMMi. The Gelperin and Hetzel model describes phases and test
goals for the 1950s through the 1990s. The initial period is described as “debugging oriented”, during which most
software development organizations had not clearly differentiated between testing and debugging. Testing was an
ad hoc activity associated with debugging to remove bugs from programs. Testing has, according to Gelperin and
Hetzel, since progressed to a “prevention-oriented” period, which is associated with current best practices and
reflects the highest maturity level of the TMMi.
Furthermore, various industry best practices, practical experience using the TMM and testing surveys have
contributed to the TMMi development, providing it with the necessary empirical foundation and required level of
practicality. These illustrate the current best and worst testing practices in the IT industry, and have allowed the
developers of the TMMi framework to extract realistic benchmarks by which to evaluate and improve testing practices.
1.4.4 Assessments
Many organizations find value in benchmarking their progress in test process improvement for both internal purposes
and for external customers and suppliers. Test process assessments focus on identifying improvement opportunities
and understanding the organization’s position relative to the selected model or standard. The TMMi provides an
excellent reference model to be used during such assessments. Assessment teams use TMMi to guide their
identification and prioritization of findings. These findings, along with the guidance of TMMi practices, are used to
plan improvements for the organization. The assessment framework itself is not part of the TMMi. Requirements for
TMMi assessments are described by the TMMi Foundation in the document “TMMi Assessment Method Application
Requirements” [TAMAR]. These requirements are based upon the ISO 15504 standard. The achievement of a
specific maturity level must mean the same thing for different assessed organizations. Rules for ensuring this
consistency are contained in the TMMi assessment method requirements. The TMMi assessment method
requirements contain guidelines for various classes of assessments, e.g., formal assessments, quick scans and self-
assessments. More details on TMMi assessment and accreditation are provided in Annex A.
As the TMMi can be used in conjunction with the CMMI (staged version), TMMi and CMMI assessments are often
combined, evaluating both the development process and the testing process. Since the models are of similar
structure, and the model vocabularies and goals overlap, parallel training and parallel assessments can be
accomplished by an assessment team. The TMMi can also be used to address testing issues in conjunction with
continuous models. Overlapping process areas that relate to testing can be assessed and improved using the TMMi,
while other process areas fall under the umbrella of the broader-scope model.
Figure 1: TMMi maturity levels and process areas
(5) Optimization
5.1 Defect Prevention
5.2 Quality Control
5.3 Test Process Optimization
(4) Measured
4.1 Test Measurement
4.2 Product Quality Evaluation
4.3 Advanced Reviews
(3) Defined
3.1 Test Organization
3.2 Test Training Program
3.3 Test Lifecycle and Integration
3.4 Non-functional Testing
3.5 Peer Reviews
(2) Managed
2.1 Test Policy and Strategy
2.2 Test Planning
2.3 Test Monitoring and Control
2.4 Test Design and Execution
2.5 Test Environment
(1) Initial
The process areas for each maturity level of the TMMi are shown in figure 1. They are fully described later in separate
chapters, whilst each level is explained below along with a brief description of the characteristics of an organization
at each TMMi level. The description will introduce the reader to the evolutionary path prescribed in the TMMi for test
process improvement.
Note that the TMMi does not have a specific process area dedicated to test tools and/or test automation. Within TMMi
test tools are treated as a supporting resource (for practices) and are therefore part of the process area where they
provide support, e.g., applying a test design tool is a supporting test practice within the process area Test Design
and Execution at TMMi level 2, and applying a performance testing tool is a supporting test practice within the
process area Non-functional Testing at TMMi level 3.
program exist, and testing is perceived as being a profession. Test process improvement is fully institutionalized as
part of the test organization’s accepted practices.
Organizations at level 3 understand the importance of reviews in quality control; a formal review program is
implemented although not yet fully linked to the dynamic testing process. Reviews take place across the lifecycle.
Test professionals are involved in reviews of requirements specifications. Whereas the test designs at TMMi level 2
focus mainly on functionality testing, test designs and test techniques are expanded at level 3 to include non-
functional testing, e.g., usability and/or reliability, depending on the business objectives.
A critical distinction between TMMi maturity level 2 and 3 is the scope of the standards, process descriptions, and
procedures. At maturity level 2 these may be quite different in each specific instance, e.g., on a particular project.
At maturity level 3 these are tailored from the organization’s set of standard processes to suit a particular project or
organizational unit and therefore are more consistent, except for the differences allowed by the tailoring guidelines.
This tailoring also enables valid comparisons between different implementations of a defined process and easier
movement of staff between projects. Another critical distinction is that at maturity level 3, processes are typically
described more rigorously than at maturity level 2. Consequently, at maturity level 3 the organization must revisit
the maturity level 2 process areas. The process areas at TMMi level 3 are:
3.1 Test Organization
3.2 Test Training Program
3.3 Test Lifecycle and Integration
3.4 Non-functional Testing
3.5 Peer Reviews
and technological improvements. The testing methods and techniques are constantly being optimized and there is a
continuous focus on fine-tuning and process improvement. An optimizing test process, as defined by the TMMi, is one
that is:
- managed, defined, measured, efficient and effective
- statistically controlled and predictable
- focused on defect prevention
- supported by automation as much as is deemed an effective use of resources
- able to support technology transfer from the industry to the organization
- able to support re-use of test assets
- focused on process change to achieve continuous improvement.
To support the continuous improvement of the test process infrastructure, and to identify, plan and implement test
improvements, a permanent test process improvement group is formally established and is staffed by members who
have received specialized training to increase the level of their skills and knowledge required for the success of the
group. In many organizations this group is called a Test Process Group. Support for a Test Process Group formally
begins at TMMi level 3 when the test organization is introduced. At TMMi level 4 and 5, the responsibilities grow as
more high-level practices are introduced, e.g., identifying reusable test (process) assets and developing and
maintaining the test (process) asset library.
The Defect Prevention process area is established to identify and analyze common causes of defects across the
development lifecycle and define actions to prevent similar defects from occurring in the future. Outliers to test
process performance, identified as part of process quality control, are analyzed to address their causes as part of
Defect Prevention.
The test process is now statistically managed by means of the Quality Control process area. Statistical sampling,
measurements of confidence levels, trustworthiness, and reliability drive the test process. The test process is
characterized by sampling-based quality measurements.
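For illustration only (this sketch and its sample values are assumptions, not part of the TMMi), a sampling-based quality measurement could be placed under statistical control along the following lines, using control limits derived from a stable baseline:

import statistics

# Illustrative metric: defects found per 100 executed test cases, sampled per test cycle.
baseline = [4.2, 3.8, 5.1, 4.6, 4.0, 4.4, 3.9, 4.7]  # historical, stable cycles

mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)

# Classic 3-sigma control limits; values outside them are outliers to test
# process performance and are candidates for causal analysis (Defect Prevention).
ucl = mean + 3 * sd
lcl = max(0.0, mean - 3 * sd)

new_observation = 9.7  # latest test cycle
in_control = lcl <= new_observation <= ucl
print(f"mean={mean:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}, "
      f"observation={new_observation}, in control={in_control}")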
At TMMi level 5, the Test Process Optimization process area introduces mechanisms to fine-tune and continuously
improve testing. There is an established procedure to identify process enhancements as well as to select and
evaluate new testing technologies. Tools support the test process as much as is effective during test design, test
execution, regression testing, test case management, defect collection and analysis, etc. Process and testware re-
use across the organization is also common practice and is supported by a test (process) asset library.
The three TMMi level 5 process areas, Defect Prevention, Quality Control and Test Process Optimization all provide
support for continuous process improvement. In fact, the three process areas are highly interrelated. For example,
Defect Prevention supports Quality Control, e.g., by analyzing outliers to process performance and by implementing
practices for defect causal analysis and prevention of defect re-occurrence. Quality Control contributes to Test
Process Optimization, and Test Process Optimization supports both Defect Prevention and Quality Control, for
example by implementing the test improvement proposals. All of these process areas are, in turn, supported by the
practices that were acquired when the lower-level process areas were implemented. At TMMi level 5, testing is a
process with the objective of preventing defects.
The process areas at TMMi level 5 are:
5.1 Defect Prevention
5.2 Quality Control
5.3 Test Process Optimization
Figure: TMMi component structure - each maturity level contains process areas; each process area has specific goals and generic goals, which are addressed by specific practices and generic practices.
3.2.3 Purpose
The purpose statement describes the purpose of the process area and is an informative component. For example,
the purpose statement of the test planning process area is to “define a test approach based on the identified risks
and the defined test strategy, and to establish and maintain well-founded plans for performing and managing the
testing activities”.
3.2.5 Scope
The scope section of the process area specifically identifies the test practices that are addressed by the process
area, and if necessary test practices that are explicitly outside the scope of this process area.
3.2.10 Sub-practices
A sub-practice is a detailed description that provides guidance for interpreting and implementing a specific practice.
Sub-practices may be worded as if prescriptive, but are actually an informative component meant only to provide
ideas that may be useful for test process improvement.
Institutionalization is an important concept in process improvement. When mentioned in the generic goal and generic
practice descriptions, institutionalization implies that the process is ingrained in the way the work is performed and
there is commitment and consistency to performing the process. An institutionalized process is more likely to be
retained during times of stress. When the requirements and objectives for the process change, however, the
implementation of the process may also need to change to ensure that it remains active. The generic practices
describe activities that address these aspects of institutionalization.
The following is a list of all the generic goals and practices in the TMMi.
GP 2.2 Plan the process
Test Planning - the TMMi Test Planning process area can implement GP 2.2 in full for all project-related process areas (except for test planning itself). Test planning itself can be addressed as part of the CMMI process area Project Planning.
GP 2.5 Train people
Test Training Program - the TMMi Test Training Program process area supports the implementation of GP 2.5 for all process areas by making the organization-wide training program available to those who will perform or support the processes.
In addition, the TMMi Test Planning process area may support this generic practice by identifying and organizing the training needs that are needed for testing in the project and documenting those in the test plan.
GP 2.7 Identify and involve the relevant stakeholders
Test Planning - the TMMi Test Planning process area may support this generic practice for all project-related process areas by planning the involvement of identified stakeholders and documenting those in the test plan.
Stakeholder involvement for test planning itself can be addressed as part of the CMMI process area Project Planning.
GP 2.8 Monitor and control the process
Test Monitoring and Control - the TMMi Test Monitoring and Control process area can implement GP 2.8 in full for all process areas.
GP 2.9 Objectively evaluate adherence
Process and Product Quality Assurance - the CMMI Process and Product Quality Assurance process area can implement GP 2.9 in full for all process areas.
GP 3.2 Collect improvement information
Organizational Process Focus - the CMMI process area Organizational Process Focus can provide support for the implementation of GP 3.2 since it establishes an organizational measurement repository.
Test Lifecycle and Integration - this TMMi process area can provide similar support for the implementation of GP 3.2 since it establishes an organizational test process database.
Measurement and Analysis - for all processes the CMMI Measurement and Analysis process area and the TMMi Test Measurement process area provide general guidance about measuring, analyzing, and recording information that can be used in establishing measures for monitoring actual performance of the processes.
TMMi maturity level 2 - CMMI maturity level 3:
Requirements Development (S) - practices from this CMMI process area can be re-
used when developing test environment requirements within the TMMi process area
Test Environment.
Risk Management (S) - practices from this CMMI process area can be re-used for
identifying and controlling product risk and test project risks within the TMMi process
areas Test Planning and Test Monitoring and Control.
Table 2: Support for TMMi maturity level 2 from CMMI process areas
TMMi maturity level 3 - CMMI maturity level 3:
Organizational Process Definition (P) - this CMMI process area provides support
for the implementation of the TMMi process area Test Lifecycle and Integration,
especially for SG 1 Establish Organizational Test Process Assets.
The CMMI process area Organizational Process Definition can also support the
implementation of GP 3.1 Establish a defined process by establishing the
organizational process assets needed to implement GP 3.1.
Organizational Process Focus (P) - this CMMI process area provides support for
the implementation of the TMMi process area Test Organization, especially for SG 4
Determine, Plan and Implement Test Process Improvements and SG 5 Deploy
Organizational Test Processes and Incorporate Lessons Learned.
The CMMI process area Organizational Process Focus also provides support for
the implementation of the TMMi generic practice GP 3.2 Collect improvement
information since it establishes an organizational measurement repository.
Organizational Training (S) - this CMMI process area provides support for the
implementation of the TMMi process area Test Training Program.
Verification (P) - the practices within SG 2 ‘Perform Peer Reviews’ of this CMMI
process area provide support for the implementation of the TMMi process area Peer
Reviews.
Table 3: Support for TMMi maturity level 3 from CMMI process areas
TMMi maturity level 4 - CMMI maturity level 3:
Organizational Process Definition (S) - This CMMI process area supports the
implementation of GP 3.1 Establish a defined process by establishing the
organizational process assets needed to implement GP 3.1.
Organizational Process Focus (S) – this CMMI process area provides support for
the implementation of GP 3.2 Collect improvement information since it establishes an
organizational measurement repository.
TMMi maturity level 4 - CMMI maturity level 4:
Quantitative Project Management (S) - this CMMI process area provides support
for the implementation of the TMMi process area Product Quality Evaluation, both for
SG 1 Measurable Project Goals for Product Quality and their Priorities are
Established, and SG 2 Actual Progress towards Achieving Product Quality Goals is
Quantified and Managed.
Table 4: Support for TMMi maturity level 4 from CMMI process areas
TMMi maturity level 5 - CMMI maturity level 3:
Organizational Process Definition (S) - This CMMI process area supports the
implementation of GP 3.1 Establish a defined process by establishing the
organizational process assets needed to implement GP 3.1.
Organizational Process Focus (S) – this CMMI process area provides support for
the implementation of GP 3.2 Collect improvement information since it establishes an
organizational measurement repository.
TMMi maturity level 5 - CMMI maturity level 5:
Causal Analysis and Resolution (P) - This CMMI process area provides support
for the implementation of the TMMi process area Defect Prevention, especially for
SG 1 Determine Common Causes of Defects.
Organizational Performance Management (S) - This CMMI process area provides
support for the implementation of the TMMi process area Test Process Optimization,
especially for SG 1 Select Test Process Improvements, SG 2 New Testing
Technologies are Evaluated to Determine their Impact on the Testing Process and
SG 3 Deploy Test Improvements.
Table 5: Support for TMMi maturity level 5 from CMMI process areas
Introductory Notes
When an organization wants to improve its test process, it should first clearly define a test policy. The test policy
defines the organization’s overall test objectives, goals and strategic views regarding testing. It is important for the
test policy to be aligned with the overall business (quality) policy of the organization. A test policy is necessary to
attain a common view of testing and its objectives between all stakeholders within an organization. This common
view is required to align test (process improvement) activities throughout the organization. The test policy should
address testing activities for both new development and maintenance projects. Within the test policy the objectives
for test process improvement should be stated. These objectives will subsequently be translated into a set of key test
performance indicators. The test policy and the accompanying performance indicators provide a clear direction, and
a means to communicate expected and achieved levels of test performance. The performance indicators must show
the value of testing and test process improvement to the stakeholders. The test performance indicators will provide
a quantitative indication of whether the organization is improving and achieving the defined set of test (improvement)
goals.
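As a purely illustrative sketch (the indicator, its target value and the figures below are assumptions made for this example, not TMMi content), a test performance indicator such as Defect Detection Percentage could be computed and compared against the target stated in the test policy as follows:

def defect_detection_percentage(found_by_testing, found_after_release):
    # DDP: share of all known defects that testing found before release.
    total = found_by_testing + found_after_release
    return 100.0 * found_by_testing / total if total else 0.0

# Hypothetical target taken from the test policy, and measured defect counts.
policy_target_ddp = 85.0
ddp = defect_detection_percentage(found_by_testing=172, found_after_release=23)
print(f"DDP = {ddp:.1f}% (target {policy_target_ddp}%) ->",
      "on track" if ddp >= policy_target_ddp else "improvement needed")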
Based upon the test policy a test strategy will be defined. The test strategy covers the generic test requirements for
an organization or program (one or more projects). The test strategy addresses the generic product risks and
presents a process for mitigating those risks in accordance with the testing policy. Preparation of the test strategy
starts by performing a generic product risk assessment analyzing the products being developed within a program or
organization.
The test strategy serves as a starting point for the testing activities within projects. The projects are set up in
accordance with the organization-wide or program-wide test strategy. A typical test strategy will include a description
of the test levels that are to be applied, for example: unit, integration, system and acceptance test. For each test
level, at a minimum, the objectives, responsibilities, main tasks and entry/exit criteria are defined. The test strategy
serves as a starting point for the testing activities within projects. The projects are set up in accordance with the
organization-wide or program-wide test strategy. When a test strategy is defined and followed, less overlap between
the test levels is likely to occur, leading to a more efficient test process. Also, since the test objectives and approach
of the various levels are aligned, fewer holes are likely to remain, leading to a more effective test process.
Note that test policy and test strategy modification is usually required as an organization’s test process evolves and
moves up the levels of the TMMi.
Scope
The process area Test Policy and Strategy involves the definition and deployment of a test policy and test strategy
at an organizational level. Within the test strategy, test levels are identified. For each test level, at a minimum, test
objectives, responsibilities, main tasks and entry/exit criteria are defined. To measure test performance and the
accomplishment of test (improvement) objectives, test performance indicators are defined and implemented.
A definition of testing
A definition of debugging (fault localization and repair)
Basic views regarding testing and the testing profession
The objectives and added value of testing
The quality levels to be achieved
The level of independence of the test organization
A high level test process definition
The key responsibilities of testing
The organizational approach to and objectives of test process improvement
6. Categorize and group generic product risks according to the defined risk categories
7. Prioritize the generic product risks for mitigation
8. Review and obtain agreement with stakeholders on the completeness, category and priority level
of the generic product risks
9. Revise the generic product risks as appropriate
Note that product risk categories and parameters as defined in the Test Planning process area (SP
1.1 Define product risk categories and parameters) are largely re-used within this specific practice.
Refer to SG 1 Perform a Product Risk Assessment from the process area Test Planning for more
details on the practices for performing a product risk assessment.
In general the defined test performance indicators should relate to the business value of testing.
Elaboration
A group with the authority and knowledge is designated to be responsible for defining a test policy,
test strategy and test performance indicators. The group typically consists of the following
stakeholders: resource management, business management, quality management, project
management, operations, test management and test engineers.
Introductory Notes
After confirmation of the test assignment, an overall study is carried out regarding the product to be tested, the project
organization, the requirements, and the development process. As part of Test Planning, the test approach is defined
based on the outcome of a product risk assessment and the defined test strategy. Depending on the priority and
category of risks, it is decided which requirements of the product will be tested, to what degree, how and when. The
objective is to provide the best possible coverage to the parts of the system with the highest risk.
Based on the test approach the work to be done is estimated and as a result the proposed test approach is provided
with clear cost information. The product risks, test approach and estimates are defined in close co-operation with the
stakeholders rather than by the testing team alone. The test plan will comply with the test strategy, or explain any
non-compliances.
Within Test Planning, the test deliverables that are to be provided are identified, the resources that are needed are
determined, and aspects relating to infrastructure are defined. In addition, test project risks regarding testing are
identified. As a result the test plan will define what testing is required, when, how and by whom.
Finally, the test plan document is developed and agreed to by the stakeholders. The test plan provides the basis for
performing and controlling the testing activities. The test plan will usually need to be revised, using a formal change
control process, as the project progresses to address changes in the requirements and commitments, inaccurate
estimates, corrective actions, and (test) process changes.
Scope
The process area Test Planning involves performing a product risk assessment on the test object and defining a
differentiated test approach based on the risks identified. It also involves developing estimates for the testing to be
performed, establishing necessary commitments, and defining and maintaining the plan to guide and manage the
testing. A test plan is required for each identified test level. At TMMi level 2 test plans are typically developed per test
level. At TMMi level 3, within the process area Test Lifecycle and Integration, the master test plan is introduced as
one of its goals.
2. Define consistent criteria for evaluating and quantifying the product risk likelihood and impact
levels
3. Define thresholds for each product risk level
Risk level is defined as the importance of a risk as defined by its characteristics (impact and
likelihood). For each risk level, thresholds can be established to determine the acceptability or
unacceptability of a product risk, prioritization of product risks, or to set a trigger for management
action.
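Purely as an illustration of how such criteria and thresholds might be quantified (the 1-5 scales and the threshold values are assumptions, not prescribed by the TMMi):

def risk_score(likelihood, impact):
    # Both rated on an ordinal scale from 1 (very low) to 5 (very high).
    return likelihood * impact

def risk_level(score):
    # Example thresholds per risk level; such levels can be used to prioritize
    # product risks or to trigger management action.
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

for item, likelihood, impact in [("payment calculation", 4, 5), ("report layout", 2, 2)]:
    score = risk_score(likelihood, impact)
    print(f"{item}: score={score}, level={risk_level(score)}")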
Sub-practices
1. Identify and select stakeholders that need to contribute to the risk assessment
2. Identify product risks using input from stakeholders and requirements documents
Examples of product risk identification techniques include the following:
Risk workshops
Brainstorming
Expert interviews
Checklists
Lessons learned
Sub-practices
1. Break down the prioritized product risks into items to be tested and not to be tested
2. Document the risk level and source documentation (test basis) for each identified item to be tested
3. Break down the prioritized product risks into features to be tested and not to be tested
4. Document the risk level and source documentation (test basis) for each identified feature to be
tested
5. Review with stakeholders the list of items and features to be tested and not to be tested
7. Align the test approach with the defined organization-wide or program-wide test strategy
8. Identify any non-compliance to the test strategy and its rationale
9. Review the test approach with stakeholders
10. Revise the test approach as appropriate
Examples of when the test approach may need to be revised include the following:
New or changed priority level of product risks
Lessons learned after applying the test approach in the project
3. Review the entry criteria with stakeholders, especially those responsible for meeting the entry
criteria
2. Specify the resumption criteria used to specify the test tasks that must be repeated when the
criteria that caused the suspension are removed
5. Identify indirect test tasks to be performed such as test management, meetings, configuration
management, etc.
Note that the WBS should also take into account tasks for implementing the test environment
requirements. Refer to the Test Environment process area for more information on this topic.
Note that appropriate methods (e.g., validated models or historical data) should be used to determine
the attributes of the test work products and test tasks that will be used to estimate the resource
requirements.
2. Study (technical) factors that can influence the test estimate
Examples of factors that can influence the test estimate include the following:
Usage of test tools
Quality of earlier test levels
Quality of test basis
Development environment
Test environment
Availability of re-usable testware from previous projects
Knowledge and skill level of testers
3. Select models and/or historical data that will be used to transform the attributes of the test work
products and test tasks into estimates of the effort and cost
Examples of models that can be used for test estimation include the following:
Test Point Analysis [TMap]
Three point estimate (see the example sketch after these sub-practices)
Wide Band Delphi [Veenendaal]
Ratio of development effort versus test effort
4. Include supporting infrastructure needs when estimating test effort and cost
Examples of supporting infrastructure needs include the following:
Test environment
Critical computer resources
Office environment
Test tools
5. Estimate test effort and cost using models and/or historical data
6. Document assumptions made in deriving the estimates
7. Record the test estimation data, including the associated information needed to reconstruct the
estimates
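As an illustration of the three point estimate model listed above (a sketch only; the TMMi does not prescribe this formula, and the task names and figures are invented for the example):

def three_point_estimate(optimistic, most_likely, pessimistic):
    # PERT-style weighted average, in hours per test task.
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical test tasks with (optimistic, most likely, pessimistic) hours.
tasks = {
    "design test cases":        (24, 40, 72),
    "prepare test data":        (8, 16, 32),
    "execute and report tests": (40, 60, 100),
}

for name, figures in tasks.items():
    print(f"{name}: {three_point_estimate(*figures):.1f} h")
total = sum(three_point_estimate(*figures) for figures in tasks.values())
print(f"total estimated test effort: {total:.1f} h")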
2. Analyze the identified test project risks in terms of likelihood and impact
3. Prioritize the analyzed test project risks
4. Review and obtain agreement with stakeholders on the completeness and priority level of the
documented test project risks
5. Define contingencies and mitigation actions for the (high priority) test project risks
6. Revise the test project risks as appropriate
Examples of when test project risks may need to be revised include:
When new test project risks are identified
When the likelihood of a test project risk changes
When test project risks are retired
When testing circumstances change significantly
Refer to the Test Environment process area for information on environmental needs and
requirements.
Elaboration
The test planning policy typically specifies:
Each project will define a test plan that includes a test approach and the accompanying test effort
and estimates
Each project’s test approach will be derived from the test strategy
Test plans shall be developed using a standard process and template
Standard tools that will be used when performing test planning
The requirements will be used as a basis for test planning activities
The testing commitments will be negotiated with resource management, business management
and project management
Any involvement of other affected groups in the testing activities must be explicitly agreed upon
by these groups
Management will review all testing commitments made to groups external to the organization
The test plan will be managed and controlled
Elaboration
A test manager is typically designated to be responsible for negotiating commitments and developing
the test plan. The test manager, either directly or by delegation, coordinates the project’s test planning
process.
Introductory Notes
The progress of testing and the quality of the products should both be monitored and controlled. The progress of the
testing is monitored by comparing the status of actual test (work) products, tasks (including their attributes), effort,
cost, and schedule to what is identified in the test plan. The quality of the product is monitored by means of indicators
such as product risks mitigated, the number of defects found, number of open defects, and status against test exit
criteria.
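As an illustration only (the indicator names, thresholds and figures below are assumptions, not TMMi requirements), monitoring product quality against documented exit criteria could be supported along these lines:

# Exit criteria as documented in the (hypothetical) test plan.
exit_criteria = {
    "high_risks_mitigated_pct": 100,  # all high product risks covered by passed tests
    "open_critical_defects": 0,
    "open_major_defects": 3,
}

# Indicators measured during test execution.
measured = {
    "high_risks_mitigated_pct": 92,
    "open_critical_defects": 1,
    "open_major_defects": 2,
}

def criterion_met(name, actual, threshold):
    # Percentages must reach the threshold; defect counts must not exceed it.
    return actual >= threshold if name.endswith("_pct") else actual <= threshold

unmet = [name for name, threshold in exit_criteria.items()
         if not criterion_met(name, measured[name], threshold)]
print("exit criteria met" if not unmet else f"exit criteria not met: {unmet}")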
Monitoring involves gathering the required (raw) data, e.g., from test logs and test incident reports, reviewing the raw
data for their validity and calculating the defined progress and product quality measures. Test summary reports
should be written on a periodic and event-driven basis as a means to provide a common understanding on test
progress and product quality. Since ‘testing is the measurement of product quality’ [Hetzel], the practices around
product quality reporting are key to the success of this process area.
Appropriate corrective actions should be taken when the test progress deviates from the plan or product quality
deviates from expectations. These actions may require re-planning, which may include revising the original plan or
additional mitigation activities based on the current plan. Corrective actions that influence the original committed plan
should be agreed by the stakeholders.
An essential part of test monitoring and control is test project risk management. Test project risk management is
performed to identify and solve as early as possible major problems that undermine the test plan. When performing
project risk management, it is also important to identify problems that are beyond the responsibility of testing. For
instance, organizational budget cuts, delay of development work products or changed/added functionality can all
significantly affect the test process. By building on the test project risks already documented in the test plan, test
project risks are monitored and controlled and corrective actions are initiated as needed.
Scope
The process area Test Monitoring and Control involves monitoring the test progress and product quality against
documented estimates, commitments, plans and expectations, reporting on test progress and product quality to
stakeholders, taking control measures (e.g., corrective actions) when necessary, and managing the corrective
actions to closure.
3. Monitor the attributes of the test work products and test tasks
Refer to SP 3.3 Determine estimates of test effort and cost from the Test Planning process area for
information about the attributes of test work products and test tasks.
Examples of test work products and test task attributes monitoring typically include the following:
Periodically measuring the actual attributes of the test work products and test tasks, such as
size or complexity
Comparing the actual attributes of the test work products and test tasks to the estimates
documented in the test plan
Identifying significant deviations from the estimates in the test plan
Sub-practices
1. Periodically review the status of stakeholder involvement
2. Identify and document significant issues and their impact
3. Document the results of the stakeholder involvement status reviews
2. Identify and document significant deviations from expectations for measures regarding defects
found
3. Review the status regarding defects, product risks and exit criteria
4. Identify and document significant product quality issues and their impacts
5. Document the results of the reviews, actions items, and decisions
6. Update the test plan to reflect accomplishments and the latest status
Note that many of the potential actions listed above will lead to a revised test plan.
2. Review and get agreement with relevant stakeholders on the actions to be taken
3. Re-negotiate commitments with stakeholders (both internally and externally)
Note that this generic practice only covers the involvement of relevant stakeholders in test monitoring
and controlling.
Note that this generic practice only covers the monitoring and controlling of test monitoring and control
activities.
Introductory Notes
Structured testing implies that test design techniques are applied, possibly supported by tools. Test design
techniques are used to derive and select test conditions and design test cases from requirements and design
specifications. The test conditions and test cases are documented in a test specification. A test case consists of the
description of the input values, execution preconditions, expected results and execution post conditions. At a later
stage, as more information becomes available regarding the implementation, the test cases are translated into test
procedures. In a test procedure, also referred to as a manual test script, the specific test actions and checks are
arranged in an executable sequence. Specific test data required to be able to run the test procedure is created. The
tests will subsequently be executed using these test procedures.
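A minimal sketch of these work products as simple data structures (the field names are chosen for this example and are not defined by the TMMi or IEEE 829):

from dataclasses import dataclass, field

@dataclass
class TestCase:
    identifier: str
    input_values: dict
    preconditions: list
    expected_result: str
    postconditions: list

@dataclass
class TestProcedure:
    # A manual test script: test cases arranged in an executable sequence,
    # together with the specific test data needed to run them.
    identifier: str
    steps: list = field(default_factory=list)
    test_data: dict = field(default_factory=dict)

tc = TestCase(
    identifier="TC-LOGIN-001",
    input_values={"user": "alice", "password": "wrong-password"},
    preconditions=["user 'alice' exists", "account is not locked"],
    expected_result="login rejected with an error message",
    postconditions=["failed-login counter incremented"],
)
procedure = TestProcedure(identifier="TP-LOGIN", steps=[tc], test_data={"users": ["alice"]})
print(procedure.identifier, "contains", len(procedure.steps), "test case(s)")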
The test design and execution activities follow the test approach as defined in the test plan. The specific test design
techniques applied (e.g., black box, white box or experience-based) are based on level and type of product risk
identified during test planning.
During the test execution stage, incidents are found and incident reports are written. Incidents are logged using an
incident management system and are communicated to the stakeholders per established protocols. A basic incident
classification scheme is established for incident management, and a procedure is put in place to handle the incident
lifecycle process including managing each incident to closure.
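As a sketch of what a basic incident classification scheme and lifecycle handling might look like (the states, severities and transition rules below are assumptions for illustration, not TMMi definitions):

# Allowed lifecycle transitions; 'rejected' and 'closed' are terminal states.
ALLOWED_TRANSITIONS = {
    "new": {"assigned", "rejected"},
    "assigned": {"repaired"},
    "repaired": {"retested"},
    "retested": {"closed", "assigned"},  # reopen if re-testing fails
}

class Incident:
    SEVERITIES = ("critical", "major", "minor", "cosmetic")

    def __init__(self, identifier, summary, severity):
        assert severity in self.SEVERITIES
        self.identifier = identifier
        self.summary = summary
        self.severity = severity
        self.state = "new"

    def move_to(self, new_state):
        # Manage each incident to closure: only documented transitions are allowed.
        if new_state not in ALLOWED_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

incident = Incident("INC-042", "total price not updated after removing an item", "major")
for state in ("assigned", "repaired", "retested", "closed"):
    incident.move_to(state)
print(incident.identifier, incident.severity, incident.state)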
Scope
The process area Test Design and Execution addresses the test preparation phase including the application of test
design techniques to derive and select test conditions and test cases. It also addresses the creation of specific test
data, the execution of the tests using documented test procedures and incident management.
Note that in addition to black box and white box techniques, experience-based techniques such as
exploratory testing can also be used, which results in documenting the test design specification by
means of a test charter.
Typically more than one test design technique is selected per test level in order to be able to
differentiate the intensity of testing, e.g., number of test cases, based on the level of risk of the test
items. In addition to using the risk level to prioritize testing, other factors influence the selection of test
design techniques such as development lifecycle, quality of the test basis, skills and knowledge of the
testers, contractual requirements and imposed standards.
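A small sketch of how the intensity of testing could be differentiated by risk level (the technique mix and minimum case counts are example choices, not TMMi prescriptions):

# Illustrative mapping from product risk level to test design techniques and intensity.
TECHNIQUES_BY_RISK = {
    "high":   ["boundary value analysis", "decision table testing", "exploratory testing"],
    "medium": ["equivalence partitioning", "boundary value analysis"],
    "low":    ["equivalence partitioning"],
}
MIN_CASES_PER_CONDITION = {"high": 5, "medium": 3, "low": 1}

def plan_for_item(item, risk_level):
    return {
        "item": item,
        "techniques": TECHNIQUES_BY_RISK[risk_level],
        "min_test_cases_per_condition": MIN_CASES_PER_CONDITION[risk_level],
    }

print(plan_for_item("payment calculation", "high"))
print(plan_for_item("report layout", "low"))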
4. Derive the test conditions from the test basis using test design techniques
5. Prioritize the test conditions based on identified product risks
6. Document the test conditions in a test design specification, based on the test design specification
standard
Examples of elements of a test design specification include the following [after IEEE 829]:
Test design specification identifier
2. Develop the intake test procedure, based on the checks identified, by putting the checks (test
cases) in an executable order and including any other information needed for test execution
3. Document the intake test procedures in a test procedure specification, based on the test
procedure specification standard
4. Review the intake test procedure specification with stakeholders
5. Revise the intake test procedure specification as appropriate.
Sub-practices
1. Perform the intake test (confidence test) using the documented intake test procedure to decide if
the test object is ready for detailed and further testing
2. Document the results of the intake test by means of a test log, based on the test log standard
3. Log incidents when a discrepancy is observed
Note that this practice is highly related to the practice SP 2.4 Perform test environment intake test
from the process area Test Environment. The intake test on the test object and test environment can
possibly be combined.
5. Record the decision including rationale and other relevant information in the incident database; the
incident report is updated.
6. Assign the incident to the appropriate group, e.g., development, to perform appropriate actions
Sub-practices
1. Repair the incident which may involve updating documentation and/or software code
2. Record information on the repair action in the incident database; the incident report is updated
3. Perform re-testing, and possibly regression testing, to confirm the fix of the incident
4. Record information on the re-testing action in the incident database; the incident report is updated
5. Formally close the incident provided re-testing was successful
Elaboration
Typically, the plan for performing the test design and execution process is included in the test plan,
which is described in the Test Planning process area. The activities for test design and execution are
explicitly scheduled as part of the test plan.
Introductory Notes
A managed and controlled test environment is indispensable for any testing. It is also needed to obtain test results
under conditions which are as close as possible to the ‘real-life’ situation. This is especially true for higher level
testing, e.g., at system and acceptance test level. Furthermore, at any test level the reproducibility of test results
should not be endangered by undesired or unknown changes in the test environment.
Specification of test environment requirements is performed early in the project. This specification is reviewed to
ensure its correctness, suitability, feasibility and accurate representation of a ‘real-life’ operational environment. Early
test environment requirements specification has the advantage of providing more time to acquire and/or develop the
required test environment and components such as simulators, stubs or drivers. The type of environment required
will depend on the product to be tested and the test types, methods and techniques used.
Availability of a test environment encompasses a number of issues which need to be addressed. For example, is it
necessary for testing to have an environment per test level? A separate test environment per test team or per test
level can be very expensive. It may be possible to share the same environment between testers and
developers. If so, strict management and control is necessary as both testing and development activities are done in
the same environment, which can easily negatively impact progress. When poorly managed, this situation can cause
many problems ranging from conflicting reservations to people finding the environment in an unknown or undesired
state when starting their activities.
Finally test environment management also includes managing access to the test environment by providing log-in
details, managing test data, providing and enforcing configuration management and providing technical support on
progress-disturbing issues during test execution.
As part of the Test Environment process area, the requirements regarding generic test data, and the creation and
management of the test data are also addressed. Whereas specific test data is defined during the test design and
analysis activity, more generic test data is often defined and created as a separate activity. Generic test data is re-
used by many testers and provides overall background data that is needed to perform the system functions. Generic
test data often consists of master data and some initial content for primary data. Sometimes timing requirements
influence this activity.
Scope
The process area Test Environment addresses all activities for specifying test environment requirements,
implementing the test environment and managing and controlling the test environment. Management and control of
the test environment also includes aspects such as configuration management and ensuring availability. The Test
Environment process area scope includes both the physical test environment and the test data.
3. Document the test environment needs, including generic test data, expectations and constraints
Having prioritized test environment requirements helps to determine scope. This prioritization
ensures that requirements critical to the test environment are addressed quickly.
3. Allocate test environment requirements to test environment components
2. Identify key test environment requirements having a strong influence on cost, schedule or test
performance
3. Identify test environment requirements that can be implemented using existing or modified
resources
4. Analyze test environment requirements to ensure that they are complete, feasible and realizable
5. Analyze test environment requirements to ensure that together they sufficiently represent the ‘real-
life’ situation, especially for higher test levels
6. Identify test project risks related to the test environment requirements
7. Review the test environment requirements specification with stakeholders
8. Revise the test environment requirements specification as appropriate
An example of when the test environment may need to be revised is when problems surface during
implementation that could not be foreseen during requirements specification.
Note that this practice is highly related to the practice SP 3.1 Perform intake test from the process
area Test Design and Execution and the intake test on the test object and test environment can
possibly be combined.
Adequate time and resources are provided to engineers to develop stubs and drivers needed for
low level testing
Note that configuration management for test environments and test data is key to any testing and is a
requirement for test reproducibility.
Providing resources and/or input for the implementation of the test environment, e.g.,
subcontractors that develop test environment components
Introductory Notes
Establishing a test organization implies a commitment to better testing and higher-quality software. To initiate the
process, upper management must support the decision to establish a test group and commit resources to the group.
It also requires leadership in areas that relate to testing and quality issues. The staff members of such a group are
called test specialists. A test organization (group) is the representation of effective relationships between test
specialists, test facilities and project-related test activities in order to achieve a high standard in structured testing.
Well-defined communication links from the test group to business, development, and quality assurance are
established. The synergy between these elements creates a structure that is more than the sum of the parts.
It is important for an organization to have an independent test group. The group shall have a formalized position in
the organizational hierarchy. The term independence is used generically, but each organization must develop its own
interpretation and implementation of the right level of independence. A test organization can, for instance, be
organized as a test competence center with a test resource pool, in which group members are assigned to projects
throughout the organization where they do their testing work, or as an independent test group
that performs acceptance testing before release. In the TMMi sense, independence for the test organization means
that testers are recognized as engineering specialists. Testers are not considered to be developers, and most
importantly they report to management independent of the development management. Test specialists are allowed
to be objective and impartial, unhindered by development organization pressures.
Testing is regarded as a valued profession and the test group is recognized as a necessity. Detailed and specialized
knowledge and skills regarding test engineering, test management and the application domain are characteristics of
the motivated individuals assigned to the test group. Test functions and test career paths are defined and supported
by a test training program. The group is staffed by people who have the skills and motivation to be good testers. They
are assigned to a specific test function and are dedicated to establishing awareness of, and achieving, product quality
goals. They measure quality characteristics, and have responsibilities for ensuring the system meets the customers’
requirements. The test activities, roles and responsibilities for other staff members (non-test specialists) are also
specified. For each test function, the typical tasks, responsibilities, authorities, required knowledge, skills and test
training are specified. As a result, the process areas “Test Organization” and “Test Training Program” are closely
related and interdependent. One of the principal objectives of the training program is to support the test organization
in training of test specialists.
Whereas at TMMi level 2 test process improvement is sometimes an ad hoc project, it is now well-organized and
structured within the test organization. The responsibility for facilitating and managing the test process improvement
activities, including coordinating the participation of other disciplines, is typically assigned to a test technology
manager supported by a management steering committee. Sometimes a test process improvement group, often
called a Test Process Group, is already established and staffed. Candidates for process improvements are obtained
from various sources, including measurements, lessons learned and assessment results. Careful planning is required
to ensure that test process improvement efforts across the organization are adequately managed and implemented.
The planning for test process improvement results in a process improvement plan. This plan will address assessment
planning, process action planning, pilot planning and deployment planning. When the test improvement is to be
deployed, the deployment plan is used. This plan describes when and how the improvement will be implemented
across the organization.
Scope
The process area Test Organization defines the functioning (tasks, responsibilities, reporting structure) and the
position of a test group in the overall organization. Test roles, functions, and career paths are defined to support the
acceptance of testing as a professional discipline. Within the test organization, test process improvement is a key
activity. Test process improvement encompasses assessing the current test process and using lessons learned to
identify possible test improvements, implementing improvements and deploying them in testing activities in projects.
Note that, ideally, the test organization should be a separate organizational entity or function.
However, this is not always possible or practical given the size of the organization, risk level of the
systems being developed and the resources available.
2. Review the test organization description with stakeholders
Test engineer
Test consultant
Test environment engineer
2. Incorporate the job descriptions into the organization’s Human Resource Management (HRM)
framework
3. Extend job descriptions for other job categories (non-test specialist) to include the test tasks and
responsibilities, as appropriate
Examples of non-test specialists’ job categories that typically encompass test activities and
responsibilities include the following:
Software developer
System engineer
System integrator
User representative
4. Use the organization’s standard test process as a major input to define and enhance the job
descriptions
3. Identify and document actions that are needed to advance the career development of the staff
member
4. Track the defined test career development actions to closure
5. Revise the personal development plan, as appropriate
4. Review and negotiate the test process improvement plan with stakeholders (including members of
process action teams)
5. Review and update the test process improvement plan as necessary
10. Establish and maintain records of the organization’s test process improvement activities
The standard test process (including templates) that is defined and maintained by the test
organization is consistently applied
The approach to test metrics, test databases, test tools, and test re-use
The test activities that the test organization facilitates and/or coordinates in projects
The test evaluation report (lessons learned) that each (test) project will provide for use in
improving the standard test process
The objectives and organizational structure regarding test process improvement
The approach for planning, implementing and deploying test process improvements across the
organization
Note that training for (test) engineers and (test) managers on the standard test process and supporting
test tools is addressed as part of the process area Test Training Program.
Introductory Notes
Test Training Program includes training to support the organization’s strategic business objectives and to meet the
training needs that are common across projects. Specific training needs identified by individual projects are handled
at project level. Test Training Program is closely related to and interdependent with the Test Organization process
area. One of the main objectives of the Test Training Program is to support the test organization by training the test
specialists and other stakeholders involved. A quality training program ensures that those involved in testing continue
to improve their testing skills and update their domain knowledge and other knowledge related to testing. The training
program may be organized and managed by means of a dedicated training group.
Establishing a test training program is an additional commitment by management to support high quality testing staff
and to promote continuous test process improvement. In testing, a variety of skills is needed. The main categories
are test principles, test techniques, test management, test tools, domain knowledge, IT knowledge, system
engineering, software development and interpersonal skills. A test training program, consisting of several training
modules, is developed to address these categories. Note that at higher TMMi levels other, more advanced training
categories will become important, e.g., defect prevention at TMMi level 5. Some skills are effectively and efficiently
imparted through informal methods (e.g., training-on-the-job and mentoring) whereas other skills require formal
training.
The term “training” is used throughout this process area to include all of these learning options. The test training
program is linked to the test functions and test roles, and will facilitate test career paths. Deploying the training
program guarantees the appropriate knowledge and skill level for all people involved in testing. The implementation
of the Test Training Program process area involves first identifying the organizational test training needs, developing
or acquiring specific training modules, conducting training to address the identified needs as required and, finally,
evaluating the effectiveness of the training program.
Scope
The process area Test Training Program addresses the establishment of an organizational test training plan and test
training capability. It also addresses the actual delivery of the planned test training. Project specific training needs
are not part of this process area. They are addressed in the process area Test Planning.
Note that the identification of test process training is primarily based on the skills that are required to
perform the organization’s set of standard test processes.
2. Periodically assess the test skill set of the people involved in testing
3. Document the strategic test training needs of the organization
4. Map the test training needs to the test functions (including test career paths) and test roles of the
organization
5. Revise the organization’s strategic test training needs as necessary
Sub-practices
1. Analyze the test training needs identified by various projects
Analysis of specific project needs is intended to identify common test training needs that can be
most efficiently addressed organization-wide. This analysis activity can also be used to anticipate
future test training needs that are first visible at the project level.
2. Determine whether the training needs identified by the various projects are project specific or
common to the organization
Test training needs common to the organization are normally managed by means of the
organizational test training program.
3. Negotiate with the various projects on how their specific training needs will be satisfied
4. Document the commitments for providing test training support to the projects
Refer to SP 4.2 Plan for test staffing from the process area Test Planning for more information on
project specific plans for training.
2. Review test training plan with affected groups and individuals, e.g., human resources, test
resources and project management
3. Establish commitment to the test training plan
4. Revise test training plan and commitments as necessary
2. Determine whether to develop test training materials internally or acquire them externally
Example criteria that can be used to determine the most effective mode of knowledge or skill
acquisition include the following:
Time available to prepare the training materials
Availability of in-house expertise
Availability of training (materials) from external sources
Available budget
Time required for maintenance of training material
Elaboration
Examples of measures used in monitoring and control of the Test Training Program process
include the following:
Number of training courses delivered (e.g., planned versus actual)
Actual attendance at each training course compared to the projected attendance
Schedule for delivery of training
Schedule for development of courses
Training costs against allocated budget
Progress in developing and providing training courses compared to the documented test
training needs
Elaboration
Examples of measures include the following:
Number of training courses delivered (e.g., planned versus actual)
Post-training evaluation ratings
Training program quality survey ratings
Introductory Notes
An important responsibility of the test organization is to define, document and maintain a standard test process, in
line with the organization’s test policy and goals. Organizational test process assets enable consistent test process
performance across the organization and provide a basis for cumulative, long-term benefits to the organization. The
organization’s test process asset library is a collection of items maintained for use by the people and projects of the
organization. The collection of items includes descriptions of test processes, descriptions of test lifecycle models
(including supporting templates and guidelines for the test deliverables), supporting test tools, process tailoring
guidelines and a test process database. The organization’s test process asset library supports organizational learning
and process improvement by sharing best practices and lessons learned across the organization.
The standard test lifecycle models define the main phases, activities and deliverables for the various test levels. The
testing activities will subsequently be performed in projects according to these models. Standards and guidelines are
developed for test related (work) products. The standard test lifecycle models are aligned with the development
lifecycle models to integrate the testing activities in terms of phasing, milestones, deliverables, and activities.
Lifecycle integration is done in such a way that early involvement of testing in projects is ensured, e.g., test planning
starts during the requirements specification phase, integration and unit test planning are initiated at detailed design
time. Testers will review the test basis documents to determine testability and development planning may be
influenced by the test approach. The organization’s set of standard test processes can be tailored by projects to
create their specific defined processes. The work environment standards are used to guide creation of project work
environments.
At TMMi level 3, test management is concerned with master test planning which addresses the coordination of testing
tasks, responsibilities and test approach over multiple test levels. This prevents unnecessary redundancy or
omissions of tests between the various test levels and can significantly increase the efficiency and quality of the
overall test process. The information resulting from project test planning is documented in a project test plan, which
governs the detailed level test plans to be written specifically for an individual test level. The master test plan
describes the application of the test strategy for a particular project, including the particular levels to be carried out
and the relationship between those levels. The master test plan should be consistent with the test policy and strategy,
and, in specific areas where it is not, should explain those deviations and exceptions. The master test plan will
complement the project plan or operations guide which describes the overall test effort as part of the larger project
or operation. The master test plan provides an overall test planning and test management document for multiple
levels of test (either within one project or across multiple projects). On smaller projects or operations (e.g., where
only one level of testing is formalized) the master test plan and the level test plan will often be combined into one
document.
Scope
The process area Test Lifecycle and Integration addresses all practices to establish and maintain a usable set of
organizational test process assets (e.g., a standard test lifecycle) and work environment standards, and to integrate
and synchronize the test lifecycle with the development lifecycle. Test Lifecycle and Integration also addresses the
master test planning practices. The master test plan at TMMi level 3 defines a coherent test approach across multiple
test levels.
Interfaces
Exit criteria
4. Ensure that the organization’s set of standard test processes adheres to organizational policies,
standards, and models
Adherence to applicable standards and models is typically demonstrated by developing a mapping
from the organization’s set of standard test processes to the relevant standards and models.
5. Ensure the organization’s set of standard test processes satisfies the test process needs and
objectives of the test organization
6. Document the organization’s set of standard test processes
7. Conduct peer reviews on the organization’s set of standard test processes
8. Revise the organization’s set of standard test processes as necessary
SP 1.2 Establish test lifecycle model descriptions addressing all test levels
Descriptions of the test lifecycle models (including supporting templates and guidelines for the test
deliverables) that are approved for use in the organization are established and maintained, ensuring
coverage of all identified test levels.
Example work products
1. Description of test lifecycle models
Sub-practices
1. Select test lifecycle models based on the needs of the projects and the organization
2. Document the descriptions of the test lifecycle models
A test lifecycle model description typically includes the following:
Test strategy, e.g., test levels and their objectives
Test lifecycle phases, e.g., planning and control, test analysis and design, test implementation
and execution, evaluating exit criteria and reporting, and test closure activities
Entry and exit criteria for each phase
Testing activities per phase
Responsibilities
Deliverables
Milestones
3. Develop supporting templates and guidelines for the deliverables identified within the test lifecycle
models
Examples of test deliverables that are supported by means of templates and guidelines typically
include the following:
Master test plan
Level test plan
Test design specification
Test case specification
Test procedure specification
Test log
Incident report
Test summary report
Test evaluation report
4. Conduct peer reviews on the test lifecycle models, and supporting templates and guidelines
5. Revise the description of the test lifecycle models, and supporting templates and guidelines, as
necessary
Sub-practices
1. Specify the selection criteria and procedures for tailoring the organization’s set of standard test
processes
Examples of tailoring actions include the following:
Modifying a test lifecycle model
Combining elements of different test lifecycle models
Modifying test process elements
Replacing test process elements
Deleting test process elements
Reordering test process elements
2. The data entered into the test process database is reviewed to ensure the integrity of the
database content
The test process database also contains or references the actual measurement data and related
information and data needed to understand and interpret the measurement data and assess it for
reasonableness and applicability.
3. The test process database is managed and controlled
User access to the test process database contents is controlled to ensure completeness, integrity,
security and accuracy of the data.
Sub-practices
1. Evaluate commercially-available work environment standards appropriate for the organization
2. Adopt existing work environment standards and develop new ones to fill gaps based on the
organization’s test process needs and objectives
SP 2.3 Obtain commitments on the role of testing within the integrated lifecycle
models
Commitments are obtained regarding the role of testing within the integrated lifecycle models from the
relevant stakeholders who are responsible for managing, performing and supporting project activities
based on the integrated lifecycle models.
Example work products
1. Documented requests for commitments
2. Documented commitments
Sub-practices
1. Identify needed support and negotiate commitments with relevant stakeholders
2. Document all organizational commitments, both full and provisional
3. Review internal commitments with senior management as appropriate
4. Review external commitments with senior management as appropriate
Refer to SG 1 Perform a Product Risk Assessment from the process area Test Planning for more
details on the (sub) practices for performing the product risk assessment.
Refer to SG 4 Develop a Test Plan from the process area Test Planning for more details on the (sub)
practices for developing a master test plan.
Refer to the process area Test Environment for more information on environment needs and
requirements.
Experienced individuals, who have expertise in the application domain of the test object and
those who have expertise on the development process are available to support the
development of the master test plan
Tools to support the master test planning process are available, e.g., project planning and
scheduling tools, estimation tools, risk assessment tools, test management tools and
configuration management tools
Test management, and other individuals or groups involved, are trained in master test planning and
the accompanying procedures and techniques.
Examples of training topics include the following:
Planning principles
Test strategy
Product and project risk assessment process and techniques
Defining a test approach
Test plan templates and standards
Organizational structures
Test estimation and test scheduling
Supporting test planning tools
Execution of the master test plan is typically monitored and controlled by means of the practices of the
process area Test Monitoring and Control.
Introductory Notes
Quality of products is all about satisfying stakeholders’ needs. These needs have to be translated to well-described
functional (“what” the product does) and non-functional (“how” the product does it) requirements. Often the non-
functional requirements are highly important for customer satisfaction. This process area addresses the development
of a capability for non-functional testing. There is a set of principal non-functional attributes that are used to describe
the quality of software products or systems. These quality attributes can be assessed using non-functional test
techniques. Application of the various test techniques varies depending on the ability of the tester, the knowledge of
the domain, and the attributes being addressed.
A test approach needs to be defined based on the outcome of a non-functional product risk assessment. Depending
on the level and type of non-functional risks, it is decided which requirements of the product will be tested, to what
degree and how. The non-functional product risks and test approach are defined in close cooperation between test
specialists and the stakeholders; testers should not make these decisions in isolation.
Non-functional test techniques are applied, possibly supported by tools. Test techniques are used to derive and
select non-functional test conditions and create test cases from non-functional requirements and design
specifications. The test cases are subsequently translated into manual test procedures and/or automated test scripts.
Specific test data required to execute the non-functional test is created. During the test execution stage, the non-
functional tests will be executed, incidents found and incident reports written.
Scope
The process area Non-functional Testing involves performing a non-functional product risk assessment and defining
a test approach based on the non-functional risks identified. It also addresses the test preparation phase to derive
and select non-functional test conditions and test cases, the creation of specific test data and the execution of the
non-functional tests. Test environment practices, which are often critical for non-functional testing, are not addressed
within this process area. They are addressed as part of the TMMi level 2 process area Test Environment and should
now also support non-functional testing.
Note that product risk categories and parameters as defined in the Test Planning process area (SP 1.1
Define product risk categories and parameters) are largely re-used and potentially enhanced within
this specific practice.
Note that black box techniques, white box techniques and experienced-based techniques such as
exploratory testing and checklists can also be selected to test specific non-functional quality attributes.
2. Define the approach to reviewing test work products
3. Define the approach for non-functional re-testing
4. Define the approach for non-functional regression testing
5. Define the supporting test tools to be used
6. Identify significant constraints regarding the non-functional test approach, such as test resource
availability, test environment features and deadlines
7. Align the non-functional test approach with the defined organization-wide or program-wide test
strategy
8. Identify any areas of non-compliance with the test strategy and the rationale
9. Review the non-functional test approach with the stakeholders
10. Revise the non-functional test approach as appropriate
Examples of when the non-functional test approach may need to be revised include the following:
New or changed priority level of non-functional product risks
Lessons learned on applying the non-functional test approach in the project
Examples of elements of a test case specification include the following [IEEE 829]:
Test case specification identifier
Features (and/or items) to be tested
Input specifications
Output specifications
Environmental needs
Special procedural requirements
Inter-case dependencies
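As an illustration only (this sketch is not part of TMMi or IEEE 829), the elements listed above could be captured as a structured test case record along the following lines; all class, field and value names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCaseSpecification:
    """Illustrative record mirroring the IEEE 829-style elements listed above."""
    identifier: str                      # test case specification identifier
    features_to_be_tested: List[str]     # features and/or items to be tested
    input_specifications: List[str]      # inputs, e.g., the load profile for a performance test
    output_specifications: List[str]     # expected outputs / pass criteria
    environmental_needs: List[str]       # required hardware, tools, data volumes
    special_procedural_requirements: List[str] = field(default_factory=list)
    inter_case_dependencies: List[str] = field(default_factory=list)

# Hypothetical non-functional (performance) test case
tc = TestCaseSpecification(
    identifier="NFT-PERF-001",
    features_to_be_tested=["Order submission under load"],
    input_specifications=["100 concurrent virtual users for 30 minutes"],
    output_specifications=["95th percentile response time <= 2 seconds"],
    environmental_needs=["Production-like test environment", "Load generation tool"],
)
print(tc.identifier, tc.output_specifications[0])
```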
Note that some non-functional testing will be conducted informally without using pre-defined detailed
test procedures, e.g., a heuristic evaluation to test the usability.
Note that the non-functional test execution is normally preceded by the overall intake test. Refer to the
practices SP 2.3 Specify intake test procedure and SP 3.1 Perform intake test from the process area
Test Design and Execution for more details on the intake test on the test object, and to the practice SP
2.4 Perform test environment intake test from the process area Test Environment for more details on
the intake test on the test environment.
Introductory Notes
Reviews involve a methodical examination of work products by peers to identify defects and areas where changes
are needed. Reviews are conducted with a small group of engineers, generally two to seven persons. The work
product to be reviewed could be a requirements specification, design document, source code, test design, a user
manual, or another type of document. In practice, there are many ways by which the group of reviewers is selected.
Reviewers may be:
Specialists in reviewing (quality assurance or audit)
People from the same project
People invited by the author because of their specific knowledge
People, e.g., business representatives, who have a significant interest in the product
Several types of reviews are defined, each with its own purpose and objective. In addition to informal reviews, more
formal review types such as walkthroughs, technical reviews and inspections are used [IEEE 1028]. In a walkthrough,
the author guides a group of people through a document and his thought process, so everybody understands the
document in the same way and they reach a consensus on the content or changes to be made. In a technical review
the group discusses, after an individual preparation, the content and the (technical) approach to be used. An
inspection, the most formal review type, is a technique where a document is checked for defects by each individual
and by the group, using sources and standards and following prescribed rules.
Scope
The Peer Review process area covers the practices for performing peer reviews on work products, e.g., testers
reviewing a requirements specification for testability. It also includes the practices for establishing the peer review
approach within a project. Project reviews (also known as management reviews) are outside the scope of this process
area. At TMMi maturity level 3 peer reviews are not yet fully integrated with the dynamic testing process, e.g., part of
the test strategy, test plan and test approach.
Typical data are product type, product size, type of peer review, number of reviewers, preparation
time per reviewer, length of the review meeting, number of (major) defects found, etc.
Examples of inappropriate use of peer review data include using the data to evaluate the
performance of people and using data for attribution.
Examples of peer review data that can be analyzed include the following:
Phase defect was injected
Preparation effort or rate versus expected effort or rate
Actual review effort versus planned review effort
Number of defects versus number expected
Types and severity level of defects detected
Number of defects versus effort spent
Causes of defects
Defect resolution impact
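As an illustration only (not prescribed by TMMi), the following sketch analyzes a few of the peer review data items listed above, such as preparation rate and defects found versus expected; all figures and threshold values are hypothetical.

```python
# Illustrative analysis of peer review data; all figures and thresholds are hypothetical.
reviews = [
    {"id": "REV-01", "pages": 25, "prep_hours": 5.0, "major_defects": 12, "expected_defects": 10},
    {"id": "REV-02", "pages": 40, "prep_hours": 3.0, "major_defects": 4,  "expected_defects": 16},
]

for r in reviews:
    prep_rate = r["pages"] / r["prep_hours"]          # pages prepared per hour
    defect_density = r["major_defects"] / r["pages"]  # major defects found per page
    found_vs_expected = r["major_defects"] / r["expected_defects"]
    # A very high preparation rate combined with few defects found may indicate
    # insufficient preparation rather than a high-quality work product.
    flag = "check preparation" if prep_rate > 10 and found_vs_expected < 0.5 else "ok"
    print(f'{r["id"]}: {prep_rate:.1f} pages/h, {defect_density:.2f} defects/page, '
          f'{found_vs_expected:.0%} of expected defects found -> {flag}')
```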
The organization identifies a standard set of work products that will undergo review, including test
deliverables
Each project selects the work products that will undergo review and the associated review type(s)
Peer review leaders and other participants will be trained for their role
Testers shall participate in reviews on development documents to address testability issues
The checklists are modified as necessary to address the specific type of work product and peer
review. The checklists themselves are reviewed by peers and potential users
Tools to support the peer review process are available, e.g., communication tools, data analysis
tools and peer review process tools
Examples of training topics for peer review leaders include the following:
Developing a peer review approach
Type of reviews
Peer review leader tasks and responsibilities
Leading and facilitating a meeting
Achieving buy-in for reviews
Peer review metrics
Participants in peer reviews receive training for their roles in the peer review process
Examples of training topics for peer review participants include the following:
Objectives and benefits of peer reviews
Types of reviews
Peer review roles and responsibilities
Peer review process overview
Peer review preparation
Document rules and checklists, e.g., regarding testability
Peer review meetings
Elaboration
Examples of measures used in monitoring and controlling the peer review process include the
following:
Number of peer reviews planned and performed
Number of work products reviewed compared to plan
Number and type of defects found during peer reviews
Schedule of peer review process activities (including training activities)
Effort spent on peer reviews compared to plan
Introductory Notes
Achieving the goals of TMMi levels 2 and 3 has had the benefits of putting into place a technical, managerial, and
staffing infrastructure capable of thorough testing and providing support for test process improvement. With this
infrastructure in place, a formal test measurement program can be established to encourage further growth and
accomplishment.
Test measurement is the continuous process of identifying, collecting, and analyzing data on both the test process
and the products being developed in order to understand and provide information to improve the effectiveness and
efficiency of the test processes and possibly also the development processes. Measurement and analysis methods
and processes for data collection, storage, retrieval and communication are specified to support a successful
implementation of a test measurement program. Note that a test measurement program has two focal areas: it
supports test process and product quality evaluation, and it supports process improvement.
In order to be successful, a test measurement program needs to be linked to the business objectives, test policy and
test strategy [Van Solingen and Berghout]. The business objectives are the starting point for defining test
measurement goals and metrics. From the business objectives, goals are derived for the organization’s standard test
process. When implemented successfully, the test measurement program will become an integral part of the test
culture, and measurement will become a practice adopted and applied by all test groups and teams. Test
measurement is the continuous process of identifying, collecting, and analyzing data in order to improve the test
process and product quality. It should help the organization improve planning for future projects, train its employees
more effectively, etc. Examples of test related measurements include test costs, number of test cases executed,
defect data and product measures such as mean time between failures.
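As a simple, hypothetical illustration of one such product measure, mean time between failures can be estimated from observed failure times; the data below is invented for the example.

```python
# Hypothetical illustration: estimating mean time between failures (MTBF)
# from observed cumulative failure times (in operating hours).
failure_times = [120.0, 310.0, 455.0, 720.0, 1010.0]

# Times between consecutive failures
intervals = [later - earlier for earlier, later in zip(failure_times, failure_times[1:])]
mtbf = sum(intervals) / len(intervals)
print(f"Observed MTBF: {mtbf:.1f} operating hours")
```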
The Test Measurement process area involves the following:
- Specifying the objectives of test measurement such that they are aligned with identified information needs and
business objectives
- Specifying measures, analysis and validation techniques as well as mechanisms for data collection, data storage,
retrieval, communication and feedback
- Implementing the collection, storage, analysis, and reporting of the data
- Providing objective results that can be used in making informed decisions and in taking appropriate actions.
It is suggested at lower TMMi levels that an organization should begin to collect data related to the testing process,
e.g., test performance indicators within Test Policy and Strategy. It is recommended that an organization at the lower
TMMi levels begin to assemble defect-related measurements in the context of a simple defect repository. When
moving towards TMMi level 4, an organization will realize the need for additional measures to achieve greater levels
of test process maturity. In anticipation of these needs, TMMi calls for the establishment of a formal test measurement
program as a goal to be achieved at TMMi level 4. For most organizations it may be practical to implement such a
test measurement program as a supplement to a general measurement program.
At TMMi level 4 and above the test measurement activities are at the organizational level addressing organizational
information needs. However, test measurement will also support individual projects by providing data, e.g., for
objective planning and estimation. Because the data is shared widely across projects, it is often stored in
an organization-wide test measurement repository.
Scope
The process area Test Measurement addresses the measurement activities at an organizational level. For
organizations that have multiple test groups or teams, test measurement will be performed identically across all test
groups as part of one overall test measurement program. Test Measurement covers practices such as defining
measurement objectives, creating the test measurement plan, gathering data, analyzing data and reporting the
results. It will also encompass organizational test measurement activities that were defined at lower TMMi levels,
such as test performance indicators (a specific type of test measure) from Test Policy and Strategy and generic
practice 3.2 Collect improvement information. This process area also will provide support to the measurement
activities for the other TMMi level 4 process areas: Product Quality Evaluation and Advanced Reviews. The
measurement activities at the project level, e.g., the process area Test Monitoring and Control, will remain at the
project level but will interface with the organizational Test Measurement process area.
2. Document the test measures including their related test measurement objective
3. Specify operational definitions in exact and unambiguous terms for the identified test measures
4. Review and update the specification of test measures
Proposed specifications of the test measures are reviewed and agreed for their appropriateness
with potential end users and other relevant stakeholders and updated as necessary.
3. Specify administrative procedures for analyzing the data and communicating the results
4. Review and update the proposed content and format of the specified analysis procedures and
communication reports
5. Update test measures and test measurement objectives as necessary
Just as measurement needs drive data analysis, clarification of analysis criteria can affect
measurement. Specifications for some measures may be refined further based on the
specifications established for data analysis procedures. Other measures may prove to be
unnecessary, or a need for additional measures may be recognized.
3. Define corrective and improvement actions based on the analyzed test measurement results
Elaboration
The plan for performing the test measurement process can be included in (or referenced by) the test
process improvement plan, which is described in the Test Organization process area, or the
organization’s quality plan.
Elaboration
Examples of activities for stakeholder involvement include:
Eliciting information needs and objectives
Establishing procedures
Reviewing and agreeing on measurement definitions
Assessing test measurement data
Providing meaningful feedback to those responsible for providing the raw data
on which the analysis and results depend
Introductory Notes
Product Quality Evaluation involves defining the project’s quantitative product quality goals and establishing plans to
achieve those goals. It also involves defining quality metrics for evaluating (work) product quality. Subsequently the
plans, products, activities and product quality status are monitored and adjusted when necessary. The overall
objective is to contribute to satisfying the needs and desires of the customers and end users for quality products.
The practices of the Product Quality Evaluation build on the practices of process areas at TMMi maturity levels 2 and
3. The Test Design and Execution, Test Monitoring and Control and Non-functional Testing process areas establish
and implement key test engineering and measurement practices at the project level. Test Measurement establishes
a quantitative understanding of the ability of the project to achieve desired results using the organization’s standard
test process.
In this process area quantitative goals are established for the products based on the needs of the organization,
customer, and end users. In order for these goals to be achieved, the organization establishes strategies and plans,
and the projects specifically adjust their defined test process to accomplish the quality goals.
Scope
The Product Quality Evaluation process area covers the practices at the project level for developing a quantitative
understanding of the product that is being developed and achieving defined and measurable product quality goals.
Both functional and non-functional quality attributes are to be considered when defining the goals and practices for
this process area. Product Quality Evaluation is strongly supported by the Test Measurement process area that
provides the measurement infrastructure.
Sub-practices
1. Review the organization’s objectives for product quality
The intent of this review is to ensure that the project stakeholders understand the broader
business context in which the project will need to operate. The project’s objectives for product
quality are developed in the context of these overarching organizational objectives.
2. Identify and select stakeholders that need to contribute to the identification of the project’s product
quality needs
3. Elicit product quality needs using input from stakeholders and other sources
Examples of when product quality needs may need to be revised include the
following:
New or changing requirements
Evolved understanding of product quality needs by customers and end users
Lessons learned on product quality issues within the project
2. Prioritize the identified set of product quality attributes based on the priorities of the product quality
needs
3. Define quantitative product goals for each of the selected product quality attributes
To support this sub-practice selected product quality attributes are often broken down into product
quality sub-attributes. For each of the quality goals, measurable numeric values based on the
required and desired values are identified [Gilb]. The quality goals will act as acceptance criteria
for the project.
4. Assess the capability of the project’s defined process to satisfy the product quality goals
5. Define interim quantitative product quality goals for each lifecycle phase and corresponding work
products, as appropriate, to be able to monitor progress towards achieving the project’s product
quality goals
The interim quality goals will act as exit criteria for the appropriate lifecycle phases.
6. Allocate project product quality goals to subcontractors, as appropriate
7. Specify operational definitions in exact and unambiguous terms for the identified (interim) product
quality goals
8. Establish traceability between the project’s quantitative product quality goals and the project’s
product quality needs
9. Revise the product quality goals as appropriate
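Purely as an illustration (the quality attributes, measures and numeric values below are hypothetical), quantitative product quality goals with required and desired values in the sense of [Gilb] could be recorded as follows and later used as acceptance criteria.

```python
# Hypothetical quantitative product quality goals; attributes, measures and values are illustrative only.
quality_goals = {
    "reliability": {"measure": "mean time between failures (hours)", "required": 500, "desired": 1000},
    "performance": {"measure": "95th percentile response time (s)",  "required": 3.0, "desired": 1.5},
    "usability":   {"measure": "task completion rate (%)",           "required": 90,  "desired": 98},
}

for attribute, goal in quality_goals.items():
    print(f'{attribute}: {goal["measure"]} - required {goal["required"]}, desired {goal["desired"]}')
```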
SP 1.3 Define the approach for measuring progress toward the project’s product
quality goals
The approach is defined for measuring the level of accomplishment toward the defined set of product
quality goals.
Refer to the Test Measurement process area for how to define measures.
Example work products
1. Measurement approach for product quality
2. Definitions of (test) measurement techniques to be used
Sub-practices
1. Select the (test) measurement techniques to be used to measure the progress toward achieving
the (interim) product quality goals
2. Define the points in the lifecycle, e.g., the test levels, for application of each of the selected
techniques to measure product quality
3. Specify data collection and storage procedures
Refer to the Test Measurement process area for more information on data collection and storage
procedures.
4. Select analysis techniques to be used to analyze the product quality measurement data
5. Define the supporting (test) measurement tools to be used
6. Identify any significant constraints regarding the approach being defined
7. Review and obtain agreement with stakeholders on the product quality measurement approach
8. Revise the product quality measurement approach as appropriate
Requirements documents
Design documents
Interface specifications
Prototypes
Code
Individual components
2. Perform product quality measurements on the product in accordance with the selected (test)
measurement techniques and the defined approach
3. Collect product quality measurement data as necessary
4. Review the product quality measurement data to ensure quality
5. Revise the product quality measurement approach and product quality measures as appropriate
SP 2.2 Analyze product quality measurements and compare them to the product’s
quantitative goals
The (interim) product quality measurements are analyzed and compared to the project’s (interim)
product quality goals on an event-driven and periodic basis.
Example work products
1. Analysis results
2. Product quality measurement report
3. Documented product quality review results, e.g., minutes of the meetings
4. List of product quality issues needing corrective actions
Sub-practices
1. Conduct initial analysis on the (interim) product quality measurements
Refer to the Test Measurement process area for more information on data analysis.
2. Compare the product quality measures against the project’s product quality goals, and draw
preliminary conclusions
Metrics that indicate low product quality should be subject to further scrutiny
3. Conduct additional product quality measurements and analysis as necessary, and prepare results
for communication
4. Communicate product quality measurement results and the level of achievement of (interim)
product quality quantitative goals to relevant stakeholders on a timely basis
5. Review the results of product quality measurements and the level of achievement of (interim)
product quality quantitative goals with relevant stakeholders
6. Identify and document significant product quality issues and their impact
7. Define corrective actions to be taken based on the analyzed product quality measurement results
8. Manage corrective actions to closure
Refer to SG 3 Manage corrective actions to closure from the process area Test Monitoring and
Control for more information on managing corrective actions to closure
9. Revise the product quality goals and measurement approach as appropriate
Schedule of data collection, analysis and reporting activities related to the product quality
goals
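The comparison of measured values against the (interim) product quality goals could, purely as an illustrative sketch with hypothetical measures and values, look as follows.

```python
# Illustrative comparison of product quality measurements against quantitative goals.
# All measure names, values and goal directions are hypothetical.
goals = {
    "mtbf_hours":        {"target": 500, "higher_is_better": True},
    "p95_response_time": {"target": 3.0, "higher_is_better": False},
}
measurements = {"mtbf_hours": 430, "p95_response_time": 2.4}

issues = []
for name, goal in goals.items():
    value = measurements[name]
    met = value >= goal["target"] if goal["higher_is_better"] else value <= goal["target"]
    print(f"{name}: measured {value}, target {goal['target']} -> {'met' if met else 'NOT met'}")
    if not met:
        issues.append(name)  # candidates for corrective action

print("Product quality issues needing corrective action:", issues or "none")
```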
Introductory Notes
The definition of testing clearly states that “it is a process that encompasses all lifecycle activities, both static and
dynamic, concerned with planning, preparation and evaluation of software products and related work products”. This
view of testing which originates from the evolutionary test model [Gelperin and Hetzel] holds the position that testing
should cover both validation and verification and include both static and dynamic analysis. In line with this view of
testing, reviews are an intrinsic part of testing, serving as a verification, validation and static analysis technique. At
TMMi level 4 this view is supported by a coordinated approach to manage peer reviews (static testing) and dynamic
testing. A coordinated test approach covering both static and dynamic testing will typically result in both efficiency
and effectiveness benefits. This expands upon the peer review process at TMMi level 3, where peer reviews are
performed but are not coordinated with dynamic testing.
Peer reviews, as an isolated process, are an effective way to identify defects and product risks before the actual
product is built. When peer reviews and dynamic testing are coordinated, the early review results and data are used
to influence the test approach. Building on the testing principle of defect clustering [Graham], the types and quantity
of defects found during reviews can help to select the most effective tests, and may also influence the test approach
or even the test objectives. Typically, at project milestones, the test approach is re-evaluated and updated. Peer
review data should be one of the drivers for this update.
At TMMi level 4, the organization sets quantitative goals for software products and related work products. Peer
reviews play an essential role in achieving these goals. Whereas at TMMi level 3 peer reviews are mainly performed
to find defects, the emphasis is now on measuring product (document) quality. Building on the experiences of
performing peer reviews at TMMi level 3, the review practices are enhanced to include practices like sampling,
applying exit criteria, and prescribing rules. To improve the reliability of the measurements, advanced defect finding
techniques such as perspective-based reading [Veenendaal] are practiced. The measurement results are also used
by (project) management to control product quality early in the lifecycle (see Product Quality Evaluation for more
information on measuring and managing product quality).
Scope
The Advanced Review process area builds on the practices of the TMMi level 3 Peer Reviews process area. It covers
the practices for establishing a coordinated test approach between peer reviews and dynamic testing and the use of
peer review results and data to optimize the test approach. At TMMi maturity level 4, peer reviews are fully integrated
with the dynamic testing process, e.g., part of the test strategy, test plan and test approach. The Advanced Review
process area also covers the practices that facilitate the shift from peer reviews as a defect detection technique to a
product quality measurement technique in line with the process area Product Quality Evaluation. These practices
include document sampling, definition of rules, strict exit criteria and perspective-based reading.
8. Estimate the effort and costs required to perform the coordinated test approach
9. Review the coordinated test approach with the stakeholders
10. Document the coordinated test approach as part of a (master) test plan
11. Obtain commitment to the coordinated test approach with management
12. Revise the coordinated test approach as appropriate
2. Analyze the identified product risks using the predefined parameters, e.g., likelihood and impact
Note that both newly identified product risks and previously identified product risks are subject to
the analysis.
3. (Re-)categorize and (re-)group the product risks according to the defined risk categories
4. (Re-)prioritize the product risks for mitigation
5. Document the rationale for the updates to the project’s product risk list
6. Review and obtain agreement with stakeholders regarding the completeness, category and priority
level of the revised product risks
7. Revisit the set of product risks based on peer review measurement data at project milestones and
on an event-driven basis
Review measurement data is collected and used to tune the dynamic test approach, improve the
review process, and predict product quality
Introductory Notes
In line with the evolutionary test model [Gelperin and Hetzel], testing at TMMi level 5 completes its journey from being
detection-focused to being a prevention-focused process. In line with this view of testing, testing is focused on the
prevention of defects that otherwise might have been introduced rather than just their detection during testing
activities. Defect Prevention involves analyzing defects that were encountered in the past, identifying causes and
taking specific actions to prevent the occurrence of those types of defects in the future. The selection of defects to
be analyzed should be based on various factors including risk. Focus needs to be given to those areas where
prevention of defects has the most added value (usually in terms of reduced cost or risk) and/or where the defects
are most critical. Attention should be given to both existing types of defects as well as new types of defects such as
defects that are new to the organization but are known to occur in the industry. Defect Prevention activities are also
a mechanism for spreading lessons learned across the organization, e.g., across projects.
Defect Prevention improves quality and productivity by preventing the introduction of defects into a product. Industry
data shows that reliance on detecting defects after they have been introduced is usually not cost effective [Boehm].
It is usually more cost effective to prevent defects from being introduced by integrating Defect Prevention practices
into each phase of the project. At TMMi level 5, an organization will know which is more cost effective, prevention or
detection of a certain type of defect. Many process improvement models emphasize the use of causal analysis as a
means of continually improving the capability of the process. Examples of methods for causal analysis are specific
causal analysis meetings, using tools such as fault tree analysis and cause/effect diagrams, project retrospectives,
causal analysis during formal reviews, and usage of standard defect classifications.
Defect Prevention is a mechanism to evaluate the complete development process and identify the most effective
improvements regarding product quality. As part of the Defect Prevention practices, trends are analyzed to track the
types of defects that have been encountered and where they were introduced, and to identify defects that are most
likely to reoccur. A (test) measurement process is already in place, having been introduced at TMMi level 4. The available
measures can be used, though some new measures may be needed to analyze the effects of the process changes.
Based on an understanding of the organization’s defined standard development and test process and how it is
implemented, the root causes of the defects and the implications of the defects for future activities are determined.
Specific actions are defined and taken to prevent reoccurrence of the identified defects. Defect Prevention is an
essential part of a mature test process. Defects found during development, testing or even during production must
be systematically analyzed, prioritized and action must be undertaken to prevent them from occurring in the future.
The test organization coordinates the Defect Prevention activities. This should be done in close cooperation with
other disciplines, e.g., requirements engineering, system engineering and/or software development, as improvement
actions will often affect other disciplines.
Scope
The process area Defect Prevention addresses the practices for identifying and analyzing common causes of defects,
and defining specific actions to remove the common causes of those types of defects in the future, both within the
project, and elsewhere in the organization. All defects, whether found during development, testing or in the field, are
within the scope of the process area. Process defects that have resulted in outliers or in failure to meet expected process
performance are also within the scope. Since Defect Prevention needs measurement data and measurement
processes as an input, Defect Prevention builds on the TMMi level 4 measurement practices and available
measurement data regarding development, testing and product quality.
2. Review and agree the defined defect selection parameters with relevant stakeholders
3. Revisit defect classification scheme
A consistent defect classification allows statistics to be obtained regarding improvement areas to
be analyzed across the organization. The defects to be analyzed will be recorded from all lifecycle
phases, including maintenance and operation. Standards such as [IEEE 1044] allow a common
classification of anomalies leading to an understanding of the project stages when faults are
introduced, the project activities occurring when faults are detected, the cost of rectifying the
faults, the cost of failures, and the stage where the defect was raised versus where it should have
been found (also known as defect leakage) [ISTQB ITP].
IEEE 1044 distinguishes the following four phases in the incident/defect lifecycle:
Recognition - When the incident is found
Investigation - Each incident is investigated to identify all known related issues and
proposed solutions
Action - A plan of action is formulated on the basis of the investigation (resolve, retest)
Disposition - Once all actions required are complete the incident shall be closed
In each phase a number of attributes have been defined by the standard that can be used
for classification. IEEE 1044 provides comprehensive lists of classifications and related
data items, such as the following:
During recognition the following classifications (including related data items) are
provided: project activity, phase, suspected cause, repeatability, system, product
status, etc.
During investigation the following classifications (including related data items) are
provided: actual cause, defect source, defect type, etc.
The defect classification scheme, typically defined initially in the Test Design and Execution
process area and possibly enhanced at later stages, e.g., as part of the Test Measurement
process area, will be re-used in this sub-practice. The defect classification scheme is revisited
from a defect prevention perspective: is all defect data recorded that is needed for an effective and
efficient defect prevention process?
Note that the revisited defect classification scheme shall now be applied during defect logging
activities such as SP 3.3 Report test incidents (process area Test Design and Execution) and SP
5.2 Report non-functional test incidents (process area Non-functional Testing).
4. Review and agree revisited defect classification scheme with relevant stakeholders
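As an illustration only, a defect record using classification attributes loosely based on the IEEE 1044 categories mentioned above could look as follows; all field names and values are hypothetical.

```python
# Illustrative defect record; classification attributes are loosely based on the
# IEEE 1044 recognition and investigation categories named above, values are hypothetical.
defect = {
    "id": "DEF-1234",
    "recognition": {
        "project_activity": "system test",
        "phase_detected": "test execution",
        "suspected_cause": "requirements ambiguity",
        "repeatability": "always",
    },
    "investigation": {
        "actual_cause": "missing requirement",
        "defect_source": "requirements specification",
        "defect_type": "omission",
        "phase_introduced": "requirements",  # supports defect leakage analysis
    },
    "severity": "major",
}

# Defect leakage: the defect was introduced in requirements but only found in system test.
print(defect["investigation"]["phase_introduced"], "->", defect["recognition"]["phase_detected"])
```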
Examples of activities to be carried out during the preparation for defect selection include
the following:
Establish a comprehensive list of all defects. Defect reports can originate from static
testing, dynamic testing, actual usage in operation and from process performance
outliers
Make an initial selection from the defect repository. During this activity, the defects that
have a small likelihood of being selected, for instance minor defects, are removed from
the list. Defects that adhere to the defect selection parameters are identified
Perform an initial analysis on the defects, e.g., to identify defect types that have high
occurrence, using techniques such as Pareto Analysis and Histograms
4. The stakeholders decide which defects (or defect types) will be analyzed in detail. The defect
selection parameters and other information prepared are used to make this decision. Attention
should be given to both existing types of defects as well as new types of defects
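As a sketch of the initial analysis mentioned above, a simple Pareto analysis of defect types (with invented counts) could be performed as follows to identify the defect types with the highest occurrence.

```python
from collections import Counter

# Hypothetical defect data; each entry is the defect type recorded at classification time.
defect_types = ["interface", "logic", "interface", "data", "interface",
                "logic", "documentation", "interface", "data", "logic"]

counts = Counter(defect_types)
total = sum(counts.values())

# Pareto view: rank defect types by frequency and report the cumulative share.
cumulative = 0
for defect_type, count in counts.most_common():
    cumulative += count
    print(f"{defect_type:<14} {count:>3}  cumulative {cumulative / total:.0%}")
# The few types at the top of the list are prime candidates for detailed causal analysis.
```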
Examples of supporting methods to determine root causes include the following [ISTQB ITP]:
Cause/effect diagrams
Ishikawa fishbone diagrams
Fault tree analysis
Process analysis
Use of standard defect classifications [IEEE 1044]
Checklists
FMEA (Failure Mode Effects Analysis)
Hardware Software Interaction Analysis
3. Define solutions
Define the solution(s) to the common cause based on the identified type(s) of solutions.
Potentially appropriate methods, tools and techniques are selected as part of the solutions.
Methods, tools and techniques can help the organization define coherent solutions that prevent
the defects from occurring again. Methods, tools and techniques can deliver solutions that are not
yet used in or known by the organization.
It is also possible that best practices from within the organization are part of the solution. Best
practices performed in a specific project or specific part of the organization can support the
organization in defining coherent solutions that prevent defects from reoccurring.
4. Validate proposed solutions
Validate the proposed solution to determine if the solutions prevent the selected defects from
occurring again.
(Manual) simulation
Solution refinements
Cost of the analysis and resolution activities
Measures of changes to the performance
Elaboration
Examples of activities for stakeholder involvement include the following:
Defining defect selection parameters
Defining defect classification schemes
Selecting defects for analysis
Conducting causal analysis
Validating proposed solutions
Defining action proposals
Introductory Notes
Quality Control consists of the procedures and practices employed to ensure that a work product or deliverable
conforms to standards or requirements. In a broad view, the Quality Control procedures and practices can also be
applied to the processes creating the product, thereby creating a feedback loop in line with the prevention-oriented
and optimizing approach of TMMi level 5. At TMMi level 5, organizations use Quality Control to drive the testing
process.
Process Quality Control is supported by statistical techniques and methodologies. The basis for process quality
control is viewing the testing process as a series of steps, each of which is a process in itself with a set of inputs and
outputs. Ideally the output of each step is determined by rules, procedures and/or standards that prescribe how the
step is to be executed. Practically speaking the outcome of a step may be different than expected. The differences
are caused by variations. Variations may be due to human error, influences outside of the process, unpredictable
events such as hardware/software malfunctions and so forth. If there are many unforeseen variations impacting the
process step, then the process will be unstable, unpredictable, and uncontrollable. When a process is unpredictable
then it cannot be relied upon to give quality results.
An organization that controls its processes quantitatively will be able to do the following:
Determine the stability of a process
Identify the process performance within the defined natural boundaries
Identify unpredictable processes
Identify the improvement opportunities in existing processes
Identify the best performing processes
Process quality control involves establishing objectives for the performance of the standard test process, which is
defined in the Test Lifecycle and Integration process area. These objectives should be based on the defined test
policy. As already stated in the Test Lifecycle and Integration process area, multiple standard test processes may be
present to address the needs of different application domains, test levels, lifecycle models, methodologies, and tools
in use in the organization. Based on the measurements taken on test process performance from the projects, analysis
takes place and adjustments are made to maintain test process performance within acceptable limits. When the test
process performance is stabilized within acceptable limits, the defined test process, the associated measurements
and the acceptable limits for measurements are established as a baseline and used to control test process
performance statistically. The test process capability of the organization’s standard test process, i.e., the test process
performance a new project can expect to attain, is now fully understood and known. As a result, the deviations from
these expectations can be acted upon in a project early and consistently to ensure that the project performs within
the acceptable limits. The test process capability can be used to establish unambiguous quantitative test process
performance objectives for the project.
Product quality control builds on operational profiles [Musa] and usage models of the product in its intended
environment to make statistically valid inferences resulting in a representative sample of test cases. This approach,
especially useful at the system test level, uses statistical testing methods to predict product quality based on this
representative sample. In other words, when testing a subset of all possible usages as represented by the usage or
operational profile, the test results can serve as the basis for conclusions about the product’s overall performance.
At TMMi level 5, an organization is able to quantify confidence levels and trustworthiness because the infrastructure
has been provided to reflect the most frequently requested operations or paths through an operational profile using
historical data. Using test data from statistical testing, models such as reliability growth models are built to predict
the confidence level and trustworthiness of the system. Confidence level, usually expressed as a percentage,
provides information as to the likelihood that the product is defect free. Trustworthiness is defined as the probability
that there are no defects in the product that will cause the system to fail. Both the level of confidence and
trustworthiness are typically used as exit criteria when applying statistical testing. At TMMi level 5 these factors are
used in combination and are usually the main drivers to determine when to stop testing.
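As a rough sketch of how a statistically representative sample of test cases can be drawn from an operational profile, the example below selects operations in proportion to their assumed usage probabilities; the operations and probabilities are hypothetical.

# Sampling test cases in proportion to a hypothetical operational profile (Python)
import random

operational_profile = {          # operation -> assumed probability of occurrence in the field
    "search": 0.55,
    "place_order": 0.25,
    "update_account": 0.15,
    "export_report": 0.05,
}

operations = list(operational_profile)
weights = [operational_profile[op] for op in operations]

# Frequent operations are exercised most often in the statistical test sample
sample = random.choices(operations, weights=weights, k=100)
for op in operations:
    print(op, sample.count(op))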
Note that addressing product quality control and statistical testing requires a great deal of expertise with statistical
techniques including modeling, usage modeling, statistics, testing, and measurements. Specialists must be selected
and trained to become leaders in this area of testing.
Scope
The process area Quality Control addresses the practices for establishing a statistically controlled test process
(process quality control), and testing based on statistical methods and techniques (product quality control). Process
quality control strongly builds on the deployed measurement practices from the Test Measurement process area from
TMMi level 4. Product quality control builds on the deployed practices from the Product Quality Evaluation process
area from TMMi level 4. Both types of quality control make use of available measurement data regarding the test
process and product quality from the TMMi level 4 process areas.
3. Define the organization’s quantitative objectives for test process performance in cooperation with
relevant stakeholders
Objectives may be established directly for test process measurements (e.g., test effort and defect
removal effectiveness) or indirectly for product quality measurements (e.g., reliability) that are the
result of the test process.
4. Define the priorities of the organization’s quantitative objectives for test process performance in
cooperation with relevant stakeholders, e.g., customers and end users
5. Resolve conflicts among the test process performance objectives (e.g., if one objective cannot be
achieved without compromising another objective)
6. Revise the organization’s quantitative objectives for test process performance as necessary
Sub-practices
1. Collect and analyze measurements from projects
Refer to the Test Measurement process area for more information on collecting and analyzing
data.
2. Establish and maintain the organization’s test process performance baselines from the collected
measurements and analyses
Test process performance baselines (typically including minimum and maximum tolerances) are
derived by analyzing the collected measures to establish a distribution and range of results that
characterize the expected performance for selected test processes when used on an individual
project in the organization.
3. Review the validity and get agreement with relevant stakeholders about the test process
performance baselines
4. Make the test process performance baselines available across the organization
The test process performance baselines are used by the projects to estimate the upper and lower
boundaries for test process performance. (Refer to SP 1.4 Apply statistical methods and
understand variations for more information on upper and lower boundaries of test process
performance.)
5. Revise the set of test process performance baselines as appropriate
3. Calculate the boundaries of test process performance for each measured attribute
Examples of techniques for analyzing the reasons for causes of variation include
the following:
Cause and effect (fishbone) diagrams
Designed experiments
Control charts (applied to input or underlying test sub-processes)
Subgrouping
Note that some anomalies may simply be extremes of the underlying distribution rather than
problems.
Refer to the Defect Prevention process area for more information about analyzing the cause of an
anomaly.
6. Determine what corrective action should be taken when causes of variations are identified
Refer to the Test Process Optimization process area for more information about taking corrective
action.
7. Recalculate the upper and lower boundaries for each measured attribute of the selected test
processes as necessary
8. Record statistical management data in the organization’s measurement repository
Refer to the Test Measurement process area for more information about managing and storing
data, measurement definitions, and results.
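A minimal sketch of deriving upper and lower boundaries from baseline measurements, and of flagging values that fall outside them, is given below; the use of the mean plus or minus three standard deviations is a common control-charting convention assumed here, not a TMMi requirement, and the figures are illustrative.

# Deriving test process performance boundaries from baseline data (Python)
import statistics

# Baseline measurements of a test process attribute, e.g., defect removal effectiveness per project (illustrative)
baseline = [0.82, 0.86, 0.79, 0.88, 0.84, 0.81, 0.85, 0.83]

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
upper, lower = mean + 3 * sigma, mean - 3 * sigma   # natural process boundaries

# Flag new measurements that fall outside the boundaries as potential assignable causes of variation
for value in (0.84, 0.65, 0.87):
    status = "within boundaries" if lower <= value <= upper else "investigate cause of variation"
    print(f"{value:.2f}: {status}")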
3. For each test process, documentation of actions needed to address deficiencies in its process
performance
Sub-practices
1. Compare the boundaries of the measured attributes to the test process performance objectives
This comparison provides an appraisal of the test process capability for each measured attribute
of a test process.
2. Periodically review the performance of each selected test process and its capability to be
statistically managed, and appraise progress towards achieving the test process performance
objectives
3. Identify and document test process capability deficiencies
4. Determine and document actions needed to address test process capability deficiencies
Examples of types of reliability growth models include the following [Musa and
Ackerman]:
Static model, which is best applied to unchanging software with an unchanged
operational profile
Basic model, which is useful for modeling failure occurrences for software
being tested and continuously debugged
Logarithmic Poisson model, which is best applied when it is assumed that some
defects are more likely to cause failures, and that on average the improvement
in failure intensity with each correction decreases exponentially as the
corrections are made.
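A simplified sketch of the failure-intensity functions commonly associated with the basic and Logarithmic Poisson models [Musa and Ackerman] is shown below; the parameter values are illustrative assumptions, not calibrated figures.

# Failure intensity as a function of experienced failures (Python)
import math

lambda_0 = 10.0   # assumed initial failure intensity (failures per unit of execution time)
nu_0 = 100.0      # assumed total expected number of failures (basic model)
theta = 0.02      # assumed failure intensity decay parameter (Logarithmic Poisson model)

def basic_model(mu):
    """Failure intensity after mu failures have been experienced (basic execution time model)."""
    return lambda_0 * (1 - mu / nu_0)

def log_poisson_model(mu):
    """Failure intensity after mu failures (Logarithmic Poisson execution time model)."""
    return lambda_0 * math.exp(-theta * mu)

for mu in (0, 25, 50, 75):
    print(mu, round(basic_model(mu), 2), round(log_poisson_model(mu), 2))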
Elaboration
Examples of measures used in monitoring and controlling the Quality Control process include the
following:
Trends in the organization’s test process performance with respect to changes in work
products and task attributes (e.g., test effort, lead time and product quality)
Profile of test processes under statistical management (e.g., number planned to be under
statistical management, number currently being statistically managed, and number that are
statistically stable)
Number of causes of variation identified and resolved
The degree to which actual testing experience has become a good representation of
expected usage
Reliability trends
Introductory Notes
At the highest level of the TMMi, the test process is subject to continuous improvement across projects and across
the entire organization. The test process is quantified and can be fine-tuned in order for capability growth to become
an ongoing process. An organizational infrastructure exists to support this continuous growth. This infrastructure,
which consists of policies, standards, training facilities, tools and organizational structures, has been put in place
through goal achievement processes that constitute the TMMi hierarchy. Test Process Optimization is in essence
about developing a system to continuously improve testing. Optimizing the test process involves the following:
Establishing test process assessment and improvement procedures with responsibilities assigned from a
leadership perspective
Identifying testing practices that are weak and those that are strong and suggesting areas for process asset
extraction and re-use
Deploying incremental and innovative improvements that measurably improve the organization’s test processes
and technologies
Selecting and providing best practices to the organization
Continuously evaluating new test-related tools and technologies for adoption
Supporting technology and knowledge transfer
Re-use of high quality test assets
Continuously improving the testing process involves proactively and systematically identifying, evaluating and
implementing improvements to the organization’s standard test process and the projects’ defined processes on a
continuous basis. Test process improvement activities are often also needed as a result of a changing environment,
e.g., the business context, the test environment itself or a new development lifecycle. All of this is done with higher-
level management sponsorship. Training and incentive programs are established to enable and encourage everyone
in the organization to participate in test process improvement activities. Test improvement opportunities are identified
and evaluated for potential return on investment to the organization using business goals and objectives as a point
of reference. Pilots are performed to assess, measure and validate the test process changes before they are
incorporated into the organization’s standard process.
To support Test Process Optimization the organization typically has established a group, e.g., a Test Process Group,
that works with projects to introduce and evaluate the effectiveness of new testing technologies (e.g., test tools, test
methods, and test environments) and to manage changes to existing testing technologies. Particular emphasis is
placed on technology changes that are likely to improve the capability of the organization’s standard test process (as
established in the Test Lifecycle and Integration process area). By maintaining an awareness of test-related
technology innovations and systematically evaluating and experimenting with them, the organization selects
appropriate testing technologies to improve the quality of its products and the productivity of its testing activities.
Pilots are performed to assess new and unproven testing technologies before they are incorporated into standard
practice.
Organizations now fully realize that both test processes and testware are corporate assets and that those of high
quality should be documented and stored in a process repository in a format that is modifiable for re-use in future
projects. Such a repository, possibly already established less comprehensively at TMMi level 3, is often called a test
process asset library. At TMMi level 3 some re-use of testware across projects may already take place; however, re-
use of test assets becomes a major goal at TMMi level 5. Note that test process re-use in this context means the use
of one test process description to create another test process description.
Scope
The process area Test Process Optimization addresses the practices for continuously identifying test process
improvements, evaluating and selecting new testing technologies and deploying them in the organization’s standard
test process, including planning, establishing, monitoring, evaluating and measuring the test improvement actions. It
also covers the re-use of high quality test assets across the organization. This process area complements and
extends the processes and practices defined by the Test Organization and Test Lifecycle and Integration process
areas at TMMi level 3.
Examples of sources for test process improvement proposals include the following:
Findings and recommendations from regular test process assessments (Refer to the Test
Organization process area at TMMi level 3 for more information on test process assessments.)
Note that at TMMi level 5 test process assessments, both formal and informal, are typically
performed more frequently.
Analysis of data about customer/end-user problems as well as customer/end-user satisfaction
Analysis of data about product quality and test process performance compared to the
objectives
Analysis of data to determine common defect causes, e.g., from Defect Prevention
Operational product data
Measured effectiveness and efficiency of test process activities
Lessons learned documents (e.g., test evaluation reports)
Spontaneous ideas from managers and staff
Project retrospective meetings
Test tool evaluations (Test tools are regularly evaluated regarding achievement of their defined
objectives.)
Refer to the Test Organization process area for more information about test process improvement
proposals.
2. Analyze the costs and benefits of test process improvement proposals as appropriate
Test process improvement proposals that do not have an expected positive return on investment
are rejected.
Examples of criteria for evaluating costs and benefits include the following:
Contribution towards meeting the organization’s product quality and test
process performance objectives
Effect on mitigating identified test project and product risks
Ability to respond quickly to changing circumstances
Effect on related (test) processes and associated assets
Cost of defining and collecting data that supports the measurement and
analysis of the test process proposal
Expected life span of the results of implementing the proposal
4. Estimate the costs, effort and schedule required for deploying each process improvement
proposal
5. Identify the process improvement proposal to be piloted prior to organization-wide deployment
Alternatives to piloting are considered as appropriate, e.g., controlled experiments, simulations,
case studies.
6. Document the results of the evaluation of each process improvement proposal
2. Analyze potential innovative and new testing technologies, e.g., new test tools or methods, to
understand their effects on test process elements and to predict their influence on the process
As part of the analysis consider constraints, prioritization of possible features, hardware/software
issues, suppliers’ track records, suppliers’ presentations, and integration with existing technologies
and processes.
3. Analyze the costs and benefits of potential new testing technologies
Test process improvement proposals that do not have an expected positive return on investment
are rejected. A major criterion is the expected contribution of the new testing technology toward
meeting the organization’s product quality and test process performance objectives.
Both short-term and long-term recurring (maintenance) costs should be taken into account, as
well as the compliance of the new testing technology with the test policy.
As part of this sub-practice, alternative solutions, e.g., a test process change, that provide the
same benefits but at lower costs are also considered.
4. Create an improvement proposal for those new testing technologies that could result in improving
the organization’s way of working
As part of the improvement proposal, estimate the cost, effort and schedule required for deploying
the new testing technology.
5. Identify the new testing technologies to be piloted before organization-wide deployment
Alternatives to piloting are considered, e.g., controlled experiments, simulations, case studies.
6. Document the results of the evaluation of each new testing technology
5. Perform each pilot in an environment that is sufficiently representative of the environment in which
the new testing technology will be deployed eventually
Allow for additional resources for the pilot project, as necessary.
6. Track the pilots against their plans
7. Review and document the results of the pilots
Refer to SP 1.2 Pilot test process improvement proposals for more details on this sub-practice.
2. Identify approaches to address potential problems in deploying each test process and testing
technology improvement
When defining the plan, changes and stability for the organization and project must be carefully
balanced. A risk assessment may be used to identify the potential problems. The lifecycle model
being used (e.g., sequential, iterative, agile) will influence the frequency cycle for changes in
process that will be acceptable to projects.
3. Determine change management activities that are required to successfully deploy the test
improvements
4. Establish objectives and measures for confirming the value of each test process and testing
technology improvement with respect to the organization’s test performance objectives
Examples of measures for determining the value of a test process and testing
technology improvement include the following:
Return on investment
Payback period
Measured improvement in product quality
Measured improvement in the project’s test process performance
Number and type of project and product risks mitigated
Refer to the Test Measurement process area for more information on establishing measures and
the measurement and analysis process.
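As a simple, assumed illustration of the return-on-investment and payback-period measures listed above, the sketch below uses hypothetical figures; an organization would substitute its own measured costs and benefits.

# Illustrative ROI and payback-period calculation for a test improvement (Python)
deployment_cost = 40000.0    # assumed cost of deploying the improvement (tooling, training, effort)
annual_benefit = 60000.0     # assumed measured annual benefit (e.g., reduced rework and test effort)

roi = (annual_benefit - deployment_cost) / deployment_cost   # first-year return on investment
payback_months = deployment_cost / (annual_benefit / 12)     # months until the investment is recovered

print(f"ROI (first year): {roi:.0%}")
print(f"Payback period: {payback_months:.1f} months")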
5. Document the plan for deploying each test process and testing technology improvement
6. Review and get agreement with relevant stakeholders on the plan for deploying each test process
and testing technology improvement
7. Revise the plan for deploying each test process and testing technology improvement as necessary
- Identifying and documenting lessons learned and problems encountered during the
deployment
- Identifying and documenting new test process and testing technology improvement proposals
- Revising test process and testing technology improvement measures, objectives, priorities
and deployment plans
Examples of activities where test assets for re-use can be identified include the
following:
Project retrospectives / lessons learned sessions
Test evaluation report
Test process assessments, whereby areas of strength often indicate test
process components and/or testware of high quality that are candidates for re-
use
Test improvement efforts
2. Document the background and context for each of the identified test assets for re-use
3. Submit re-use proposals to the Test Process Group
To support re-use, each test asset meeting the re-use criteria should be represented by a template. The
template should contain information that allows the test asset to be tailored for specific projects.
Refer to the Test Organization and Test Lifecycle and Integration process areas at level 3 for more
information about the test process asset library.
2. Review and test the defined re-usable test asset to ensure it is fit for re-use
3. Deploy the re-usable test assets across the organization and within projects
4. Provide consulting, as appropriate, to support deployment of the new or updated re-usable test
assets
5. Provide (updated) training material and perform the training as necessary
6. Perform marketing inside and outside testing on successes achieved with the re-use process to
keep staff motivated and involved
7. Document and review the results of test asset re-use deployment
The generic training package on the test asset is tailored to meet the specific project needs. The
training package is used to instruct project staff.
3. Use the test asset on a project
The (tailored) test asset is implemented (used) for the project. It is monitored and controlled using
appropriate mechanisms. Measurements are taken during test process execution regarding the
test asset.
4. Refine the re-usable test asset
Using the measurements taken during process execution, it is determined whether the re-use of
the test asset is efficient and effective. If there are issues, these are analyzed. Appropriate
changes are made to the test asset definition.
Elaboration
Examples of training topics include the following:
Test process improvement
Planning, designing and conducting pilots
Test process assessments
Cost/benefit analysis
Tool selection and implementation process
Process analysis and modeling
Deployment strategies
Technology transfer
Change management
Team building
Re-use strategies and processes
When conducting assessments against the TMMi model, there is a need to develop/purchase and implement an
assessment framework (robust method, skilled resources, tools, etc.) that can effectively and efficiently use the model
to evaluate organizational process capabilities. A good assessment framework will deliver high quality outputs in a
consistent, repeatable and comparable way. The outputs also must be easily understood and usable. This also
includes the prime tenet of being able to assess an organization’s capability through accurate interpretation of the
model requirements based on the context of the organization (size, industry sector, development methodology, etc.)
and being able to interpret the capability based on fitness for purpose.
An assessment can be formal (leading to certification) or informal (indicative information only). The section
“Assessment Types” below provides more detailed information on the various types of assessments. An assessment
process consists of three elements to be used as appropriate depending on the assessment type:
1. A reference model (e.g., the TMMi) that every assessment will use to evaluate an organization’s processes,
procedures, activities and deliverables including the organizational levels of institutionalization, management,
adherence and quality assurance
2. An assessment method that contains the process flow, detailed procedures, tools, etc., that will ensure
consistency and auditability during the assessment
3. Trained and skilled assessor and lead assessor resources that are qualified to manage all aspects of an
assessment in accordance with the defined assessment method
The first element is provided via the TMMi standard reference model. The following sections provide guidance on
acquiring an accredited assessment method and developing accredited assessor resources as outlined by the TMMi
Foundation. For more information on assessment methods and assessor resource accreditation by the TMMi
Foundation, please visit their website.
The requirements contained in TAMAR ensure that the assessment method is:
Performable - The assessment can be performed in a manner that satisfies the objectives of the
assessment and complies with the requirements of the TMMi Foundation
Repeatable - The assessment provides consistent benchmarks, e.g., if the appraisal were re-run by a
different assessment team on the same data, the same results, strengths and weaknesses would be
identified
Informal Assessments
Informal assessments are indicative and have fewer requirements for compliance. This means they are often more
flexible and cheaper to perform because the corroboration of evidence is not required, which significantly reduces
the time it takes to perform the assessment. The additional flexibility of an informal assessment enables the
organization to focus an assessment on a particular business unit. Informal assessments will provide only an
indicative view of the organization’s maturity and cannot lead to a formal maturity rating or certification. An informal
assessment is often used to identify the major improvements that need to be made and it can also be used to determine
the progress of a TMMi implementation. An informal assessment is often adequate as an initial survey, although a
formal assessment can also be used for this.
Formal Assessments
Formal assessments must conform to all the TAMAR requirements and must be led by a qualified Lead Assessor.
There are two mandated data sources: interviews and artifacts (documentary evidence). All interview data must be
supported by corroborating artifacts. Artifacts are evidence which could include documents, templates, screen
shots or similar. A formal assessment, if undertaken against an accredited assessment method, can lead to a
formal maturity rating or certification.
Data Collection
During the Data Collection phase, the assessment team conducts interviews and gathers information to support the
interviews by means of artifacts, questionnaires or surveys. All data is collected, logged and stored according to the
requirements for confidentiality and security.
Assessment Closure
The Closure phase occurs when the assessment is complete. The main activities are:
Archiving all assessment data according to required confidentiality and security requirements
Completing the Data Submission Report (see below)
Assessment Data
The TMMi Foundation will use the assessment data to analyze market trends and industry-level maturity. Sanitized
data is sent to the Foundation, which will then:
Establish initial consistency and completeness of assessments
Enable the Foundation to perform market research (on sanitized data)
Enable the Foundation to verify formal assessment ratings and issue certificates as appropriate
Assessor Data
Assessor reporting criteria are set out in the DSR Requirements. Each assessor is responsible for maintaining his or
her individual logs which must be sent to the TMMi Foundation. This is necessary to demonstrate to the Foundation
that assessors not only have acquired the required skills, experience and training to be accredited by the Foundation,
but also maintain these skills.
There are two ways to gain access to an accredited method and become an accredited assessment provider. This
can be done either by using the assessment method of the TMMi Foundation or developing one’s own assessment
method. A list of accredited methods and accredited assessment providers can be found at the web site of the TMMi
Foundation. An independent Accreditation Panel within the TMMi Foundation manages accreditation of methods. An
accreditation for a method is valid for three years after which the method is subject to re-accreditation.
All applications are reviewed by an independent Accreditation Panel in the TMMi Foundation. If the requirements are
satisfied, the assessor will be accredited as demonstrating the experience, knowledge and training sufficient to
conduct TMMi assessments using a prescribed, accredited method. The Accreditation Panel will take into
consideration equivalent experience and/or qualifications. Assessor resource accreditation is valid for one year and
renewable subject to demonstrating that the skills and experience are still current (e.g., contributing sufficient
assessment hours/activities, etc.). All accredited assessor and lead assessor resources are listed on the website of
the TMMi Foundation.
A.4.2 Training
The TMMi Foundation does not provide training on the structure, contents or interpretation of the TMMi model.
However, they have published Learning Objectives under the title “TMMi Professional – TMMi model training” which
must be covered by any training received. Further, they publish training providers of TMMi model courses and will
run independent examinations and provide certification for attendees that have passed the exam.
The TMMi Foundation does provide approved training courses for assessors and lead assessors using the
Foundation-owned assessment method. If the assessment provider is using a proprietary, accredited method, they
need to demonstrate adequate training has been provided to the satisfaction of the Accreditation Panel of the TMMi
Foundation.
Organizations are free to choose the improvement approach for the implementation of TMMi. In addition to IDEAL,
there are several other models for the implementation of process improvement. In general these models are based
on W. Edwards Deming's plan-do-check-act cycle. The Deming cycle starts with making a plan that determines the
improvement goals and how they will be achieved (plan). Then the improvements are implemented (do) and it is
determined whether the planned advantages have been achieved (check). Based on the results of this assessment
further actions are taken as needed (act).
This annex provides an overview of the phases and activities of the IDEAL improvement process.
The text in this annex is re-used with permission from chapter 5 of “The Little TMMi” [Veenendaal and
Cannegieter].
Set Context
The management needs to determine how the change effort fits the quality and business strategy. Which specific
organizational goals will be realized or supported by the TMMi implementation? How are current projects influenced
by the improvement? Which benefits need to be realized, for example in terms of fewer issues and incidents or a
shorter lead time for test execution? During the project, the context and effects will become more concrete,
but it is important to be as clear as possible early in the project.
Build Sponsorship
Gaining support from the responsible managers, or building sponsorship, is extremely important in improvement
projects. This concerns test management, IT management and business management sponsorship, because all
these stakeholders will be influenced by the change. Sponsorship is important during the entire project, but because
of the uncertainty caused by change, active support is especially important at the beginning of the project. Endorsing
the improvement program is an important part of sponsorship, but sponsorship also includes active participation
and promoting the project when there is resistance.
Charter Infrastructure
As a final activity in the Initiating phase, the way in which a change project is executed is determined. An infrastructure
is put in place for this. The infrastructure must be described explicitly, including responsibilities and qualifications.
Usually the infrastructure consists of a project board guiding several improvement teams. On the project board are
the sponsor, possibly sponsors of other improvement projects, the manager of the improvement program
and possibly an external consultant. In addition there is often also an (external) TMMi expert. The project board is
ultimately responsible for the improvement program and agrees on plans, milestones and final results. The project
board has the ultimate power to decide and is the highest escalation level.
Develop Recommendations
The recommendations suggest a way of proceeding in subsequent activities. Which TMMi process area is
implemented first? Which part of a process area is to be addressed and in what way? The recommendations are
formulated under the guidance of (internal or external) TMMi experts in the specific process area.
Set Priorities
The first activity of this phase is to set priorities for the change effort. For example, it is futile to implement all five
process areas of level 2 at once. When priorities are set, it is determined which process area(s) and which parts of
them are implemented first. Several factors, such as available resources, visibility of the results, likely resistance,
contribution to organizational goals, etc., should be taken into account.
Develop Approach
Using the recommendations and priorities, a strategy is developed for achieving the desired situation, i.e., the desired
TMMi level, and for determining the resources needed to achieve it. Technical factors considered include new methods,
techniques or resources. Attention must be paid to training, developing process descriptions and possible tool
selection. Non-technical factors considered include knowledge and experience, implementation approach,
resistance, support, sense of urgency, and the organization’s culture, among other things.
Plan Actions
With the approach defined, detailed actions can be determined. Together with information taken from prior activities,
these are combined into a plan including, among other things, actions, schedule, milestones, decision points,
resources, responsibilities, measurement, tracking mechanisms, risks and implementation strategy.
Create Solution
The Acting phase begins with developing solutions to address the broadly outlined problems. These solutions should
satisfy the purposes and practices of TMMi and contribute to achieving the desired situation. The solutions can
include processes, templates, tools, knowledge, skills (training), information and support. The solutions, which can
be quite complex, are often developed by improvement teams which include a TMMi expert. An approach using
improvement teams that has been proven to be successful is the improvement team relay [Zandhuis]. In an
improvement team relay, a number of successive improvement teams develop and implement (parts of) the solution
in a short time. Some advantages of the improvement team relay include reducing the lead time that would be
required if only one overall improvement team was used, achieving results quickly and allowing for more exact
guidance. Every improvement team needs to have a clear goal and be given a clear assignment by management.
As many employees as possible need to be involved in actually working out the solutions; an external consultant can
provide guidance and content input.
Pilot/Test Solution
Following Tom Gilb’s advice, “If you don’t know what you are doing, don’t do it on a large scale,” the created solution
first needs to be tested in one or more test projects. Sometimes only practical experience can show the exact effect
of a solution. In such pilots, usually one or more test projects are appointed in which the improvements are
implemented and evaluated before additional projects adopt the improvements.
Refine Solution
With the use of the results of the test or pilot, the solution can be optimized. Several iterations of the test-optimizing
process may be necessary to reach a satisfactory solution that will work for all projects. A solution should be workable;
waiting for a “perfect” solution may unnecessarily delay the implementation.
Implement Solution
Once the solutions are deemed workable, they can be implemented throughout the (test) organization. This is usually
the activity that provokes the most resistance. Several implementation approaches can be used, such as:
Big bang - all the organizational changes are implemented at the same time
One project at a time - in every project the change is implemented at a set moment in time
Just in time - the change is implemented when the process is executed
No single implementation approach is always better than another; the approach should be chosen based on the
nature of the improvement and organizational circumstances. For a major change, implementation may require
substantial time, resources, effort and attention from management.
Using these questions for guidance, lessons learned are collected, analyzed, summarized and documented.
Glossary
acceptance criteria The exit criteria that a component or system must satisfy in order to be
accepted by a user, customer, or other authorized entity. [IEEE 610]
acceptance testing Formal testing with respect to user needs, requirements, and business
processes conducted to determine whether or not a system satisfies the
acceptance criteria and to enable the user, customers or other authorized entity
to determine whether or not to accept the system. [After IEEE 610]
action proposal The documented action to be taken to prevent the future occurrence of common
causes of defects or to incorporate best practices into test process assets.
actual result The behavior produced/observed when a component or system is tested.
alpha testing Simulated or actual operational testing by potential users/customers or an
independent test team at the developers’ site, but outside the development
organization. Alpha testing is often employed for off-the-shelf software as a form
of internal acceptance testing.
audit An independent evaluation of software products or processes to ascertain
compliance to standards, guidelines, specifications, and/or procedures based
on objective criteria, including documents that specify:
(1) the form or content of the products to be produced
(2) the process by which the products shall be produced
(3) how compliance to standards or guidelines shall be measured. [IEEE 1028]
availability The degree to which a system or component is operational and accessible
when required for use. [IEEE 610]
best practice A superior method or innovative practice that contributes to the improved
performance of an organization within a given context, usually recognized as
‘best’ by other peer organizations.
beta testing Operational testing by potential and/or existing users/customers at an external
site not otherwise involved with the developers, to determine whether or not a
component or system satisfies the user/customer needs and fits within the
business processes. Beta testing is often employed as a form of external
acceptance testing for off-the-shelf software in order to acquire feedback from
the market.
black-box testing Testing, either functional or non-functional, without reference to the internal
structure of the component or system.
black-box test design Technique/procedure to derive and/or select test cases based on an analysis of
technique the specification, either functional or non-functional, of a component or system
without reference to its internal structure.
boundary value analysis A black box test design technique in which test cases are designed based on
boundary values.
branch coverage The percentage of branches that have been exercised by a test suite. 100%
branch coverage implies both 100% decision coverage and 100% statement
coverage.
branch testing A white box test design technique in which test cases are designed to execute
branches.
Capability Maturity Model A framework that describes the key elements of an effective product
Integration development and maintenance process. The Capability Maturity Model
(CMMI) Integration covers best practices for planning, engineering and managing
product development and maintenance. [CMMI]
capture/playback tool A type of test execution tool where inputs are recorded during manual testing in
order to generate automated test scripts that can be executed later (i.e.
replayed). These tools are often used to support automated regression testing.
cause-effect graphing A black box test design technique in which test cases are designed from cause-
effect graphs. [BS 7925/2]
classification tree method A black box test design technique in which test cases, described by means of a
classification tree, are designed to execute combinations of representatives of
input and/or output domains. [Grochtmann]
checklist Checklists are ‘stored wisdom’ aimed at helping to interpret the rules and
explain their application. Checklists are used to increase effectiveness at finding
major defects in a specification during a review. A checklist usually takes the
form of a list of questions. All checklist questions are derived directly and
explicitly from cross-referenced specification rules. [Gilb and Graham]
code coverage An analysis method that determines which parts of the software have been
executed (covered) by the test suite and which parts have not been executed,
e.g., statement coverage, decision coverage or condition coverage.
common causes The underlying source of a number of defects of a similar type, so that if the root
cause is addressed the occurrence of these types of defects is decreased or
removed.
component A minimal software item that can be tested in isolation.
component integration Testing performed to expose defects in the interfaces and interaction between
testing integrated components.
component testing The testing of individual software components. [After IEEE 610]
condition coverage The percentage of condition outcomes that have been exercised by a test suite.
100% condition coverage requires each single condition in every decision
statement to be tested as True and False.
condition determination The percentage of all single condition outcomes that independently affect a
coverage decision outcome that have been exercised by a test case suite. 100% modified
condition decision coverage implies 100% decision condition coverage.
condition testing A white box test design technique in which test cases are designed to execute
condition outcomes.
confidence level The likelihood that the software is defect-free. [Burnstein]
configuration The composition of a component or system as defined by the number, nature,
and interconnections of its constituent parts.
configuration auditing The function to check on the contents of libraries of configuration items, e.g., for
standards compliance. [IEEE 610]
configuration control An element of configuration management, consisting of the evaluation, co-
ordination, approval or disapproval, and implementation of changes to
configuration items after formal establishment of their configuration
identification. [IEEE 610]
configuration control board A group of people responsible for evaluating and approving or disapproving
(CCB) proposed changes to configuration items, and for ensuring implementation of
approved changes. [IEEE 610]
configuration identification An element of configuration management, consisting of selecting the
configuration items for a system and recording their functional and physical
characteristics in technical documentation. [IEEE 610]
configuration item An aggregation of hardware, software or both, that is designated for
configuration management and treated as a single entity in the configuration
management process. [IEEE 610]
configuration management A discipline applying technical and administrative direction and surveillance to:
identify and document the functional and physical characteristics of a
configuration item, control changes to those characteristics, record and report
change processing and implementation status, and verify compliance with
specified requirements. [IEEE 610]
configuration management A tool that provides support for the identification and control of configuration
tool items, their status over changes and versions, and the release of baselines
consisting of configuration items.
confirmation testing See re-testing.
continuous representation A capability maturity model structure wherein capability levels provide a
recommended order for approaching process improvement within specified
process areas. [CMMI]
coverage tool A tool that provides objective measures of what structural elements, e.g.,
statements and/or branches, have been exercised by a test suite.
debugging tool A tool used by programmers to reproduce failures, investigate the state of
programs and find the corresponding defect. Debuggers enable programmers to
execute programs step by step, to halt a program at any program statement and
to set and examine program variables.
decision coverage The percentage of decision outcomes that have been exercised by a test suite.
100% decision coverage implies both 100% branch coverage and 100%
statement coverage.
decision table testing A black box test design technique in which test cases are designed to execute
the combinations of inputs and/or stimuli (causes) shown in a decision table.
[Veenendaal]
decision testing A white box test design technique in which test cases are designed to execute
decision outcomes.
defect A flaw in a component or system that can cause the component or system to fail
to perform its required function, e.g., an incorrect statement or data definition. A
defect, if encountered during execution, may cause a failure of the component
or system.
defect based test design A procedure to derive and/or select test cases targeted at one or more defect
technique categories, with tests being developed from what is known about the specific
defect category. See also defect taxonomy.
defect classification A set of categories, including phase, defect type, cause, severity, priority, to
scheme describe a defect in a consistent manner.
defect density The number of defects identified in a component or system divided by the size
of the component or system (expressed in standard measurement terms, e.g.,
lines-of-code, number of classes or function points).
Defect Detection The number of defects found by a test phase, divided by the number found by
Percentage that test phase and any other means afterwards.
(DDP)
defect management The process of recognizing, investigating, taking action and disposing of
defects. It involves recording defects, classifying them and identifying the
impact. [After IEEE 1044]
defect management tool A tool that facilitates the recording and status tracking of defects and changes.
They often have workflow-oriented facilities to track and control the allocation,
correction and re-testing of defects and provide reporting facilities. See also
incident management tool.
defect masking An occurrence in which one defect prevents the detection of another. [After
IEEE 610]
defect prevention The activities involved in identifying defects or potential defects, analyzing these
defects to find their root causes and preventing them from being introduced into
future products. [After Burnstein]
defect report A document reporting on any flaw in a component or system that can cause the
component or system to fail to perform its required function. [After IEEE 829]
defect taxonomy A system of (hierarchical) categories designed to be a useful aid for
reproducibly classifying defects.
defined process A managed process that is tailored from the organization’s set of standard
processes according to the organization’s tailoring guidelines; has maintained
process description; and contributes work products, measures, and other
process improvement information to the organizational process assets. [CMMI]
deliverable Any (work) product that must be delivered to someone other than the (work)
product’s author.
driver A software component or test tool that replaces a component that takes care of
the control and/or the calling of a component or system. [After TMap]
dynamic analysis tool A tool that provides run-time information on the state of the software code.
These tools are most commonly used to identify unassigned pointers, check
pointer arithmetic and to monitor the allocation, use and de-allocation of
memory and to flag memory leaks.
dynamic testing Testing that involves the execution of the software of a component or system.
efficiency The capability of the software product to provide appropriate performance
relative to the amount of resources used under stated conditions. [ISO 9126]
elementary comparison A black box test design technique in which test cases are designed to execute
testing combinations of inputs using the concept of condition determination coverage.
[TMap]
emulator A device, computer program, or system that accepts the same inputs and
produces the same outputs as a given system. [IEEE 610] See also simulator.
entry criteria The set of generic and specific conditions for permitting a process to go forward
with a defined task, e.g., test phase. The purpose of entry criteria is to prevent a
task from starting which would entail more (wasted) effort compared to the effort
needed to remove the failed entry criteria. [Gilb and Graham]
equivalence partition A portion of an input or output domain for which the behavior of a component or
system is assumed to be the same, based on the specification.
equivalence partitioning A black box test design technique in which test cases are designed to execute
representatives from equivalence partitions. In principle test cases are designed
to cover each partition at least once.
error A human action that produces an incorrect result. [After IEEE 610]
error guessing A test design technique where the experience of the tester is used to anticipate
what defects might be present in the component or system under test as a
result of errors made, and to design tests specifically to expose them.
exhaustive testing A test approach in which the test suite comprises all combinations of input
values and preconditions.
exit criteria The set of generic and specific conditions, agreed upon with the stakeholders,
for permitting a process to be officially completed. The purpose of exit criteria is
to prevent a task from being considered completed when there are still
outstanding parts of the task which have not been finished. Exit criteria are used
to report against and to plan when to stop testing. [After Gilb and Graham]
expected result The behavior predicted by the specification, or another source, of the
component or system under specified conditions.
experienced-based test Procedure to derive and/or select test cases based on the tester’s experience,
design technique knowledge and intuition.
exploratory testing An informal test design technique where the tester actively controls the design
of the tests as those tests are performed and uses information gained while
testing to design new and better tests. [After Bach]
failure Deviation of the component or system from its expected delivery, service or
result. [After Fenton]
heuristic evaluation A static usability test technique to determine the compliance of a user interface
with recognized usability principles (the so-called “heuristics”).
higher level management The person or persons who provide the policy and overall guidance for the
process, but do not provide direct day-to-day monitoring and controlling of the
process. Such persons belong to a level of management in the organization
above the intermediate level responsible for the process and can be (but are not
necessarily) senior managers. [CMMI]
horizontal traceability The tracing of requirements for a test level through the layers of test
documentation (e.g., test plan, test design specification, test case specification
and test procedure specification or test script). The horizontal traceability is
expected to be bi-directional.
impact analysis The assessment of change to the layers of development documentation, test
documentation and components, in order to implement a given change to
specified requirements.
improvement proposal A change request that addresses a proposed process or technology
improvement and typically also includes a problem statement, a plan for
implementing the improvement, and quantitative success criteria for evaluating
actual results of the deployment within the change process managed by the
Test Process Group.
incident Any event occurring that requires investigation. [After IEEE 1008]
incident logging Recording the details of any incident that occurred, e.g., during testing.
incident management The process of recognizing, investigating, taking action and disposing of
incidents. It includes logging incidents, classifying them and identifying the
impact. [After IEEE 1044]
incident management tool A tool that facilitates the recording and status tracking of incidents. They often
have workflow-oriented facilities to track and control the allocation, correction
and re-testing of incidents and provide reporting facilities. See also defect
management tool.
incident report A document reporting on any event that occurred, e.g., during the testing, which
requires investigation. [After IEEE 829]
independence of testing Separation of responsibilities, which encourages the accomplishment of
objective testing. [After DO-178b]
indicator A measure that can be used to estimate or predict another measure. [ISO 14598]
informal review A review not based on a formal (documented) procedure.
input A variable (whether stored within a component or outside) that is read by a
component.
inspection A type of peer review that relies on visual examination of documents to detect
defects, e.g., violations of development standards and non-conformance to
higher level documentation. The most formal review technique and therefore
always based on a documented procedure. [After IEEE 610, IEEE 1028] See
also peer review.
institutionalization The ingrained way of doing business that an organization follows routinely as
part of its corporate culture.
intake test A special instance of a smoke test to decide if the component or system is
ready for detailed and further testing. An intake test is typically carried out at the
start of the test execution phase. See also smoke test.
integration The process of combining components or systems into larger assemblies.
integration testing Testing performed to expose defects in the interfaces and in the interactions
between integrated components or systems. See also component integration
testing, system integration testing.
level test plan A test plan that typically addresses one test level. See also test plan.
maintainability The ease with which a software product can be modified to correct defects,
modified to meet new requirements, modified to make future maintenance
easier, or adapted to a changed environment. [ISO 9126]
managed process A performed process that is planned and executed in accordance with policy;
employs skilled people having adequate resources to produce controlled
outputs; involves relevant stakeholders; is monitored, controlled and reviewed;
and is evaluated for adherence to its process description. [CMMI]
management review A systematic evaluation of software acquisition, supply, development, operation,
or maintenance process, performed by or on behalf of management that
monitors progress, determines the status of plans and schedules, confirms
requirements and their system allocation, or evaluates the effectiveness of
management approaches to achieve fitness for purpose. [After IEEE 610, IEEE
1028]
master test plan A test plan that typically addresses multiple test levels. See also test plan.
maturity level Degree of process improvement across a predefined set of process areas in
which all goals in the set are attained. [CMMI]
Mean Time Between The arithmetic mean (average) time between failures of a system. The MTBF is
Failures (MTBF) typically part of a reliability growth model that assumes the failed system is
immediately repaired, as a part of a defect fixing process.
Mean Time To Repair The arithmetic mean (average) time a system will take to recover from any failure.
(MTTR) This typically includes testing to ensure that the defect has been resolved.
measure The number or category assigned to an attribute of an entity by making a
measurement. [ISO 14598]
measured process A defined process whereby product quality and process attributes are
consistently measured, and the measures are used to improve and make
decisions regarding product quality and process-performance.
precondition Environmental and state conditions that must be fulfilled before the component
or system can be executed with a particular test or test procedure.
process capability The range of expected results that can be achieved by following a process.
process improvement A program of activities designed to improve the performance and maturity of the
organization’s processes, and the result of such a program. [CMMI]
process performance A measure of actual results achieved by following a process. [CMMI]
process performance baseline A documented characterization of the actual results achieved by following a process, which is used as a benchmark for comparing actual process performance against expected process performance. [CMMI]
process performance objectives Objectives and requirements for product quality, service quality and process performance.
product risk A risk directly related to the test object. See also risk.
project A project is a unique set of coordinated and controlled activities with start and
finish dates undertaken to achieve an objective conforming to specific
requirements, including the constraints of time, cost and resources. [ISO 9000]
project risk A risk related to management and control of the (test) project, e.g., lack of
staffing, strict deadlines, changing requirements, etc. See also risk.
project test plan See master test plan.
quality assurance Part of quality management focused on providing confidence that quality
requirements will be fulfilled. [ISO 9000]
quality attribute A feature or characteristic that affects an item’s quality. [IEEE 610]
quantitatively managed process A defined process that is controlled using statistical and other quantitative techniques. The product quality, service quality, and process-performance attributes are measured and controlled throughout the project. [CMMI]
regression testing Testing of a previously tested program following modification to ensure that
defects have not been introduced or uncovered in unchanged areas of the
software, as a result of the changes made. It is performed when the software or
its environment is changed.
release note A document identifying test items, their configuration, current status and other
delivery information delivered by development to testing, and possibly other
stakeholders, at the start of a test execution phase. [After IEEE 829]
reliability The capability of the software product to perform its required functions under
stated conditions for a specified period of time, or for a specified number of
operations. [ISO 9126]
reliability growth model A model that shows the growth in reliability over time during continuous testing of
a component or system as a result of the removal of defects that result in reliability
failures.
risk type A specific category of risk related to the type of testing that can mitigate
(control) that category. For example, the risk of user interactions being misunderstood can be mitigated by usability testing.
root cause A source of a defect such that if it is removed, the occurrence of the defect type
is decreased or removed. [CMMI]
root cause analysis An analysis technique aimed at identifying the root causes of defects. By
directing corrective measures at root causes, it is hoped that the likelihood of
defect recurrence will be minimized.
rule A rule is any statement of a standard on how to write or carry out some part of a
systems engineering or business process. [Gilb and Graham]
sampling A statistical practice concerned with the selection of an unbiased or random
subset of individual observations within a population of individuals intended to
yield some knowledge about the population of concern as a whole.
scribe The person who records each defect mentioned and any suggestions for
process improvement during a review meeting, on a logging form. The scribe
has to ensure that the logging form is readable and understandable.
severity The degree of impact that a defect has on the development or operation of a
component or system. [After IEEE 610]
simulator A device, computer program or system used during testing, which behaves or
operates like a given system when provided with a set of controlled inputs.
[After IEEE 610, DO178b] See also emulator.
smoke test A subset of all defined/planned test cases that cover the main functionality of a
component or system, ascertaining that the most crucial functions of a program
work, but not bothering with finer details. A daily build and smoke test is among
industry best practices. See also intake test.
software lifecycle The period of time that begins when a software product is conceived and ends
when the software is no longer available for use. The software lifecycle typically
includes a concept phase, requirements phase, design phase, implementation
phase, test phase, installation and checkout phase, operation and maintenance
phase, and sometimes, retirement phase. Note these phases may overlap or be
performed iteratively.
specific goal A required model component that describes the unique characteristics that must
be present to satisfy the process area. [CMMI]
specific practice An expected model component that is considered important in achieving the
associated specific goal. The specific practices describe the activities expected
to result in achievement of the specific goals of a process area. [CMMI]
specification A document that specifies, ideally in a complete, precise and verifiable manner,
the requirements, design, behavior, or other characteristics of a component or
system, and, often, the procedures for determining whether these provisions
have been satisfied. [After IEEE 610]
specified input An input for which the specification predicts a result.
staged representation A model structure wherein attaining the goals of a set of process areas
establishes a maturity level; each level builds a foundation for subsequent
levels. [CMMI]
standard Formal, possibly mandatory, set of requirements developed and used to
prescribe consistent approaches to the way of working or to provide guidelines
(e.g., ISO/IEC standards, IEEE standards, and organizational standards). [After
CMMI]
state transition testing A black box test design technique in which test cases are designed to execute
valid and invalid state transitions.
statement coverage The percentage of executable statements that have been exercised by a test
suite.
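As an illustration (not part of the definition itself), statement coverage is typically expressed as a percentage:

\[
\text{statement coverage} = \frac{\text{number of executable statements exercised by the test suite}}{\text{total number of executable statements}} \times 100\%
\]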
statement testing A white box test design technique in which test cases are designed to execute
statements.
static analysis Analysis of software artifacts, e.g., requirements or code, carried out without
execution of these software artifacts.
static code analyzer A tool that carries out static code analysis. The tool checks source code for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.
static testing Testing of a component or system at specification or implementation level
without execution of that software, e.g., reviews or static code analysis.
statistical process control Statistically based analysis of a process and measurements of process
performance, which will identify common and special causes of variation in the
process performance, and maintain process performance within limits. [CMMI]
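One common formulation, given here only for illustration, uses Shewhart-style control limits around the process mean; observations falling outside these limits signal special causes of variation:

\[
\text{UCL} = \bar{x} + 3\sigma \qquad \text{LCL} = \bar{x} - 3\sigma
\]

where \(\bar{x}\) is the mean and \(\sigma\) the standard deviation of the measured process performance.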
statistical technique An analytical technique that employs statistical methods (e.g., statistical
process control, confidence intervals, and prediction intervals). [CMMI]
statistical testing A test design technique in which a model of the statistical distribution of the
input is used to construct representative test cases.
statistically managed process A process that is managed by a statistically based technique in which processes are analyzed, special causes of process variation are identified, and process performance is contained within well-defined limits. [CMMI]
status accounting An element of configuration management, consisting of the recording and
reporting of information needed to manage a configuration effectively. This
information includes a listing of the approved configuration identification, the
status of proposed changes to the configuration, and the implementation status
of the approved changes. [IEEE 610]
stub A skeletal or special-purpose implementation of a software component, used to
develop or test a component that calls or is otherwise dependent on it. It
replaces a called component. [After IEEE 610]
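As a minimal illustrative sketch (the component and service names below are hypothetical, not taken from this framework), a stub replaces a called component with a fixed, predictable response so that its caller can be tested in isolation:

# Hypothetical example: the component under test calls a payment service.
class PaymentServiceStub:
    """Skeletal replacement for the real payment service (the called component)."""
    def authorize(self, amount):
        # The stub implements no real payment logic; it returns a canned result.
        return {"approved": True, "amount": amount}

def checkout(cart_total, payment_service):
    """Component under test; depends on a payment service being available."""
    result = payment_service.authorize(cart_total)
    return "order confirmed" if result["approved"] else "payment declined"

# In a test, the stub stands in for the real, possibly unavailable, dependency.
assert checkout(42.0, PaymentServiceStub()) == "order confirmed"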
sub-practice An informative model component that provides guidance for interpreting and
implementing a specific or generic practice. Sub-practices may be worded as if
prescriptive, but are actually meant only to provide ideas that may be useful for
process improvement. [CMMI]
suspension criteria The criteria used to (temporarily) stop all or a portion of the testing activities on
the test items. [After IEEE 829]
syntax testing A black box test design technique in which test cases are designed based upon
the definition of the input domain and/or output domain.
system A collection of components organized to accomplish a specific function or set of
functions. [IEEE 610]
system integration testing Testing the integration of systems and packages; testing interfaces to external
organizations (e.g., Electronic Data Interchange, Internet).
system testing The process of testing an integrated system to verify that it meets specified
requirements. [Hetzel]
technical review A peer group discussion activity that focuses on achieving consensus on the
technical approach to be taken. [Gilb and Graham, IEEE 1028] See also peer
review.
test A set of one or more test cases. [IEEE 829]
test approach The implementation of the test strategy for a specific project. It typically includes
the decisions made that consider the (test) project’s goal and the risk
assessment carried out, starting points regarding the test process, the test
design techniques to be applied, exit criteria and test types to be performed.
test basis All documents from which the requirements of a component or system can be
inferred. The documentation on which the test cases are based. If a document
can be amended only by way of a formal amendment procedure, then the test
basis is called a frozen test basis. [After TMap]
test case A set of input values, execution preconditions, expected results and execution
post conditions, developed for a particular objective or test condition, such as to
exercise a particular program path or to verify compliance with a specific
requirement. [After IEEE 610]
test case specification A document specifying a set of test cases (objective, inputs, test actions,
expected results, and execution preconditions) for a test item. [After IEEE 829]
test charter A statement of test objectives, and possibly test ideas about how to test. Test
charters are used in exploratory testing. See also exploratory testing.
test closure During the test closure phase of a test process, data is collected from completed activities to consolidate experience, testware, facts and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating
the test process, including preparation of a test evaluation report. See also test
process.
test comparator A test tool to perform automated test comparison of actual results with expected
results.
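A minimal sketch of the comparison idea (hypothetical names; real test comparators typically operate on files, databases or captured screens):

# Compare actual results against expected results and report any mismatches.
def compare_results(actual, expected):
    """Return a list of differences between actual and expected result values."""
    mismatches = []
    for key, expected_value in expected.items():
        actual_value = actual.get(key)
        if actual_value != expected_value:
            mismatches.append(f"{key}: expected {expected_value!r}, got {actual_value!r}")
    return mismatches

# An empty list means the actual results match the expected results.
assert compare_results({"status": 200, "body": "OK"}, {"status": 200, "body": "OK"}) == []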
test condition An item or event of a component or system that could be verified by one or
more test cases, e.g., a function, transaction, feature, quality attribute, or
structural element.
test control A test management task that deals with developing and applying a set of
corrective actions to get a test project on track when monitoring shows a
deviation from what was planned. See also test management.
test cycle Execution of the test process against a single identifiable release of the test
object.
test data Data that exists (for example, in a database) before a test is executed, and that
affects or is affected by the component or system under test.
test data preparation tool A type of test tool that enables data to be selected from existing databases or
created, generated, manipulated and edited for use in testing.
test design (1) See test design specification.
(2) The process of transforming general testing objectives into tangible test
conditions and test cases.
test design specification A document specifying the test conditions (coverage items) for a test item, the
detailed test approach and identifying the associated high level test cases.
[After IEEE 829]
test design technique Procedure used to derive and/or select test cases.
test design tool A tool that supports the test design activity by generating test inputs from a
specification that may be held in a CASE tool repository, e.g., requirements
management tool, from specified test conditions held in the tool itself, or from
code.
test environment An environment containing hardware, instrumentation, simulators, software
tools, and other support elements needed to conduct a test. [After IEEE 610]
test estimation The calculated approximation of a result (e.g., effort spent, completion date,
costs involved, number of test cases, etc.) which is usable even if input data
may be incomplete, uncertain, or noisy.
test evaluation report A document produced at the end of the test process summarizing all testing
activities and results. It also contains an evaluation of the test process and
lessons learned.
test execution The process of running a test on the component or system under test,
producing actual result(s).
test execution phase The period of time in a software development lifecycle during which the
components of a software product are executed, and the software product is
evaluated to determine whether or not requirements have been satisfied. [IEEE
610]
test execution schedule A scheme for the execution of test procedures. The test procedures are
included in the test execution schedule in their context and in the order in which
they are to be executed.
test execution tool A type of test tool that is able to execute other software using an automated test
script, e.g., capture/playback. [Fewster and Graham]
test harness A test environment comprised of stubs and drivers needed to execute a test.
test implementation The process of developing and prioritizing test procedures, creating test data
and, optionally, preparing test harnesses and writing automated test scripts.
test improvement plan A plan for achieving organizational test process improvement objectives based
on a thorough understanding of the current strengths and weaknesses of the
organization’s test processes and test process assets. [After CMMI]
test infrastructure The organizational artifacts needed to perform testing, consisting of test
environments, test tools, office environment and procedures.
test input The data received from an external source by the test object during test
execution. The external source can be hardware, software or human.
test item The individual element to be tested. There usually is one test object and many
test items. See also test object.
test level A group of test activities that are organized and managed together. A test level
is linked to the responsibilities in a project. Examples of test levels are
component test, integration test, system test and acceptance test. [After TMap]
test log A chronological record of relevant details about the execution of tests. [IEEE
829]
test logging The process of recording information about tests executed into a test log.
test manager The person responsible for project management of testing activities and
resources, and evaluation of a test object. The individual who directs, controls,
administers, plans and regulates the evaluation of a test object.
test management The planning, estimating, monitoring and control of test activities, typically
carried out by a test manager.
test management tool A tool that provides support to the test management and control part of a test
process. It often has several capabilities, such as testware management,
scheduling of tests, the logging of results, progress tracking, incident
management and test reporting.
Test Maturity Model (TMM) A five level staged framework for test process improvement, related to the Capability Maturity Model (CMM), which describes the key elements of an effective test process.
Test Maturity Model integration (TMMi) A five level staged framework for test process improvement, related to the Capability Maturity Model Integration (CMMI), which describes the key elements of an effective test process.
test monitoring A test management task that deals with the activities related to periodically
checking the status of a test project. Reports are prepared that compare the
actuals to that which was planned. See also test management.
test object The component or system to be tested. See also test item.
test objective A reason or purpose for designing and executing a test.
test performance indicator A high level metric of effectiveness and/or efficiency used to guide and control
progressive test development, e.g., Defect Detection Percentage (DDP).
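For illustration, Defect Detection Percentage is commonly calculated as:

\[
\text{DDP} = \frac{\text{defects found by testing}}{\text{defects found by testing} + \text{defects found afterwards (e.g., in use)}} \times 100\%
\]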
test phase A distinct set of test activities collected into a manageable phase of a project,
e.g., the execution activities of a test level. [After Gerrard]
test plan A document describing the scope, approach, resources and schedule of
intended test activities. It identifies amongst others test items, the features to be
tested, the testing tasks, who will do each task, degree of tester independence,
the test environment, the test design techniques and entry and exit criteria to be
used, and the rationale for their choice, and any risks requiring contingency
planning. It is a record of the test planning process. [After IEEE 829]
test planning The activity of establishing or updating a test plan.
test policy A high level document describing the principles, approach and major objectives
of the organization regarding testing.
Test Point Analysis (TPA) A formula based test estimation method based on function point analysis. [TMap]
test procedure specification A document specifying a sequence of actions for the execution of a test. Also
known as test script or manual test script. [After IEEE 829]
test process The fundamental test process comprises test planning and control, test analysis
and design, test implementation and execution, evaluating exit criteria and
reporting, and test closure activities.
test process asset library A collection of test process asset holdings that can be used by an organization
or project. [CMMI]
Test Process Group (TPG) A permanent or virtual entity in the organization responsible for test process
related activities such as process definition, analysis and assessment, action
planning and evaluation. It has the overall test process ownership as defined in
an organization’s test policy.
Test Process Improvement (TPI) A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.
test progress report A document summarizing testing activities and results, produced at regular
intervals, to report progress of testing activities against a baseline (such as the
original test plan) and to communicate risks and alternatives requiring a
decision to management.
test run Execution of a test on a specific version of the test object.
test schedule A list of activities, tasks or events of the test process, identifying their intended
start and finish dates and/or times, and interdependencies.
test script Commonly used to refer to a test procedure specification, especially an
automated one.
test session An uninterrupted period of time spent in executing tests. In exploratory testing,
each test session is focused on a charter, but testers can also explore new
opportunities or issues during a session. The tester creates and executes test
cases on the fly and records their progress. See also exploratory testing.
test specification A document that consists of a test design specification, test case specification
and/or test procedure specification.
test strategy A high-level description of the test levels to be performed and the testing within
those levels for an organization or programme (one or more projects).
test suite A set of several test cases for a component or system under test, where the
post condition of one test is often used as the precondition for the next one.
test summary report A document summarizing testing activities and results. It also contains an
evaluation of the corresponding test items against exit criteria. [After IEEE 829]
test tool A software product that supports one or more test activities, such as planning
and control, specification, building initial files and data, test execution and test
analysis. [TMap]
test type A group of test activities aimed at testing a component or system focused on a
specific test objective, e.g., functional test, usability test, regression test, etc. A
test type may take place on one or more test levels or test phases. [After TMap]
testability review A detailed check of the test basis to determine whether the test basis is at an
adequate quality level to act as an input document for the test process. [After
TMap]
tester A skilled professional who is involved in the testing of a component or system.
testing The process consisting of all lifecycle activities, both static and dynamic,
concerned with planning, preparation and evaluation of software products and
related work products to determine that they satisfy specified requirements, to
demonstrate that they are fit for purpose and to detect defects.
testware Artifacts produced during the test process required to plan, design, and execute
tests, such as documentation, scripts, inputs, expected results, set-up and
clear-up procedures, files, databases, environment, and any additional software
or utilities used in testing. [After Fewster and Graham]
traceability The ability to identify related items in documentation and software, such as
requirements with associated tests. See also horizontal traceability, vertical
traceability.
trustworthiness The probability that there are no defects in the software that will cause the
system to fail catastrophically. [Burnstein]
unit testing See component testing.
usability The capability of the software to be understood, learned, used and attractive to
the user when used under specified conditions. [ISO 9126]
use case testing A black box test design technique in which test cases are designed to execute
user scenarios.
V-model A framework to describe the software development lifecycle activities from
requirements specification to maintenance. The V-model illustrates how testing
activities can be integrated into each phase of the software development
lifecycle.
validation Confirmation by examination and through provision of objective evidence that
the requirements for a specific intended use or application have been fulfilled.
[ISO 9000]
verification Confirmation by examination and through provision of objective evidence that
specified requirements have been fulfilled. [ISO 9000]
vertical traceability The tracing of requirements through the layers of development documentation
to components.
walkthrough A step-by-step presentation by the author of a document in order to gather
information and to establish a common understanding of its content. [Freedman
and Weinberg, IEEE 1028] See also peer review.
white-box test design technique Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
white-box testing Testing based on an analysis of the internal structure of the component or
system.
Wide Band Delphi An expert based test estimation technique that aims at making an accurate
estimation using the collective wisdom of the team members.
References
[Bach] J. Bach (2004), Exploratory Testing, in: E. van Veenendaal, The Testing Practitioner – 2nd edition, UTN
Publishing
[BS7925-2] BS7925-2 (1998), Standard for Software Component Testing, British Standards Institution
[CMMI] M.B. Chrissis, M. Konrad and S. Shrum (2007), CMMI: Guidelines for Process Integration and Product Improvement, Second Edition, Addison-Wesley
[DO-178b] DO-178B (1992), Software Considerations in Airborne Systems and Equipment Certification, Requirements and Technical Concepts for Aviation (RTCA SC167)
[Fenton] N. Fenton (1991), Software Metrics: a Rigorous Approach, Chapman & Hall
[Fewster and Graham] M. Fewster and D. Graham (1999), Software Test Automation, Effective use of test
execution tools, Addison-Wesley
[Freedman and Weinberg] D. Freedman and G. Weinberg (1990), Walkthroughs, Inspections, and Technical
Reviews, Dorset House Publishing
[Gelperin and Hetzel] D. Gelperin and B. Hetzel (1988), “The Growth of Software Testing”, in: CACM, Vol. 31, No.
6, 1988, pp. 687-695
[Gerrard] P. Gerrard and N. Thompson (2002), Risk-Based E-Business Testing, Artech House Publishers
[Gilb and Graham] T. Gilb and D. Graham (1993), Software Inspection, Addison-Wesley
[Graham] D. Graham, E. van Veenendaal, I. Evans and R. Black (2007), Foundations of Software Testing,
Thomson Learning
[Grochtmann] M. Grochtmann (1994), Test Case Design Using Classification Trees, in: Conference Proceedings
STAR 1994.
[Hauser and Clausing] J.R. Hauser and D. Clausing (1988), The House of Quality, in: Harvard Business Review,
Vol. 66, Nr. 3, 1988
[Hetzel] W. Hetzel (1988), The complete guide to software testing – 2nd edition, QED Information Sciences
[Hollenbach and Frakes] C. Hollenbach and W. Frakes (1996), Software process re-use in an industrial setting, in:
Proceedings Fourth International Conference on Software Reuse, Orlando, April 1996, pp. 22-30
[IDEAL] SEI (1997), IDEAL: A Users Guide for Software Process Improvement, Software Engineering Institute
[IEEE 610] IEEE 610.12 (1990), Standard Glossary for Software Engineering Terminology, IEEE Standards Board
[IEEE 829] IEEE 829 (1998), Standard for Software Test Documentation, IEEE Standards Board
[IEEE 1008] IEEE 1008 (1993), Standard for Software Unit Testing, IEEE Standards Board
[IEEE 1028] IEEE 1028 (1997), Standard for Software Reviews and Audits, IEEE Standards Board
[IEEE 1044] IEEE 1044 (1993), Standard Classification for Software Anomalies, IEEE Standards Board
[ISO 9000] ISO 9000 (2005), Quality Management Systems – Fundamentals and Vocabulary, International Organization for Standardization
[ISO 9126] ISO/IEC 9126-1 (2001), Software Engineering – Software Product Quality – Part 1: Quality characteristics and sub-characteristics, International Organization for Standardization
[ISO 12207] ISO/IEC 12207 (1995), Information Technology – Software Lifecycle Processes, International Organization for Standardization
[ISO 14598] ISO/IEC 14598-1 (1999), Information Technology – Software Product Evaluation – Part 1: General Overview, International Organization for Standardization
[ISO 15504] ISO/IEC 15504-9 (1998), Information Technology – Software Process Assessment – Part 9: Vocabulary, International Organization for Standardization
[ISTQB] ISTQB – E. van Veenendaal (ed.) (2010), Standard Glossary of Terms Used in Software Testing, V2.1,
International Software Testing Qualifications Board
[Koomen and Pol] T. Koomen and M. Pol (1999), Test Process Improvement, Addison-Wesley
[Sogeti] Sogeti (2009), TPI-Next - Business Driven Test Process Improvement, UTN Publishing
[TAMAR] A. Goslin (ed.) (2011), TMMi Assessment Method Application Requirements V2.0, TMMi Foundation
[TMap] M. Pol, R. Teunissen, E. van Veenendaal (2002), Software Testing, A guide to the TMap Approach, Addison-Wesley
[Trienekens and Van Veenendaal] J. Trienekens and E. van Veenendaal (1997), Software Quality from a Business
Perspective, Kluwer Bedrijfsinformatie
[V2M2] QualityHouse (2006), V2M2; A Verification and Validation Maturity Model – Improving test practices and
models
[Veenendaal] E. van Veenendaal (2004), The Testing Practitioner – 2nd edition, UTN Publishing
[Veenendaal and Cannegieter] E. van Veenendaal and J.J. Cannegieter (2011), The little TMMi – Objectives-Driven
Test Process Improvement, UTN Publishing
[Van Solingen and Berghout] R. van Solingen and E. Berghout (1999), The Goal/Question/Metric method, McGraw-
Hill