Factors in Software Quality. Volume-III. Preliminary Handbook On Software Quality For An Acquisition Manager
Jim A. McCall
Gene F. Walters
This report has been reviewed by the RADC Information Office (OI) and is
releasable to the National Technical Information Service (NTIS). At NTIS it
will be releasable to the general public, including foreign nations.
RADC-TR-77-369, Vol III (of three) has been reviewed and approved for
publication.
APPROVED:
JOSEPH P. CAVANO
Project Engineer
If your address has changed or if you wish to be removed from the RADC mailing
list, or if the addressee is no longer employed by your organization, please
notify RADC (ISIS) Griffiss AFB NY 13441. This will assist us in maintaining
a current mailing list.
Do not return this copy.
Retain or destroy.
REPORT DOCUMENTATION PAGE (DD Form 1473, abridged)

Report Number: RADC-TR-77-369, Vol III (of three)
Report Date: November 1977
Contract or Grant Number: F30602-76-C-0417
Authors: Paul K. Richards, Gene F. Walters
Program Element: 64740F
Job Order Number: 22370301
Security Classification: UNCLASSIFIED
Key Words: Software Quality, Quality Factors, Metrics, Software Measurements
PREFACE
This document is the final technical report (CDRL A003) for the Factors in
Software Quality Study, contract number F30602-76-C-0417. The contract was
performed in support of the U.S. Air Force Electronic Systems Division's
(ESD) and Rome Air Development Center's (RADC) mission to provide standards
and technical guidance to software acquisition managers.
The report consists of three volumes, as follows:

Volume I     Concept and Definitions of Software Quality
Volume II    Metric Data Collection and Validation
Volume III   Preliminary Handbook on Software Quality for an Acquisition Manager
The objective of the study was to establish a concept of software quality and
provide an Air Force acquisition manager with a mechanism to quantitatively
specify and measure the desired level of quality in a software product.
Software metrics provide the mechanism for the quantitative specification and
measurement of quality.
This third volume is a preliminary stand-alone reference document to be used
by an acquisition manager to implement the techniques established during the
study.
TABLE OF CONTENTS

Section I    INTRODUCTION ............................................. 1-1
    1.1  Purpose ..................................................... 1-1
    1.2  Scope ....................................................... 1-2
    1.3  Relationship of Handbook to Quality Assurance Function ...... 1-2
    1.7  Definitions ................................................. 1-7
Section 2    SPECIFYING SOFTWARE QUALITY ............................. 2-1
Section 3    MEASURING SOFTWARE QUALITY .............................. 3-1
LIST OF FIGURES

Figure 1-1    Rating of Reliability
Figure 1-2
Figure 2-1
Figure 2-2    Relationship of Criteria to Software Quality Factors
Figure 3-1
Figure 3-2
Figure 3-3
LIST OF TABLES

Table 1-1    Presentation of Approaches to Specifying and
             Measuring Software Quality ........................ 1-6
Table 2-1    Definition of Software Quality Factors ............ 2-3
Table 2-2    System Characteristics and Related Quality Factors  2-5
Table 2-3    The Impact of Not Specifying or Measuring
             Software Quality Factors .......................... 2-7
Table 2-4    Relationships Between Software Quality Factors .... 2-8
Table 2-5    Criteria Definitions for Software Quality Factors . 2-12
Table 2-6    Problem Report and Man-Power Expenditure
             Categorization ..................................... 2-15
Table 3-1    Example Metrics .................................... 3-3
SECTION 1

INTRODUCTION

1.1 PURPOSE
In the acquisition of a new software system, a major problem facing a System
Program Office (SPO) is to specify the requirements to the software developer
and then to determine whether those requirements are being satisfied as the
software system evolves. The parameters of the specification center about the
technical definition of the application and the function of the software within
the overall system. Following this specification, a realistic schedule and
costs are negotiated.
There has been no mechanism for specifying the qualities or characteristics
of the software - qualities such as reliability, maintainability, usability,
testability, and portability. The importance of these software qualities
which go beyond the technical mission has been recognized in recent years as
a necessary concern for software development managers.
This recognition has come about because of the many instances in which the
consequences of not considering software quality have driven total project costs
and schedule well beyond initial estimates. It has been found that the costs
throughout the total life cycle are more affected by the characteristics of the
software system than by the mission-oriented functions performed by the software
system.
system. Large-scale software systems have sometimes proven untestablE, unmodifiable, and largely unusable by operations personnel because of the
characteristics of the software.
While the application functions, cost, and schedule aspects of development can
be objectively defined, measured, and assessed throughout the development of
the system, the quality desired has historically been definable only in
subjective terms. This occurs because the SPO has no quantifiable criteria
against which to judge the quality of the software until he begins to use the
system under operational conditions. This usually leaves the SPO with only two
alternatives: to incur increased costs or to back off from the requirements
initially desired for the system.
The objectives of this handbook are (1) to describe to the acquisition manager
the software qualities or characteristics that should be included in a
specification, (2) to provide a mechanism for objectively specifying the
software quality requirements, and (3) to introduce a methodology for measuring
the level of software quality achieved.
1.2 SCOPE
This handbook is based on the results of a study conducted in support of the
U.S. Air Force Electronic Systems Division's (ESD) and Rome Air Development
Center's (RADC) mission to provide standards and technical guidance to software
acquisition managers. The study represented an initial conceptual investigation
of the factors of software quality with a limited sample validation of the
concept. Further research and demonstrations of the concept are planned. With
this fact in mind, three approaches are described for both specifying and
measuring software qualities. The approaches are presented in order of
increasing quantification. For both specification and measurement, the first
two approaches described are immediately implementable and usable, while the
third approach is in its conceptual infancy. Further experience and analysis
are required to derive the generally usable quantified relationships required
by the third approach.
1.3 RELATIONSHIP OF HANDBOOK TO QUALITY ASSURANCE FUNCTION
The techniques described in this handbook are envisioned as an integral part of
an overall quality assurance program. Two important facets of quality assurance
are covered in the handbook: software quality specification and software quality
evaluation. A review of MIL-STD 483 and 490 provides insight into how these
techniques fit into current software development and quality assurance practices.
Appendix I, System Specification, Type A, of MIL-STD 490 covers characteristics
(paragraph 3.2) of the system which should be described in the specification.
Characteristics such as reliability, maintainability, availability, and
interchangeability are mentioned but are oriented toward hardware systems. The
software quality factors described in Section 2 of this handbook should be
incorporated in this section of a system specification.
1.4 HANDBOOK ORGANIZATION
The first section provides introductory information, definitions, and recommendations for use of the handbook.
The second section describes three approaches to specifying software quality.
Each description is organized in the following manner:
• General discussion of approach
• Steps to be followed
The third section describes three approaches to measuring software quality.
Each approach is described using the same format of the second section.
The three approaches in each section are presented in order of increasing
formality and quantification of the relationship between the metrics and
the quality factors. The approaches are titled and can be easily identified
in the handbook as shown in Table 1-1.
1.6 RELATIONSHIP TO FACTORS IN SOFTWARE QUALITY FINAL REPORT
This handbook is based on and utilizes the concepts described in the Factors
in Software Quality Final Report, Volumes I and II, and previous research
efforts related to software quality referenced in that report.
The report is the result of work performed for ESD and RADC under contract
number F30602-76-C-0417.
Table 1-1  Presentation of Approaches to Specifying and Measuring Software Quality
1.7 DEFINITIONS

The following definitions are used in this handbook:

• Software:
applications. So, while the C2 final product may have a higher degree of
reliability according to our measures, it may be no more acceptable to
the user than the MIS system with its lower reliability is to its user.
Figure 1-1  Rating of Reliability
SECTION 2

SPECIFYING SOFTWARE QUALITY

2.1 CONCEPT OF FACTORS IN SOFTWARE QUALITY

An acquisition manager's involvement with a software product can be categorized
in terms of three distinct activities as follows:
• Product Operation
• Product Revision
• Product Transition
Specific qualities or characteristics (quality factors) of the software product
are related to these activities as shown in Figure 2-1. The questions in
parentheses provide the relevancy or interpretation of the factors to an
acquisition manager.
Thus, with respect to the operation of a software system, an acquisition
manager's concern for quality is in terms of its correctness, reliability,
efficiency, integrity, and usability. Over the life cycle of a system,
revisions to the system may be necessary due to problems or changing
requirements. The acquisition manager is therefore concerned with the
maintainability, flexibility, and testability of the software. Longer range
considerations may involve moving the software to another hardware system,
interfacing it with another system, or developing newer versions of the system.
The related quality concerns are for portability, interoperability, and
reusability. The definitions of these eleven quality factors are in Table 2-1.
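The grouping of quality factors by acquisition activity described above can be sketched as a simple lookup table. The sketch below is an illustrative restatement of the text, not an artifact of the study:

```python
# Quality factors grouped by acquisition-manager activity, as described above.
ACTIVITY_FACTORS = {
    "product operation": ["correctness", "reliability", "efficiency",
                          "integrity", "usability"],
    "product revision": ["maintainability", "flexibility", "testability"],
    "product transition": ["portability", "interoperability", "reusability"],
}

def activities_for(factor):
    """Return the activities to which a quality factor is relevant."""
    return [a for a, fs in ACTIVITY_FACTORS.items() if factor in fs]

print(activities_for("testability"))   # ['product revision']
```

The eleven factors partition cleanly here, but note that a single factor could in principle matter to more than one activity; the lookup returns a list for that reason.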
All of the quality factors should be considered in the initial specification
for the software. The first approach (paragraph 2.2) describes the procedure
for considering these quality factors. The following paragraphs (2.3, 2.4)
describe progressively more detailed approaches to specifying software quality.
Figure 2-1  Quality factors related to the three product activities
Table 2-1  Definition of Software Quality Factors

CORRECTNESS        Extent to which a program satisfies its specifications and
                   fulfills the user's mission objectives.
RELIABILITY        Extent to which a program can be expected to perform its
                   intended function with required precision.
EFFICIENCY         The amount of computing resources and code required by a
                   program to perform a function.
INTEGRITY          Extent to which access to software or data by unauthorized
                   persons can be controlled.
USABILITY          Effort required to learn, operate, prepare input, and
                   interpret output of a program.
MAINTAINABILITY    Effort required to locate and fix an error in an
                   operational program.
TESTABILITY        Effort required to test a program to insure it performs its
                   intended function.
FLEXIBILITY        Effort required to modify an operational program.
PORTABILITY        Effort required to transfer a program from one hardware
                   configuration and/or software system environment to another.
REUSABILITY        Extent to which a program can be used in other
                   applications, related to the packaging and scope of the
                   functions that programs perform.
INTEROPERABILITY   Effort required to couple one system with another.
2.2
Table 2-2  System Characteristics and Related Quality Factors
Table 2-3  The Impact of Not Specifying or Measuring Software Quality Factors

Table 2-4  Relationships Between Software Quality Factors
2. Once the critical quality factors have been identified and priorities
assigned, they should be included in the RFP or SRS with definitions from
paragraph 2.1, and the developer should be required to comment on how the
software will be developed to exhibit the qualities specified.
3. As much detailed explanation as possible should be included with the
definition for each quality factor. For example, if portability is a major
concern to an acquisition manager, as precise a description as possible should
be included as to the types of environments to which the system might be
transported.
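The steps above amount to recording, for each critical factor, an assigned priority and the detailed explanation destined for the RFP. The sketch below is purely illustrative; the factor choices, priorities, and wording are hypothetical, not taken from the study:

```python
# Hypothetical record of critical quality factors for an RFP, following the
# steps above: factor, assigned priority, and detailed explanation.
rfp_quality_spec = [
    {"factor": "portability", "priority": 1,
     "explanation": "System may be transported to other vendor hardware "
                    "with a different operating system."},
    {"factor": "maintainability", "priority": 2,
     "explanation": "An operational life of ten years is anticipated."},
]

# Present the factors in priority order for inclusion in the RFP.
for entry in sorted(rfp_quality_spec, key=lambda e: e["priority"]):
    print(f'{entry["priority"]}. {entry["factor"]}: {entry["explanation"]}')
```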
2.3
Figure 2-2  Relationship of Criteria to Software Quality Factors

CORRECTNESS:      Traceability, Consistency, Completeness
RELIABILITY:      Error Tolerance, Consistency, Accuracy, Simplicity
EFFICIENCY:       Execution Efficiency, Storage Efficiency
INTEGRITY:        Access Control, Access Audit
USABILITY:        Training, Communicativeness, Operability
MAINTAINABILITY:  Consistency, Simplicity, Conciseness, Modularity,
                  Self-Descriptiveness
Table 2-5  Criteria Definitions for Software Quality Factors

CRITERION              RELATED FACTORS
TRACEABILITY           Correctness
COMPLETENESS           Correctness
CONSISTENCY            Correctness, Reliability, Maintainability
ACCURACY               Reliability
SIMPLICITY             Reliability, Maintainability, Testability
MODULARITY             Maintainability, Flexibility, Testability, Portability,
                       Reusability, Interoperability
GENERALITY             Flexibility, Reusability
EXPANDABILITY          Flexibility
ERROR TOLERANCE        Reliability
INSTRUMENTATION        Testability
SELF-DESCRIPTIVENESS   Flexibility, Maintainability, Testability, Portability,
                       Reusability
Table 2-5  Criteria Definitions for Software Quality Factors (continued)

CRITERION                      RELATED FACTORS
EXECUTION EFFICIENCY           Efficiency
STORAGE EFFICIENCY             Efficiency
ACCESS CONTROL                 Integrity
ACCESS AUDIT                   Integrity
OPERABILITY                    Usability (Those attributes of the software that
                               determine operation and procedures concerned
                               with the operation of the software.)
TRAINING                       Usability
COMMUNICATIVENESS              Usability
SOFTWARE SYSTEM INDEPENDENCE   Portability, Reusability
MACHINE INDEPENDENCE           Portability, Reusability
COMMUNICATIONS COMMONALITY     Interoperability
DATA COMMONALITY               Interoperability
CONCISENESS                    Maintainability
STEPS TO BE FOLLOWED
1. After identification of the critical quality factors, specific
performance levels or ratings required for each factor should be
specified. For example, a rating for maintainability might be
that the average time to fix a problem should be five man-days or
that 90% of the problem fixes should take less than six man-days.
This rating would be specified in the RFP.
Table 2-6  Problem Report and Man-Power Expenditure Categorization

CATEGORY BY QUALITY FACTOR   EXPLANATION
CORRECTNESS
RELIABILITY
EFFICIENCY                   The software does not meet performance (speed,
                             storage) requirements. The rating is in terms of
                             effort required to fix.
INTEGRITY
USABILITY                    There is a problem related to operation of the
                             software, the user interface, or the input/output.
                             The rating is in terms of effort required to fix.
MAINTAINABILITY
FLEXIBILITY
TESTABILITY
REUSABILITY
PORTABILITY
INTEROPERABILITY
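A maintainability rating of the kind given in the steps above (an average fix time of five man-days, or 90% of fixes in under six man-days) can be checked mechanically once problem reports are categorized as in Table 2-6. The fix-time data below are invented for illustration:

```python
import statistics

# Hypothetical man-days spent fixing each closed problem report.
fix_times = [1.5, 2.0, 3.0, 4.0, 2.5, 6.5, 1.0, 3.5, 2.0, 4.5]

# Check the two alternative forms of the specified rating.
avg_ok = statistics.mean(fix_times) <= 5.0            # average <= 5 man-days
share_under_six = sum(t < 6.0 for t in fix_times) / len(fix_times)
pct_ok = share_under_six >= 0.90                      # 90% of fixes < 6 man-days

print(avg_ok, pct_ok)   # True True
```

Either test alone could serve as the contractual rating; computing both shows how close the reported data come to each form of the requirement.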
SECTION 3
MEASURING SOFTWARE QUALITY
3.1 THE CONCEPT OF QUALITY METRICS
Figure 3-1 illustrates the concept of applying metrics during the development
of a software system. The metrics are quantitative measures of the software
attributes (criteria identified in paragraph 2.3) which are necessary to realize
certain characteristics (quality factors) in the software. The metrics
provide an indication of the progression toward the achievement of high quality
end products. Specific acceptance tests can be oriented toward evaluating the
levels of quality achieved, but these testing strategies are not within the
scope of this handbook.
As previously mentioned, the metrics have been developed to be applied to
products currently provided during a software development. They may be applied
either by acquisition manager personnel to delivered products, by contractor
personnel and reported in summary format to the acquisition manager during
reviews, or by contractor personnel as part of their own quality assurance
program.

The metrics were developed so as not to restrict or interfere with the
management and development methodologies and techniques of the developer.
The metrics are listed in Table 6.2-1 of the Factors in Software Quality Final
Report, with definitions following that table. Their application to software
products is described in Appendix D of that report.
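A metric of the checklist type (such as the completeness checklist CP.1 in Table 3-1) reduces to the fraction of checklist items satisfied for a given product. The items below are hypothetical stand-ins, not the actual CP.1 items defined in the Final Report:

```python
# Hypothetical checklist responses for one design document (True = satisfied).
# The real CP.1 items are defined in the Factors in Software Quality Final Report.
checklist = {
    "unambiguous references": True,
    "all data references defined": True,
    "all functions defined": False,
    "all conditions and processing defined": True,
}

# A checklist metric is the fraction of items satisfied, a value in [0, 1].
score = sum(checklist.values()) / len(checklist)
print(score)   # 0.75
```

The resulting value is directly comparable across documents and across modules, which is what makes the sensitivity and threshold analyses of the later approaches possible.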
Figure 3-1  Application of metrics during software development
Table 3-1  Example Metrics

CRITERION              RELATED QUALITY FACTORS          METRIC
Simplicity             Reliability, Maintainability,    SI.3
                       Testability
Self-Descriptiveness   Flexibility, Maintainability,    Effectiveness of
                       Reusability, Portability         Comments Measure, SD.2
Machine Independence   Portability, Reusability         Machine Independence
                                                        Measure, MI.1
Completeness           Correctness                      Completeness
                                                        Checklist, CP.1
3.2

DISCUSSION OF APPROACH
The first level of measuring software quality involves applying the metrics
to software products as they are produced.
STEPS TO BE FOLLOWED
1. The subset of metrics which relate to the identified critical quality
factors and software attributes and are applicable to the phase of
development should be applied to the available software products.
For example, during the design phase, metrics could be applied to
design specifications, interface control documents, test plans,
minutes and materials prepared for reviews, and so on.
2. A subjective evaluation of how well the software is being developed
with respect to the specific quality factors can be made based on the
inspection of the software products using the metrics.
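Step 1 above, selecting the subset of metrics that relate to the critical quality factors and apply in the current phase, can be sketched as a filter over a metric catalog. The factor relationships below follow Table 3-1; the phase applicability is an assumption for illustration:

```python
# Minimal metric catalog. Factor relationships follow Table 3-1;
# the phases in which each metric applies are assumed for illustration.
METRICS = [
    {"id": "SI.3", "factors": {"reliability", "maintainability", "testability"},
     "phases": {"design", "implementation"}},
    {"id": "SD.2", "factors": {"flexibility", "maintainability",
                               "reusability", "portability"},
     "phases": {"implementation"}},
    {"id": "CP.1", "factors": {"correctness"}, "phases": {"design"}},
]

def applicable(critical_factors, phase):
    """Metrics relevant to any critical factor and usable in this phase."""
    return [m["id"] for m in METRICS
            if m["factors"] & critical_factors and phase in m["phases"]]

print(applicable({"correctness", "reliability"}, "design"))   # ['SI.3', 'CP.1']
```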
3.3

DISCUSSION OF APPROACH

STEPS TO BE FOLLOWED
1. After the metrics are applied to the available software products,
the values are obtained and evaluated. If particular modules receive
low metric scores, they can be individually evaluated for potential
problems. If low metric scores are realized across the system, an
evaluation should be made to identify the cause. It may be that a
design or implementation technique used widely by the development
team is the cause. Corrective action such as the enforcement of a
development standard can then be introduced.
2. Further analysis can be conducted. A comparison of the metric
scores for each module in a system will reveal which metrics vary
widely. Further examination will reveal if this variation correlates
with the number of problem reports or with historical variances in
performance. This sensitivity analysis identifies characteristics
of the software, represented by the metrics, which are critical to
the quality of the product. Quality assurance personnel should
place increased emphasis on these aspects of the software product.
3. Threshold values may be established below which certain actions
would be required. A simple example is the percent of comments
per line of source code. Certainly code which exhibits only 1%
or 2% measurements for this metric would be identified for
corrective action.
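The comment-percentage threshold above can be checked mechanically. In the sketch below, the 10% threshold and the comment convention ('#'-prefixed lines) are assumptions for illustration, not values from the study:

```python
def comment_density(source_lines):
    """Fraction of non-blank lines that are comment lines.
    A '#' prefix marks comments here for brevity; a real tool would use
    the comment convention of the language under measurement."""
    lines = [ln for ln in source_lines if ln.strip()]
    comments = [ln for ln in lines if ln.lstrip().startswith("#")]
    return len(comments) / len(lines) if lines else 0.0

# Flag any module whose comment percentage falls below an assumed threshold.
THRESHOLD = 0.10
module = ["# compute totals", "total = 0", "for x in data:", "    total += x"]
density = comment_density(module)
print(density, density < THRESHOLD)   # 0.25 False
```

A module scoring 1% or 2% on this measure would print True for the flag and be routed to corrective action, as the step describes.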
3.4
DISCUSSION OF APPROACH
This approach is the most detailed of the three approaches to measuring
software quality. The underlying mathematical foundations for the derivation of
the relationships are described in Section 7 and Appendix C of the Factors in
Software Quality Final Report. Basically, the measurements (m1, m2, ..., mn)
taken from the software products are related to the predicted rating of a
quality factor, rF, through a normalization function:

    f(m1, m2, ..., mn) = rF
STEPS TO BE FOLLOWED
1. To illustrate the procedures involved in this approach, a
normalization function for the quality factor flexibility, developed
during the Factors in Software Quality study, will be used. The
normalization function, applicable during the design phase,
relates a measure of modular implementation (MO.2) to the flexibility
of the software. The predicted rating of flexibility is in terms
of the average time to implement a change in specifications. The
normalization function is shown in Figure 3-2.
Figure 3-2  Normalization function for flexibility: predicted rating rF versus
the modular implementation measure MO.2, with the rating expressed as average
man-days to implement a change
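A normalization function of the kind plotted in Figure 3-2 can be evaluated directly once its coefficients are known. The linear form and the coefficients below are hypothetical placeholders, not the function fitted during the study:

```python
def predicted_flexibility_rating(mo2, a=-0.1, b=1.1):
    """Hypothetical linear normalization function: predicted rating rF
    from the modular implementation measure MO.2, clamped to [0, 1].
    The coefficients a and b are illustrative, not the study's fitted values."""
    return max(0.0, min(1.0, a + b * mo2))

# A measured MO.2 yields a predicted rating, which can then be compared
# against the rating specified for the factor in the RFP.
specified = 0.2
measured_mo2 = 0.5
rF = predicted_flexibility_rating(measured_mo2)
print(rF, rF >= specified)   # 0.45 True
```

Comparing the predicted rating against the specified rating is the comparison suggested by Figure 3-3.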
Figure 3-3  (specified rating: .2)
MISSION
of
Rome Air Development Center

Printed by
United States Air Force
Hanscom AFB, Mass. 01731