CS430 Lecture Notes
Fall 2023
Collin Roberts
December 14, 2023
Contents
1 Lecture 01 - Introduction to Software Engineering
  1.1 Introduction to CS 430 - Course Outline
  1.2 Introduction to the Scope of Software Engineering
  1.3 Historical Aspects
  1.4 Economic Aspects
4 Lecture 04 - Life-Cycle Models
  4.1 Other Life Cycle Models
    4.1.1 Code and Fix Life-Cycle Model
    4.1.2 Waterfall (Modified) Life-Cycle Model
    4.1.3 Rapid Prototyping Life-Cycle Model
    4.1.4 Open Source Life-Cycle Model
    4.1.5 Agile Processes
    4.1.6 Synchronize and Stabilize Life-Cycle Model
    4.1.7 Spiral Life-Cycle Model
  4.2 Comparison of Life-Cycle Models
7 Lecture 07 - Teams I
  7.1 Team Organization
  7.2 Classical Chief Programmer Teams
  7.3 Democratic Teams
  7.4 Beyond Chief Programmer and Democratic Teams
8 Lecture 08 - Teams II
  8.1 Synchronize and Stabilize Teams
  8.2 Teams for Agile Processes
  8.3 Open Source Programming Teams
  8.4 People Capability Maturity Models
  8.5 Choosing an Appropriate Team Organization
  11.2 Non-Execution Based Testing
    11.2.1 Reviews
    11.2.2 Walkthroughs
    11.2.3 Managing Walkthroughs
    11.2.4 Inspections
    11.2.5 Comparison of Walkthroughs and Inspections
    11.2.6 Strengths and Weaknesses of Reviews
    11.2.7 Metrics for Inspections
16 Lecture 16 - The OO Paradigm - Abstract Data Types, Information Hiding and Objects
  16.1 Abstract Data Types (§7.5)
  16.2 Information Hiding (§7.6)
  16.3 Objects (§7.7)
18 Lecture 18 - Reusability
  18.1 Re-Use Concepts
  18.2 Impediments to Re-Use
  18.3 Types of Re-Use
    18.3.1 Accidental (Opportunistic)
    18.3.2 Deliberate (Systematic)
  18.4 Objects and Re-Use
  18.5 Re-Use During Design and Implementation
    18.5.1 Library (toolkit)
    18.5.2 Application Framework
    18.5.3 Software Architecture
    18.5.4 Component-Based Software Engineering
    19.1.5 Abstract Factory Design Pattern (§8.6.5)
    19.1.6 Categories of Design Patterns (§8.7)
    19.1.7 Strengths/Weaknesses of Design Patterns (§8.8)
  19.2 Re-Use During Post-Delivery Maintenance
  23.6 Training Requirements
  23.7 Documentation Standards
  23.8 CASE Tools for Planning and Estimating
  23.9 Testing the SPMP
iii. More details about Kritik will be posted on the unsecured
course website in advance of the release of Case Studies #1.
3. I will announce suggested pre-reading from the text for all the future
lectures, by email.
4. Clickers
(a) iClicker Remote
i. Register your iClicker using the instructions on LEARN.
ii. If your Participation grade still shows as 0 on LEARN after a
few lectures, then please contact the instructor to correct the
registration of your iClicker.
(b) Paper: Submit one piece of paper to me at the end of each lecture,
indicating
i. student number / login ID / both, and
ii. one line per CQ, numbered my way, indicating which of op-
tions A–E you chose.
produce bills for incorrect amounts.
2. Even when software is delivered fault-free, it is often late, over budget,
and/or fails to meet all the user’s requirements.
3. This motivates the following key definition.
Remarks:
1. Applying the same principles that traditional engineers use can help
improve the delivery of software.
Remarks:
1. The name software engineering indicates that software developers will
have better success if they use the same principles as traditional engi-
neers.
2. Software engineering is a new field with a broad scope: math, CS, science, engineering, management, etc.
3. Software Engineering is a response to the Software Crisis.
Pros:
- short term: lower development costs
- longer term: compound short-term savings
- possible improved security features
Cons:
- higher maintenance costs with a blended system
- possible need to rewrite existing code
- possible compatibility problems
- new environment unproven, possibly unstable
- no benefit with respect to maintenance costs
- might affect user experience in unexpected ways
- training / learning curve
- new code could be less robust
- might require hardware changes
- cost of purchasing the new development studio is not stated
- possible issues with stability, performance
- new code might be of lower quality
Moral: This question does not have a clear yes/no answer. More analysis is still required.
Remarks:
1. The “Pros” touch the development phase; the “Cons” reveal impacts in other phases.
2. It turns out that historically, maintenance costs have grown faster than development costs. Moral: reducing maintenance costs is a bigger win.
This is a coding example, and coding accounts for only 10-15% of the software development effort. Similar principles apply to all aspects of software development.
(a) The Importance of Postdelivery Maintenance
4. Requirements, Analysis and Design Aspects
5. Team Development Aspects
6. The Object-Oriented Paradigm
7. The Object-Oriented Paradigm In Perspective
8. Ethical Issues
time between our two main points of influence: requirements
and user acceptance testing
ii. BUT - additional structure makes it much more likely that we get our work right the first time (unlike in the past, where a lot of rework was required)
iii. It is more likely we will get our requirements correct now
iv. integration projects are now possible (they weren’t before)
v. We can plan better for the future, using past history
vi. additional structure should make the software we produce of higher quality
vii. We are less vulnerable to knowledge loss if developers leave
the organization
2. Why does the Waterfall life-cycle model not have any of the following
phases?
(a) Planning
(b) Testing
(c) Documentation
Answer:
(a) All three activities are crucial to project success.
(b) Therefore all three activities must happen throughout the project
and cannot be limited to just one project phase.
Other Observations
Pros:
- more structure
- people can specialize their roles to the phases
- better chance of getting correct documents earlier in the project
- we have some chance to manage the present, plan for the future
- decide which projects to do, using cost-benefit analysis
Cons:
- less freedom
- need to finish requirements before analysis
- requirements need to be good enough (not perfect)
- we cannot test until development/unit testing is finished
- slow
Question from the Class: Why do we study the Classical life-cycle model
in CS 430?
Answer:
1. Understand why OO is better.
2. Many organizations still use Classical.
3. Much legacy code still exists that was written using Classical techniques.
Pros:
- classes can belong to libraries, not systems
- hence classes are more easily re-used
- fewer regression faults during maintenance work
- can test classes independently of each other, unlike developing the whole system before we can test
Cons:
- learning curve!
- it can be more difficult to enforce code standards
- increased costs of development
Question from the Class: Why does OO not come with a life-cycle picture,
as Classical does?
Answer:
1. The change from Classical to Object Orientation is more a change of
mindset than of methodology.
2. We change our mindset from building one monolithic thing (Clas-
sical) to building many smaller classes that do work for us to-
gether (OO). Many life-cycle models (including Waterfall) can be used
to build these classes effectively.
(b) Unit test each individual component
(c) Integration (system) testing - combine components, test interfaces
among components
(d) Acceptance testing - use live data in client’s test environment.
Clients participate in testing & verification of test results, and
sign off when they are happy with the results.
(e) Deploy to production environment.
5. Post delivery maintenance - maintain the software while it’s being used
to perform the tasks for which it was developed.
(a) Definition 2.3.1. Corrective Maintenance: Removal of residual faults while software functionality & specs remain relatively unchanged. (aka fixing production problems)
(b) Definition 2.3.2. Perfective Maintenance:
i. Implement changes the client thinks will improve the effectiveness of the product (e.g. additional functionality, reduced response time) (aka enhancements or upgrades)
ii. Specs must be changed
(c) Definition 2.3.3. Adaptive Maintenance:
i. Change the software to adapt to changes in its environment (e.g. new policy, tax rate, regulatory requirements, changes in the systems environment) - may not necessarily add to functionality. You allow the software to survive.
ii. Specs may change to address the new environment
6. Retirement
(a) Product is removed from service: functionality provided by soft-
ware is no longer useful / further maintenance is no longer eco-
nomically feasible.
Cost of post-delivery maintenance continues to go up, while (perhaps surprisingly) cost of implementation is nearly flat.
Example: My first project at OpenText was to develop a Consolidated
Customer Database. After the initial scrubbing of the data, management
opted not to re-scrub the following year. The database withered and died
because management was unwilling to pay for post delivery maintenance.
2. Fails to address growing costs of post-delivery maintenance.
Reason: Classical techniques focus on data or operation, but not both.
Contrast With The Object-Oriented Paradigm:
1. The object-oriented paradigm treats data (attributes) and operations
(methods) together, as equally important.
3.1 Introduction to Software Development Life-Cycle
Models
Where Chapter 1 attempted to describe software development in the ideal
world, Chapter 2 attempts to describe software development in the real world.
[Figure sketch: Requirements → Analysis → Design → Implementation]
In theory, we do not have to deal with any changes once the Requirements
phase is complete.
ii. over budget.
(c) Nothing in the Example explicitly says that there were faults in
the completed software product.
3. There was lots of rework, which was needed because each episode
spawned a classical life-cycle effort (iteration), in which work done in
one iteration had no easy way to feed into the next, if they overlapped
in time.
4. This was caused, in part, by the overall slowness of the Classical model.
5. Starting to develop the single-precision fix before confirming it would
provide the desired performance improvement was a waste of time.
6. Some re-use was achieved when the scanning software was packaged
and re-sold.
7. There was testing throughout the case.
8. More testing throughout Episode 1 might have revealed the perfor-
mance problems sooner.
9. Instructor Remark: Perhaps a small pilot project, prototyping the
scanning hardware and software together would have revealed the per-
formance problems earlier. This is a proof of concept prototype.
We will discuss such prototypes again in Chapter 5.
10. Packaging and re-selling was a win.
11. The project ultimately did satisfy the specification.
12. Based on our own work experience to date, this is not the worst case
we have seen so far. (The text agrees with us on this point.)
Morals of the Example:
1. The Classical model is most effective when the IT team can work with-
out accepting changes to the requirements after the requirements are
complete. Changes to requirements (e.g. adding the performance requirement, the Mayor’s later change) negatively affect software quality, delivery dates, and budgets.
2. BUT in the real world, change is inevitable. We cannot prevent change;
we must learn to manage it.
3. Here is a sketch of Figure 2.2 in the text, the evolution-tree life-cycle model for this example, using the key: solid arrows = Development, dashed arrows = Maintenance.
[Figure sketch: development chains Requirements1 → Analysis1 → Design1 → Implementation1, then Design2 → Implementation2, and finally Requirements4 → Analysis4 → Design4 → Implementation4, with maintenance edges linking each episode's artifacts to the next episode.]
1. Definition 3.4.1. The moving target problem occurs when the requirements change while the software is being developed. Unfortunately this problem has no solution!
2. Definition 3.4.6. Miller’s Law states that, at any one time, a human is only capable of concentrating on approximately seven chunks of information.
Why this Matters for Software Engineering:
(a) One person can effectively work on at most seven items at once.
(b) Any software project of significant size will have many more than
seven components.
(c) Hence we must start by working on ≤ 7 highly important things
first, temporarily ignoring all the rest.
(d) This is the technique of stepwise refinement (Definition 9.1.1).
This technique will come up again in Chapter 5.
When you have time, you may enjoy listening to this YouTube video (the
visual is just a static image) about Iteration & Incrementation:
https://youtu.be/FTygpfEFFKw
Next Time: (Almost) all the remaining life-cycle models are variations on
Iteration and Incrementation.
(b) Easy to incorporate changes to requirements.
(c) Generates a lot of lines of code (whether this is actually a strength
depends on organizational norms).
3. Weaknesses:
(a) This technique is totally unsuitable for systems of any reasonable
size.
(b) This technique is unlikely to yield the optimal solution.
(c) Slow.
(d) Costly.
(e) Likelihood of regression faults is high.
Remarks:
1. It is appropriate (and really the only choice) for a user base of size 1, e.g. for any programming assignment you would do for a CS course at uWaterloo.
2. We met this model once before: it was the only model in existence
before the Waterfall model was introduced in 1970.
4.1.2 Waterfall (Modified) Life-Cycle Model
Here is a sketch of Figure 2.9 in the text, using the key: solid arrows = Development, dashed arrows = Maintenance.
[Figure sketch: Requirements → Analysis → Design → Implementation → Postdelivery Maintenance → Retirement, with feedback (maintenance) arrows from each phase back to earlier phases.]
Remarks:
1. No phase is complete until all its documents are complete, and the
output(s) of the phase are approved by the SQA (Software Quality
Assurance) team.
2. Testing is carried out throughout the project.
3. Strengths:
(a) Discipline enforced by SQA.
4. Weaknesses:
(a) Specification documents are often written in a way that does not
enable the client to understand what the finished product will look
like.
i. Hence specification documents may not be fully understood
before they are approved.
ii. Hence the finished product may not actually meet the client’s
needs.
The next model, rapid prototyping, is an adaptation of the Waterfall model to address this key weakness.
4.1.3 Rapid Prototyping Life-Cycle Model
Here is a sketch of Figure 2.10 in the text, using the key: solid arrows = Development, dashed arrows = Maintenance.
[Figure sketch: Rapid Prototype → Analysis → Design → Implementation → Postdelivery Maintenance → Retirement, with feedback (maintenance) arrows from each phase back to earlier phases.]
Remarks:
1. This diagram looks almost identical to that for Waterfall (Modified).
2. Key Difference: Requirements has been replaced with Rapid Proto-
type. Huh?
1. If the product is a payroll system, then a rapid prototype might have
a subset of the screens and might produce mocked-up pay stubs, but
might not have any database updating or batch processing behind the
scenes.
Remarks:
1. The feedback loops from the waterfall model are less heavily used here.
2. The word “rapid” is crucial. Speed is of the essence!
Summary: The purpose of a rapid prototype is to improve requirements.
[Figure sketch: Implement the first version → Perform corrective, perfective and adaptive postdelivery maintenance → Retirement]
4. Participation is voluntary and unpaid.
5. Roles:
(a) Core group: dedicated maintainers
(b) Peripheral group: suggest bug fixes from time to time
6. Success depends on the interest generated by the initial version.
Many open source projects do not amount to anything. But there have been
some spectacularly successful examples (mentioned at the beginning of the
section).
Reasons Why Open Source Projects Are Successful:
1. Perception that the initial release is a “winner” (most important)
2. Large potential user base
Instructor Remarks:
1. Participation in an Open Source project is voluntary and unpaid.
2. The idea of Open Source is in direct conflict with a corporation’s need
to achieve competitive advantage, by writing good software.
(d) What have we forgotten?
(e) What did I learn that I would like to share with the team?
Differences Between Agile and Classical:
1. Diagram of Team Organization (top to bottom): CTO → Project Manager → Scrum Master
1. The text makes a big deal of Extreme Programming (XP), and
states that a key feature of XP is pair programming. I had always
suspected that this was a bit too rigid - now we have this suspicion
confirmed by presentations from students who have worked under this
model. It made a lot more sense to me that the groups formed to do
the work need not always be pairs - they are whatever is appropriate
to the task at hand.
Key Problem: There are many risks associated with software development projects which, if realized, will mean that the project is a failure.
Key Ideas:
1. Minimize risks inherent in software development by the (repeated) use
of proof-of-concept prototypes and other means.
2. N.B. Unlike rapid prototypes, which aim to improve requirements
by letting users interact with a subset of the target functionality, a
proof-of-concept prototype aims to determine whether an architec-
ture design is good (e.g. will it perform quickly enough?)
[Figure 2.13: Spiral, Full - figure not reproduced.]
Remarks:
1. The quadrants in the above diagram are laid out clockwise from the top left and could be labelled: 1. Planning / Requirements; 2. Risk Analysis; 3. Develop and Verify; 4. Plan Next Phase.
[Figure 2.12: Spiral, Simplified - figure not reproduced.]
Remarks:
1. Strengths:
(a) Emphasis on alternatives and constraints supports re-use, and
software quality.
(b) This technique encourages doing the correct amount of testing.
2. Weaknesses:
(a) This model is only meant for internal building of large-scale soft-
ware.
(b) If risks are not analyzed correctly, then all may appear fine even
when the project is headed for disaster.
(c) Makes the (often wrong) assumption that software is developed in
discrete phases, when in reality, software is developed iteratively
and incrementally (like in the Winburg example).
4.2 Comparison of Life-Cycle Models
Here is Figure 2.14 from the text:
Evolution Tree (§2.2)
  Strengths: Closely models real-world software production; equivalent to iteration and incrementation.
Iteration and Incrementation (§2.5)
  Strengths: Closely models real-world software production; underlies the Unified Process.
Code-and-fix (§2.9.1)
  Strengths: Fine for short programs that require no maintenance.
  Weaknesses: Totally unsuitable for non-trivial programs.
Waterfall (§2.9.2)
  Strengths: Disciplined approach; document driven.
  Weaknesses: Delivered product may not meet client’s needs.
Rapid Prototyping (§2.9.3)
  Strengths: Ensures the delivered product meets the client’s needs.
  Weaknesses: Not yet proven beyond all doubt.
Open Source (§2.9.4)
  Strengths: Has worked extremely well in a small number of instances.
  Weaknesses: Limited applicability; usually does not work.
Agile Processes (§2.9.5)
  Strengths: Works well when the client’s requirements are vague.
  Weaknesses: Appear to work on only small-scale projects.
Synchronize-and-stabilize (§2.9.6)
  Strengths: Future users’ needs are met; ensures that components can be successfully integrated.
  Weaknesses: Has not been widely used other than at Microsoft.
Spiral (§2.9.7)
  Strengths: Risk driven.
  Weaknesses: Can be used for only large-scale, in-house products; developers have to be competent in risk analysis and risk resolution.
4. Requirements Workflow
5. Analysis Workflow
6. Design Workflow
7. Implementation Workflow
8. Test Workflow
(a) Requirements
(b) Analysis
(c) Design
(d) Implementation
9. Post-Delivery Maintenance
10. Retirement
Remarks:
1. With Definition 5.1.1, we could have defined the software crisis (Defi-
nition 1.3.1) as our inability to manage the software process effectively.
2. The goal of Software Engineering is to improve the software process.
3. The workflows have
(a) a technical context, e.g. the business case in the requirements
workflow is technical, and
(b) a task orientation.
4. The phases have
(a) an economic context, e.g. the business case in the Inception
phase is economic, and
(b) a time orientation.
Definition 5.2.1. An artifact is a work product from a workflow.
Questions From the Class
1. Q: Why are the artifacts tied to workflows, instead of to phases?
A: Since it is more natural to think of the artifacts from a task point
of view, it is more natural to tie the artifacts to the workflows than to
the phases.
2. we iterate through the increments (each having a mini-Classical shape)
to complete the project.
Example:
[UML sketch: an association labelled “consults” between the classes Radiologist and Lawyer]
Motivation:
1. Even the best software engineers almost never get their artifacts right
on the first attempt. So stepwise refinement will be needed.
2. UML diagrams are visual, hence more intuitive than a block of ver-
biage. “A picture is worth a thousand words.”
3. The visual nature of a UML model fosters collaborative refinement.
Remarks:
1. Presenting the entire Unified Process would take more time and space
than we have during the remainder of this term, hence we will stick to
the highlights.
2. The names of the workflows (mostly) match the names of the phases
of the classical model. The descriptions of the workflow artifacts that
follow are similar to the outputs of the corresponding classical phases.
3. The classical model tied tasks and time together in sequence. The
unified model separates tasks and time.
Summary of Requirements, Analysis, Design and Implementation
Workflows
1. Each workflow corresponds (task-based) with the Classical phase hav-
ing the same name.
2. See the notes below for full details.
2. Pitfalls: Do the Lecture 05 Example Here.
Problems Found With Requirements Given in Example
(a) Standings Changes: do we actually want all increases, all de-
creases, or both?
(b) Standing Display: If we don’t include previous, then changes will
not be clear from the report
(c) Some students with no changes in their standings should be in-
cluded (e.g. if they are still on probation)
(d) MAV used for criteria, but only CAV is displayed on the report -
unclear
(e) Conflicting sort criteria in different parts of the specification
(f) Missing criterion for filtering down to just the program for which
I am responsible
(g) Should the two inclusion criteria be combined with AND or with
OR?
(h) And possibly more...
Moral: To summarize the Example, requirements artifacts can be
(a) incorrect (only the client can detect this)
(b) ambiguous (e.g. AND versus OR in the inclusion criteria - IT can
detect this)
(c) incomplete (e.g. missing criterion to filter down to the program -
only the client can detect this)
(d) contradictory (e.g. conflicting sort criteria - IT can detect this)
3. Using UML diagrams correctly helps to mitigate the above problems
with requirements.
5.6 Design Workflow
1. Goal: Show how the product is to do what it must do.
2. More precisely, refine the artifacts of the analysis workflow until the
result is good enough for the developers to implement it.
3. There are differences between the classical and the object-oriented
paradigms here.
4. It is important to keep detailed records about design decisions.
5.8.1 Requirements
1. Key Idea: traceability: every later artifact must trace back to a re-
quirement artifact.
2. Key Observation: Until Implementation, there will be no code to test,
only documents. Hence we test by holding a review of the document,
with the key stakeholders. We will delve deeper into this in Chapter 6.
5.8.2 Analysis
1. Tactic: Hold a review of analysis artifacts with the key stakeholders,
chaired by SQA.
2. Review the SPMP too.
5.8.3 Design
1. Again, design artifacts must trace back to analysis artifacts.
2. Tactic: Again, hold a review of design artifacts (likely without the
client this time)
5.8.4 Implementation
Remarks:
1. This will be explained in detail in Chapter 6.
The testing must include
1. desk checking (programmer)
2. unit testing (SQA)
3. integration testing (SQA)
4. product testing (SQA)
5. (user) acceptance testing (SQA and client)
Remarks:
1. Some projects also incorporate alpha and beta testing (usually the
beta version is the first version that the public would see).
2. Although it is tempting, alpha testing should not replace thorough
testing by the SQA group.
Definition 5.9.1. Positive testing means testing that what you in-
tended to change was changed in the desired way.
Strategy:
(a) Select test cases exercising the changed business rules.
(b) Compare pass 0 (no changes) against pass 1 (with changes).
(c) Confirm that the pass 1 output has the desired changes applied.
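Here is a minimal sketch of this strategy, assuming Java; the business rule, class, and method names are invented purely for illustration:

// Positive-testing sketch: run the same test case against the old rule
// (pass 0) and the changed rule (pass 1), then confirm that the intended
// change - and only the intended change - shows up in the output.
public class PositiveTest {
    // Pass 0: old business rule (flat 5% discount).
    static double priceBefore(double base) {
        return base * 0.95;
    }

    // Pass 1: changed business rule (10% discount on orders over $100).
    static double priceAfter(double base) {
        return base > 100 ? base * 0.90 : base * 0.95;
    }

    public static void main(String[] args) {
        double base = 200.0;              // test case exercising the changed rule
        double pass0 = priceBefore(base); // expected 190.0
        double pass1 = priceAfter(base);  // expected 180.0
        System.out.println("pass 0 = " + pass0 + ", pass 1 = " + pass1);
        if (pass1 != base * 0.90) {
            System.out.println("FAIL: changed rule not applied as intended");
        }
    }
}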
5.10 Retirement
1. This is triggered when post-delivery maintenance is no longer feasible
or cost-effective.
2. Usually a software product is replaced at this point. The software
product must be replaced if the business need persists.
3. True retirements are rare.
6.1.2 Inception Phase
Goal: Determine whether it is worthwhile to develop the target software
product. Is it economically viable to build it?
Here we explain the interaction between the Inception phase and each work-
flow (i.e. which workflow artifacts are typically produced during the Incep-
tion phase).
1. Requirements Workflow Key Steps:
(a) Understand what is
Definition 6.1.1. The domain of a software product is the place
(e.g. TV station, hospital, air traffic control tower, etc.) in which
it must operate.
(b) Build
Definition 6.1.2. A business model is a description of the
client’s business process, i.e. “how the client operates within the
domain”.
(c) Determine the project scope.
(d) The developers make the initial
Definition 6.1.3. A business case is a document which answers
these questions.
i. Is the proposed software cost effective? Will the benefits out-
weigh the costs? In what timeframe? What are the costs of
not developing the software?
ii. Can the proposed software be delivered on time? What impacts
will be realized if the software is delivered late?
iii. What risks are involved in developing the software, and how
can these risks be mitigated? Similarly to above, what risks
are there if we do not build it? There are three major risk
categories.
A. Technical Risks
B. Bad Requirements
C. Bad Architecture
2. Analysis Workflow
(a) Extract the information needed to design the architecture.
3. Design Workflow
(a) Create the design.
(b) Answer all questions required to start Implementation.
4. Implementation Workflow
(a) Usually little to no coding is done during the inception phase.
(b) Sometimes it will be necessary to build a proof-of-concept pro-
totype.
5. Test Workflow Goal: Ensure that the requirements artifacts are cor-
rect.
Deliverables from the Inception Phase:
1. initial version of the domain model
2. initial version of the business model
3. initial version of the requirements artifacts
4. initial version of the analysis artifacts
5. initial version of the architecture
6. initial list of risks
7. initial use cases (from analysis workflow, usually documented in UML)
8. plan for Elaboration phase (we must always plan for the next phase)
9. initial version of the business case (overall aim of Inception phase).
This describes the scope of the software product plus financial details.
(a) If software is to be marketed, then this includes revenue projec-
tions, market estimates and initial cost estimates, etc.
(b) If software is to be used in-house, then this includes the initial
cost/benefit analysis.
7. SPMP
8. the completed business case
(a) Testing
(b) Elaboration (or Construction)
6. SPMP: First Draft
(a) Analysis
(b) Inception (or Elaboration)
7. SPMP: Completed
(a) Analysis
(b) Transition (or Construction)
8. System Architecture Design: First Draft
(a) Design
(b) Elaboration (or Inception)
9. Code: First Draft
(a) Implementation
(b) Construction (or Elaboration for “low hanging fruit”)
10. Code: Ready for Deployment
(a) Implementation
(b) Transition (or Construction for “low hanging fruit”)
One-dimensional models: Classical (Waterfall) (Figure 3.2a); Code-And-Fix; Open Source?; possibly others.
Two-dimensional models: Unified Process (Figure 3.2b); Iteration and Incrementation; Spiral; possibly others.
Remarks:
1. Two-dimensional models are more complicated, but for all the reasons
from Chapter 2, we cannot avoid working with them, especially the
Unified Process.
2. The Unified Process is the best model we have so far, but it is sure to
be superseded by a superior methodology in the future.
6.3 Improving the Software Process
1. Our fundamental problem in Software Engineering is our inability to
manage the software process effectively (the text cites a US government
report from 1987 to justify this statement).
2. The US DoD responded by creating the Software Engineering Institute
(SEI) at Carnegie Mellon University.
3. SEI in turn created the Capability Maturity Model (CMM) ini-
tiative.
ii. Some measurements are taken (e.g. tracking costs, schedules).
iii. Managers identify problems as they arise and take immediate
corrective action to prevent them from becoming crises.
(c) Defined
i. The process for software production is fully documented (man-
agement / technical).
ii. There is continual process improvement.
iii. Reviews are used to achieve software quality goals.
iv. CASE environments increase quality / productivity further.
Definition 6.4.1. CASE stands for Computer Aided/Assisted
Software Engineering.
We will discuss CASE in more detail in Chapter 5.
(d) Managed
i. The organization sets quality/productivity goals for each soft-
ware project.
ii. Both are measured continually and corrective action is taken
when there are unacceptable deviations. (Statistical meth-
ods are used to distinguish a random deviation from a mean-
ingful violation of standards.)
iii. Typical measure: # faults / 1000 lines of code, in some time
interval.
(e) Optimizing
i. The goal is continual process improvement.
ii. Statistical quality / process control techniques are used to
guide the organization.
iii. Positive Feedback Loop: Knowledge gained from each
project is used in future projects. Therefore productivity and
quality steadily improve.
7 Lecture 07 - Teams I
Outline
1. Team Organization
2. Classical Chief Programmer Teams
3. Democratic Teams
4. Beyond Chief Programmer and Democratic Teams
7.1 Team Organization
1. To develop a software product of any significant size, a team is re-
quired.
2. Question: Suppose that a software product requires 12 person-months
to build it. Does it follow that 4 programmers could complete the work
in 3 months?
Answer: No:
(a) There are new issues (communication / integration / etc.) once a
team is involved, as contrasted with an individual.
(b) Not all programming tasks can be fully shared in time or in se-
quencing. Maybe the software product naturally has three chunks,
or maybe it has many chunks with complicated dependencies.
(c) A project manager’s Gantt Chart is a tool for managing the
dependencies in a team project.
3. Another key point: in a team of n people where every pair can communicate directly with each other, there are n(n − 1)/2 communication paths (e.g. 15 paths for a six-person team).
A six-person team with a chief programmer (this is Figure 4.3 in the text) looks like:
[Figure 4.3 sketch: the chief programmer communicates directly with each team member; the programmers do not communicate directly with each other.]
the New York Times and other publications. If you have the text, see
§4.3.1.
Weaknesses:
1. Chief/Backup Programmers are hard to find.
2. Secretaries are also hard to find.
(a) We seek someone with strong technical skills, then demand only
clerical work from them.
3. The Programmers may be frustrated at being “second class citizens”
under this model.
Remarks:
1. In reality, most team organizations lie somewhere between the two ex-
tremes of classical chief programmer (very hierarchical) and democratic
(non-hierarchical).
2. It is hard to create such a team. Such teams tend to spring up
spontaneously from the “grass roots”, often in the context of re-
search as contrasted with the context of business.
3. A certain organizational culture is required before such a team
can emerge.
Student Questions:
1. Do the Programming Secretary and Backup Programmer roles still exist
here?
Instructor Answer: In my experience, no:
(a) The first version of this model dates from 1971, when a program
was a stack of punch cards. In today’s environment, with “soft”
code, and robust version control, the Programming Secretary is
obsolete.
(b) While a backup programmer is no longer explicitly identified, ev-
ery organization must grapple with succession planning, somehow.
2. Does the Backup Programmer role also have to be split up, like the
Chief Programmer does?
Instructor Answer: No, the split would occur when the Backup
Programmer is promoted to Chief Programmer.
3. What are the strengths of the Classical Chief Programmer Team Or-
ganization?
Instructor Answer:
(a) Key Observation: This team organization is extremely rigid with
respect to which communication paths are permitted. Recall Brooks’
Law (Definition 7.1.1). A Classical Chief Programmer is least vul-
nerable to the effects of Brooks’ Law, because the number of com-
munication paths only grows linearly as the number of program-
mers grows. Under a team organization in which any programmer
can talk to any other programmer, the number of communication
paths grows as the square of the number of programmers.
(b) As will be suggested later for Synchronize and Stabilize teams,
this team organization fosters a team culture in which all team
members work together towards a common goal.
8 Lecture 08 - Teams II
Outline
1. Synchronize and Stabilize Teams
2. Teams for Agile Processes
3. Open Source Programming Teams
4. People Capability Maturity Models
5. Choosing an Appropriate Team Organization
used within Microsoft.
2. Rule #1: The developers must adhere strictly to the agreed upon
time to check their code in for that day’s synchronization.
3. Rule #2: If a developer’s code prevents the product from being com-
piled for that day’s synchronization, then the problem must be fixed
immediately, so that the rest of the team can test and debug.
4. Remark: The culture of the organization must fully support Rules
#1 and #2 before this life-cycle model and team organization can have
any success.
5. Strengths:
(a) Encourages individual programmers to be creative and innovative,
a characteristic of a democratic team.
(b) The synchronization step ensures that all programmers work to-
gether for a common goal, a characteristic of a chief program-
mer team.
6. Weaknesses:
(a) There is no evidence yet that this model can work outside of Mi-
crosoft.
A Possible Explanation: There is something unique about Mi-
crosoft’s culture, which has yet to be replicated elsewhere.
(d) Each programmer must regard the other as an equal.
(e) Feedback given by teammates may not always be constructive.
(f) Extremely shy people might dislike this technique - they must
speak up while (pair) programming and during (daily) meetings.
Overbearing people might dominate.
3. More research is needed to determine whether the benefits outweigh
the costs.
a “winner” to attract and retain volunteers to work on it. Corollary:
The key individual behind the project must be a superb motivator.
Morals:
1. For success, top-calibre programmers are required. Such programmers
can succeed, even in an environment as unstructured as an open-source
one.
2. The way that a successful open-source project team is organized is
essentially irrelevant to the success/failure of the project.
8.5 Choosing an Appropriate Team Organization
Here is Figure 4.7 from the text:
Classical Chief Programmer Teams (§4.3)
  Strengths: Major success of NYT project.
  Weaknesses: Impractical.
Democratic Teams (§4.2)
  Strengths: High quality code as a consequence of positive attitude towards finding faults; particularly good with hard problems.
  Weaknesses: Experienced staff resent their code being appraised by beginners; cannot be externally imposed.
Modified Chief Programmer Teams (§4.3.1)
  Strengths: Many successes.
  Weaknesses: No success comparable to the NYT project.
Modern Hierarchical Programming Teams (§4.4)
  Strengths: Team manager / team leader obviates need for chief programmer; scales up; supports decentralization when needed.
  Weaknesses: Problems can arise unless team manager / team leader responsibilities are clearly delineated.
Synchronize and Stabilize Teams (§4.5)
  Strengths: Encourages creativity; ensures that a huge number of developers can work towards a common goal.
  Weaknesses: No evidence so far that this method can be used outside Microsoft.
Agile Process Teams (§4.6)
  Strengths: Programmers do not test their own code; knowledge is not lost if one programmer leaves; less experienced programmers can learn from others; group ownership of code.
  Weaknesses: Still too little evidence regarding efficacy.
Open Source Teams (§4.7)
  Strengths: A few projects are extremely successful.
  Weaknesses: Narrowly applicable; must be led by a superb motivator; requires top-calibre participants.
1. There is no one choice of team organization that is optimal in all situ-
ations. Different strengths / weaknesses will matter more at different
times.
2. In practice most teams are organized according to some variant of the
(modified) chief programmer model.
(a) The problem to be solved may initially be unclear e.g. the team
might start with a symptom, and understand the underlying cause
through brainstorming.
(b) All team members are encouraged to speak, especially the shy
ones.
(c) No editing in the first round(s), when ideas are being suggested.
Editing happens after all ideas have been suggested.
(d) Student Question: Is brainstorming always top-down then?
Instructor Answer: Brainstorming can be
i. top-down for Intuitives, and
ii. bottom-up for Sensors.
Either way can be productive.
Pitfalls:
1. We must quantify everything to start. Some things are easier to
quantify than others.
(a) Tangible benefits are easy to measure, e.g. estimated revenue from
a new product.
(b) Intangible benefits can be more challenging e.g. the reputation of
your organization (think Facebook, recently).
i. To quantify intangible benefits, we must make assumptions,
e.g. Facebook hacks will cause 5000 users to close their ac-
counts - then we can estimate lost advertising revenue, using
historical data.
A. Advantage: With better assumptions (say from improved
historical data or from a new team member who brings
new experiences) we can obtain more accurate quantifica-
tions of our intangible benefits.
B. As software engineering practitioners, we must gather all
of our information by ethical means!
Remarks:
1. Separation of concerns is a “new and improved” version of divide and
conquer. The new guiding principle for how to divide up the compo-
nents is to reduce or eliminate the overlaps in their functionalities.
Motivation:
1. Minimize the number of regression faults! If separation of concerns is
truly achieved, then changing one module cannot affect another mod-
ule.
2. When done correctly, this also facilitates re-use of modules in future
software products.
3. Manifestations of separation of concerns:
(a) design technique of high cohesion: maximum interaction within
each module (§7.2).
(b) design technique of loose coupling: minimum interaction be-
tween modules (§7.3).
(c) encapsulation (§7.4).
(d) information hiding (§7.6).
(e) three tier architecture (§8.5.4).
4. Tracking which modules were written by weaker programmers may fa-
cilitate more proactive maintenance work.
Moral: Separation of concerns is desirable for Software Engineering.
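As a minimal sketch of manifestation (e), assuming Java and hypothetical class names, a three-tier split keeps presentation, business logic, and data access in separate modules, so a change to one tier does not ripple into the others:

// Each tier talks only to the tier directly below it, through one
// narrow method, so the tiers' functionalities do not overlap.
class DataTier {
    int loadMark(String studentId) {
        return 85; // stub: a real data tier would query a database
    }
}

class LogicTier {
    private final DataTier data = new DataTier();

    String standing(String studentId) {
        // The business rule lives here, not in the UI or the data tier.
        return data.loadMark(studentId) >= 60 ? "Good" : "Probation";
    }
}

public class PresentationTier {
    public static void main(String[] args) {
        LogicTier logic = new LogicTier();
        System.out.println("s123: " + logic.standing("s123")); // prints "s123: Good"
    }
}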
(b) process metrics, e.g.
i. # lines of code for the organization.
ii. (# of faults detected during product development) / (# of faults detected during the product’s lifetime), taken over all software products in the organization (measures the effectiveness of fault detection during development).
4. Some metrics are clearly tied to a certain workflow (e.g. we cannot
count lines of code until implementation)
5. Five essential, fundamental metrics for a software project:
(a) Size (e.g. in # Lines of Code)
(b) Cost to develop / maintain (in dollars)
(c) Duration to develop (in months)
(d) Effort to develop (in person-months; or as in my experience in
person-days)
(e) Quality (in number of faults detected during the project)
6. There is no universal agreement among software engineers about which
metrics are right, or even preferred.
1. Recall that CASE stands for Computer-Aided/Assisted Software Engineering.
2. At present, a computer is a tool of, and not a replacement for, a software
professional.
3. CASE tools used during the
(a) earlier workflows (requirements, analysis, design) are called front-
end or upperCASE tools, and
(b) later workflows (implementation, postdelivery maintenance) are
called back-end or lowerCASE tools.
4. Examples
(a) data dictionary - list of every data item defined in the software
product. Some things to include:
i. an English description of every item in the dictionary
ii. Module names ✓
iii. Procedure names: ✓
A. parameters, and
B. their types,
C. locations where they are defined (i.e. which module), and
D. description of purpose
iv. Variable names: ✓
A. types, and
B. locations (i.e. which module & procedure) where they are
defined
(b) consistency checker - to confirm that every data item in the
specification document is reflected in the design, and vice versa.
(c) report generator
(d) screen generator - for creating data capture screens.
5. Taxonomy
(a) Combining multiple tools creates a workbench.
(b) Combining multiple workbenches creates an environment.
(c) So our taxonomy is
tools (task level) → workbenches (team level) → environments
(organization level).
(c) Automation makes maintenance easier.
(d) Do everything more quickly, hence more cheaply.
2. For example, if a specification is created by hand, there may not be
any way to tell whether the document is current by reading it. On the
other hand, if the specification is maintained within CASE software,
then the latest version is the one the CASE software displays.
3. Similarly, other documentation about the software is easier to maintain
inside of CASE software.
4. Online documentation, word processors, spreadsheets, web browsers,
and email are CASE tools.
5. Coding tools of CASE include
(a) text editors (including structure editors which are sensitive
to syntax, including online interface checking), debuggers,
pretty printers / formatters, etc.
6. An operating system front end allows the programmer to issue
operating system commands (e.g. compile, link, load) from within the
editor.
7. A source-level debugger automatically causes trace output to be
produced. An interactive source-level debugger is what its name
says.
8. Programming-in-the-small: coding a single module.
9. Programming-in-the-large: coding at the system level.
10. Programming-in-the-many: software production by a team.
10.3.2 Variations
Definition 10.3.2. A variation is a slightly changed version that fulfills
the same role in a slightly changed situation.
Examples:
1. two printer drivers, one for a laser printer and one for an inkjet printer,
or
2. optimizing an application to run on different platforms, e.g. desktop
vs. smart phone.
Remarks:
1. Often the variation is also embedded into the file name.
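2. A minimal sketch of the printer-driver example, assuming Java and hypothetical class names: the two variations fulfil the same role behind one shared interface, so the rest of the product does not care which variation is installed.

// Two variations of the same module, one per printer type.
interface PrinterDriver {
    void print(String document);
}

class LaserPrinterDriver implements PrinterDriver {
    public void print(String document) {
        System.out.println("[laser] " + document); // stub for laser-specific logic
    }
}

class InkjetPrinterDriver implements PrinterDriver {
    public void print(String document) {
        System.out.println("[inkjet] " + document); // stub for inkjet-specific logic
    }
}

public class VariationDemo {
    public static void main(String[] args) {
        PrinterDriver driver = new LaserPrinterDriver(); // swap in the other variation freely
        driver.print("hello");
    }
}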
10.3.3 Moral
1. A CASE tool is needed to effectively manage multiple revisions of mul-
tiple variations.
(b) the versions of the compilers/linkers used to assemble the product,
(c) the date/time of assembly, plus
(d) the name of the programmer who created the version.
7. A version-control tool is required to effectively track derivations.
10.4.2 Baselines
1. When multiple programmers are working on fixing faults, a baseline
is needed.
2. A baseline is a set of versions of all the code artifacts in a project (i.e.
what versions are in production right now).
3. A programmer starts by copying the baseline files into a private workspace.
Then he/she can freely change anything without affecting anything else.
4. The programmer freezes the version of the artifact to be changed to
fix the fault. No other programmer can modify a frozen version.
5. After the fault is fixed, the new code artifact is promoted to production,
modifying the baseline.
6. The old, frozen version is kept for future reference, and can never be
changed.
7. This technique extends in the natural way to multiple programmers
and multiple code artifacts.
8. Instructor Remark: In my experience, the strict technique described
here is too slow. Instead developer #2 starts work right away, and
incorporates developer #1’s changes as soon as they are promoted to
production. SQA needs to be kept informed in this situation!
One could argue that this technique is vulnerable to exponential growth of effort as the number of faults in a code artifact increases. The instructor counter-argues that if we achieve separation of concerns in our software products, then the probability of ≫ 2 simultaneous faults in one code artifact is low.
Student Question: What if #1 and #2 actually touch the same code?
Instructor Answer: I recommend using the same technique, being mindful
that extra care will be needed when
1. incorporating #1’s changes into #2’s version, and
2. doing SQA (e.g. what should be the test cases and expected results for
pass 0 and for pass 1?).
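A minimal sketch of the baseline idea in Java (a hypothetical design, not any particular version-control tool): the baseline maps each code artifact to its version history, old versions stay frozen for future reference, and promoting a fix appends a new production version.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Baseline {
    // artifact name -> all versions; the last entry is what is in production
    private final Map<String, List<Integer>> versions = new HashMap<>();

    // The version a programmer copies into a private workspace.
    public int productionVersion(String artifact) {
        List<Integer> history = versions.get(artifact);
        return history.get(history.size() - 1);
    }

    // After the fault is fixed, the new version is promoted to production;
    // older versions remain in the history, frozen, and are never changed.
    public void promote(String artifact, int newVersion) {
        versions.computeIfAbsent(artifact, k -> new ArrayList<>()).add(newVersion);
    }

    public static void main(String[] args) {
        Baseline b = new Baseline();
        b.promote("payroll.java", 1); // initial baseline
        b.promote("payroll.java", 2); // fault fix promoted; version 1 stays frozen
        System.out.println(b.productionVersion("payroll.java")); // prints 2
    }
}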
(a) Automatically re-compile and re-link every night. Obviously this
is expensive.
(b) Use a tool like make to decide more intelligently, based on date and
time stamps of compiled code. This idea has been incorporated
into many different programming environments.
3. Student Question: What is the difference between a build tool (Def-
inition 10.5.1) and a configuration control tool (Definition 10.4.2)?
Answer: The purpose of a build tool is to make certain we have
the correct compiled code artifacts linked in to a specific version of the
S/W product. This can be effective for a small organization, managing
one version of a S/W product at one location. This explains why auto-
recompiling each night is a viable technique.
A configuration control tool is needed to manage multiple revi-
sions of multiple variations. E.g. for a large organization which must
manage multiple configurations running simultaneously across multiple
locations.
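A minimal sketch in Java of the date/time-stamp idea behind a build tool like make (the file names are hypothetical): recompile a source file only when it is newer than its compiled artifact.

import java.io.File;

public class BuildCheck {
    // Decide whether the compiled artifact is out of date.
    static boolean needsRebuild(File source, File compiled) {
        // Rebuild if the artifact is missing or older than the source.
        return !compiled.exists() || source.lastModified() > compiled.lastModified();
    }

    public static void main(String[] args) {
        File source = new File("Payroll.java");   // hypothetical paths
        File compiled = new File("Payroll.class");
        if (needsRebuild(source, compiled)) {
            System.out.println("recompile Payroll.java");
        } else {
            System.out.println("Payroll.class is up to date");
        }
    }
}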
11 Lecture 11 - Testing I - Non-Execution-
Based Testing
Outline
1. Quality Issues
(a) Software Quality Assurance (SQA)
(b) Managerial Independence
2. Non-Execution Based Testing
(a) Reviews
(b) Walkthroughs
(c) Managing Walkthroughs
(d) Inspections
(e) Comparison of Walkthroughs and Inspections
(f) Strengths and Weaknesses of Reviews
(g) Metrics for Inspections
Definition 11.1.4. Quality describes the extent to which the S/W product
satisfies its specification.
(a) the quality (usual English meaning) of the S/W process, and thus
ensure
(b) the quality (S/W product meaning) of the S/W product.
4. Once the developers complete a workflow and check their work, the
SQA team must verify that all artifacts are correct.
(d) takes too much time: a review should last at most two hours.
11.2.2 Walkthroughs
The two steps for a walkthrough:
1. preparation
2. team analysis of the document
4-6 participants (e.g. for an analysis artifact):
1. SQA (chair - as above)
2. manager responsible for requirements (previous workflow)
3. manager responsible for analysis (current workflow)
4. manager responsible for design (next workflow)
5. client representative (maybe less crucial for later workflows)
11.2.4 Inspections
The five steps for an inspection (each with a formal process):
1. overview: the document author gives the overview; the document is distributed to the participants.
2. preparation: participants examine the document individually.
3. inspection: quick document walkthrough; immediately commence fault-finding.
4. rework: the document author corrects all faults noted in the written report from step 3.
5. follow-up: the moderator ensures that every fault identified has been fixed, and that no new faults were introduced in the process of fixing.
Roles for an Inspection (e.g. for a Design Artifact):
1. moderator (from SQA)
2. analyst (i.e. stakeholder, previous workflow)
3. designer (i.e. document author; stakeholder, current workflow)
4. implementer (i.e. stakeholder, next workflow)
5. tester (SQA, a different person than the moderator)
12 Lecture 12 - Testing II - Execution Based
Testing
Outline
1. Execution-Based Testing
2. What Should Be Tested?
(a) Utility
(b) Reliability
(c) Robustness
(d) Performance
(e) Correctness
(a) an avionics system in an aircraft, for which the inputs describing
the current state of the aircraft’s flight cannot be controlled (a
partial solution to this problem is provided by a simulator), and
(b) a system for controlling trains.
Remarks:
1. Despite these problems, Definition 12.2.1 is the best one available.
12.2.1 Utility
Definition 12.2.2. The utility of a software product is the extent to which
the software product meets the user’s needs when operated under conditions
permitted by its specification.
Elements:
1. Is the software product easy to use?
2. Does the software product perform useful functions?
3. Is the software product cost effective?
Remarks:
1. If a software product fails a test of its utility, then testing should pro-
ceed no further!
12.2.2 Reliability
Definition 12.2.3. The reliability of a software product measures the fre-
quency and severity of its failures.
Elements:
1. mean time between failures (recall Definition 11.1.1): long times → more reliable.
2. mean time to repair failures: long times → less reliable.
(a) Also important (often overlooked): time required to fix the
effects of the failure (e.g. correcting corrupted data). Long times
→ less reliable.
12.2.3 Robustness
Elements:
1. range of operating conditions (permissible by the specifications, or not)
(a) A robust product has a wide range of operating conditions, in-
cluding some outside its specification.
2. possibility of unacceptable output given acceptable input
(a) A robust product produces acceptable output, given acceptable
input.
3. acceptability of output given unacceptable input
(a) A robust product produces acceptable output (e.g. a helpful error
message instead of a crash), even given unacceptable input.
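A minimal sketch of the third element, assuming Java: unacceptable input still yields acceptable output - a helpful message rather than a crash.

public class RobustReader {
    public static void main(String[] args) {
        String token = (args.length > 0) ? args[0] : "";
        try {
            int n = Integer.parseInt(token);
            if (n < 0) {
                // Acceptable output for out-of-range input: a clear diagnosis.
                System.out.println("Input must be non-negative; got " + n + ".");
            } else {
                System.out.println("Thanks, you entered " + n + ".");
            }
        } catch (NumberFormatException e) {
            // Acceptable output even for malformed input: no stack trace, no crash.
            System.out.println("\"" + token + "\" is not an integer; please try again.");
        }
    }
}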
12.2.4 Performance
1. It is crucial to verify that a software product meets its constraints with
respect to:
(a) Space constraints which can be critical in miniature applications,
e.g.
i. missile guidance systems as in the text, or
ii. smart phone apps.
(b) Time constraints which can be critical in real time applications,
e.g.
i. measuring core temperature in a nuclear reactor as in the text,
or
ii. controlling signals on a railroad network.
12.2.5 Correctness
Definition 12.2.4. A software product is correct if it satisfies its output
specification, without regard for the computing resources consumed, when op-
erated under permissible (pre-)conditions.
Remarks:
1. This definition is partial correctness. It tacitly assumes that the
program terminates.
Problems with Definition 12.2.4:
1. Specifications can be wrong.
(a) Then a software product can be correct, but not be acceptable.
i. Cute text example: a sort program whose specification omits
the requirement that the sorted list be a permutation of the
original list - clearly not acceptable!
2. (a) A software product can be acceptable, but not be correct.
i. Cute text example: a compiler, faster than its predecessor,
but which prints a spurious error message (which is easily
ignored) in one rare situation. This compiler is acceptable.
However it is not correct since producing the spurious error
message is not part of its specification.
{x ≥ 0}
y = 1;
z = 0;
while (z != x) {
    z = z + 1;
    y = y * z;
}
{y = x!}

Annotated proof of partial correctness:

{x ≥ 0}
{1 = 0!}                     assignment
y = 1;
{y = 0!}                     assignment
z = 0;
{y = z!}                     assignment
while (z != x) {
    {y = z! ∧ z ≠ x}         partial-while
    {y(z + 1) = (z + 1)!}    implied (b)
    z = z + 1;
    {yz = z!}                assignment
    y = y * z;
    {y = z!}                 assignment
}
{y = z! ∧ z = x}             partial-while
{y = x!}                     implied (b)

Proof of termination:

{x ≥ 0}
y = 1;
z = 0;                       At start of loop: x − z = x ≥ 0 ✓
while (z != x) {
    z = z + 1;               x − z decreases by 1 ✓
    y = y * z;               x − z unchanged
}
{y = x!}

The value of x − z will eventually reach 0. The loop then exits and the program terminates. ✓
This completes the proof of total correctness.
(a) S/W Engineers lack the mathematical training to write correct-
ness proofs. Partial Refutation:
i. This may have been true in the past.
ii. However many CS graduates today (including all from uWa-
terloo) do have the required mathematical background.
(b) Correctness proving is too time consuming and hence too expen-
sive. Partial Refutation:
i. Costs can be assessed using a cost-benefit analysis, on a project-
by-project basis.
ii. The benefit is weighted higher the more that correctness mat-
ters, e.g. where human lives depend on program correctness.
(c) Correctness proving is too difficult. Partial Refutation:
i. Some non-trivial S/W products have successfully been proven
correct.
ii. There exists theorem-proving software to save manual work
in some situations.
iii. However proving program correctness in general is an un-
decidable problem, so no theorem-prover can handle every
possible situation.
Morals:
1. Correctness proving is a useful tool, when human lives are at stake, or
when the cost-benefit analysis justifies doing it for other reasons.
2. However correctness proving alone is not enough. Testing is still a
crucial need for a S/W product.
3. Languages like Java and C++ support variations of an assert statement, which permits a programmer to embed assertions directly into the code. A switch then controls whether assertion checking is enabled (slower) or not (faster) at run time. (See the sketch after this list.)
4. Model checking is a new technology that may eventually replace correctness proving. It is described in Chapter 18 of the text, which unfortunately is beyond the scope of CS 430.
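Here is a minimal sketch in Java, re-using the factorial loop from the correctness proof above; the helper method is invented purely to check the loop invariant y = z!. Run with java -ea Factorial to enable assertion checking; without -ea the asserts are skipped at run time.

public class Factorial {
    static long factorial(int x) {
        assert x >= 0 : "precondition violated: x >= 0";
        long y = 1;
        int z = 0;
        while (z != x) {
            z = z + 1;
            y = y * z;
            assert y == slowFactorial(z) : "invariant violated: y == z!";
        }
        // Postcondition: y == x!
        return y;
    }

    // Reference implementation, used only to check the invariant.
    static long slowFactorial(int n) {
        long f = 1;
        for (int i = 2; i <= n; i++) {
            f *= i;
        }
        return f;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // prints 120
    }
}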
ii. Testing’s goal (exposing faults) is destructive.
iii. Programmers feel protective of their own code, hence they
have an incentive not to expose faults in the code.
(b) The programmer may have misunderstood the specification.
i. An SQA professional has a better chance to understand the
specification correctly, and to test accordingly.
2. After the programmer completes and hands off the code artifact, SQA
should perform systematic testing:
Definition 13.2.1. Systematic testing is described by the following
procedure:
(a) Select test cases to exercise all parts of the specification.
(b) For each test case, determine its expected output before execu-
tion starts.
(c) Execute the program on each test case, and record the actual
results.
(d) Compare the actual results to the expected results. Document all
differences.
(e) Correct faults (either in the specification or in the code or possibly
both) which explain each difference, and repeat the execution.
(f ) Archive all test results electronically, for purposes of regression
testing during future projects and post-delivery maintenance.
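A minimal sketch of Definition 13.2.1 in Java; the unit under test and its cases are invented for illustration. Expected outputs are fixed before execution, actual results are recorded and compared, and every result is archived for later regression testing.

import java.util.ArrayList;
import java.util.List;

public class SystematicTestDriver {
    // Hypothetical unit under test: maps an average mark to a standing.
    static String standing(int average) {
        if (average >= 80) return "Excellent";
        if (average >= 60) return "Good";
        return "Probation";
    }

    public static void main(String[] args) {
        int[] inputs      = { 85, 60, 40 };                       // cases covering the spec
        String[] expected = { "Excellent", "Good", "Probation" }; // decided before execution
        List<String> archive = new ArrayList<>();                 // kept for regression testing

        for (int i = 0; i < inputs.length; i++) {
            String actual = standing(inputs[i]);
            String verdict = actual.equals(expected[i]) ? "PASS" : "FAIL";
            archive.add(inputs[i] + "," + expected[i] + "," + actual + "," + verdict);
            System.out.println(archive.get(archive.size() - 1));
        }
    }
}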
(a) Ambiguity about the term desk checking in the text:
i. first mention (description of testing workflow): Here desk check-
ing meant the testing that a programmer does during develop-
ment. This is the meaning with which I was already familiar
from my time in industry.
ii. second mention (description of who should perform execution-based testing):
Here desk checking means the checking of the design artifact
that the programmer does before starting to code.
3. As outlined earlier, the SQA group must have managerial independence
from the development team.
1. Will we have to write correctness proofs like the one in the notes for
this lecture?
Answer: No.
(a) I will include a small example of the Hoare Triple technique on
the next assignment, which can be done “with bare hands” (i.e.
you will not need the machinery that the example uses); a small
instance appears below.
(b) There will be no correctness proving on the Final Exam.
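For orientation, a Hoare triple {P} S {Q} asserts that if precondition P
holds before statement S executes and S terminates, then postcondition Q
holds afterwards. A small bare-hands instance (my own illustration, not
the assignment example):

    {x >= 0}   y := x + 1   {y > 0}

Justification: if x >= 0 holds before the assignment, then afterwards
y = x + 1 >= 1 > 0.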
2. In the OO paradigm, every class and every method within a class is a
module.
(a) The main idea of OO is to keep data, and operations on that data,
together.
(b) We need to be clear about the difference between the program
statements that define the properties (a.k.a. attributes) of a class,
and some instantiation of that class. Only an instantiation of a
class can actually contain data.
Remarks:
1. The aim of C/SD is to apply common sense to make S/W product
designs “make sense”. (E.g. see Figures 7.1 to 7.3 in the text for
designs that do, and do not, make sense.)
2. C/SD done well achieves separation of concerns (Definition 9.4.1).
Remarks:
1. The text defines many levels of cohesion. Do not memorize these!
2. For us, it will be enough to distinguish between high and low cohesion.
high = good; low = bad.
Remarks:
1. The text defines many levels of coupling. Do not memorize these!
2. For us, it will be enough to distinguish between loose and tight cou-
pling. loose = good; tight = bad.
14.4 Cohesion & Coupling Example
Remarks on Assessing Cohesion and Coupling:
1. Suppose that we are given two pairs of modules and it is our job to
assess which pair’s modules have
(a) high versus low cohesion, and
(b) loose versus tight coupling.
2. Because we have dispensed with the detailed levels of cohesion and
coupling from the textbook, both judgments are relative, not absolute.
3. We can decide
(a) which pair’s modules have higher cohesion than the modules of
the other pair, and
(b) which pair’s modules have looser coupling than the modules of the
other pair.
4. In past offerings of CS 430, cohesion and coupling have caused some
confusion. Keep our definitions, plus the above remarks, in mind, and
work out your comparisons carefully.
Cohesion / Coupling Example Refer to the Examples document.
Results:
1. low cohesion, tight coupling (bad)
2. high cohesion, loose coupling (good)
Why Coupling is Important
1. Tight coupling means a higher probability of regression faults.
2. Suppose modules p and q are tightly coupled.
3. Then it is likely that making a change to p requires a change to q
(see the sketch after this list).
4. Making the change to q adds time, and hence cost, to the project (which
would not be required with looser coupling).
5. Not making the change to q likely causes a fault later on.
6. The stronger the coupling with some other module, the more fault-
prone a module is.
7. This in turn makes the module more difficult and costly to maintain.
8. As mentioned above, our goal is high cohesion and loose coupling. The
rest of Ch7 is about refining the techniques to achieve this goal. Ch14
of the text goes into more detail; unfortunately this will be beyond the
scope of CS 430.
9. Also note that separation of concerns (in general terms) means high
cohesion and loose coupling (in OO terms).
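As promised above, a minimal Java sketch of the contrast (all names are
made up). In the tight version, q depends on p’s internal representation,
so changing that representation forces a change to q; in the loose version,
q depends only on p’s interface.

    class TightP {                       // exposes its representation
        int[] data = new int[10];
        int count = 0;
    }
    class TightQ {
        int total(TightP p) {            // breaks if TightP's representation changes
            int sum = 0;
            for (int i = 0; i < p.count; i++) sum += p.data[i];
            return sum;
        }
    }

    class LooseP {                       // hides its representation
        private int[] data = new int[10];
        private int count = 0;
        int total() {
            int sum = 0;
            for (int i = 0; i < count; i++) sum += data[i];
            return sum;
        }
    }
    class LooseQ {
        int total(LooseP p) { return p.total(); }  // survives representation changes
    }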
15 Lecture 15 - The OO Paradigm - Encap-
sulation and Abstraction
Outline
1. Encapsulation (§7.4)
(a) Encapsulation and Development (§7.4.1)
(b) Encapsulation and Maintenance (§7.4.2)
Remarks:
1. Why we adopt Definition # 2: In many OO languages, hiding of com-
ponents is not automatic or can be overridden; thus, information
hiding (Definition 16.2.1) is defined as a separate notion.
2. Encapsulation plus information hiding (Definition 16.2.1) is used to
hide the values of a structured data module, preventing unauthorized
parties’ direct access to them.
3. Publicly accessible methods are provided (so-called getters and set-
ters) to access the values; other client modules call these methods to
retrieve/modify the values within the module.
4. Hiding the internals of the module protects its integrity by preventing
users from setting the internal data of the module into an invalid /
inconsistent state (see the sketch after these remarks).
5. A benefit of encapsulation is that it can reduce system complexity, and
thus increase reliability, by allowing the developer to limit the inter-
dependencies between S/W components (i.e. this provides a technique
for achieving separation of concerns).
6. The features of encapsulation are supported by using classes (Defini-
tion 16.3.1) in OO programming languages.
7. Encapsulation is not unique to OO programming. Implementations
of abstract data types (Definition 16.1.1) offer a similar form of
encapsulation.
8. See the Example (text pp 199-201) of refining a S/W product from an
initial design having low cohesion into a better design having encapsu-
lation.
9. In the first solution to the Cohesion/Coupling example (last lecture), we
could have achieved high cohesion and loose coupling by simply copying
all the needed code into both modules. But this would indicate a failure
to abstract effectively (Definition 15.1.2). We would have duplicated
code in the two modules.
10. Moral: Doing OO effectively requires doing a good job on all of its
ingredients.
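As promised in Remark 4, a minimal Java sketch (the class and its invariant
are made up): the balance field is hidden, and the setter refuses to put
the object into an invalid state.

    class Account {
        private int balance = 0;                 // hidden internal data
        int getBalance() { return balance; }     // getter
        void setBalance(int newBalance) {        // setter guards the invariant
            if (newBalance < 0)
                throw new IllegalArgumentException("balance must be non-negative");
            balance = newBalance;
        }
    }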
2. As a recommendation to the programmer, the abstraction principle
reads
Each significant piece of functionality in a program should be
implemented in just one place in the source code. Where
similar functions are carried out by distinct pieces of code,
combine them into one, abstracting out the varying parts.
In short, “Don’t repeat yourself.” (A code sketch follows this list.)
3. Effective abstraction guides us to good choices of what to encapsulate
when we design and develop our S/W.
4. Abstraction and encapsulation are different, but go hand-in-hand in
OO design and development.
5. Abstraction permits a designer to temporarily ignore the details of the
levels above and below the level currently being worked on, both in
terms of data and procedures. An example of a data abstraction
(Definition 15.1.3) is:
(a) A database designer focuses on designing a table, temporarily ig-
noring the details of
i. the whole database (the level above), and
ii. the other tables having foreign key relationships to the current
table (the level below).
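As noted in item 2, a minimal Java sketch of the abstraction principle (all
names are made up): two near-duplicate loops are combined into one method,
with the varying part abstracted into a parameter.

    import java.util.List;
    import java.util.function.ToDoubleFunction;

    class Totals {
        // Replaces duplicated sumPrices(...) and sumWeights(...) loops:
        static <T> double total(List<T> items, ToDoubleFunction<T> part) {
            double sum = 0;
            for (T item : items) sum += part.applyAsDouble(item);
            return sum;
        }
        // Usage: total(orders, o -> o.getPrice()) or total(orders, o -> o.getWeight())
    }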
16 Lecture 16 - The OO Paradigm - Abstract
Data Types, Information Hiding and Ob-
jects
Outline
1. Abstract Data Types (§7.5)
2. Information Hiding (§7.6)
3. Objects (§7.7)
Examples:
1. The integers are an ADT, defined as the values {. . . , −2, −1, 0, 1, 2, . . .},
and by the operations of +, −, ∗, and sometimes /, etc., which behave
according to the familiar rules of arithmetic (e.g. associativity, com-
mutativity, distributive laws, no dividing by 0, etc).
Typically integers are represented in a data structure as binary num-
bers, but there are many representations.
The user is insulated from the concrete choice of representation, and
can simply use the data objects and operations according to the familiar
rules.
2. A stack (i.e. a last-in, first-out data structure).
Remarks:
1. An abstract data type (ADT) need not be an arithmetic object it-
self; however, each of its operations must be defined by some algorithm.
2. In CS, an abstract data type (ADT) is a mathematical model,
where a data type is defined by its behaviour (“what it does”, not
“how it does it”) from the point of view of a user (not an implementer),
specifically:
(a) possible values,
(b) possible operations on data of this type, and
(c) the behaviour of these operations.
3. This contrasts with data structures, which are concrete representa-
tions of data, from the point of view of an implementer, not a user
(see the sketch after these remarks).
4. Using abstract data types supports abstraction of both kinds, data and
procedural.
5. Hence abstract data types are desirable from the viewpoints of both
development and maintenance.
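A minimal Java sketch of the stack example (all names are made up): the
interface specifies the ADT, i.e. behaviour only, while the class is one
concrete data-structure representation that the user never sees.

    interface IntStack {                       // the ADT, from the user's viewpoint
        void push(int x);
        int pop();                             // removes and returns the newest item
        boolean isEmpty();
    }

    class ArrayIntStack implements IntStack {  // one concrete representation
        private int[] items = new int[8];
        private int size = 0;
        public void push(int x) {
            if (size == items.length)
                items = java.util.Arrays.copyOf(items, 2 * size);
            items[size++] = x;
        }
        public int pop() { return items[--size]; }
        public boolean isEmpty() { return size == 0; }
    }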
(c) re-use.
3. Reminder: Use our definitions from the Lecture Notes instead of the
definitions from the text, where there are any conflicts.
Examples (Remark: the text example, in which Person is the parent class
of the Parent class, is needlessly confusing!)
1. Start with a Person class, having
(a) Properties (/ Attributes)
i. LastName,
ii. FirstName,
iii. DateOfBirth
and
(b) Methods
i. createFullName,
ii. createEmail and
iii. computeAge.
2. Then define a Student class, having all the Properties/Methods of
Person, plus
(a) Properties
i. StudentNumber
ii. CumulativeAverage (in reality we would compute this from
individual grades rather than storing it; we make it a property
here for simplicity).
3. Then define a Professor class, having all the Properties/Methods
of Person, plus
(a) Properties
i. EmployeeNumber
ii. NSERCAccountNumber.
4. Then each of Student, Professor
(a) inherits from Person,
(b) isA Person, and
(c) is a specialization of Person.
5. Here is a diagram of the relationships between these classes.
    Person
        LastName : string
        FirstName : string
        DateOfBirth : date
        createFullName()
        createEmail()
        computeAge()

    Student (inherits Person)
        StudentNumber : string
        CumulativeAverage : double

    Professor (inherits Person)
        EmployeeNumber : string
        NSERCAccountNumber : integer

[A separate figure shows a PersonalComputer class.]
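A minimal Java sketch of this hierarchy (the method bodies and the email
domain are made up for illustration):

    class Person {
        String lastName, firstName;
        java.time.LocalDate dateOfBirth;
        String createFullName() { return firstName + " " + lastName; }
        String createEmail() { return firstName + "." + lastName + "@example.com"; }
        int computeAge() {
            return java.time.Period.between(dateOfBirth,
                                            java.time.LocalDate.now()).getYears();
        }
    }
    class Student extends Person {        // Student isA Person
        String studentNumber;
        double cumulativeAverage;
    }
    class Professor extends Person {      // Professor isA Person
        String employeeNumber;
        int nsercAccountNumber;
    }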
Example (Association):

    Radiologist --consults--> Lawyer
Definition 17.1.1. At run time, the system decides which Open method
to invoke. This is called dynamic binding.
6. [Figure: a hierarchy of file classes, each with its own version of the
Open method.]
Definition 17.1.2. The Open method is called polymorphic, because
it applies to different sub-classes differently.
7. Problems with Dynamic Binding/Polymorphism
(a) We cannot determine at compile time which version of a polymor-
phic method will be called at run time. This can make failures
hard to diagnose.
(b) Similarly a S/W product that makes heavy use of polymorphism
can be hard to understand and hence hard to maintain/enhance.
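A minimal Java sketch of Definitions 17.1.1 and 17.1.2, using a made-up
file hierarchy: which version of open() runs is selected at run time from
the object’s actual sub-class, and the compiler cannot predict it here.

    abstract class MyFile { abstract void open(); }   // the polymorphic method
    class DiskFile extends MyFile {
        void open() { System.out.println("opening a disk file"); }
    }
    class TapeFile extends MyFile {
        void open() { System.out.println("opening a tape file"); }
    }
    public class OpenDemo {
        public static void main(String[] args) {
            MyFile f = Math.random() < 0.5 ? new DiskFile() : new TapeFile();
            f.open();   // dynamic binding decides which open() executes
        }
    }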
The initial project typically costs more with the OO paradigm than the
same project with the Classical paradigm. This is particularly pronounced
if the project has a large GUI component. But after the initial project,
1. the re-use of classes in subsequent projects usually pays back the initial
investment (again, this is more pronounced with a large GUI compo-
nent) and
2. post-delivery maintenance costs are reduced.
The fragile base class problem arises when a base class is changed:
1. In the best case, all descendants need to be recompiled after the base
class is changed.
2. In the worst case, all descendants have to be re-coded then re-compiled.
This is bad!
To mitigate this, meticulously design all classes, especially parent classes in
an inheritance tree.
17.2.7 OO Will Be Replaced In The Future
1. As mentioned earlier, the OO paradigm is certain to be superseded by
some superior methodology in the future.
2. Aspect Oriented Programming (AOP) (covered in §18.1 in the
text) is one possible candidate to replace the OO paradigm.
18 Lecture 18 - Reusability
Outline
1. Re-Use Concepts
2. Impediments to Re-Use
3. Types of Re-Use
(a) Accidental (Opportunistic)
(b) Deliberate (Systematic)
4. Objects and Re-Use
5. Re-Use During Design and Implementation
(a) Library (toolkit)
(b) Application Framework
(c) Software Architecture
(d) Component-Based Software Engineering
(d) If we view the re-used module as a black box, then we may struggle
to confirm that our S/W product will actually match the spec; if
a failure occurs in the re-used module after deployment, then we
may be slow to diagnose the cause.
(e) Compatibility Issues:
i. S/W versions, or
ii. the provided interface (the Adapter design pattern can some-
times solve this problem).
(f) Writing a module to handle multiple situations can make the mod-
ule less efficient than if a separate module were written for each
individual situation (though separate modules would not be effective
abstraction).
(g) If performance of the re-used module is not optimized, then all
re-users will suffer a performance hit.
(h) Undetected faults get propagated.
(i) Documentation is often poor in practice.
3. Other Aspects
(a) On average, 15% of any S/W product is written to serve a unique
purpose.
(b) In theory, the remaining 85% could be standardized and re-used.
(c) In practice, only 40% reuse is achieved.
4. Re-use refers not only to code, but also to
(a) documents (e.g. design, manuals, SPMP, etc.)
(b) duration/cost estimates
(c) test data
(d) architecture
(e) etc.
5. Re-use can be expensive. It is costly to:
(a) develop reusable modules, and
(b) search the libraries and re-use the right module.
6. Legal issues with contract developers (possible intellectual property
problems)
7. Commercial Off-The-Shelf (COTS): Developers do not provide the
source code, so there is limited or no ability to modify and re-use.
8. Etc.
1. What is Re-Used: There is a library, i.e. a set of related re-usable
operations, e.g.
(a) A Matrix library contains many operations - +, *, determinant,
invert, etc.
(b) GUI library contains different GUI classes - window, menu, radio
button, etc.
The re-user calls modules from the library.
2. What is New: The re-user must
(a) supply the control logic of the S/W product as a whole, and
(b) call library routines at the right moment using the control logic.
See Figure 8.2a in the text.
i. In my experience, Library re-use is much more common than
Application Framework re-use.
Reason: It is rare to find two different S/W products with
identical control logic.
4. Examples of Application Framework Re-Use:
(a) games
(b) Automated Teller Machines (ATMs)
i. Suppose you are managing a team to develop S/W for ATMs,
deployed by several banks.
ii. The control logic for an ATM deposit will be the same, re-
gardless of the bank (note, we are over-simplifying a tiny bit
here).
iii. However the details of how to carry out a deposit will depend
completely on the choice of bank.
iv. A side comment here is that this would be an example of
deliberate (systematic) re-use. We would design and build the
control logic with the intent to re-use it at all of the banks.
19 Lecture 19 - Design Patterns
Outline
1. Design Patterns
(a) Introduction
(b) Adapter Design Pattern (§8.6.2)
(c) Bridge Design Pattern (§8.6.3)
(d) Iterator Design Pattern (§8.6.4)
(e) Abstract Factory Design Pattern (§8.6.5)
(f) Categories of Design Patterns (§8.7)
(g) Strengths/Weaknesses of Design Patterns (§8.8)
2. Re-Use During Post-Delivery Maintenance
3. The old computation of premiums used this class:

    Applicant
        computePremium(age,gender)

4. The new computation of premiums will use this class:

    Neutral Applicant
        computeNeutralPremium(age)
5. However there has not been enough time to change the entire system.
The situation is displayed in the following figure (Fig 8.4 in the text).
    Client
      |
      v
    Insurance
        determinePremium()
        {
            applicant.computePremium(age,gender);
        }

    Neutral Applicant
        computeNeutralPremium(age)
    Client
      |
      v
    Insurance
        determinePremium()
        {
            wrapper.computePremium(age,gender);
        }
      |
      v
    Wrapper
        computePremium(age,gender)
        {
            neutralApplicant.computeNeutralPremium(age);
        }
      |
      v
    Neutral Applicant
        computeNeutralPremium(age)
    Client
      |
      v
    Abstract Target
        abstract request()

    Adapter (subclass of Abstract Target)
        request()
        {
            adaptee.specificRequest();
        }
      |
      v
    Adaptee
        specificRequest()
3. The abstract request method from Abstract Target is implemented
in the (concrete) subclass Adapter, to invoke the specificRequest
method in Adaptee.
4. This solves the interfacing problems from earlier. This is the raison
d’être for the Adapter design pattern.
5. But the pattern is more powerful than that. It provides a way for an
object to permit access to its internal implementation in such a way
that clients are not coupled to the structure of that internal implemen-
tation. In other words, it provides the benefits of information hiding
(Definition 16.2.1) without having to actually hide the implementation
details.
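A minimal Java sketch of the wrapper figure above (the premium formula is
made up): existing clients keep calling computePremium(age, gender), and
the wrapper forwards each call to the new gender-neutral method.

    class NeutralApplicant {
        double computeNeutralPremium(int age) { return 100.0 + age; } // made-up formula
    }
    class Wrapper {
        private final NeutralApplicant neutralApplicant = new NeutralApplicant();
        double computePremium(int age, char gender) {
            return neutralApplicant.computeNeutralPremium(age);       // gender is ignored
        }
    }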
    Client
      |
      v
    Abstract Conceptualization  /  Abstract Implementation
        operation()                    abstract operationImplementation()
        {
            impl.operationImplementation();
        }

    Refined Conceptualization      Concrete Implementation
                                       operationImplementation()
Notation: “/” stands for “references” (the impl field in Abstract
Conceptualization references Abstract Implementation).
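A minimal Java sketch of this diagram (names follow the figure; bodies are
made up): the conceptualization side holds the impl reference, so the two
inheritance hierarchies can vary independently.

    interface Implementation {                        // Abstract Implementation
        void operationImplementation();
    }
    class ConcreteImplementation implements Implementation {
        public void operationImplementation() { System.out.println("concrete work"); }
    }
    abstract class Conceptualization {                // Abstract Conceptualization
        protected final Implementation impl;          // the "references" arrow
        Conceptualization(Implementation impl) { this.impl = impl; }
        void operation() { impl.operationImplementation(); }
    }
    class RefinedConceptualization extends Conceptualization {
        RefinedConceptualization(Implementation impl) { super(impl); }
    }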
    Client
      |
      v
    Abstract Aggregate
        abstract createIterator() : Iterator

    Abstract Iterator
        abstract first()
        abstract next()
        abstract isDone() : Boolean
        abstract currentItem() : Item

    Concrete Aggregate
        createIterator()
        {
            return new concreteIterator(this);
        }

    Concrete Iterator
        first()
        next()
        isDone() : Boolean
        currentItem() : Item
7. Multiple different traversal methods may be present, if there are mul-
tiple types of list elements. Because of the common interface provided
by Abstract Iterator, we do not need to know up front which types
are possible.
8. Last lecture we pointed out that some up-front investment is required
to position for future re-use.
9. We see this phenomenon in the example too: it takes more time to
create the abstract classes (interfaces), then create the concrete classes,
than it would take to write the concrete classes alone. But omitting
the abstract classes forces the client to refer to all the concrete classes
directly, frustrating our efforts to achieve re-use.
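A minimal Java sketch of the diagram (interface names follow the figure,
adjusted slightly to avoid clashing with java.util.Iterator): the client
walks the aggregate through the abstract interfaces only.

    interface AbstractIterator<T> {
        void first();
        void next();
        boolean isDone();
        T currentItem();
    }
    interface AbstractAggregate<T> {
        AbstractIterator<T> createIterator();
    }
    class ConcreteAggregate<T> implements AbstractAggregate<T> {
        final java.util.List<T> items = new java.util.ArrayList<>();
        public AbstractIterator<T> createIterator() {
            return new AbstractIterator<T>() {        // the concrete iterator
                private int pos = 0;
                public void first() { pos = 0; }
                public void next() { pos++; }
                public boolean isDone() { return pos >= items.size(); }
                public T currentItem() { return items.get(pos); }
            };
        }
    }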
    Abstract Widget Factory
        abstract createProductA()
        abstract createProductB()

    Abstract Product A

    Abstract Product B
4. This is Fig 8.10 in the text, our instance of the Abstract Factory
Design Pattern, in the case of a toolkit for a graphical user interface.
    Abstract Widget Factory
        abstract createMenu()
        abstract createWindow()

    Abstract Menu

    Abstract Window
9. The critical aspect of this pattern is that the three interfaces between
Client and the widget factory (namely Abstract Widget Factory,
Abstract Menu and Abstract Window) are all abstract classes. None
of these classes is specific to any one operating system. Consequently,
the design of Fig 8.10 has uncoupled the application program from the
operating system.
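A minimal Java sketch of Fig 8.10 (the concrete class names are made up):
the client is written against the abstract types only, and choosing a
concrete factory selects the operating system.

    interface Menu {}
    interface Window {}
    interface WidgetFactory {                         // Abstract Widget Factory
        Menu createMenu();
        Window createWindow();
    }
    class LinuxMenu implements Menu {}
    class LinuxWindow implements Window {}
    class LinuxWidgetFactory implements WidgetFactory {   // one factory per OS
        public Menu createMenu() { return new LinuxMenu(); }
        public Window createWindow() { return new LinuxWindow(); }
    }
    class UIClient {
        static void buildUI(WidgetFactory factory) {      // uncoupled from the OS
            Menu m = factory.createMenu();
            Window w = factory.createWindow();
        }
    }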
20 Lecture 20 - Portability
Outline
1. Portability Concepts
2. Hardware Incompatibilities
3. Operating System Incompatibilities
4. Numerical System Incompatibilities
5. Compiler Incompatibilities
6. Is Portability Really Necessary?
7. Techniques for Achieving Portability
(a) Portable Operating System Software
(b) Portable Application Software
(c) Portable Data
(d) Object-Oriented Technologies (OOT)
2. Similar problems on mainframe-scale systems.
3. JCL (Job Control Language, for specifying all the parameters needed
to run mainframe batch jobs)
(a) Each OS’s JCL is slightly different.
4. Virtual Memory (i.e. augmenting physical memory by allocating some
disk space as virtual memory)
(a) If S/W is developed on an O/S that supports virtual memory, then
there is no practical limit on the amount of memory available.
(b) But if that same S/W is then ported to an O/S that does not
support virtual memory, then there is a hard limit on the amount
of memory available.
(a) platform-independent (portable):
i. 9000 LOC written in C
(b) platform-dependent (must be re-written for each platform):
i. 1000 LOC written in Assembly
ii. 1000 LOC of C - device drivers
2. Lessons of UNIX
(a) We should emulate the techniques used to design/build UNIX as
much as possible.
(b) When we have a choice of O/S, we should choose UNIX.
21.1 Planning and the Software Process
1. When to estimate
(a) after requirements workflow - only an informal understanding of
what is needed
i. At this point, our ranges of estimates must be broad.
ii. Figures 9.1 & 9.2 explain somewhat why this is true.
iii. This is a summarized Figure 9.1 from the text. It displays a
model for estimating the relative range of a cost estimate for
each workflow.
[Plot: relative range of cost estimate (vertical axis) versus phase
(horizontal axis).]
iv. This is a summarized Figure 9.2 from the text. It displays the
range of cost estimates, in millions of dollars, for a software
product that costs $1 million to build.
[Plot: cost estimate in millions of dollars versus phase, with upper and
lower bound curves.]
v. We provide a preliminary estimate here, so that the client can
decide whether to proceed to analysis or not.
(b) after analysis workflow - a more detailed understanding of what
is needed
i. For the rest of Ch 9, we assume that we are estimating at this
point.
Remarks:
1. In practice, you may find yourself getting pressured by the client to re-
duce your preliminary estimates, to ensure that the project goes ahead.
Common sense says that a client cannot dictate both the requirements
and the costs to satisfy them. If the client thinks that the preliminary
estimates are too high, then they can:
(a) reduce the scope of the requirements, to reduce the estimated cost,
or
(b) increase the total budget.
Giving in to pressure to reduce estimates at this point ALWAYS leads
to problems later on.
21.2 Estimating Duration and Cost
1. Estimating Cost: All Costs of Development:
(a) internal, i.e. the cost of our developers, e.g.
i. salaries of project team members
ii. costs of H/W and S/W
iii. overhead costs
(b) external, i.e. the price to the client, e.g.
i. usually internal costs plus some mark-up
2. Estimating Duration: The client will need to know when to expect
the S/W product to be delivered.
3. Obstacles to Estimating Accurately:
(a) human
i. variations in quality
ii. turnover
iii. varying levels of experience
(a) Compute the unadjusted function points (UFP) for a software
product with the following components:

Number of Components at Each Level of Complexity

Component      Simple   Average   Complex
Input item       12        8         0
Output item      10        7         1
Inquiry           8        4         1
Master file       1        1         1
Interface         6        2         0
Solution: Using the standard complexity weights for unadjusted
function points (Input item: 3/4/6, Output item: 4/5/7, Inquiry:
3/4/6, Master file: 7/10/15, Interface: 5/7/10, for simple/average/
complex respectively),

UFP = (12)(3) + (8)(4) + (0)(6) + (10)(4) + (7)(5) + (1)(7)
    + (8)(3) + (4)(4) + (1)(6) + (1)(7) + (1)(10) + (1)(15)
    + (6)(5) + (2)(7) + (0)(10)
    = 68 + 82 + 46 + 32 + 44
    = 272.
(b) Compute the technical complexity factor (TCF), given the fol-
lowing degrees of influence:

Factor   Name                            Count to Use
1 Data communication 2
2 Distributed data processing 0
3 Performance criteria 3
4 Heavily utilized hardware 1
5 High transaction rates 3
6 Online data entry 5
7 End-user efficiency 5
8 Online updating 1
9 Complex computations 3
10 Reusability 3
11 Ease of installation 0
12 Ease of operation 5
13 Portability 3
14 Maintainability 5
Solution: Summing the counts in the above table gives us the
total degree of influence:
DI = 2 + 0 + 3 + 1 + 3 + 5 + 5 + 1 + 3 + 3 + 0 + 5 + 3 + 5
= 39,
so that the corresponding technical complexity factor (TCF)
is
TCF = 0.65 + (0.01)DI
    = 0.65 + (0.01)(39)
    = 1.04.
(c) Use the results of parts a) and b) to compute the function points
(FP) for the given software product.
Solution:
FP = (UFP)(TCF)
   = (272)(1.04)
   = 282.88,
so that we measure this software product at 283 FP. (Only whole
numbers make sense here; we always round up to be conservative.)
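To check the arithmetic, here is a small Java sketch of the whole
computation; the complexity weights are the standard values assumed in
the solution to part (a).

    public class FunctionPointsDemo {
        public static void main(String[] args) {
            // Rows: input item, output item, inquiry, master file, interface.
            int[][] counts  = { {12, 8, 0}, {10, 7, 1}, {8, 4, 1}, {1, 1, 1}, {6, 2, 0} };
            int[][] weights = { {3, 4, 6}, {4, 5, 7}, {3, 4, 6}, {7, 10, 15}, {5, 7, 10} };
            int ufp = 0;
            for (int i = 0; i < counts.length; i++)
                for (int j = 0; j < 3; j++)
                    ufp += counts[i][j] * weights[i][j];    // UFP = 272
            int di = 2+0+3+1+3+5+5+1+3+3+0+5+3+5;           // total degree of influence = 39
            double tcf = 0.65 + 0.01 * di;                  // TCF = 1.04
            System.out.printf("UFP=%d TCF=%.2f FP=%.2f%n", ufp, tcf, ufp * tcf); // FP=282.88
        }
    }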
Remarks About Function Points
1. Observe that nowhere in the computation of UFP or FP did we ask
(a) in what language is this software product written? or
(b) how many lines of code does this software product have?
FP are designed to be independent of these factors. FP compare sizes
of different software products, regardless of their implementations.
Remarks:
1. There is no perfect technique for estimating the cost/duration of a
S/W project.
2. Some factors to consider:
(a) skill levels of project personnel (including familiarity with the S/W
product)
(b) complexity of project
(c) project deadlines
(d) target hardware
(e) availability of CASE tools
3. Techniques of Estimation
(a) Expert Judgment by Analogy
i. experts use the history of similar past projects.
(b) Bottom-Up Approach
i. analogous to divide and conquer (Definition 9.3.1), and
ii. most common in my SLF experience.
(c) Algorithmic Cost Estimation Models (e.g. COCOMO)
i. Compute the size of the S/W product, using function points,
or some other method.
ii. Use the size of the S/W product from 3(c)i to estimate cost
& duration of the project to build it.
                                        Rating
Cost Drivers                            Very Low   Low    Nominal   High   Very High   Extra High
Product Attributes
 -Required software reliability           0.75     0.88     1.00    1.15      1.40
 -Database size                                    0.94     1.00    1.08      1.16
 -Product complexity                      0.70     0.85     1.00    1.15      1.30         1.65
Computer Attributes
 -Execution time constraint                                 1.00    1.11      1.30         1.66
 -Main storage constraint                                   1.00    1.06      1.21         1.56
 -Virtual machine volatility                       0.87     1.00    1.15      1.30
 -Computer turnaround time                         0.87     1.00    1.07      1.15
Personnel Attributes
 -Analyst capabilities                    1.46     1.19     1.00    0.86      0.71
 -Applications experience                 1.29     1.13     1.00    0.91      0.82
 -Programmer capability                   1.42     1.17     1.00    0.86      0.70
 -Virtual machine experience              1.21     1.10     1.00    0.90
 -Programming language experience         1.14     1.07     1.00    0.95
Project Attributes
 -Use of modern programming practices     1.24     1.10     1.00    0.91      0.82
 -Use of software tools                   1.24     1.10     1.00    0.91      0.83
 -Required development schedule           1.23     1.08     1.00    1.04      1.10
Cost Drivers                            Rating to Use
Product Attributes
-Required software reliability Nominal
-Database size Low
-Product complexity Low
Computer Attributes
-Execution time constraint High
-Main storage constraint Nominal
-Virtual machine volatility Low
-Computer turnaround time Nominal
Personnel Attributes
-Analyst capabilities Very High
-Applications experience Very High
-Programmer capability High
-Virtual machine experience Low
-Programming language experience High
Project Attributes
-Use of modern programming practices Nominal
-Use of software tools Nominal
-Required development schedule High
Solution: The effort multipliers for the given drivers are:
Cost Drivers                            Rating to Use   Multiplier to Use
Product Attributes
-Required software reliability Nominal 1.00
-Database size Low 0.94
-Product complexity Low 0.85
Computer Attributes
-Execution time constraint High 1.11
-Main storage constraint Nominal 1.00
-Virtual machine volatility Low 0.87
-Computer turnaround time Nominal 1.00
Personnel Attributes
-Analyst capabilities Very High 0.71
-Applications experience Very High 0.82
-Programmer capability High 0.86
-Virtual machine experience Low 1.10
-Programming language experience High 0.95
Project Attributes
-Use of modern programming practices Nominal 1.00
-Use of software tools Nominal 1.00
-Required development schedule High 1.04
Using the given effort multipliers gives

Estimated effort = (1.00)(0.94)(0.85)
                 × (1.11)(1.00)(0.87)(1.00)
                 × (0.71)(0.82)(0.86)(1.10)(0.95)
                 × (1.00)(1.00)(1.04) × 46
                 ≈ 19.31377308,

so the estimated effort is approximately 19.3 person-months (46 is the
nominal effort from the earlier step).
22.1.3 COCOMO II
1. COCOMO was introduced in 1981 (before OO was widely accepted;
most systems were mainframe-based; classical paradigm was prevalent),
and it became less reliable as time went on.
2. COCOMO II was a major revision to address these weaknesses.
(a) COCOMO is all based on LOC (equivalently KDSI)
(b) 3 applications of COCOMO II:
i. application composition model
ii. early design model
iii. post architecture model
(c) Where COCOMO outputs a single estimate, COCOMO II outputs
a range of estimates for each model.
(d) When I have taught CS 430 in the past, I have made a note to
myself to present COCOMO II instead of Intermediate COCOMO,
because we make the case throughout the course that we should
adopt the OO paradigm.
(e) However I found that doing this was not practical. I have posted
a .pdf detailing COCOMO II on LEARN. Please peruse it at your
leisure.
(f) You may also find the following web pages about COCOMO II
interesting:
i. Overview: http://sunset.usc.edu/csse/research/cocomoii/
cocomo_main.html
ii. Calculator: http://csse.usc.edu/tools/COCOMOII.php
3. We don’t have time to go into the details of COCOMO II in CS 430.
See the text for references for additional reading if interested.
2. SPMP Framework
3. IEEE SPMP
4. Planning Testing
5. Planning OO Projects
6. Training Requirements
7. Documentation Standards
8. CASE Tools for Planning and Estimating
9. Testing the SPMP
[Plot: resource consumption versus time (a Rayleigh distribution).]

While this is cute, it will not appear on the final exam. In my
experience, I have never seen the Rayleigh distribution used in re-
ality.
(c) money to pay for it all
i. Detail the money to be spent, and when it will be spent.
1. Overview.
(a) Project summary.
i. Purpose, scope and objectives. A brief description is
given of the purpose and scope of the software product to be
delivered, as well as project objectives. Business needs are
included in this subsection.
ii. Assumptions and constraints. Any assumptions under-
lying the project are stated here, together with constraints,
such as the delivery date, budget, resources, and artifacts to
be reused.
iii. Project deliverables. All the items to be delivered to the
client are listed here, together with the delivery dates.
iv. Schedule and budget summary. The overall schedule is
presented here, together with the overall budget.
(b) Evolution of the project management plan. No plan can
be cast in concrete. The project management plan, like any other
plan, requires continual updating in the light of experience and
change within both the client organization and the software de-
velopment organization. In this section, the formal procedures
and mechanisms for changing the plan are described, including
the mechanism for placing the project management plan itself un-
der configuration control.
2. Reference materials. All documents referenced in the project man-
agement plan are listed here.
3. Definitions and acronyms. This information ensures that the project
management plan will be understood the same way by everyone.
4. Project organization.
(a) External interfaces. No project is constructed in a vacuum.
The project members have to interact with the client organiza-
tion and other members of their own organization. In addition,
subcontractors may be involved in a large project. Administrative
and managerial boundaries between the project and these other
entities must be laid down.
(b) Internal structure. In this section, the structure of the de-
velopment organization itself is described. For example, many
software development organizations are divided into two types of
groups: development groups that work on a single project and
support groups that provide support functions, such as config-
uration management and quality assurance, on an organization-
wide basis. Administrative and managerial boundaries between
the project group and the support group also must be defined
clearly.
(c) Roles and responsibilities. For each project function, such as
quality assurance, and for each activity, such as product testing,
the individual responsible must be identified.
5. Managerial process plans.
(a) Start-up plan.
i. Estimation plan. The techniques used to estimate project
duration and cost are listed here, as well as the way these
estimates are tracked and, if necessary, modified while the
project is in progress.
ii. Staffing plan. The numbers and types of personnel required
are listed, together with the durations for which they are
needed.
iii. Resource acquisition plan. The way of acquiring the nec-
essary resources, including hardware, software, service con-
tracts, and administrative services, is given here.
iv. Project staff training plan. All training needed for suc-
cessful completion of the project is listed in this subsection.
(b) Work plan.
i. Work activities. In this subsection, the work activities are
specified, down to the task level if appropriate.
ii. Schedule allocation. In general, the work packages are in-
terdependent and further dependent on external events. For
example, the implementation workflow follows the design work-
flow and precedes product testing. In this subsection, the
relevant dependencies are specified.
iii. Resource allocation. The various resources previously listed
are allocated to the appropriate project functions, activities,
and tasks.
iv. Budget allocation. In this subsection, the overall budget is
broken down at the project function, activity, and task levels.
(c) Control plan.
i. Requirements control plan. As described in Part B of the
text, while a software product is being developed, the require-
ments frequently change. The mechanisms used to monitor
and control the changes to the requirements are given in this
section.
ii. Schedule control plan. In this subsection, mechanisms for
measuring progress are listed, together with a description of
the actions to be taken if actual progress lags behind planned
progress.
iii. Budget control plan. It is important that spending should
not exceed the budgeted amount. Control mechanisms for
monitoring when actual cost exceeds budgeted cost, as well
as the actions to be taken should this happen, are described
in this subsection.
iv. Quality control plan. The ways in which quality is mea-
sured and controlled are described in this subsection.
v. Reporting plan. To monitor the requirements, schedule,
budget, and quality, reporting mechanisms need to be in place.
These mechanisms are described in this subsection.
vi. Metrics collection plan. As explained in text §5.5, it is not
possible to manage the development process without measur-
ing relevant metrics. The metrics to be collected are listed in
this subsection.
(d) Risk management plan. Risks have to be identified, priori-
tized, mitigated, and tracked. All aspects of risk management are
described in this section.
(e) Project close-out plan. The actions to be taken once the
project is completed, including reassignment of staff and archiving
of artifacts, are presented here.
6. Technical process plans.
(a) Process model. In this section, a detailed description is given
of the life-cycle model to be used.
(b) Methods, tools and techniques. The development method-
ologies and programming languages to be used are described here.
(c) Infrastructure plan. Technical aspects of hardware and soft-
ware are described in detail in this section. Items that should
be covered include the computing systems (hardware, operating
systems, network, and software) to be used for developing the soft-
ware product, as well as the target computing systems on which
the software product will be run and CASE tools to be employed.
(d) Product acceptance plan. To ensure that the completed soft-
ware product passes its acceptance test, acceptance criteria must
be drawn up, the client must agree to the criteria in writing, and
the developers must then ensure that these criteria are indeed met.
The way that these three stages of the acceptance process will be
carried out is described in this section.
7. Supporting process plans.
(a) Configuration management plan. In this section, a detailed
description is given of the means by which all artifacts are put
under configuration management.
(b) Testing plan. Testing, like all other aspects of software develop-
ment, needs careful planning.
(c) Documentation plan. A description of documentation of all
kinds, whether or not to be delivered to the client at the end of
the project, is included in this section.
(d) Quality assurance plan. All aspects of quality assurance, in-
cluding testing, standards, and reviews, are encompassed by this
section.
(e) Reviews and audits plan. Details as to how reviews are con-
ducted are presented in this section.
(f) Problem resolution plan. In the course of developing a software
product, problems are all but certain to arise. For example, a
design review may bring to light a critical fault in the analysis
workflow that requires major changes to almost all the artifacts
already completed. In this section, the way such problems are
handled is described.
(g) Subcontractor management plan. This section is applicable
when subcontractors are to supply certain work products. The
approach to selecting and managing subcontractors then appears
here.
(h) Process improvement plan. Process improvement strategies
are included in this section.
8. Additional plans. For certain projects, additional components may
need to appear in the plan. In terms of the IEEE framework, they appear
at the end of the plan. Additional components may include security
plans, safety plans, data conversion plans, installation plans, and the
software project postdelivery maintenance plan.
23.4 Planning Testing
1. Include a detailed schedule for what testing must be done during each
workflow. Potential problems if this is not done:
(a) Capturing traceability between workflows (which is required to
test effectively) may not be done correctly.
(b) Missed opportunities to follow up on later artifacts, as suggested by
unusually high numbers of faults in early artifacts of the project.
(c) Black-box test cases should be selected at the end of the analysis
workflow (while details are fresh in developers’/SQA members’
minds). If not, then black-box test cases may be hurriedly thrown
together later on (less effective).
(d) Etc.
23.7 Documentation Standards
1. Documentation is an integral part of any S/W project.
2. Hence it is crucial that standards be established (in the SPMP if
nowhere else), understood and followed by all team members. Rea-
sons:
(a) fewer misunderstandings between team members
(b) aids the SQA group
(c) after initial training, no additional training will be needed when
staff change teams internally,
(d) etc.
(b) Maintenance (esp Post-Delivery)
(c) Why There Is No Phase for
i. Planning
ii. Testing
iii. Documentation
(d) The OO Paradigm
2. S/W Life-Cycle Models
(a) Change is Inevitable
(b) Iteration and Incrementation (which drives the remainder of the
items in the list)
(c) Other Life-Cycle Models
i. Code-And-Fix
A. This was the prevailing model pre-Waterfall / Classical.
B. Under this model there was no change management at all!
ii. Waterfall / Classical
iii. Rapid Prototyping
iv. Open Source
v. Agile Processes
vi. Synchronize and Stabilize
vii. Spiral
(d) No One Life-Cycle Model dictated by SW-CMM
3. The S/W Process
(a) The Unified Process
(b) Postdelivery Maintenance
(c) One- and Two-Dimensional Life-Cycle Models
(d) Capability Maturity Models (SW-CMM specifically)
4. Teams
(a) Democratic
(b) Classical Chief Programmer
(c) Modified Chief Programmer
(d) Teams for Life-Cycle Models
i. Synchronize and Stabilize
ii. Agile Processes
iii. Open Source
(e) No One Team Organization dictated by P-CMM
5. The Tools of the Trade
(a) Stepwise Refinement
(b) Cost-Benefit Analysis
(c) Divide and Conquer
(d) Separation of Concerns
(e) S/W Metrics
(f) CASE tools
(g) Version/Configuration Control
6. Testing
(a) Quality Issues
i. SQA
ii. Managerial Independence
(b) Non-Execution-Based Testing (Reviews)
i. Walkthroughs
ii. Inspections
(c) Execution-Based Testing
i. Best Practice: Determine expected results before you execute
your first test.
(d) What to Test
i. Utility
ii. Reliability
iii. Robustness
iv. Performance
v. Correctness
(e) Testing versus Correctness Proofs
i. There will be no correctness-proving on the final exam.
ii. When correctness-proving can be justified is fair game for the
final exam.
(f) Who Should Perform Execution-Based Testing? Answer: SQA!
7. From Modules to Objects (N.B. Use our definitions from the Lecture
Notes, NOT the text definitions here)
(a) Cohesion
(b) Coupling
(c) Encapsulation
(d) Abstract Data Types
(e) Information Hiding
(f) Objects
(g) Inheritance, Polymorphism, Dynamic Binding
8. Reusability and Portability
(a) Reusability
i. Impediments to Reusability
ii. Objects and Reusability
iii. Types of Re-use
A. Library (toolkit)
B. Application Framework
C. Design Patterns
(b) Portability
i. Impediments to Portability
A. Hardware Incompatibilities
B. Operating System Incompatibilities
C. Numerical System Incompatibilities
D. Compiler Incompatibilities
ii. Objects and Portability
9. Planning and Estimating
(a) Estimation
i. Metrics for Size of a S/W product - Function Points
ii. Estimating Duration - Intermediate COCOMO
(b) Project Management
i. Testing
ii. Training
iii. Documentation
iv. CASE Tools
v. Testing the SPMP
Index
abstract class, 98
abstract data type, 82, 83
abstract method, 98
abstraction, 81, 90
abstraction principle, 81
activity, 120
aggregation, 86
analysis package, 56
artifact, 33
aspect oriented programming, 89
association, 86
baseline, 62
beta, 40
Brooks’ Law, 45
build tool, 63
business case, 39
business model, 39
C/SD, 78
CASE, 43
chief programmer model, 49
class, 56, 80, 84
Classical Chief Programmer Team, 45
CMM, 42
code-and-fix, 88
coding tools, 60
cohesion, 78
composite/structured design, 78
composition, 86
configuration, 61
configuration control tool, 61
consistency checker, 59
correct, 71, 72
correctness proof, 72
Cost-Benefit Analysis, 55
coupling, 78
cursor, 100
data abstraction, 81, 82
data dictionary, 59
data structures, 83
data type, 83
defect, 65
democratic team, 47, 49
derivation, 61
design pattern, 94
desk checking, 76
divide and conquer, 56, 114
domain, 38
dynamic binding, 87
egoless programming, 47, 50
encapsulation, 80
error, 65
estimated effort, 115
evolution tree, 19
execution-based testing, 69
extreme programming, 28
failure, 65
fault, 20, 65
first-order logic, 72
fragile base class problem, 88
frequency of failures, 70
function points (FP), 111
Hoare triples, 72
information hiding, 80, 83, 99
inheritance, 84, 88
instantiation, 77
interface, 84
iterator, 100
KDSI, 114
library, 92
metric, 57
Miller’s Law, 20, 54
model, 34
model checking, 75
module, 77
moving target problem, 19, 38
nominal effort, 115
non-execution based testing, 66
object, 86
pair programming, 28, 50
partial correctness, 71
polymorphic, 87
portable, 106
positive testing, 37
post-delivery maintenance, 76
predicate logic, 72
procedural abstraction, 81, 82
project function, 120
proof-of-concept prototype, 18, 29, 39
quality, 65
rapid prototype, 24, 29
re-use, 57, 88
real time, 71
regression fault, 20
regression test, 20
regression testing, 76
reliability, 80
replaced, 37
report generator, 59
retirement, 37
review, 36, 54, 66
revision, 60
screen generator, 59
Scrum Method, 26
separation of concerns, 56, 78, 80
severity of failures, 70
simulator, 69
software crisis, 8, 32
software depression, 8
software engineering, 8
software process, 32
SQA, 23
stepwise refinement, 21, 34, 54
systematic testing, 76
task, 120
technical complexity factor (TCF), 112, 113
total degree of influence, 113
traceability, 36
UML, 34
unadjusted function points (UFP), 111
undecidable, 75
Unified Process, 32, 33
utility, 70
variation, 61
version control, 62