Lesson 2


LINEAR MODEL

Welcome to Module Two of the Software Processes and Agile Practices
Course. I want to begin by telling you a story. Imagine a pair of construction
workers, each helping to construct a railroad. One worker pulls up to the job site in
her car, pulls out a sledgehammer, and begins to hammer spikes into the ground.
The other worker arrives on the site, pulling behind him a trailer holding a
spike-driving machine. This second worker begins to set up their machine. The
first worker scoffs at the second, thinking it silly to take the time to
set up this big machine when it took no time at all to pull out a
hammer and start working. A few days later, the second worker's machine is
ready. And they begin laying railway ties. At the end of the week, who do you think
made the most progress and laid the most track? The first worker, the
one with the sledgehammer, probably thought they were being efficient by
getting a good start on their project. However, by taking the time to set up a
different method of doing the same work, the second worker was able to quickly
outperform the first. We can take this analogy a few steps further. Now imagine a
situation where the railroad company only needs to lay a little bit of track, just to
store a railway car off the main track. Yes, once set up, the spike driving machine
could easily get the job done in no time. But in the amount of time it takes to set
up, a worker with a sledgehammer could have quickly gotten the job done. Now, the
spike driving machine is clearly incredibly valuable. In fact, it could lay a complete
railroad network much faster. For smaller projects though, a sledgehammer
would have been more valuable. If the second worker didn't know that
sledgehammers existed, they might have used a machine that would have
ultimately taken more time to get the project done, at a much higher cost. This
goes the other way too. The first worker didn't know that using a sledgehammer
was actually an inefficient way of completing the project, and that another option
existed. They wasted time and money building a project using tools which were
insufficient to get the job done efficiently. The point is, knowing and possessing
the latest and most advanced tools for a job, may not be the most time or cost
effective way of doing things. A lot of jobs can be done more quickly using simpler
tools. A lot of jobs could also get done more efficiently by using more advanced
tools. What you use, really depends on the task at hand. If you don't know about
all the options available to you, how do you know if you're using the right tool for
the job? In software development, it's the same thing. Sometimes we need to have
an in-depth knowledge of the latest software engineering processes and
practices in order to ship a product on time. Sometimes all we need is a text
editor, a keyboard and a rough idea of what to do. What I'm here to do, is to help
you understand the variety of processes available to you, so that you can make
the best choice possible for your project. In the last module, Morgan talked about
what processes and practices are, and why they're useful to organized work. She
also explained what a software engineering activity is, and went into detail on
common activities found in the field. In this module, I'm going to take a little bit of
a step back and talk about some of the processes that have been proven useful
in the past. And how they've evolved to create some of the more common
processes we see today. I'm going to begin by talking about some processes that
are simple, and then we'll move on to how processes have evolved to address
their deficiencies. Remember, like the sledgehammer and the spike driving
machine, just because one process is more evolved and advanced, does not
mean that the other is now useless or obsolete. It's important that you understand
all the options available to you, along with their pros and cons, or else you may
fall into a trap of using a process which is inappropriate for the task at hand. With
that in mind, let's get to it. In the introduction course, you got a glimpse of the
different processes which you might encounter when learning about software
engineering processes. Linear process models follow a pattern of phases
completed one after another without repeating prior phases. The product is
designed, developed, and released without revisiting earlier phases. In this lesson,
I'm gonna dive into more detail on these linear life cycle process models. I'll talk
about the ways in which they work, as well as some pros and cons of each one.
This will give some idea of why linear models came to be developed, as well as
some context as to why they eventually came to be less common in the field.
Before we move on, let's test your understanding of linear process models. Please
choose the linear process model from the list. A. each phase happens
sequentially and then loops back to the beginning when all the phases are
complete. B. each phase happens in parallel with other phases, until the product
is done with no repetition between or within phases. C. each phase happens
sequentially and never loops or repeats. Or D. each phase can be repeated, until
the product is complete. The correct answer is C, a linear process model is one
which doesn't support looping within or between process phases. Process models
which allow for looping are called iterative models. Linear models also require that
phases be done sequentially, with no overlap between phases. Process models
which allow for overlap are called parallel models. You'll learn more about
iterative and parallel process models later in this module. So let's first talk about
the one you probably hear about the most. The waterfall process model. If you're
looking into software engineering, you've probably heard about this one. The older
waterfall model is criticized for being inefficient and restrictive. Let's talk about the
waterfall model in more detail, so that we can see its strengths and weaknesses.
You've probably already used a waterfall like process in many areas of your life.
Really it's just a basic linear process. One thing happens after another. It's called
waterfall because each phase of the process is fed into by an approved work
product from the previous phase. So, for example, at the end of the requirements
phase, you'll end up with a product requirements document. This document is
approved and then fed into the next phase, design. At the end of this phase, you
will have completed a set of models, schema, and business rules. This work is then
signed off and then feeds into the next phase of the waterfall, and so on. This
model allows developers to get started on building a product quickly. It allows
them to avoid the issue of changing requirements by determining their scope
early and sticking to it. Waterfall places a lot of emphasis on documenting things
like requirements and architecture, to capture a common written understanding
of the software by the development team. However, the waterfall model is not
very adaptable to changes, that is to say, it's not very agile. One of the main
setbacks of the waterfall model is that, it does not allow for the development
team to review and improve upon their product. As we told you before, software's
a very dynamic thing. The waterfall model is simply not designed to address
midstream changes, which may require revisiting earlier phases. Consequently,
there are variations of the waterfall model that allow feedback opportunities to
earlier phases and their activities to support certain changes. But what if your
client needs a change after the requirements document has been approved? Unfortunately,
the client doesn't get to see the product until the very end. This can be many
months later. Understandably, the slow response frequently leads to disappointed
clients. The waterfall model served its purpose, but its inability to ensure that the
work being done is appropriately verified is a serious shortcoming. To try to
address this, the V-model of software development came into existence. This is
very similar to the waterfall model in that one thing happens after another in
sequential order. The difference is that it emphasizes verification activities to
ensure the implementation matches the design behavior. This also ensures that
the implemented design matches requirements. The idea is to organize each
level of verification to an appropriate phase, rather than doing it all at once. What
distinguishes the V-model from the waterfall model is that the V-model
specifically divides itself into two branches, hence, the name V. Like the waterfall
model, the V-model begins with requirements, and feeds into system architecture
and design. This branch is represented by the left hand side of the V. Followed
from the top down. At the end of this branch, emphasis shifts from the design to
the implementation. This is the bottom of the V. Once implementation is
complete, the model then shifts its emphasis to verification activities, which is
represented by the right hand side of the V, followed from the bottom up. Each
phase on the right hand side is intended to check against its corresponding
phase on the left hand side of the V. Here's an example. On the left-hand side of
the V-Model, your development team plans unit tests to be implemented later.
These unit tests are designed to make sure that the code you write actually
addresses the problem you're trying to solve. When in the unit testing phase on
the right-hand side of the V, these unit tests are then run against the code to make sure
that, after everything is written all your code runs properly. After the tests are run,
and everything is running smoothly, the team moves on to the integration testing
phase. So in this way, the right hand side of the V, verifies the left hand side. The V
model has the same advantages and disadvantages as the waterfall model. It's
straightforward to apply, but it doesn't account for important aspects of software
development. Like the inevitability of change. However, the V-model does allow for
the development team to verify the work of corresponding phases of the process. So
we're getting somewhere, but the client still doesn't get to see the finished
product until the very end when everything is complete. Study our diagram which
depicts the V-model of software development. If you are in the integration testing
phase, which phase are you verifying when you run your test? A. unit testing. B.
coding. C. high level design. Or D. operational testing. The answer is C, high level
design. When you're in the high level design phase, you create tests which are
then run in the integration testing phase. Integration testing does not verify the
phases adjacent to it on the V, nor the coding phase. So, now we need a process which will allow
us to involve the client along the way instead of only at the end when the product
is deemed complete. That's where the Sawtooth model comes in. This model is
very similar to the last two, in that it is also a linear model of software
development. However, it also improves upon them in that it gives you that much
needed client interaction throughout the process. What sets the sawtooth
model apart is that it separates tasks requiring the client's presence from
tasks requiring only the development team. These client tasks are interspersed
throughout the process, so that feedback can be gathered at meaningful times.
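To picture that interleaving, here is a hypothetical sketch. It is not anything prescribed by the sawtooth model itself, and all the task names are invented:

```python
# Hypothetical sketch of a sawtooth-style schedule: development-team tasks
# alternate with client-facing checkpoints, so feedback is gathered at
# meaningful points along the way. All task names are invented.

def build_sawtooth_schedule(dev_tasks, client_checkpoints):
    """Intersperse a client checkpoint after each development task."""
    schedule = []
    for dev_task, checkpoint in zip(dev_tasks, client_checkpoints):
        schedule.append(("dev team", dev_task))
        schedule.append(("client", checkpoint))  # feedback gathered here
    return schedule

schedule = build_sawtooth_schedule(
    ["draft requirements", "build prototype", "implement features"],
    ["review requirements", "demo prototype", "acceptance review"],
)
```

The "teeth" of the sawtooth are the client entries: work dips back down to the development team after each one.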
So being similar to the last two concepts, you're probably already way ahead of
me. Yes, the Sawtooth model also suffers the same disadvantages of the last two
linear models. It's really easy to apply, but it doesn't address change very well. In
this lesson, we discussed three important pre-agile manifesto process models in
the history of software development: the Waterfall model, the V-model, and the
Sawtooth model. They all share commonalities and have their differences. The
main thing which these models have in common is that they all include phases,
which happen sequentially, one after another. It's very clear to everyone what's
expected next. This common feature is the main reason for their shared
advantages and disadvantages. They each allow development to happen in a
straightforward way, but they also greatly restrict the project to fit the process. In
that sense, these early linear process models subscribe to a manufacturing view
of a software product: one that is machined and assembled according to certain
requirements. And once produced, the product only requires minor maintenance
upkeep. Kinda like an appliance. The emphasis, then, is on getting the
requirements right, upfront, and not changing them afterwards. In reality,
developing a software product is a creative endeavor, which necessitates
experimentation and constant rework. Also, in the past, computer time was
expensive compared to human labor. The focus was making tasks like
programming efficient for computers, though not necessarily for people. For
software development, the cycle time between writing a line of code and seeing
its result could be hours. This didn't favor having developers try small
programming experiments to quickly test out their ideas. This did, however, put a
focus on trying to get things right the first time and avoid re-work. The linear
process models fit into this early thinking. Nevertheless, when documenting the
internals of a software product for a new developer, you might still describe the
project in a linear way, through the phases and associated documents, even
though it might have followed some other process. This puts on some semblance
of order, so that the new developer does not need to relive the whole project just
to learn initially about its current implementation. This is akin to the clean rational
version of mathematical proofs and scientific theories you find in textbooks. In the
next lesson, I'm going to cover the next generation of software processes, iterative
models. There, I'll tell you all about a process called Spiral and its advantages.
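Before moving on, one way to picture what this lesson covered is a short code sketch. Everything below is an illustration rather than anything from the lecture itself: the phase functions and work products are invented, and the unit-testing pairing is the typical one, while the other two pairings mirror the quiz above.

```python
# Purely illustrative sketch: a linear process runs its phases strictly in
# order, each phase consuming the approved work product of the one before
# it, with no looping back. Phase names and work products are hypothetical.

def run_linear_process(phases, initial_work_product):
    """Run each (name, phase_fn) pair in sequence; never revisit a phase."""
    work_product = initial_work_product
    completed = []
    for name, phase_fn in phases:
        work_product = phase_fn(work_product)  # fed by the prior phase's output
        completed.append(name)
    return work_product, completed

phases = [
    ("requirements", lambda wp: wp + ["approved requirements document"]),
    ("design", lambda wp: wp + ["models, schema, business rules"]),
    ("implementation", lambda wp: wp + ["code"]),
]
product, order = run_linear_process(phases, [])

# The V-model keeps the same linear flow but pairs each verification phase
# on the right-hand side of the V with a phase on the left-hand side.
V_MODEL_PAIRS = {
    "unit testing": "detailed design",
    "integration testing": "high-level design",
    "operational testing": "requirements",
}
```

The key property is that `run_linear_process` never loops back: once a phase's work product is handed off, earlier phases are not revisited.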

SPIRAL MODEL

Iterative software process models are ones which allow for repeating stages of
the process. They are cyclical. Iterative models extend the linear models
which we talked about earlier. The biggest advantage they bring to the table is
that they add the ability to loop back on previous steps. Each loop back is an
iteration, hence iterative model. Iterations allow for feedback within the process.
Iterative models can be considered a forerunner to truly agile practices, yet they
also embody sequential steps reminiscent of previous linear processes. When
you begin talking about agile practices with Morgan, in the next module, try to see
if you can find the points of compatibility, between agile practices and iterative
processes. In this lesson, I'm going to talk about the Spiral process model. When
the Spiral model was introduced by Barry Boehm in 1986, it outlined a basic
process for designing and implementing a software system, by revisiting phases
of the process, after they've been completed. Before you move on, I want to warn
you that some of this stuff can get technical. If you find yourself needing
clarification on a concept, please look at the course resources for this lesson.
There, you will find references to all the content, which I will talk about here. All
right. So this is a simplified explanation of the Spiral process model. You can see
right away why it's called Spiral model. On a basic level, the model consists of four
quadrants. As you move through this process, the idea is that you move from one
quadrant to another. Consider each of these four quadrants as a phase of an
iteration, where an iteration is the duration of one full spiral, or all four quadrant
phases being completed one time. Each of these phases contains activities, like
Morgan defined in the first module. In Spiral, you begin by coming up with the
objectives and needs, and generating solutions for the current iteration. Then, you
identify and assess risks, and evaluate those solutions. You then move on to
developing and testing the product in the current iteration. Once you have a
product that satisfies the objectives, you move on to planning the next iteration. If
each quadrant is a phase of an iteration, then once you complete an iteration,
you move onto the next. That's the basic flow associated with the Spiral model.
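That flow could be sketched, very loosely, like this. The quadrant names paraphrase the ones above, and the satisfaction check is an invented stand-in for client approval:

```python
# Loose sketch of the Spiral flow: four quadrant phases, repeated once per
# iteration until the product satisfies the objectives. The quadrant names
# paraphrase the lecture; the satisfaction check is a simplification.

QUADRANTS = [
    "determine objectives and generate solutions",
    "identify risks and evaluate solutions",
    "develop and test the product",
    "plan the next iteration",
]

def run_spiral(is_satisfied, max_iterations=10):
    """Repeat the four-quadrant cycle until is_satisfied(iteration) is True."""
    log = []
    for iteration in range(1, max_iterations + 1):
        for quadrant in QUADRANTS:
            log.append((iteration, quadrant))
        if is_satisfied(iteration):  # e.g. the client approves the product
            return iteration, log
    return max_iterations, log

# Suppose the client is satisfied after the third iteration.
iterations_used, log = run_spiral(lambda i: i >= 3)
```

Unlike the linear sketch from the last lesson, the same four phases recur every iteration, which is what lets the team review and refine the product as it grows.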
You gradually build up a product, by repeating the phase cycle. For example, you
might start a project in the Spiral model, by determining the client and user
requirements. You'll then come up with potential solutions to fit those needs. After
that, you might choose to evaluate the solutions you came up with, then you
might build an initial prototype of the product, and review what needs to be done
for the next iteration. That would be one iteration. In the next iteration, you might start
again by defining the objectives of the iteration. Perhaps, features to be added to
the prototype that you just built. Then, you'd evaluate these features, and move
onto building the features you deem appropriate. You then review what needs to
be done for the next iteration, and continue from there. Until the project has been
completed to your client's satisfaction. So unlike linear models, which we
described in the last lesson, iterative models, like Spiral, tend to repeat elements
of the process throughout. This means that the Spiral model allows for a
development team to review their product at the end of each spiral iteration.
Doing so can better ensure that your product is being built to specification.

Since Spiral is iterative, you loop back around and begin determining your
objectives and refining your design again, but you only do that after you've first
planned the next iteration at the end of the current one. Even though some aspects of
projects following the Spiral model may change from project to project, six conditions
almost always stay the same. These are called the invariants of a Spiral model.
They were first described by Boehm, in his follow up paper, published in the year
2000. What's interesting is that, for the most part, the invariants
also apply to a lot of other process models. Now these invariants can get
pretty technical. So instead of going into detail about all six, I'm only going to
cover the core concepts of a few key invariants. If you'd like more details about
each of the invariants, please check out the course resources. There you'll find
detailed descriptions of each invariant. The first invariant of the Spiral model
states that all work products of a software project should be created
concurrently, at the same time. This may seem strange, but the basic idea is that
without defining things at the same time, you put your project at risk. This is
because with the usual method of Waterfall, doing things sequentially means
making decisions based on only a limited amount of information. The second
invariant of the Spiral model is really simple. All the quadrants in the model must
be addressed; there's no skipping steps. This is because each quadrant of the
model brings value; if you skip one, you put your project unnecessarily at risk.
Because what you're likely doing is making assumptions about the project.
Assumptions can, of course, be false. You don't want to build a project based on
false assumptions. The last four invariants are pretty technical, so I won't get into
much detail. Essentially, they say this: every project implementing the Spiral model
should base the amount of time spent on any particular activity on the
amount of risk involved in carrying that activity out. The model focuses a lot on making sure
that you base your decisions on risk data. The other invariants also mention that
each iteration of the Spiral should result in a tangible work product. And that the
focus of the process should be on improving the process, as a whole. So all these
invariants build up to create the Spiral model. This helps to define the model, but
can you see any issues? Although Spiral is clearly an improvement upon linear
processes, in many ways, it also has its share of disadvantages. One of these
disadvantages is that planning tends to be done upfront, at the beginning of each Spiral.
Depending on the duration of the Spiral, this could make it difficult to make good
estimates. Another disadvantage is that the ability to minimize risk in a calculated
fashion, which this process lays out, requires an immense amount of analytical
expertise. Think about the amount of data that you would need in order to know
exactly how much time is too much, or too little, when working on an activity.
These sorts of risk assessment tasks consume a great deal of resources, in order
to get right. If you find yourself in an organization which uses the Spiral model, it's
likely that you're working on large projects, with years of experience, data, and
technical expertise at your disposal. So there you have it. The Spiral model is a
great example of an iterative process. Things get done in a way that allows for the
revision of the product in certain intervals. Like other process models, it has its
disadvantages. But it's clearly different from what we saw before, right? In the next
lesson, I'm going to talk about the Unified process model. Try comparing what you
learned in this lesson to that model.

UNIFIED PROCESS

In the last lesson, I outlined the basics of the spiral model. We talked a little bit
about its advantages and disadvantages, and how it fits into the timeline of the
evolution of software development processes. The spiral model was an iterative
model of software development. Meaning that the product is built in a series of
repeated phases. This is in contrast to the linear process models, which we
covered at the beginning of this module. Such as the Waterfall model and the V
model. In this lesson, I'm going to talk about another iterative model of software
development. The Unified Process Model or just Unified Process. As I said before,
unified process is an iterative model of software development. Its basic structure
is to work in a series of phases which get repeated until the final phase is deemed
complete. Within most unified process phases, development happens in small
iterations until the phase is deemed complete. Usually, phases are deemed
complete when a milestone is reached. Unified process tries to emphasize
gradual development as much as possible. Instead of narrowing down all the
requirements of your software product at the beginning, unified process focuses
on the importance of developing your product's architecture over time.
Architecture is a set of designs upon which the software product is built. So in
unified process, the development team's focus is to develop design models along
with a working product. While the general structure of unified is to build iteratively,
the model allows for tasks done in one phase to overlap with another. This is
referred to as doing work in parallel. So, instead of only going through a sequence
of phases, developers can actually do things like design the product architecture
while developing tests at the same time. Jeff is building tests for his code as he
designs his product's architecture design. He's also clarifying and eliciting
requirements from his client occasionally, as he runs into issues. What style of
software development is Jeff using? A. Parallel development, B. Iterative
development, C. Incremental development, or D. Synchronized development? The
answer is A, parallel development. Since Jeff is participating in multiple phases of
development at once, we call this parallel. The first phase of the unified process is
called the inception phase. This phase is meant to be small, just enough time to
ensure that you have a strong enough basis to continue on to the next phase. In
fact the inception phase is the only phase in unified process where development
does not happen in iterations. If your inception phase is long, this might suggest
that you have spent too much time building requirements in the inception phase.
Your main goal is to see if there's a strong enough business case to continue
development. What this means is that there has to be good enough financial
reason to build the product. To do this, the inception phase calls for the creation
of basic use cases. Use cases outline the main user interactions with a product. In
this phase, you also define the project scope and potential risks. If you'd like to
learn more about use cases and how to create them please check out the course
on client needs and software requirements. At the end of the inception phase you
achieve the lifecycle objective milestone. At this point, you'll have a reasonable
description of how viable the product is and be ready to move on to the next
phase of the process, the elaboration phase. The elaboration phase is the first of
the unified process phases to implement those small iterations, which I
mentioned earlier in this lesson. The goal of this phase is to basically create a
model, or a prototype of the product, which you'll refine later. We'll talk more about
different types of prototypes later in this module. For now the purpose of this
phase is to define the system architecture. Developers refine the requirements
conceived in the inception phase. They also develop key requirements and
architecture documentation, such as use case diagrams and high level class
diagrams. This gives the foundation on which actual development will be built.
Remember, this phase allows for iterations, so building the prototype in an
iteration may go through a redesign before the requirements and architecture
models are deemed complete enough to move on. At the end of the elaboration
phase, developers deliver a plan for development in the next phase, the
construction phase. This plan basically builds on what was developed during
inception and integrates everything learned during elaboration so the
construction can happen effectively. Remember, in the unified process model,
development can happen in parallel. This means that when you begin the
construction phase, you'll continue to do work that was being done in the
elaboration phase. The only difference is that the emphasis on the work may
change. For instance, while testing and programming may have been important
in elaboration, they become even more important in construction. Similarly,
assessing risks is important in construction, but it's less important in this phase
than in elaboration. The construction phase is very straightforward. It's another
phase in which development can happen iteratively. It focuses on building upon
the work which was done in elaboration. This is where your product’s guts are
really built, and the product comes to life. In the construction phase, thorough use
cases are developed to drive product development. These use cases are more
robust than the ones created in the inception phase. Construction-phase use
cases offer more specific insights into how your product should be created. Your
product is built iteratively throughout this construction phase until your product is
ready to be released. At that point, your development team begins transitioning
your product to your client and your users. Now that you've learned about the
inception, elaboration, and construction phases of the unified process, let's test
your understanding. Which of the following describe the main aspects of the
elaboration phase? A. Identifying a strong business case for the project, B.
Creating use cases, C. Creating use case diagrams, or D. Creating class
diagrams. The correct answers are C and D. Use case diagrams and class
diagrams are main work products of the elaboration phase. That's not to say that
these work products don't also happen in the construction phase. It's just that
they're more focused in elaboration. You identify a strong business case for the
project in the inception phase. The final phase of unified process is the transition phase, in which you have a major rollout of
your product. Your development team receives feedback from your users. It's at
this point when you really see how well your design stacks up against your users'
needs. By gathering this feedback, your development team can make
improvements to your product, creating bug fixes and other releases. After your
product has completed its transition, it's possible to cycle back through the phases
of unified process again. This would be in cases where you intend to create further
major releases on the product and apply user feedback as a means of
influencing plans for later development. These cycles repeat until you and your
development team are ready to release your product. So that's unified process.
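To recap the phases in one place, here is an illustrative sketch. The phase order follows the lesson, but the numeric emphasis weights are made up purely to show how emphasis shifts between activities running in parallel:

```python
# Illustrative recap of the unified process phases described above. The
# phase order follows the lecture; the emphasis weights are invented to
# show how the focus of parallel work shifts from phase to phase.

UNIFIED_PHASES = [
    ("inception", {"business case": 3, "use cases": 2, "risk": 2}),
    ("elaboration", {"architecture": 3, "risk": 2, "testing": 1}),
    ("construction", {"programming": 3, "testing": 3, "risk": 1}),
    ("transition", {"rollout": 3, "user feedback": 3, "bug fixes": 2}),
]

def dominant_activities(phase_name):
    """Return the activities with the highest emphasis in a given phase."""
    for name, emphasis in UNIFIED_PHASES:
        if name == phase_name:
            top = max(emphasis.values())
            return sorted(a for a, w in emphasis.items() if w == top)
    raise ValueError(f"unknown phase: {phase_name}")

# Work continues across phases; only the emphasis changes.
construction_focus = dominant_activities("construction")
```

The point of the sketch is that every phase carries several activities at once; a phase is characterized by which activities dominate, not by which are present.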
Like I said at the top of the lesson, unified is an example of an iterative process. But
as you saw throughout this lesson, unified is also a parallel process. Activities
related to requirements, design, and implementation can happen at the same
time. In the grand scheme of things unified is much more similar to a
spike-driving machine than to a sledgehammer. It's a great process for large
projects where a great deal of refinement is needed in order for the product to
stay on track. Having iterations allows your product to grow naturally without
becoming limited by upfront plans. In the next lesson, I'm going to talk about
prototyping, and how prototypes can be used to drive software development. I'll
see you there.

PROTOTYPING

Welcome back. In the last lesson, I talked about the unified process as an
example of an iterative process as well as a parallel process. I talked about each of
the phases of the process. And defined some key terms, like phase, cycle, and
iteration. Before that, I discussed another iterative process called spiral. Hopefully
you can see that as I introduce these processes, each becomes more
sophisticated than the last. We're building up to more advanced processes, but
that doesn't mean that learning about these simpler processes isn't important.
Don't forget the analogy of the sledgehammer and the spike driving machine.
While one may be less sophisticated than the other, that doesn't mean that it
lacks value. In fact, all the processes which I have covered so far are actually quite
frequently used in the industry today. In this lesson, I'm going to talk about
something which applies to the spiral and unified process models, which we just
talked about, prototypes. Of course, prototypes aren't limited to just these
processes, so you'll see them referred to more and more in future lessons. There
are five types of prototypes which I'll cover in this lesson. These five types are
illustrative, exploratory, throwaway, incremental, and evolutionary prototypes. Let's
start with the illustrative prototype. These are the most basic of prototypes. They
can be drawings on a napkin, a brief slide show, or even a couple of index cards
with components drawn onto them. Whatever the case, the point of an illustrative
prototype is to share an idea using a low fidelity, disposable image. Illustrative
prototypes help to get the system's look and feel right, without investing much
time or money into developing a product. They can save a lot of time and
development later on. You can use an illustrative prototype as a way to weed out
bad ideas. And as a guide for development. Instead of having to imagine and
program ideas on the fly, they can give the development team guidance for
creating solutions. When I do prototyping, I personally like to mock up prototypes
with the functionality which I plan on implementing. One of my favorite ways of
doing this is by sketching my key features in a drawing program and tying them
together by using a slide show editor. The prototype then ends up becoming a
slightly more realistic example of how a product looks without having to expend
much extra energy. You can achieve a similar result with features drawn on paper.
You could demonstrate an idea by swapping out one paper screen for another,
when the end user selects certain elements. This technique is usually used when
time is short, and only a basic idea is needed in order to get the point across. You
can go really far with this idea. Some prototypes go as far as faking their
functionality by having a human control the system behind
the scenes. This is like the person behind the curtain, as in the movie, The Wizard
of Oz. One of the ways which I've seen this be successful is on a project which
used wireless communication between devices as a key aspect of its
functionality. Writing code which allowed the user to actually communicate
wirelessly to another device would take a long time, so instead the development
team got creative. They had a slide show program run on each device. When one
device would send data to another, the developers would cleverly advance a slide
show on the other device. The development team did this with good enough timing
that it seemed like data was actually being transmitted between the two devices.
You can probably see why illustrative prototyping is compelling: it takes very little time to
flesh out the feature set. And it can give you a really good idea of how your
product will look when it's finished. If you have more time and you want a more
comprehensive understanding of what the product will look like, some teams turn
to what we call exploratory prototyping. Exploratory prototyping allows you to
focus on more than just the product's look and feel. You'll also be able to
determine the effort it takes to build the product. You build working code so that
you can actually see what's possible, fully expecting to throw the work out after
learning from the process. Is this method expensive? Absolutely, but it's better than finding out
later that the product solution just isn't workable. The usual motivation behind
exploratory prototyping is that the product developers want to study how feasible
some product idea is. It's no longer about just looking at what the product looks
like. It's about how realizable it is to develop the product or how useful the product
may be before committing to further effort. The first version of a product, for
almost anything, often has various problems. Because of that, why not just build a
second version from scratch, and toss away the first? The first version you've built
is what's called a throwaway prototype. You should know that all is not lost from
the first version. There could be many useful lessons to be learned and problems
to avoid in the second version. Throwaway prototypes give you the opportunity to
learn from past mistakes. This gives you the chance to make your released software
product look more rock solid than it would have been if you had just kept evolving
from the first version. Carly just built her first iteration of her product and now has
a first generation working product prototype. She intends to add to this
prototype in further increments. The test users are critical of the initial prototype.
After seeing the product design they suggest a different approach that uses
some of the features that were already built. What kind of prototyping is Carly
using? A, Working. B, Illustrative. C, Throwaway. Or D, Incremental. The answer is D.
Incremental prototyping. Carly started with a working prototype that was later
expanded upon. The initial prototype was coded and kept, so it's not throwaway
prototyping or illustrative prototyping. Okay, so hopefully, you don't end up with an
unintentional throwaway prototype. The three types of prototypes which I just
described are all prototypes which end up not being used directly in the final
version of the product. Doesn't it make sense to re-use work that's been done in
prototyping during actual product development? That's where incremental and
evolutionary prototyping come into play. They let your efforts of prototyping carry
through to your final product. The key idea is to have working software for each
successive prototype, any of which could be released as a version of your
software product. When you create incremental prototypes you build and release
your product in increments one at a time. Incremental prototyping works in
stages based on a triage system. All that means is that you assess each of the
system's components and assign them a priority. Based on that priority, you then
develop the product from the ground up. You would develop your product from
most important to least important. So you assign priorities to a product's features
based on what must be done, should be done, and what could be done. You
assign your core features to the must-do priority. Then, you assign all the features
which would support your product but aren't absolutely critical to a should-do.
Everything else that seems like an extraneous feature would then be assigned to
the could-do priority. Based on these ratings, you would then develop your product
by starting with the features which you assigned to the must-do priority. The
resulting software product contains the core features and could be released as
an incremental prototype. Then, as resources permit, you develop features under
the should-do priority. And then features in could-do, which results in a more fully
featured incremental prototype. Here's an example, you're developing a
messaging app. First and foremost, you want your users to be able to talk to each
other through the app. Anything related to that, like integrating the ability to find
another user's messages, sending and receiving functions, or text editing, could be
your highest priority. So these features would be assigned to the must-do priority.
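This triage bookkeeping can be sketched in a few lines of Python. The feature names and priority assignments below are hypothetical stand-ins for illustration, not part of any real product plan:

```python
# A minimal sketch of triage-based incremental prototyping.
# Feature names and priorities are invented examples.

PRIORITIES = ["must-do", "should-do", "could-do"]

features = {
    "send and receive messages": "must-do",
    "find another user's messages": "must-do",
    "text editing": "must-do",
    "profile pictures": "should-do",
    "status updates": "should-do",
    "group messaging": "should-do",
    "custom fonts": "could-do",
    "custom drawings": "could-do",
}

def build_order(features):
    """List features from most important to least important."""
    return sorted(features, key=lambda name: PRIORITIES.index(features[name]))

def prototype(features, up_to):
    """Features included in the increment released at a given priority level."""
    cutoff = PRIORITIES.index(up_to)
    return [name for name in features
            if PRIORITIES.index(features[name]) <= cutoff]

core_release = prototype(features, "must-do")   # the first incremental prototype
full_release = prototype(features, "could-do")  # everything, resources permitting
```

The same table then drives each successive increment: release the must-do set first, and widen the cutoff as resources permit.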
You might imagine that users would like to be able to add profile pictures or post
status updates. Maybe message groups of people. These features would be
considered the should-do functionalities. Any features like being able to change
message fonts, send custom drawings to other users, or post links, would
probably be assigned to the could-do. With all of these in place, all that's left to do
is build your app! Since you prioritized your product's features, you can easily map
out and plan your development. In fact, this idea of prioritizing your features and
working off your plan is a basic concept, which you're going to see recur in the
Agile Planning for Software Requirements course. So, you just learned about
incremental prototyping. Let's test your knowledge and see what you remember.
What sets incremental prototyping apart from illustrative, throwaway, or
exploratory prototyping? You may select multiple answers. A, Incremental
prototypes use a triage system. B, Incremental prototypes get discarded after
they're created. C, Incremental prototypes do not contain any code. And/or D,
Incremental prototypes may contain working software for the end product. The
correct answers are A and D. Incremental prototypes are different from the
previous ones which we talked about, because they allow your development
team to create a potentially releasable product. This is done by developing
features which have been prioritized using a triage system. The final type of
prototype, which I'm going to talk about, is the evolutionary prototype. In
incremental prototyping, you begin with a core set of features and add new
features over time. In evolutionary prototyping, you begin with a set of all the
features in basic form, and refine or evolve them, over time. In either case, the end
product is a feature-rich product. Let's compare this using the messaging app
example I outlined earlier. In incremental prototyping, we prioritized the software
product's features using a triage system. Then, we built successive incremental
prototypes of the product, from the most important features to the least
important. In evolutionary prototyping, you would have an early version of all the
features and build successive prototypes by working the features until they were
fully mature. For example, later evolutionary prototypes could make the existing
features easier or more flexible to use. In the messaging app, consider a feature
like adding a profile picture. Initially, a user might have to specify the path of the
photo to be added to the profile. Not a very efficient way of doing things, but it
works. In another prototype, that developer might allow for the user to choose the
photo from a drop down menu of available photos. A little better but in the next
prototype you might imagine the system to allow for drag and drop functionality.
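As a hedged illustration of that evolution, the profile-picture feature might look like this across prototypes. The function names and data shapes are invented for the sketch:

```python
# Hypothetical sketch: one feature refined across evolutionary prototypes.
# The feature always works; each prototype just makes it more usable.

# Prototype 1: the user types the exact file path. Crude, but functional.
def set_profile_picture_v1(profile, path):
    profile["picture"] = path
    return profile

# Prototype 2: the user picks from a menu of available photos instead.
def set_profile_picture_v2(profile, available_photos, choice_index):
    profile["picture"] = available_photos[choice_index]
    return profile

# Prototype 3 would layer drag-and-drop on top of the same underlying call.

profile = {"name": "Carly"}
set_profile_picture_v1(profile, "/photos/me.png")
set_profile_picture_v2(profile, ["beach.png", "office.png"], 1)
# profile["picture"] is now "office.png"
```

Notice that nothing is thrown away between versions; each prototype refines how the same working feature is used.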
So in this way, your product evolves from a rudimentary working prototype to
something feature rich and robust. Both incremental and evolutionary
prototyping are ways to make working software that can be shown at regular
intervals to gain further feedback. In practice, you can blend both approaches. It's
a real morale boost for your development team to see the product as it comes
together over time. All right, so you now know some of the different types of
prototypes. Now, think back on what you learned in previous lessons about the
spiral model and unified processes. Can you picture how a prototype would fit
into these models? In a spiral model, imagine where you would start. Usually the
place to start is by creating a prototype, right? You might go through the first
iteration of the spiral model just creating an illustrative prototype. You
could scribble a few drawings onto the paper and get an idea of how your system
will work. The same could be said for the inception phase of unified, couldn't it? By
creating prototypes, you can better visualize what your product does, and
therefore, make feature decisions based on what the product might look like. But it
doesn't stop there. I'll bet you're already imagining the possibility of combining the
illustrative prototype with an incremental or evolutionary prototype, and it makes
sense right? Your first version is just an idea written on a few pieces of paper. Then,
to further test your idea, you'll outline some key features and start building. Before
long, you could end up with a prototype. You can then take that prototype to a
client or potential investors to prove that the product is conceptually viable. That's
really the core idea behind any kind of prototyping, to gain feedback on versions
of your product. You can begin by spending a minimal amount of time developing
your initial prototypes to make the most efficient use of your resources. In the next
lesson, I'm going to talk about continuous delivery in software development and
how that relates to Microsoft daily builds. I'll see you there.

CONTINUOUS DELIVERY

[MUSIC] Welcome to the last lesson of the second module of this course. In the
previous lesson I talked about how prototyping works in software development.
Before that I talked about the evolution of different processes throughout history.
In this lesson, I am going to complete that history using the spike driving machine
element of our railroad building analogy. In incremental or evolutionary
prototyping, the product is refined over time. You'd start with a basic product and
see where it goes. Over time, successive prototypes are constructed. Typically,
these prototypes are released to your client for feedback. However, this notion of
a release is quite loose. It might take lots of manual work to build and integrate
the code into a runnable prototype. A prototype that functions well internally may
still not work for a client. They may have a device that hasn't been tested, or some
other detail could have been overlooked. One way to avoid these oversights is to
automate the build and integration aspects of your project. That's where
continuous delivery comes in. As the name suggests, this allows the developers to
deliver a product continuously, as it's being developed. Whenever a developer
commits a code change it will be built, tested, integrated, and released. The time
between making a change and having it released can be very short. So any
problems will be noticed right away. Continuous delivery prepares you to release
your product at any point, but you're not forced to release the prototype if you
don't feel it's ready. Prototype releases are placed in specific channels, or streams,
intended for different audiences. This is done so that you can make sure your
continuous releases are tested properly before being released to the public. For
example, you can have a developer channel for the day-to-day builds developers
generate but that aren't ready for widespread use. You could have a test channel for
prototypes created for a group internal to a company, then you could have a
stable channel for releases targeted at the core users. Developers can gain
insight into their product by receiving feedback from each channel. The
continuous delivery practice fits well with iterative process models, like unified
process. Let's see how that works. Remember how Unified was composed of the
different phases which work in parallel? Those phases were inception, elaboration,
construction, and transition. The most relevant phase to continuous delivery is the
construction phase. You could deliver your product continuously through a
series of short iterations. The product is built iteratively over time and comes
together in steps. During any given iteration, your development team can be
working on constructing the next prototype for release. That could include a lot of
activities like detailed design, coding, and testing of features. Continuous delivery
could also be used in Unified process during its elaboration phase. In the
elaboration phase, high level product architecture design and test writing is done.
So, in order to support continuous delivery, you could use automated tools. These
tools can be used to build and integrate the code, run tests, and package the
product into a releasable form. For automated tests of your code, a best practice
is to write tests before you actually write the code itself. This approach is called
test-driven development, and it ensures that you're actually solving the right
problem and making the functionality you want. Initially, when running these tests,
they'll fail, because the corresponding code doesn't exist yet. But that's fine:
as the code is written, the tests should eventually pass. As code is
written, features start to take form within the product. Continuous delivery ensures
that the process is happening all the time, so if nothing breaks during the process,
there should be a prototype ready for distribution at any time. So the prototype
should be ready to try out by your end users. Of course with any system, when end
users see your product, errors will begin to surface that you never noticed before.
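The test-driven development practice mentioned a moment ago can be sketched like this; `greet` is a hypothetical function invented purely for the example:

```python
import unittest

# Step 1: write the test first. It describes the behavior we want,
# and it fails at first because greet() doesn't exist yet.
class TestGreeting(unittest.TestCase):
    def test_greets_by_name(self):
        self.assertEqual(greet("Ada"), "Hello, Ada!")

# Step 2: write just enough code to make the test pass.
def greet(name):
    return f"Hello, {name}!"

# Step 3: run the tests again; they should now pass.
```

The failing-first test is the point: it proves the test actually exercises the behavior before the code exists, so a later pass is meaningful.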
It's an ongoing process to receive feedback from your users. You want to learn
from that, and fix errors that they point out to you. The continuous delivery
practice aims to have a releasable prototype, essentially the product, at the end
of every iteration. That has fantastic advantages. If you were to abandon your
project at the end of an iteration, you would still have a releasable product. You
could even release the product without completing all the planned features if
resources ran out. Beyond that, your product quality would actually improve.
Integrating your code into the larger product with each iteration will make sure
that everything works properly in small doses. This avoids a lot of problems that would
occur if you tried to build, test, integrate, and release one big product all at once.
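The build, test, and release loop described above, together with the release channels from earlier in this lesson, can be sketched as a toy pipeline. Real teams use dedicated tools for this rather than hand-rolled code; every name below is an invented stand-in:

```python
# Toy continuous-delivery pipeline: every commit is built and tested,
# and a passing build is staged for release without being forced out.

CHANNELS = ["developer", "test", "stable"]  # narrow to wide audiences

def run_pipeline(change, tests):
    """Build and test one committed change."""
    build = f"built({change})"  # stand-in for a real compile/package step
    if not all(test(build) for test in tests):
        return {"change": change, "status": "failed", "channel": None}
    # Releasable immediately, but only to the narrowest channel by default.
    return {"change": change, "status": "ready", "channel": "developer"}

def promote(release, to_channel):
    """Move a passing build to a wider audience, one deliberate step at a time."""
    if release["status"] != "ready":
        raise ValueError("only passing builds can be promoted")
    if CHANNELS.index(to_channel) <= CHANNELS.index(release["channel"]):
        raise ValueError("can only promote toward a wider audience")
    release["channel"] = to_channel
    return release

good = run_pipeline("commit-123", tests=[lambda build: True])
promote(good, "test")     # internal testers
promote(good, "stable")   # core users
bad = run_pipeline("commit-124", tests=[lambda build: False])
# bad["status"] == "failed": the problem is noticed right away
```

The key property is that a failing test stops the change before it reaches any channel, while a passing build is always ready to ship, whether or not you choose to ship it.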
Howie and his development team are constructing the support infrastructure to
enable the continuous delivery of the product prototypes put together by other
developers. He has automated tools in place to build and integrate code,
package the product, and install the product in a test environment. In this
infrastructure, he also needs automated tools to do what? A. make prototypes. B.
do detailed design. C. write the code. Or D. run tests. The answer is D. Of the
possibilities, making prototypes, doing detailed design, and writing the code are
not part of continuous delivery, if they can even be automated at all. Automated
testing, however, is. Let's look at an example of continuous delivery in action. The
Microsoft Daily Build. In it, each iteration of the construction phase is laid out in a
day, hence daily build. The most important part of the Daily Build is the daily part.
The whole point of the Microsoft Daily Build is to ensure that your programmers
are in sync at the beginning of each build activity. By following a process that
makes your developers integrate their code into the larger system at the end of
every day, you make sure that nobody wanders too far off the beaten path. To
control this, Microsoft uses a system of Continuous Integration. If you haven't
heard this term before, that's okay. All it means is that when a developer writes a
piece of code and wants to share that code remotely with anyone else on the
team, that code must first be put through a process of automatic testing. This
ensures that it will work with the project as a whole. The implications this has for
your team are enormous. If all of your developers are on the same page, they
can easily see how their work fits into the project as a whole, but also how their
work affects other members of the team. This not only keeps your developers'
morale up, but it also increases the quality of the product. The daily build does this
by giving your developers the ability to catch errors before they become a real
problem. If one of your developers pushes a piece of code into the system, and it
fails the tests, then you know immediately which piece of code is the problem. Or
if you try to run the product and it doesn't work, you know that the problem is with
something that got integrated into the previous build. So, error checking in this
way becomes extremely easy in the daily build. For a large product, the
automated tests will be run during the night on the build for that day. The next
morning developers can see how tests went and decide what to fix. So the
Microsoft Daily Build offers your team the opportunity to act quickly and catch
errors before they can become a major headache. By using automation, you also
make things much easier on your development team. And that's the Microsoft
Daily Build in a nutshell. Now that you have seen continuous delivery and the
Microsoft Daily Build, let's see what you learned. Thomas is a developer working for
Microsoft. He has just spent his whole day writing the code, which will become
part of the next version of the Windows operating system. At the end of the day,
he uploads his changes onto a server. What can cause his changes not to be
tested? A. his code does not work, B. his code does not build, C. other code in the
product does not work, and, or D. Other code in the product does not build. Of
these possibilities, the inability to build a product would really hamper testing.
Code that builds, but does not work, could still be tested. So, the answers are B
and D. All right. So, that's it for this lesson. Let's review a little bit about what was
discussed. I started off by talking about continuous delivery, which can be
incorporated into an iterative process like Unified. Continuous delivery is used to
release incremental or evolutionary prototypes. Then I talked about how the
Microsoft Daily Build is an example of continuous delivery. So, a combination like
the Unified process with prototyping and continuous delivery is our spike driving
machine. It's a great tool for large, long-term projects in which the product's
quality could be severely affected by faulty changes made by the development
team. It's clearly a significantly more robust process than, say, the Waterfall
process. Just don't forget that it can't fit every situation. There will be
circumstances, especially on small projects, where setting out the required
infrastructure would take more time than it's worth. And that's why I talked about
the sledgehammers of the software world as well: Waterfall, the
V-Model, and Sawtooth. They're all very important even though they're less robust
than the later models like Spiral and Unified. Even those processes would be too
simplistic for a truly huge project. I want you to keep an open mind about these
processes. I don't want you coming away thinking that iterative or parallel
software process models are the best and only tools for software product
management. There are applications for each tool, and it's up to you as a
software product manager to apply them in the right situations. What's more,
none of the processes which I mentioned throughout this module are necessarily
entirely independent from one another. You can reasonably integrate aspects
from each process to create your own. Do what works best for you and your
project. Don't feel like you have to fit a mold which someone else has created. If,
for example, you like the idea of testing your code in iteration cycles, but you don't
think you need to revisit the design at the beginning of every single iteration, then,
by all means, do what works best for you.

Module 2: Supplemental Resources


Listed below are selected resources related to the topics
presented in this module.

Linear Models

This is a super relevant and interesting read for this module. It explains how the Waterfall model
was a misunderstanding and how it was never actually intended to be used. This article also
highlights how some of the findings in the original paper on the Waterfall model (which was
written way before the Agile Manifesto) actually align with Agile philosophy.

"Why Waterfall was a big misunderstanding from the beginning ..." 2012. 21 Jun. 2016
<https://pragtob.wordpress.com/2012/03/02/why-waterfall-was-a-big-misunderstanding-from-the-
beginning-reading-the-original-paper/>

This paper shows a timeline of some software and project management methods. It's a long
article and it may not be worth reading the entire thing, but has a visual timeline which is useful
and interesting.

Rico, DF. "Short History of Software Methods." 2011.
<http://ww.davidfrico.com/rico04e.pdf>

Spiral Models

A good paper that explains the Spiral Model in a detailed manner. Worth reading if you are
interested in pursuing the Spiral Model.

"A Spiral Model of Software Development and Enhancement." 2015. 21 Jun. 2016
<http://csse.usc.edu/TECHRPTS/1988/usccse88-500/usccse88-500.pdf>
Another detailed explanation of Spiral Model. Does a good job of explaining all the invariants in
the Spiral Model.

Boehm, B. "Spiral Development: Experience, Principles, and Refinements." 2000.


<https://resources.sei.cmu.edu/asset_files/SpecialReport/2000_003_001_13655.pdf>

Unified Process

This link gives you an excerpt of the book Unified Process Explained. These chapters explain
some of the history, background knowledge, and execution of Unified Process.

"Overview of the Unified Process | Introduction | InformIT." 2007. 21 Jun. 2016


<http://www.informit.com/articles/article.aspx?p=24671>

A basic explanation of Use Case Diagrams. The link includes some examples of Use Case
Diagrams as well.

"Use case diagrams are UML diagrams describing units of useful ..." 2010. 21 Jun. 2016
<http://www.uml-diagrams.org/use-case-diagrams.html>

A basic explanation of UML Diagrams. Also includes example diagrams.

"UML Class and Object Diagrams Overview - common types of class ..." 2011. 21 Jun. 2016
<http://www.uml-diagrams.org/class-diagrams-overview.html>

Prototyping

A good overview of some of the different types of prototyping. Doesn't go into much detail, but
provides a good explanation.
"SDLC - Software Prototype Model - TutorialsPoint." 2013. 7 Jul. 2016
<http://www.tutorialspoint.com/sdlc/sdlc_software_prototyping.htm>

A really interesting TedTalk that explains the value of prototyping and process. I highly
recommend that you give this a watch.

"Tom Wujec: Build a tower, build a team - YouTube." 2010. 21 Jun. 2016
<https://www.youtube.com/watch?v=H0_yKBitO8M>

Continuous Delivery

A really good site that explains Continuous Delivery and its principles and foundation. It also
provides case studies and evidence supporting Continuous Delivery.

"What is Continuous Delivery? - Continuous Delivery." 2010. 7 Jul. 2016


<https://continuousdelivery.com/>

In this module, you learned that Microsoft continuously integrates their code via the Microsoft
Daily Build. This resource explains that process in more detail.

Cusumano, MA. "How Microsoft builds software - ACM Digital Library." 1997.
<http://dl.acm.org/citation.cfm?id=255656.255698>

This article explains what a Daily Build is and some of the advantages of using this method. It's a
relatively short article and a good read if you are interested in learning more.

"Daily Build and Smoke Test - Steve McConnell." 2003. 21 Jun. 2016
<http://www.stevemcconnell.com/ieeesoftware/bp04.htm>
