
Comparison of different life cycle models:

Classical Waterfall Model: The Classical Waterfall model can be considered the basic
model, and all other life cycle models are based on it. It is an idealized model. However,
the Classical Waterfall model cannot be used in practical project development, since it
provides no mechanism for correcting errors that are committed during one phase but
detected in a later phase. This problem is overcome by the Iterative Waterfall model
through the inclusion of feedback paths.
Iterative Waterfall Model: The Iterative Waterfall model is probably the most widely used
software development model. This model is simple to use and understand. However, it is
suitable only for well-understood problems, and it is not suitable for the development of very
large projects or projects that carry a large number of risks.
Prototyping Model: The Prototyping model is suitable for projects in which either the customer
requirements or the technical solutions are not well understood. These risks must be identified
before the project starts. This model is especially popular for the development of the user
interface part of a project.
Evolutionary Model: The Evolutionary model is suitable for large projects which can be
decomposed into a set of modules for incremental development and delivery. This model is
widely used in object-oriented development projects. This model is only used if incremental
delivery of the system is acceptable to the customer.
Spiral Model: The Spiral model is considered a meta-model, as it includes all other life
cycle models. Flexibility and risk handling are the main characteristics of this model. The
Spiral model is suitable for the development of technically challenging, large software that
is prone to various risks that are difficult to anticipate at the start of the project. However,
this model is more complex than the other models.
Agile Model: The Agile model was designed to incorporate change requests quickly. In this
model, requirements are decomposed into small parts that can be developed incrementally. The
main principle of the Agile model is to deliver an increment to the customer after each
time-box. The end date of an iteration is fixed; it cannot be extended. This agility is achieved by
removing unnecessary activities that waste time and effort.
Selection of an appropriate life cycle model for a project: Selecting the proper life cycle model
for a project is an important task. The model can be chosen by keeping the advantages
and disadvantages of the various models in mind. The different issues that should be analyzed
before selecting a suitable life cycle model are given below:
 Characteristics of the software to be developed: The choice of the life cycle model
largely depends on the type of software being developed. For small services
projects, the Agile model is favored. On the other hand, for product and embedded
development, the Iterative Waterfall model may be preferred. The Evolutionary model is
suitable for developing an object-oriented project. The user interface part of a project is
mainly developed using the Prototyping model.
 Characteristics of the development team: The team members' skill level is an important
factor in deciding which life cycle model to use. If the development team is experienced in
developing similar software, then even embedded software can be developed using the
Iterative Waterfall model. If the development team is entirely made up of novices, then even
a simple data processing application may require the Prototyping model.
 Risk associated with the project: If the risks are few and can be anticipated at the start of
the project, then the Prototyping model is useful. If the risks are difficult to determine at the
beginning of the project but are likely to increase as development proceeds, then the
Spiral model is the best model to use.
 Characteristics of the customer: If the customer is not very familiar with computers,
then the requirements are likely to change frequently, as it would be difficult to form
complete, consistent, and unambiguous requirements. Thus, a Prototyping model may be
necessary to reduce later change requests from the customer. Initially, the customer's
confidence in the development team is high. During a lengthy development process,
customer confidence normally drops off, as no working software is yet visible. So the
Evolutionary model is useful, since the customer can experience partially working software
much earlier than the complete software. Another advantage of the Evolutionary model
is that it reduces the customer's trauma of getting used to an entirely new system.

UNIT-2: Software process:


Introduction:
The software process is the way we produce software. It incorporates the methodology
with its underlying software life-cycle model and techniques, the tools we use, and, most
important of all, the individuals building the software. Different organizations have different
software processes.
For example: consider the issue of documentation. Some organizations consider the software
they produce to be self-documenting, that is, the product can be understood simply by reading
the source code. Other organizations, however, are documentation intensive. They punctiliously
draw up specifications and check them methodically. Then they perform design activities
painstakingly, check and recheck their designs before coding commences, and give extensive
descriptions of each code artifact to the programmers.
Test cases are preplanned, the result of each test run is logged, and the test data are
meticulously filed away. Once the product has been delivered and installed on the client's
computer, any suggested change must be proposed in writing, with detailed reasons for making
the change. The proposed change can be made only with written authorization, and the
modification is not integrated into the product until the documentation has been updated and the
changes to the documentation approved.
THE UNIFIED PROCESS:
The Unified “Process” is actually a methodology, but the name Unified Methodology
had already been used as the name of the first version of the Unified Modeling Language
(UML). The three precursors of the Unified Process (OMT, Booch’s method, and Objectory)
are no longer supported, and the other object-oriented methodologies have had little or no
following. As a result, the Unified Process is usually the primary choice today for object-
oriented software production.
The Unified Process is not a specific series of steps that, if followed, result in the
construction of a software product. In fact, no such single “one size fits all” methodology could
exist because of the wide variety of types of software products.
The Unified Process is an attempt to draw on the best features and characteristics of
traditional software process models, but characterize them in a way that implements many of the
best principles of agile software development.

The Unified Process recognizes the importance of customer communication and streamlined
methods for describing the customer’s view of a system. It emphasizes the important role of
software architecture and “helps the architect focus on the right goals, such as understandability,
resilience to future changes, and reuse”.

Phases of the Unified Process


This process divides the requirements analysis process into seven phases:

1. Inception
2. Elicitation
3. Elaboration
4. Negotiation
5. Specification
6. Validation
7. Requirements Management
1. Inception: This is the first phase of the requirements analysis process. This phase gives an
outline of how to get started on a project. In the inception phase, all the basic questions are
asked on how to go about a task or the steps required to accomplish a task. A basic
understanding of the problem is gained and the nature of the solution is addressed. Effective
communication is very important in this stage, as this phase lays the foundation for everything
that follows. Overall, in the inception phase, the following criteria have to be addressed by
the software engineers:
 Understanding of the problem.
 The people who want a solution.
 Nature of the solution.
 Communication and collaboration between the customer and developer.
2. Elicitation: This is the second phase of the requirements analysis process. This phase
focuses on gathering the requirements from the stakeholders. One should be careful in this
phase, as the requirements establish the key purpose of the project. Understanding the
kind of requirements needed by the customer is crucial for a developer. In this process,
mistakes can happen, such as not capturing the right requirements or omitting a part.
The right people must be involved in this phase. The following problems can occur in the
elicitation phase:
 Problem of Scope: The requirements given contain unnecessary detail, are ill defined, or
are not possible to implement.
 Problem of Understanding: Not having a clear-cut understanding between the developer
and customer when putting out the requirements needed. Sometimes the customer might
not know what they want or the developer might misunderstand one requirement for
another.
 Problem of Volatility: Requirements changing over time can cause difficulty in leading a
project. It can lead to loss and wastage of resources and time.
3. Elaboration: This is the third phase of the requirements analysis process. This phase builds
on the results of the inception and elicitation phases: it takes the requirements that were stated
and gathered in the first two phases and refines and expands them. The main task in this phase
is to engage in modeling activities and develop a prototype that elaborates on the features and
constraints using the necessary tools and functions.
4. Negotiation: This is the fourth phase of the requirements analysis process. This phase
emphasizes discussion of what is needed and what is to be eliminated. In the negotiation
phase, the developer and the customer discuss how to go about the project with limited
business resources. Customers are asked to prioritize the requirements and to anticipate the
conflicts that may arise along the way. The risks of all the requirements are taken into
consideration and negotiated in a way that leaves both the customer and the developer
satisfied with regard to the further implementation. The following are discussed in the
negotiation phase:
 Availability of Resources.
 Delivery Time.
 Scope of requirements.
 Project Cost.
 Estimations on development.
5. Specification: This is the fifth phase of the requirements analysis process. This phase
specifies the following:
 Written document.
 A set of models.
 A collection of use cases.
 A prototype.
In the specification phase, the requirements engineer gathers all the requirements and develops
a working model. This final working product will be the basis of any functions, features or
constraints to be observed. The models used in this phase include ER (Entity Relationship)
diagrams, DFD (Data Flow Diagram), FDD (Function Decomposition Diagrams), and Data
Dictionaries.
A software specification document is submitted to the customer in a language that he/she will
understand, to give a glimpse of the working model.
6. Validation: This is the sixth phase of the requirements analysis process. This phase focuses
on checking for errors and debugging. In the validation phase, the developer scans the
specification document and checks for the following:
 All the requirements have been stated and met correctly
 Errors have been debugged and corrected.
 Work product is built according to the standards.
This requirements validation mechanism is known as the formal technical review. The review
team that works together to validate the requirements includes software engineers, customers,
users, and other stakeholders. Everyone in this team takes part in checking the specification,
examining it for errors, missing information, anything that has to be added, or any unrealistic
and problematic requirements. Some of the validation techniques are the following:
 Requirements reviews/inspections.
 Prototyping.
 Test-case generation.
 Automated consistency analysis.
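The last of these techniques, automated consistency analysis, can be made concrete with a short sketch. The following Python example is a minimal, hypothetical illustration, not part of any standard tool: the Requirement data model and the two checks (duplicate IDs and dangling cross-references) are assumptions chosen to show the idea.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    references: list[str] = field(default_factory=list)  # IDs of related requirements

def check_consistency(requirements: list[Requirement]) -> list[str]:
    """Return a list of human-readable consistency problems."""
    problems = []
    ids = [r.req_id for r in requirements]
    known = set(ids)
    # Duplicate IDs make traceability ambiguous.
    for req_id in known:
        if ids.count(req_id) > 1:
            problems.append(f"Duplicate requirement ID: {req_id}")
    # A reference to an unknown ID points at a missing or deleted requirement.
    for r in requirements:
        for ref in r.references:
            if ref not in known:
                problems.append(f"{r.req_id} references unknown requirement {ref}")
    return problems

reqs = [
    Requirement("R1", "The system shall log every transaction."),
    Requirement("R2", "Logs shall be retained for 90 days.", references=["R1"]),
    Requirement("R3", "Reports are derived from logs.", references=["R9"]),  # dangling
]
for problem in check_consistency(reqs):
    print(problem)

Running the sketch reports the dangling reference in R3, the kind of fault a review team would otherwise have to find by hand.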
7. Requirements Management: This is the last phase of the requirements analysis process.
Requirements management is a set of activities where the entire team takes part in identifying,
controlling, tracking, and establishing the requirements for the successful and smooth
implementation of the project.
In this phase, the team is responsible for managing any changes that may occur during
the project. New requirements emerge, and in this phase responsibility must be taken to
manage and prioritize them: to determine where each new requirement fits in the project, how
the change will affect the overall system, and how to address and deal with the change. Based
on this phase, the working model is analyzed carefully and made ready to be delivered to the
customer.
Iteration and incrementation:
A model is a set of UML diagrams that represent one or more aspects of the software product
to be developed. The object-oriented paradigm is an iterative and incremental methodology.
Each workflow consists of a number of steps, and to carry out that workflow, the steps of the
workflow are repeatedly performed until the members of the development team are satisfied that
they have an accurate UML model of the software product they want to develop. That is, even
the most experienced software professionals iterate and reiterate until they are finally satisfied
that the UML diagrams are correct. The nature of software products is such that virtually
everything has to be developed iteratively and incrementally.
The process continues in this way until eventually the software engineers are satisfied that
all the models for a given workflow are correct. In other words, initially the best possible UML
diagrams are drawn in the light of the knowledge available at the beginning of the workflow.
Then, as more knowledge about the real-world system being modeled is gained, the diagrams are
made more accurate (iteration) and extended (incrementation).
1. Just as it is not possible to become an expert on calculus or a foreign language in one single
course, gaining proficiency in the Unified Process requires extensive study and, more important,
unending practice in object-oriented software engineering.
2. The Unified Process was developed primarily for use in developing large, complex software
products. To be able to handle the many intricacies of such software products, the Unified
Process is itself large. It would be hard to cover every aspect of the Unified Process in a textbook
of this size.
3. To teach the Unified Process, it is necessary to present a case study that illustrates the features
of the Unified Process. To illustrate the features that apply to large software products, such a
case study would have to be large.
The five core workflows of the Unified Process
 Requirements workflow
 Analysis workflow
 Design workflow
 Implementation workflow
 Test workflow
Workflow:
A workflow can usually be described using formal or informal flow diagramming
techniques, showing directed flows between processing steps. Single processing steps or
components of a workflow can basically be defined by three parameters:

1. Input description: The information, material and energy required to complete the step
2. Transformation rules: Algorithms which may be carried out by people or machines, or
both
3. Output description: The information, material, and energy produced by the step and
provided as input to downstream steps.
Components can only be plugged together if the output of one previous (set of)
component(s) is equal to the mandatory input requirements of the following component(s). Thus,
the essential description of a component actually comprises only input and output that are
described fully in terms of data types and their meaning (semantics). The algorithms' or rules'
descriptions need only be included when there are several alternative ways to transform one type
of input into one type of output – possibly with different accuracy, speed, etc.
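As a concrete illustration of these three parameters and of the plug-compatibility rule, here is a minimal Python sketch. The Step class and the compatible() check are illustrative assumptions rather than a standard workflow API: two steps can be chained only when the upstream output description covers the downstream step's mandatory input description.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    inputs: set[str]     # input description: the data types required
    outputs: set[str]    # output description: the data types produced
    transform: Callable  # transformation rule, carried out by people or machines

def compatible(upstream: Step, downstream: Step) -> bool:
    """Steps can be plugged together only if the upstream outputs
    cover the downstream step's mandatory inputs."""
    return downstream.inputs <= upstream.outputs

gather = Step("gather requirements", {"stakeholder interviews"},
              {"requirements list"}, transform=lambda x: x)
model = Step("build analysis model", {"requirements list"},
             {"UML diagrams"}, transform=lambda x: x)

print(compatible(gather, model))  # True: the output matches the mandatory input

The transform field stands in for the algorithm or rule; as noted above, it only needs a full description when several alternative transformations exist.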
When the components are non-local services that are invoked remotely via a computer
network, such as Web services, additional descriptors (such as QoS and availability) also must be
considered.
Requirements workflow:
Software development is expensive. The development process usually begins when
the client approaches a development organization with regard to a software product that, in the
opinion of the client, is either essential to the profitability of his or her enterprise or somehow
can be justified economically. The aim of the requirements workflow is for the development
organization to determine the client’s needs.
The application domain (domain for short) is the specific environment in
which the target software product is to operate. The domain could be banking, automobile
manufacturing, or nuclear physics. At any stage of the process, if the client stops believing that
the software will be cost effective, development will terminate immediately. Throughout this
chapter the assumption is made that the client feels that the cost is justified. Therefore, a vital
aspect of software development is the business model, a document that demonstrates the cost-
effectiveness of the target product. In fact, the “cost” is not always purely financial.
For example, military software often is built for strategic or tactical reasons. Here, the
cost of the software is the potential damage that could be suffered in the absence of the weapon
being developed. At an initial meeting between client and developers, the client outlines the
product as he or she conceptualizes it. From the viewpoint of the developers, the client’s
description of the desired product may be vague, unreasonable, contradictory, or simply
impossible to achieve.
 A major constraint is almost always the deadline.
For example, the client may stipulate that the finished product must be
completed within 14 months. In almost every application domain, it is now
commonplace for a target software product to be mission critical. That is, the client
needs the software product for core activities of his or her organization, and any delay in
delivering the target product is detrimental to the organization.
 A variety of other constraints often are present such as reliability
(For example, the product must be operational 99 percent of the time or the mean
time between failures must be at least 4 months). Another common constraint is the size
of the executable load image
(For example, it has to run on the client’s personal computer or on the hardware
inside the satellite).
 The cost is almost invariably an important constraint. However, the client rarely tells the
developers how much money is available to build the product. Instead, a common practice is
that, once the specifications have been finalized, the client asks the developers to name their
price for completing the project. Clients follow this bidding procedure in the hope that the
amount of the developers’ bid is lower than the amount the client has budgeted for the
project.

Analysis workflow:
Workflow analysis is the process of examining an organization’s workflows, generally
for the purpose of improving operational efficiency. It identifies areas of process
improvement such as redundant tasks or processes, inefficient workplace layouts and bottlenecks
in the workflow.

Improving workflows allows an organization to use its resources more efficiently,
especially for processes where a team must hand off its work to another team. A situation in
which one team is often waiting to receive work from another team is a common type of
bottleneck that workflow analysis can identify. Another scenario in which workflow analysis can
improve efficiency occurs when workers perform unnecessary tasks, typically in established
processes requiring multiple steps to complete.
The analysis workflow restates the requirements in a more precise language, which ensures
that the design and implementation workflows are correctly carried out. In addition, more
details are added during the analysis workflow, details not relevant to the client’s
understanding of the target software product but essential for the software professionals who
will develop the software product.

In the analysis workflow, two key artifacts are produced:

1. Analysis classes - These model key concepts of the business domain.

2. Use case realizations - These illustrate how instances of the analysis classes can interact to
realize behavior specified by a use case.
Design workflow:
The purpose of the design workflow is to refine the artifacts of the analysis workflow until the
material is in a form that can be implemented by the programmers.
“Workflow design is the process of laying out all tasks and processes in a visual map, in
order to give team members and stakeholders a high-level overview of each task involved in
a particular process.”
In the design workflow, the design team determines the internal structure of the product.
The methods are extracted and assigned to the classes. In particular, the interface of each method
(that is, the arguments passed to the method and the arguments returned by the method) must be
specified in detail. For example, a method might measure the water level in a nuclear reactor and
cause an alarm to sound if the level is too low. A method in an avionics product might take as
input two or more sets of coordinates of an incoming enemy missile, compute its trajectory, and
invoke another method to advise the pilot as to possible evasive action. Another important aspect
of the design workflow is choosing appropriate algorithms for each method.
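To make the idea of a fully specified method interface concrete, here is a minimal Python sketch of the water-level example. Everything beyond the scenario itself (the function names, the threshold value, and the alarm mechanism) is an assumption made for illustration.

LOW_WATER_THRESHOLD_M = 2.5  # assumed minimum safe water level, in meters

def sound_alarm(message: str) -> None:
    """Placeholder for the real alarm mechanism."""
    print(f"ALARM: {message}")

def check_water_level(level_m: float) -> bool:
    """Interface specified in detail: the argument is the measured water
    level in meters; the return value is True if the level is safe; and
    sounding the alarm is a documented side effect when the level is low."""
    if level_m < LOW_WATER_THRESHOLD_M:
        sound_alarm(f"water level {level_m} m is below {LOW_WATER_THRESHOLD_M} m")
        return False
    return True

check_water_level(1.8)  # triggers the alarm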

How to Design a Workflow:

To design the most efficient workflow for your processes, follow these step-by-step instructions.
1. Diagram Your Tasks: Start by identifying the different tasks and activities in your process
using a basic flowchart. Create clear start and end points so that you know how the process
should begin and what needs to be completed in order to finalize it. Then document every task
between the start and end points, and ensure that each step is properly integrated into the
workflow design process. Remember to indicate the relationship between the steps (i.e.,
whether steps are contingent on one another or can be completed concurrently).
2. Consider Inputs and Outputs: Once you’ve diagrammed your tasks and identified their
relationships to each other, consider both the inputs and outputs of your workflow. Attach all
relevant information, including instructions for how each task should be completed, data or
information needed in order to properly complete each task, links to source files, and checklists
of the actions necessary to complete the tasks.
3. Identify Sub-Workflows: Now that you have both your tasks and their attendant inputs and
outputs documented, identify any sub-workflows that make up your larger workflow. If
possible, split your larger, linear flows into smaller, more digestible workflows to make complex
processes easier to understand and complete. When possible, reuse processes from commonly
used workflows since you already tested and implemented them successfully.
4. Create Logical Loops in the Workflow Design: You can now identify how the workflow will
progress. There are two main types of workflow loops: the “while … do” loop (while one task
occurs, you complete another task in the meantime) and the “repeat … until” loop (continue
doing a certain task until another task is approved); see the sketch after this list. Identify which
type of loop (or loops) best suits your workflow to avoid bottlenecking processes or lagging
behind schedule.
5. Incorporate Automation Wherever Possible: Once you’ve built all the main components of
your workflow, try to incorporate as much automation as possible in order to simplify your
workflow and eliminate repetitive, manual tasks. For example, you can automate the way your
data is centralized or set up automatic notifications to alert stakeholders when a task is ready for
review.
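The following Python sketch illustrates the two loop types named in step 4. The task functions are hypothetical stand-ins, not a workflow-engine API.

def while_do(build_running, review_document):
    """'while ... do' loop: while one task occurs (a build is running),
    complete another task in the meantime (review a document)."""
    while build_running():
        review_document()

def repeat_until(revise_draft, draft_approved):
    """'repeat ... until' loop: continue revising the draft
    until it has been approved."""
    while True:
        revise_draft()
        if draft_approved():
            break

# Tiny demo with canned answers so the sketch runs end to end.
build_states = iter([True, True, False])
approvals = iter([False, False, True])

while_do(lambda: next(build_states), lambda: print("reviewing while the build runs"))
repeat_until(lambda: print("revising the draft"), lambda: next(approvals))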

Implementation workflow:

The purpose of the implementation workflow is to implement the target software product in the
chosen implementation language(s). A small software product is sometimes implemented by the
designer. In contrast, a large software product is partitioned into smaller subsystems, which are
then implemented in parallel by coding teams. The subsystems, in turn, consist of code artifacts
implemented by individual programmers.

 The implementation workflow is the main focus of the construction phase.

 Implementation is about transforming a design model into executable code. The
implementation model is a part of the design model.

 Implementation specifies how the design elements are manifested by artifacts, and how these
artifacts are deployed onto nodes. (Artifacts represent the specifications of real-world things
such as source files, and nodes represent the specifications of hardware or execution
environments onto which those things are deployed.)

 Implementation subsystems contain components that package design classes.

 Implementation subsystems map to the real, physical grouping mechanisms of the target
implementation language.
Two aspects in which an explicit implementation modeling activity is important are:

1. Generating code directly from the model, for which we need to specify details such as source
files and components.

2. Reusing components through component-based development, for which the allocation of
design classes and interfaces to components becomes important.
Test workflow: The primary activities of the Test workflow are aimed at building the test model,
which describes how integration and system tests will exercise executable components from the
implementation model. The test model also describes how the team will perform those tests as
well as unit tests.

In the Unified Process, testing is carried out in parallel with the other workflows, starting from
the beginning. There are two major aspects to testing:

1. Every developer and maintainer is personally responsible for ensuring that his or her work is
correct. Therefore, a software professional has to test and retest each artifact he or she develops
or maintains.

2. Once the software professional is convinced that an artifact is correct, it is handed over to the
software quality assurance group for independent testing.
Artifact formats: Artifact formats can be specific to individual artifact types or can be used for
multiple types. For example, you might use the text format for a feature or a use case
specification or a custom artifact type. However, the diagram format is typically used exclusively
for creating diagrams. You can create and populate artifacts that are based on several formats,
including these:
Text: Use this format to create rich-text requirement content that can contain text, images, and
embedded artifacts. This format is useful for text-based artifacts types, such as actor and use
case specifications, user stories, features, business goals, and glossary terms.
Collection: Use this format to group a set of related artifacts in a collection.
Module: Use this format to create a structured document that consists of artifacts in a module.
Diagram: Use this format to create graphical artifacts such as wireframes, business process
diagrams, and use case diagrams.

Requirements Artifacts:
If the requirements artifacts are to be testable over the life cycle of the software product, then one
property they must have is traceability.
For example, it must be possible to trace every item in the analysis artifacts back to a requirements
artifact and similarly for the design artifacts and the implementation artifacts. If the requirements have
been presented methodically, properly numbered, cross-referenced, and indexed, then the developers
should have little difficulty tracing through the subsequent artifacts and ensuring that they are indeed a
true reflection of the client’s requirements.

Analysis Artifacts:
A major source of faults in delivered software is faults in the specifications that are not detected
until the software has been installed on the client’s computer and used by the client’s
organization for its intended purpose. Both the analysis team and the SQA group must therefore
check the analysis artifacts assiduously. In addition, they must ensure that the specifications are
feasible;
For example, that a specific hardware component is fast enough or that the client’s current
online disk storage capacity is adequate to handle the new product. An excellent way of checking
the analysis artifacts is by means of a review. Representatives of the analysis team and of the
client are present. The meeting usually is chaired by a member of the SQA group. The aim of the
review is to determine whether the analysis artifacts are correct. The reviewers go through the
analysis artifacts, checking to see if there are any faults.
Artifacts of Analysis/Architectural Modeling
• Conceptual Model
– Static Structure diagram(s)
– Sequence diagrams
– Glossary
– Other models and documents?
• Architectural Model
– Stakeholder Needs
– Architectural views

While there is no default artifact type called "requirement," you can create one or use the default
artifact types that are in the sample project templates. Many artifact types are included in the
sample project templates, including these types:

 Requirements
 Use cases
 Design documents
 Business process diagrams
 Use case diagrams

For a list of sample project templates and their included artifact types, see creating
requirements projects.

If you are a project administrator, you can view the artifact types that are in a project and create
artifact types, attributes, data types, and link types.

Design Artifacts:
A critical aspect of testability is traceability. In the case of the design, this means
that every part of the design can be linked to an analysis artifact. A suitably cross-referenced
design gives the developers and the SQA group a powerful tool for checking whether the design
agrees with the specifications and whether every part of the specifications is reflected in some
part of the design.
Design reviews are similar to the reviews that the specifications undergo.
However, in view of the technical nature of most designs, the client usually is not present.
Members of the design team and the SQA group work through the design as a whole as well as
through each separate design artifact, ensuring that the design is correct. The types of faults to
look for include logic faults, interface faults, lack of exception handling (processing of error
conditions), and, most important, nonconformance to the specifications.
Purposes can include:

 Understanding the general direction of the system


 Exploring structural directions
 Exploring possible interaction mechanisms
 Exploring visual directions
 Understanding decision-making
 Providing construction guidance to developers
 Testing with customers

Implementation Artifacts:

The implementation workflow is the main focus of the Construction phase. Implementation is
about transforming a design model into executable code. The implementation model is really just
the implementation view of the design - that is, it is a part of the design model.

Tools that are used include compilers, debuggers, code analyzers, and test management tools.
This artifact set generally contains the source code that implements the components, their form
and interfaces, and the executables that are necessary for stand-alone testing of components.

Testing artifacts:
Test artifacts are an integral part of software testing. They are generally the set of
documents that a software project tester produces during the STLC (Software Testing Life
Cycle). Test artifacts are by-products that are generated or created while performing software
testing.

Types of Test Artifacts:

1. Test Strategy:
A test strategy is generally prepared by the Test or Project Manager at the management level.
It is an outline document that describes the testing approach for the software development
cycle and lists how to achieve the expected result using the resources that are available.
It provides an easy understanding of the targets, tools, techniques, infrastructure, and
timing of the test activities that are to be performed. It is also used to identify all the risk
factors that can arise during testing, along with appropriate solutions to reduce or mitigate
each risk. It also clarifies the major challenges and the approach for completing the testing
process of the project. The test strategy is usually derived from the Business Requirement
Specification document. To develop this strategy, there are several points that need to be kept
in mind. Some of them are given below:
 What is the main objective of the testing, i.e., why do you want to perform this testing?
 What guidelines need to be followed while performing the testing?
 What requirements are needed for the testing, such as functional requirements, test
scenarios, resources, etc.?
 What are the roles and responsibilities of each function and of the project manager in
completing the test?
 What are the different levels of testing?
 What will be the main deliverables of this testing?
 What risks are there regarding testing, along with the project risks?
 Are there any methods to resolve issues that might arise?
2. Test Plan:
The test plan is a detailed document that describes the software testing scope, test strategy, test
deliverables, risks, objectives, and activities. It is a systematic approach generally used for
software application testing. It is an important and essential activity that ensures there is
initially a list of tasks and milestones in a baseline plan that can be used to track project
progress.
It is a dynamic document that generally acts as a point of reference for the testing carried
out within the QA (Quality Assurance) team. It is simply a blueprint that explains how the
testing activity is going to take place in the project. There are several points that need to be
kept in mind when developing a test plan. Some of them are given below:
 What is the main purpose of the testing activities?
 What is the scope of the testing, i.e., the exact path that needs to be followed or covered
while performing the testing?
 What is the testing approach, i.e., how will the testing be carried out?
 What resources are required for the testing?
 What are the exit criteria for completing the testing, i.e., the set of conditions and activities
that need to be fulfilled in order to conclude the testing?
 How will you manage the risks that can arise?
3. Test Scenario:
A test scenario is a statement that describes a functionality of the application that can be
tested. It is used to make sure that the end-to-end behavior of a feature of the software is
working correctly. It is derived from the use cases.
It describes a situation or condition in the application from which many test cases can be
developed. A test scenario is also called a Test Condition or Test Possibility. One or more test
cases can be accommodated in a single test scenario. Because of this, a test scenario has a
one-to-many relationship with test cases. It means talking and thinking about the requirements
in a detailed manner.
4. Test Case:
A test case is a detailed document that describes the cases that guide execution during
testing. It consists of a test case name, preconditions, steps/input conditions, and
expected results. The development of test cases also helps in identifying and tracking problems
or issues in the requirements or the design of the software application.
It is simply a set of conditions or variables under which the software tester will determine
whether or not the system under test satisfies the requirements and works in a proper and
correct way. To write a good test case, the following points need to be included:
 Write the test case ID, i.e., a unique identification number for the test case.
 Write the test case name, i.e., a strong title for the test case.
 Write full details and a description of the test case.
 Write the steps to make it clear and concise, i.e., simple.
 Write the expected and actual outcome or result of the test.
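As a minimal illustration, the fields listed above map directly onto an automated test written with Python's standard unittest module. The login() function under test and its credentials are hypothetical.

import unittest

def login(username: str, password: str) -> bool:
    """Hypothetical function under test."""
    return username == "admin" and password == "secret"

class TestLogin(unittest.TestCase):
    def test_tc_001_valid_credentials(self):
        # Test case ID and name: TC-001, "login succeeds with valid credentials".
        # Precondition: the user "admin" exists with password "secret".
        # Steps / input condition: call login() with valid credentials.
        result = login("admin", "secret")
        # Expected result: login() returns True.
        self.assertTrue(result)

if __name__ == "__main__":
    unittest.main()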
5. Traceability Matrix:
A traceability matrix is a table that shows and explains the many-to-many relationships
between requirements and test cases. It is a document that correlates any two baseline
documents that have a many-to-many relationship, in order to check, trace, and map that
relationship. It generally helps to ensure the transparency and completeness of the products of
software testing.
It traces all the requirements of the client to test cases and identified defects.
The traceability matrix is of two types: the forward traceability matrix and the backward
traceability matrix. Some of the parameters that are included in a traceability matrix are given
below:
 Requirement ID.
 Requirement type along with description.
 Status of test design along with execution of test status.
 System and unit test cases.
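A forward traceability matrix can be sketched as a simple mapping from requirement IDs to the test cases that cover them; the IDs below are hypothetical. An empty row immediately exposes an uncovered requirement.

requirements = ["R1", "R2", "R3"]
test_cases = {
    "TC-001": ["R1"],
    "TC-002": ["R1", "R2"],  # one test case can cover many requirements
}

# Build the requirement -> test cases mapping (a many-to-many relationship).
matrix = {r: [] for r in requirements}
for tc, covered in test_cases.items():
    for r in covered:
        matrix[r].append(tc)

for r, tcs in matrix.items():
    print(f"{r}: {', '.join(tcs) if tcs else 'NOT COVERED'}")  # R3 is a coverage gap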
6. Software Test Report:
A software test report is a document that describes all the testing activities. It provides
detailed information about the status of the test cases, test suites, or test scripts for a given
scope. A test report is needed to present the test results in a formal way that makes it possible
to read the testing results quickly. Test reports can be of several types, such as individual test
reports and team reports, and they can be generated on a daily basis, after completion of
testing, or at the end of the testing cycle.

Post-delivery maintenance:
Post-delivery maintenance is not an activity grudgingly carried out after the product has been
delivered and installed on the client’s computer.
The entire software development effort must be carried out in such a way as to
minimize the impact of the inevitable future post-delivery maintenance. A common problem
with post-delivery maintenance is documentation or, rather, lack of it. In the course of
developing software against a time deadline, the original analysis and design artifacts frequently
are not updated and, consequently, are almost useless to the maintenance team. Other
documentation such as the database manual or the operating manual may never be written,
because management decided that delivering the product to the client on time was more
important than developing the documentation in parallel with the software. In many instances,
the source code is the only documentation available to the maintainer.
Turning now to testing, there are two aspects to testing changes made to a product when
post-delivery maintenance is performed. The first is checking that the required changes have
been implemented correctly. The second aspect is ensuring that, in the course of making the
required changes to the product, no other inadvertent changes were made. Therefore, once the
programmer has determined that the desired changes have been implemented, the product must
be tested against previous test cases to make certain that the functionality of the rest of the
product has not been compromised. This procedure is called regression testing.
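Here is a minimal Python sketch of regression testing: the previous test cases are retained and rerun after a maintenance change to confirm that the functionality of the rest of the product has not been compromised. The compute_discount() function and its expected values are hypothetical.

def compute_discount(amount: float) -> float:
    """Version of the function after a maintenance change:
    a 10% discount applies on amounts above 100."""
    return amount * 0.9 if amount > 100 else amount

# Previous test cases, retained precisely so that they can be rerun later.
regression_suite = [
    (50.0, 50.0),    # no discount at or below the threshold
    (200.0, 180.0),  # discount applies above the threshold
]

def run_regression(suite):
    """Rerun every previous test case and report any that now fail."""
    ok = True
    for value, expected in suite:
        got = compute_discount(value)
        if abs(got - expected) > 1e-9:  # tolerance for float comparison
            print(f"REGRESSION: input {value}: expected {expected}, got {got}")
            ok = False
    if ok:
        print("All previous test cases still pass.")

run_regression(regression_suite)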

Retirement:
The final stage in the software life cycle is retirement. After many years of service, a
stage is reached when further post-delivery maintenance no longer is cost effective.
• Sometimes, the proposed changes are so drastic that the design as a whole would have to be
changed. In such a case, it is less expensive to redesign and recode the entire product.
• So many changes may have been made to the original design that interdependencies
inadvertently have been built into the product, and even a small change to one minor component
might have a drastic effect on the functionality of the product as a whole.
• The documentation may not have been adequately maintained, thereby increasing the risk of a
regression fault to the extent that it would be safer to recode than maintain.
• The hardware (and operating system) on which the product runs is to be replaced; it may be
more economical to rewrite from scratch than to modify.

Phases of the Unified Process:

One- versus two-dimensional life-cycle models:

A classical life-cycle model (like the waterfall model of Section 2.9.2) is a one-
dimensional model, as represented by the single axis in Figure (a). Underlying the Unified Process is a
two-dimensional life-cycle model, as represented by the two axes.

The one-dimensional nature of the waterfall model is clearly reflected in its single sequence of
phases. In contrast, the evolution-tree model is two-dimensional, and the two models should
therefore be compared with that difference in mind. Ideally, the analysis workflow would be
completed before starting the design workflow, and so on. In reality, however, all but the most
trivial software products are too large to handle as a single unit. Instead, the task has to be
divided into increments (phases), and within each increment the developers have to iterate
until they have completed the task under construction.
Even subsystems can be too large at times; components may be all that we can
handle until we have a fuller understanding of the software product as a whole. The Unified
Process is the best solution to date for treating a large problem as a set of smaller, largely
independent subproblems. It provides a framework for incrementation and iteration, the
mechanism used to cope with the complexity of large software products. Another challenge that
the Unified Process handles well is inevitable change. One aspect of this challenge is
changes in the client’s requirements while a software product is being developed, the so-called
moving-target problem.

Improving software process capability:


Software Process Improvement (SPI) methodology is defined as a sequence of tasks,
tools, and techniques to plan and implement improvement activities to achieve specific goals
such as “increasing development speed, achieving higher product quality or reducing cost”.
How can software processes be improved?
 SPI is simply defined as the sequence of tasks, tools, and techniques that need to be
performed to plan and implement all the improvement activities.
 It involves three factors: people, technology, and product.
 It also includes improvement planning, implementation, and evaluation.
 It reduces cost and increases development speed by installing tools that reduce the
time and work done by humans or automate the production process.
 It increases product quality.
 It is undertaken to achieve specific goals, such as increasing development speed and
achieving higher product quality.
 It improves team performance, for example by hiring the best people.
Capability maturity models:
The Software Engineering Institute (SEI) Capability Maturity Model (CMM) specifies an
increasing series of levels of a software development organization. The higher the level, the
better the software development process, hence reaching each level is an expensive and time-
consuming process.
 Level One: Initial - The software process is characterized as inconsistent and
occasionally even chaotic. Defined processes and standard practices that exist are
abandoned during a crisis. The success of the organization depends largely on individual
effort, talent, and heroics. The heroes eventually move on to other organizations, taking
their wealth of knowledge, or lessons learnt, with them.
 Level Two: Repeatable - At this level, a software development organization has basic
and consistent project management processes to track cost, schedule, and functionality.
Processes are in place to repeat earlier successes on projects with similar
applications. Program management is a key characteristic of a level two organization.
 Level Three: Defined - The software processes for both management and engineering
activities are documented, standardized, and integrated into a standard software process
for the entire organization, and all projects across the organization use an approved,
tailored version of the organization's standard software process for developing, testing,
and maintaining applications.
 Level Four: Managed - Management can effectively control the software development
effort using precise measurements. At this level, the organization sets quantitative quality
goals for both the software process and software maintenance. At this maturity level, the
performance of processes is controlled using statistical and other quantitative techniques,
and is quantitatively predictable.
 Level Five: Optimizing - The key characteristic of this level is a focus on continually
improving process performance through both incremental and innovative technological
improvements. At this level, changes to the process are made to improve process performance
while at the same time maintaining the likelihood of achieving the established quantitative
process-improvement objectives.
Cost and benefits of software process improvement:
A focus on quality products is a great asset to any business. However, without the ability to generate
profits, the quality of your product offering has little value. If you cannot afford to stay in business
because your development time and costs outweigh the amount of revenue earned from sales, your
attention to detail and strong commitment to user needs will be lost. There are three primary
benefits that smart businesses can enjoy from a cost-benefit analysis:

 Loss prevention:
When you can clearly see the costs that go into your software program and balance those
with the sales profits, you will be able to prevent pouring more money into a product than you
get out of it.
 Increased profits:
Preventing a loss is important but it is in the generation of profits that your business can
really succeed. A cost-benefit analysis can help to illustrate ways that your company can increase
software sales, revenue and ultimately profits.
 Improved decision making:
Every part of the software development process offers opportunities to streamline operations,
reduce costs, or improve performance if the right information is made available. Having data
readily accessible can help management and development teams make the right decisions at the
right times.

Team Organization:

Teams are an effective way for a business to delineate important work tasks. It can also
be a valuable tool to create interpersonal relationships between coworkers. This can lead to an
increase in employee retention, morale, and productivity. Businesses must understand how to
manage teams strategically, or conflict can reduce these effects. The four types of teams are:
project teams, self-managed teams, virtual teams, and operational teams. The team that is utilized
depends on the need of the organization.
A team is defined as two or more people who work together to accomplish a common
goal. Teams are the most effective when they have a clear organizational structure. This keeps
the group focused and allows everyone to understand their role and responsibilities. This ensures
that tasks are completed promptly. Additionally, a clear organization allows team members to
understand who to see when they have a question or a conflict.

There are two extreme approaches to programming-team organization, democratic teams and
chief programmer teams. The approach taken here is to describe each of the two approaches,
highlight its strengths and weaknesses, and then suggest other ways of organizing a
programming team that incorporate the best features of the two extremes.

Democratic Team Approach:


The democratic leadership style, or participative management, actively involves the people
being led. Democratic leaders often seek feedback and input from subordinates. They
encourage conversation and participation in the decision-making process.
Democratic Team Organization:
It is quite similar to the egoless team organization, but one member is the team leader with
some responsibilities:
 Coordination
 Final decisions, when consensus cannot be reached.
Advantages of Democratic Team Organization:
 Each member can contribute to decisions.
 Members can learn from each other.
 Improved job satisfaction.
Disadvantages of Democratic Team Organization:
 Communication overhead increased.
 Need for compatibility of members.
 Less individual responsibility and authority.

Chief Programmer Team Approach: This team organization is composed of a small team
consisting of the following members:

 The chief programmer: the person who is actively involved in the planning,
specification, and design process and, ideally, in the implementation process as well.
 The project assistant: the closest technical co-worker of the chief programmer.
 The project secretary: relieves the chief programmer and all other programmers of
administrative tasks.
 Specialists: these people select the implementation language, implement individual
system components, employ software tools, and carry out other tasks.
Advantages of Chief-programmer team organization:
 Centralized decision-making.
 Reduced communication paths.
 Small teams are more productive than large teams.
 The chief programmer is directly involved in system development and can exercise
better control over the project.
Disadvantages of Chief-programmer team organization:
 Project survival depends on one person only.
 It can cause psychological problems, as the “chief programmer” is like a “king” who
takes all the credit, leaving other members resentful.
 This organization is limited to small teams, and a small team cannot handle every
project.
 The team’s effectiveness is very sensitive to the chief programmer’s technical and
managerial abilities.

Synchronize and Stabilize Teams:


Synchronize-and-stabilize is a software life-cycle development model. It allows teams to
work efficiently in parallel on different individual application modules. This is done while
frequently synchronizing the work of individuals and of parallel teams, and periodically
stabilizing and/or debugging the code throughout the development process.
The success of this model is largely a consequence of the way the teams are organized.
Each of the three or four sequential builds of the synchronize-and-stabilize model is
constructed by a number of small parallel teams led by a manager and consisting of between
three and eight developers together with three to eight testers who work one-to-one with the
developers.
The team is provided the specifications of its overall task; individual team members then
are given the freedom to design and implement their portions of that task as they wish. The
reason that this does not rapidly devolve into hacker-induced chaos is the synchronization step
performed each day: The partially completed components are tested and debugged on a daily
basis.

Teams for Agile processes:


Each iteration involves a cross-functional team working in all functions: planning, analysis,
design, coding, unit testing, and acceptance testing. At the end of the iteration a working product
is demonstrated to stakeholders. This minimizes overall risk and allows the product to adapt to
changes quickly.
What is the role of a project manager within Agile, and how do you best manage Agile teams?
As an Agile project manager, you are a coach, facilitator, and supporter of your team. You need
to facilitate communication between the group and other stakeholders and remove roadblocks
to progress.
A feature of some agile processes is that all code is written by a team of two programmers
sharing a single computer; this is referred to as pair programming.
The size of an agile software development team may vary, but the key roles stay the same: a
team leader, team members, a product owner, and various stakeholders.

An agile software development team is characterized by its ability to maintain a flexible
approach regardless of the complexity of the project. To be able to do this, agile teams are built
upon these principles:

 Communication and feedback - Regular review of work moves an agile software development
project from one stage to the next without major setbacks. Being open to giving and receiving
feedback enables software engineers to adjust the product faster.
 Adaptability - Fast-paced agile methodology requires its team members to be flexible and
adapt fast to the changing requirements without interrupting the development process.
 Trust - Transparency requires each team member to trust the rest of the stakeholders
involved in the development process and provide them with a safe environment where
mutual trust can be established.
 Collaboration - The foundation of a successful agile software development team lies in the
ability to work together to find the best solutions to the project and make sure the whole team
has access to new and fundamental knowledge.
 Engagement - Readiness to face changes and keep an open mind while learning from others
and respecting their opinion is what helps agile teams move at a faster pace compared to
other software development methodologies.

Open source programming teams:

Product managers and product marketing managers are the two most common product
management roles, but product management can be further split into any number of roles,
including competitive analysis, business strategy, sales enablement, revenue growth, content
creation, sales tools, and more. With a very large product, even the product management role
may be broken up into separate roles. You may even hear titles like technical marketing
manager, product evangelist, and business owner, not to mention people-management roles for
groups of individual contributor roles. For the purpose of this article, I refer to all of these roles
collectively as "product management."

Product management is a tough career, and there are many frameworks for learning and
understanding it. Instead of covering those, I'll emphasize a subset of tasks that are applicable
to open source products:
 Market problems: Talk to customers and figure out what they need
 Product roadmap: Determine what features will be added to the product and when
 Build: Invest in building technology in-house
 Buy: Purchase technology from another company
 Partner: Deliver a solution to a customer by leveraging a partner's technology
 Pricing: Determine the price
 Packaging: Determine what's included in the price
 Channel training: Train the salespeople so that they can educate potential customers.

Individuals volunteer to take part in an open-source project for two main reasons: for the
sheer enjoyment of accomplishing a worthwhile task or for the learning experience.

 In order to attract volunteers to an open-source project and keep them interested, it is
essential that at all times they view the project as “worthwhile.” Individuals are unlikely
to devote a considerable portion of their spare time to a project unless they truly believe
that the project will succeed and that the product will be widely utilized.
Participants will start to drift away if they start viewing the project as futile.
 With regard to the second reason, many software professionals join an open-source
project in order to gain skills in a technology that is new to them, such as a modern
programming language or an operating system with which they are unfamiliar. They can
then leverage the knowledge they gain to obtain a promotion within their own
organization or acquire a better position in another organization.
After all, employers frequently view experience gained working on a large, successful
open-source project as more desirable than acquiring additional academic qualifications.
Conversely, there is no point in devoting months of hard work to a project that ultimately
fails.

In other words, an open-source project succeeds because of the nature of the target product, the
personality of the instigator, and the talents of the members of the core group. The way that a
successful open-source team is organized is essentially irrelevant.

Types of products built on open source:

Most modern software products are delivered by adding new value to the value provided by
the open source supply chain. This could include extra downstream testing, documentation,
quality engineering, performance testing, security testing, industry certifications, a partner
ecosystem, training, professional services, or even extra proprietary code not included upstream
(open core). By considering this new model, many of the old debates about open source can be
clarified:

 Open source products: The entire supply chain of code that goes into the product is open
source. This can include multiple upstream projects, like enterprise Linux distributions or
sophisticated products such as Red Hat Satellite or OpenShift.
 Open core products: Some of the supply chain of code that goes into the product is open source,
while other parts are proprietary. This mix of licensing can be used to control pricing, packaging,
and distribution. It can also have the downside of putting engineering contributions to the
product at odds with the open source supply chain.
 Paid software-as-a-service products: The supply chain of SaaS products can be made up of
open source languages and libraries, while the business logic built in-house is often proprietary.
This allows product managers to tightly control pricing and packaging through very measurable
distribution channels. There are many examples of online companies using this model, including
customer relationship management platforms, databases, caching layers, identity management,
document management, and office and email solutions.
 Free SaaS products: The supply chain of free SaaS products (e.g., Facebook, Google Search,
YouTube, etc.) is essentially the same as paid SaaS products. Instead of tightly controlling the
pricing and packaging, the product is monetized through user data or advertisements.
 Cloud providers vs. software vendors: The recent interest in and creation of new quasi-open
source licenses like the Server Side Public License, Redis Source Available License,
or PolyForm are better understood by thinking of open source as a supply chain and SaaS as a
pricing and packaging model. These new licenses exert some control on how buyers from the
open source supply chain can price and package their products (e.g., limiting how a large cloud
provider can deliver a service built on freely available code and binaries). This is not unheard of,
even in traditional manufacturing supply chains. It's a defensive play because these licenses don't
deliver new value to customers.
 Open source as awareness: In this model, the buzz around the upstream project is used to
build awareness for the products built on it. In marketing, awareness of a technology is a
critical first step toward winning customers for products built on that technology.

For example, if users are aware of and believe in Kubernetes, they are potential
customers for products built on Kubernetes; when people looking for a Kubernetes solution hear
that your product is built on Kubernetes, they immediately become potential customers. In a
way, this is similar to Lenovo laptops advertising "Intel inside" or Arm & Hammer laundry
detergent advertising OxiClean as part of their supply chain. If you like OxiClean, you'll love
Arm & Hammer detergent.

 Open core as marketing: This goes a step further than open source as awareness. A single
vendor almost always controls the upstream open source projects that go into open core products.
They attempt to use the supply chain, often unsuccessfully, to generate market awareness in
what's perceived to be free or inexpensive marketing and lead generation. Open core products
advertise that they include the open source project to provide core value propositions.

People Capability Maturity Model (PCMM):

“People Capability Maturity Model (PCMM) is a maturity framework that focuses on
continuously improving the management and development of the human assets of a
software or information systems organization.”

PCMM can be perceived as the application of the principles of the Capability Maturity Model to
the human assets of a software organization. It describes an evolutionary improvement path from
ad hoc, inconsistently performed practices to a mature, disciplined, and continuously improving
development of the knowledge, skills, and motivation of the workforce. Although the focus of
the People CMM is on software and information systems organizations, its processes and
practices are applicable to any organization that aims to improve the capability of its workforce.
PCMM is particularly valuable as a guide for organizations whose core processes are knowledge
intensive.

10 Principles of People Capability Maturity Model (PCMM): The People Capability Maturity
Model describes an evolutionary improvement path from ad hoc, inconsistently performed
workforce practices to a mature infrastructure of practices for continuously elevating workforce
capability. The philosophy implicit in the PCMM can be summarized in ten principles.

1. In mature organizations, workforce capability is directly related to business performance.
2. Workforce capability is a competitive issue and a source of strategic advantage.
3. Workforce capability must be defined in relation to the organization’s strategic business
objectives.
4. Knowledge-intensive work shifts the focus from job elements to workforce competencies.
5. Capability can be measured and improved at multiple levels, including individuals, workgroups,
workforce competencies, and the organization.
6. An organization should invest in improving the capability of those workforce competencies that
are critical to its core competency as a business.
7. Operational management is responsible for the capability of the workforce.
8. The improvement of workforce capability can be pursued as a process composed of proven
practices and procedures.
9. The organization is responsible for providing improvement opportunities, while individuals are
responsible for taking advantage of them.
10. Since technologies and organizational forms evolve rapidly, organizations must continually
evolve their workforce practices and develop new workforce competencies.
People Capability Maturity Model (PCMM) Levels: The People Capability Maturity
Model consists of five maturity levels. Each maturity level is an evolutionary plateau at which
one or more domains of the organization’s processes are transformed to achieve a new level of
organizational capability.

The five levels of the People CMM are defined as follows (a short illustrative sketch follows the list):

1. At PCMM Level 1, an organization has no consistent way of performing workforce practices.
Most workforce practices are applied without analysis of impact.
2. At PCMM Level 2, organizations establish a foundation on which they deploy common
workforce practices across the organization. The goal of Level 2 is to have managers take
responsibility for managing and developing their people.
For example, the first benefit an organization experiences as it achieves Level 2 is a reduction in
voluntary turnover. The turnover costs that are avoided by improved workforce retention more
than pay for the improvement costs associated with achieving Level 2.

3. At PCMM Level 3, the organization identifies and develops workforce competencies and aligns
workforce and work group competencies with business strategies and objectives.

For example, the workforce practices that were implemented at Level 2 are now standardized
and adapted to encourage and reward growth in the organization’s workforce competencies.

4. At PCMM Level 4, the organization empowers and integrates workforce competencies and
manages performance quantitatively.

For example, the organization is able to predict its capability for performing work because it
can quantify the capability of its workforce and of the competency-based processes they use in
performing their assignments.

5. At PCMM Level 5, the organization continuously improves and aligns personal, work-group,
and organizational capability.

For example, at Maturity Level 5, organizations treat continuous improvement as a business
process to be performed in an orderly way on a regular basis.
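
To make the maturity ladder concrete, the Python sketch below models the five levels as a
simple lookup table. This is purely illustrative and not part of the PCMM specification: the
conventional level names (Initial, Managed, Defined, Predictable, Optimizing) come from the
People CMM literature, the focus summaries paraphrase the descriptions above, and the
describe_level helper is a hypothetical function introduced here only for demonstration.

# A minimal, illustrative sketch (not part of the PCMM specification).
# Level names and focus summaries paraphrase the text above;
# describe_level is a hypothetical helper introduced for demonstration.

PCMM_LEVELS = {
    1: ("Initial", "No consistent way of performing workforce practices; "
                   "practices are applied without analysis of impact."),
    2: ("Managed", "Managers take responsibility for managing and "
                   "developing their people."),
    3: ("Defined", "Workforce competencies are identified and aligned "
                   "with business strategies and objectives."),
    4: ("Predictable", "Workforce capability is quantified and used to "
                       "predict the capability for performing work."),
    5: ("Optimizing", "Personal, workgroup, and organizational capability "
                      "are continuously improved and aligned."),
}

def describe_level(level: int) -> str:
    """Return a one-line summary of a PCMM maturity level."""
    name, focus = PCMM_LEVELS[level]
    return f"Level {level} ({name}): {focus}"

for level in sorted(PCMM_LEVELS):
    print(describe_level(level))

Running the sketch prints one line per level, which can serve as a quick reference when
judging where an organization sits on the maturity ladder.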

Choosing an appropriate team organization:

Comparing the various types of team organization leads to an unfortunate conclusion: no single
team structure solves the problem of organizing a programming team or, by extension, the
problem of organizing teams for all the other workflows. The optimal way of organizing a team
depends on the product to be built, previous experience with various team structures, and, most
important, the culture of the organization.
