SOFTWARE ENGINEERING Question Answers Unit 3-6 Imp


SOFTWARE ENGINEERING [IMP]

UNIT 3 – ESTIMATION AND SCHEDULING


Q.1] What is the need of project estimation ? What are the steps while estimating
software ?

ANS :

Project estimation is needed to predict the effort, cost, and duration of a project before work begins, so that
realistic commitments can be made and resources planned. The main steps while estimating software are:

1. Define project scope: This involves defining the project's objectives, deliverables, and requirements. The scope
helps to identify the tasks and activities involved in the project.
2. Breakdown tasks: The next step is to identify the individual tasks involved in the project. This helps to estimate
the effort required for each task and to identify any dependencies between tasks.
3. Estimate effort: The effort required for each task is estimated by considering the complexity of the task, the skill
level of the team members, and any dependencies between tasks.
4. Estimate duration: The duration of the project is estimated by combining the effort estimates for each task and
considering any constraints, such as resource availability.
5. Review and refine: Once the estimates are completed, they should be reviewed and refined to ensure that they
are accurate and realistic. This may involve seeking feedback from stakeholders, adjusting estimates based on
historical data, or considering potential risks and uncertainties.
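As a minimal illustration of steps 2 to 4, the sketch below combines per-task effort estimates into a total effort and
a rough duration for a fixed team size; all task names and figures are invented for the example.

```python
# Hypothetical task breakdown with effort estimates in person-days (steps 2 and 3).
task_effort = {
    "requirements": 10,
    "design": 15,
    "coding": 40,
    "testing": 25,
    "deployment": 5,
}

team_size = 3                              # available developers (a resource constraint, step 4)
total_effort = sum(task_effort.values())   # total effort in person-days
duration_days = total_effort / team_size   # idealised calendar duration

print(f"Total effort : {total_effort} person-days")
print(f"Duration     : {duration_days:.1f} working days with {team_size} developers")
```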
Q.2] Describe the Agile Manifesto.

ANS :

1. Individuals and interactions over processes and tools: This value prioritizes communication and
collaboration between team members, as opposed to relying solely on tools and processes.
2. Working software over comprehensive documentation: This principle prioritizes delivering working
software that meets the customer's needs, rather than creating extensive documentation that may
not be relevant or useful.
3. Customer collaboration over contract negotiation: This value emphasizes working closely with
customers throughout the development process to ensure that their needs and requirements are
met.
4. Responding to change over following a plan: This principle recognizes that change is inevitable in
software development and emphasizes the importance of adapting to changes in requirements,
technology, and other factors.
5. The Agile Manifesto also includes twelve principles, which provide more detailed guidance on how
to implement Agile values in software development. These principles include continuous delivery
of working software, regular team communication and collaboration, and a focus on simplicity and
technical excellence.
Q.3] Explain in detail software process and project metrics.

ANS :

1. Software process involves a set of activities, such as requirements gathering, design, coding, testing, and
deployment. The specific activities and methods used can vary depending on the development methodology
being used.
2. Software process models provide a framework for organizing and structuring these activities. Examples of
software process models include the Waterfall model, Agile model, and Spiral model.
3. Project metrics are used to measure various aspects of a software development project, such as progress, quality,
and productivity. Examples of project metrics include lines of code, defect density, and test coverage.
4. The selection of appropriate project metrics depends on the project's goals and objectives, as well as the
development methodology being used. For example, in an Agile project, metrics such as sprint velocity and
burndown charts may be used to measure progress.
5. Software process and project metrics play a critical role in project management and are used to make data-
driven decisions, improve processes, and identify areas for improvement. However, it's important to use metrics
judiciously and avoid relying too heavily on any single metric or set of metrics.
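For example, two of the project metrics mentioned above, defect density and test coverage, can be computed directly
from raw project data; the numbers below are made up purely for illustration.

```python
# Illustrative project metrics (all input figures are invented).
lines_of_code  = 12000      # size of the delivered code
defects_found  = 30         # defects reported during testing
statements_hit = 8400       # executable statements exercised by the test suite
statements_all = 10500      # total executable statements

defect_density = defects_found / (lines_of_code / 1000)   # defects per KLOC
test_coverage  = statements_hit / statements_all * 100     # percentage

print(f"Defect density: {defect_density:.2f} defects/KLOC")
print(f"Test coverage : {test_coverage:.1f}%")
```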
Q.4] What is project decomposition ? What are the work tasks for the communication
process using process decomposition ?

ANS :

1. Project decomposition helps to simplify a large project by breaking it down into smaller pieces.
This makes it easier to manage and assign tasks to team members.
2. The decomposition process involves breaking down the project into smaller tasks or work
packages. These work packages should be manageable in size and should be organized logically.
3. The communication process is an important part of project management, and process
decomposition can help to facilitate communication. This involves identifying the stakeholders
who need to be involved in the communication process and establishing a communication plan.
4. The work tasks for communication process using process decomposition involve identifying the
communication objectives, selecting the appropriate communication channels, developing the
communication plan, executing the communication plan, and monitoring and evaluating the
effectiveness of the communication.
5. The communication plan should include details such as the frequency and timing of
communication, the content and format of communication, and the stakeholders who need to be
involved. This plan should be reviewed and updated regularly to ensure that it remains relevant
and effective.
Q.5] Explain the FP based estimation technique ?

ANS :

1. FP is a measure of the functionality provided by a software application, based on the user's
perspective. It is measured in terms of five different types of function points: external inputs,
external outputs, external inquiries, internal logical files, and external interface files.
2. FP estimation involves determining the number of function points required for a project based on
the software requirements. This involves analyzing the requirements to determine the types and
number of function points required.
3. Once the function points are determined, they can be used to estimate the effort required for the
project, based on historical data or industry benchmarks.
4. FP estimation is independent of the programming language, technology, or hardware platform
used for software development, making it useful for estimating projects across different platforms
and technologies.
5. FP based estimation is widely used in industry and is recognized as a reliable and effective
technique for estimating software development projects. However, it's important to note that FP
estimation is not a precise science, and the accuracy of the estimates can vary depending on the
quality of the requirements analysis and the complexity of the project.
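A minimal counting sketch is shown below, assuming the commonly quoted average complexity weights (4, 5, 4, 10 and 7
for the five function point types) and entirely hypothetical counts and ratings.

```python
# Unadjusted function point count using average complexity weights
# (EI=4, EO=5, EQ=4, ILF=10, EIF=7); the counts themselves are hypothetical.
AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
counts = {"EI": 8, "EO": 5, "EQ": 4, "ILF": 3, "EIF": 2}

ufp = sum(counts[t] * AVERAGE_WEIGHTS[t] for t in counts)

# Value adjustment factor from the 14 general system characteristics,
# each rated 0-5; a made-up total rating of 42 is used here.
vaf = 0.65 + 0.01 * 42
fp = ufp * vaf

print(f"Unadjusted FP: {ufp}")        # 8*4 + 5*5 + 4*4 + 3*10 + 2*7 = 117
print(f"Adjusted FP  : {fp:.1f}")     # 117 * 1.07 = 125.2
```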
Q.6] Compare Line of Code (LOC) and Function Point (FP) based estimation
techniques with suitable example ?

ANS :

1. LOC is a measure of the number of lines of code in a software application. FP is a measure of the
functionality provided by a software application, based on the user's perspective.
2. LOC estimation involves determining the number of lines of code required for a project based on
the software requirements. For example, if a project requires a feature that involves writing 100
lines of code, the LOC estimate for that feature would be 100 lines.
3. FP estimation involves determining the number of function points required for a project based on
the software requirements. For example, if a feature involves three external inputs, two external
outputs, and one external inquiry, then using typical average complexity weights (4 per input, 5 per
output, and 4 per inquiry) the unadjusted FP count for that feature would be 3×4 + 2×5 + 1×4 = 26.
4. LOC estimation is dependent on the programming language, technology, or hardware platform
used for software development. FP estimation, on the other hand, is independent of the
programming language, technology, or hardware platform used for software development.
5. While both techniques have their advantages and disadvantages, FP estimation is generally
considered to be a more accurate and reliable technique for estimating software development
projects, as it takes into account the functionality provided by the software, rather than just the
number of lines of code. However, it's important to note that both techniques have their
limitations and should be used judiciously.
Q.7] Explain Object Oriented view of components level design with suitable example ?

ANS :

1. Object-oriented view of component-level design focuses on designing software components as
objects with their own properties and behavior.
2. Components are modeled as classes in an object-oriented language, with attributes and methods
that define their behavior.
3. Components can interact with each other through well-defined interfaces, which specify the
methods and properties that are accessible to other components.
4. The interaction between components is based on the principles of encapsulation, inheritance, and
polymorphism, which allow for greater flexibility and reusability.
5. An example of object-oriented component-level design is a banking application where each
component is designed as an object, such as customer, account, transaction, etc. Each object has
its own properties and behavior, and can interact with other objects through well-defined
interfaces. For example, the customer object may have methods for creating and managing
accounts, while the account object may have methods for performing transactions.
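A minimal sketch of that banking example, with hypothetical class and method names, might look like this; each
component encapsulates its own state and is used only through its public interface.

```python
# Each component of the banking example is modeled as a class (an object type)
# that hides its internal data and exposes behavior through methods.
class Account:
    def __init__(self, account_id, balance=0.0):
        self._account_id = account_id   # internal state, hidden from other objects
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    @property
    def balance(self):
        return self._balance


class Customer:
    def __init__(self, name):
        self.name = name
        self._accounts = []             # Account objects owned by this customer

    def open_account(self, account_id):
        account = Account(account_id)
        self._accounts.append(account)
        return account


# The Customer object interacts with Account objects only through their
# public interface (deposit/withdraw/balance), not their internal fields.
alice = Customer("Alice")
savings = alice.open_account("SV-001")
savings.deposit(500.0)
savings.withdraw(120.0)
print(savings.balance)   # 380.0
```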
Q.8] Explain COCOMO Model for project estimation with suitable example.

ANS :

1. COCOMO (Constructive Cost Model) is a widely used model for estimating the effort and cost of
software development projects.
2. The COCOMO model comes in three increasingly detailed forms: Basic, Intermediate, and Detailed
COCOMO, and it classifies projects into three modes (organic, semi-detached, and embedded) based
on their size and complexity.
3. COCOMO model uses various parameters such as project size, development team size, and
development environment to estimate the effort required for the project.
4. The model estimates the effort required in person-months, and also provides an estimate of the
project duration and cost.
5. An example of COCOMO estimation: a small e-commerce website of roughly 8 KLOC, to be built by a
team of 4 developers, would typically be treated as an organic project. From its size, the COCOMO
model would estimate the total effort at around 20-25 person-months, which corresponds to roughly
6 months of work for that team, and a total cost of around $50,000-$60,000, depending on the
development environment, labour rates, and other cost drivers; a minimal calculation using the Basic
COCOMO equations is sketched below.
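The sketch assumes the standard Basic COCOMO coefficients for an organic project (effort = 2.4·KLOC^1.05
person-months, duration = 2.5·effort^0.38 months) and the hypothetical 8 KLOC size from the example above. Note that
the model's own schedule equation gives a duration estimate directly from effort, rather than by dividing effort by
team size.

```python
# Basic COCOMO effort and schedule equations for an "organic" project.
kloc = 8.0                                    # hypothetical estimated size of the website

effort_pm = 2.4 * kloc ** 1.05                # about 21 person-months
duration_months = 2.5 * effort_pm ** 0.38     # about 8 months
avg_staff = effort_pm / duration_months       # average team size implied by the model

print(f"Effort   : {effort_pm:.1f} person-months")
print(f"Duration : {duration_months:.1f} months")
print(f"Staffing : {avg_staff:.1f} people on average")
```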
Q.9] What is project scheduling ? What are the basic principle of project scheduling ?

ANS :

1. Project scheduling is the process of creating a plan or timeline that outlines the tasks, activities,
and milestones involved in a project, and the order and duration of each task.
2. The basic principle of project scheduling is to identify the project goals and objectives, break them
down into smaller, manageable tasks, and then create a schedule that assigns a start and end date
for each task.
3. The schedule should take into account the available resources, including personnel, equipment,
and budget, and should allocate them in a way that ensures the project is completed on time and
within budget.
4. The schedule should be flexible enough to accommodate changes and unforeseen events, while
still maintaining the overall project timeline and goals.
5. A good project schedule should be clear, concise, and easy to understand, and should provide a
roadmap for the entire project team to follow. It should be regularly reviewed and updated as the
project progresses, to ensure that it remains relevant and accurate.
Q.10] Explain earned value analysis in project scheduling.

ANS :

1. Earned Value Analysis (EVA) is a project management technique used to measure the progress and performance
of a project against its planned schedule and budget.
2. EVA uses three key metrics to evaluate the project: Planned Value (PV), Earned Value (EV), and Actual Cost (AC).
3. PV is the budgeted cost of the work scheduled to be completed by a given date, AC is the actual cost incurred for
the work performed so far, and EV is the budgeted cost of the work that has actually been completed so far.
4. By comparing these three metrics, project managers can determine if the project is on schedule and within
budget, and can identify any areas where adjustments or corrections may be needed.
5. EVA is a useful tool for project scheduling because it provides a quantitative way to track and measure project
progress, and allows project managers to make informed decisions about resource allocation, risk management,
and overall project strategy.
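A small worked example, with all monetary figures invented, shows how the three metrics combine into the usual
variances and indices (SV = EV − PV, CV = EV − AC, SPI = EV/PV, CPI = EV/AC).

```python
# Hypothetical status of a project at the end of month 3.
pv = 60_000.0   # Planned Value: budgeted cost of work scheduled by now
ev = 50_000.0   # Earned Value: budgeted cost of work actually completed
ac = 55_000.0   # Actual Cost: money actually spent so far

schedule_variance = ev - pv          # -10,000 -> behind schedule
cost_variance     = ev - ac          #  -5,000 -> over budget
spi = ev / pv                        #  0.83   -> only 83% of planned work is done
cpi = ev / ac                        #  0.91   -> $0.91 of value earned per $1 spent

print(f"SV = {schedule_variance:,.0f}, CV = {cost_variance:,.0f}")
print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")
```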
Q.11] What is a task network in project scheduling ? Explain with example.

ANS :

1. A task network is a visual representation of the tasks, dependencies, and relationships involved in a project.
2. In a task network, tasks are represented as nodes or boxes, and the dependencies between tasks are represented
as arrows or lines.
3. The task network helps project managers to identify the critical path, which is the sequence of tasks that must be
completed on time in order to meet the project deadline.
4. An example of a task network would be a software development project where the tasks include requirements
gathering, design, coding, testing, and deployment. Each task would be represented by a node, and the
dependencies between tasks would be represented by arrows. For example, coding cannot begin until the design
is completed, so there would be an arrow connecting those two tasks.
5. By creating a task network, project managers can identify potential delays and bottlenecks in the project, and can
make adjustments to the schedule or resources to ensure that the project stays on track.
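A minimal sketch of such a task network, using the five tasks from the example above with made-up durations and a
simple forward pass to find the minimum project length, could look like this.

```python
# Tiny task network: each task has a duration (in days, invented here)
# and a list of predecessor tasks that must finish before it can start.
tasks = {
    "requirements": (5,  []),
    "design":       (10, ["requirements"]),
    "coding":       (20, ["design"]),
    "testing":      (8,  ["coding"]),
    "deployment":   (2,  ["testing"]),
}

# Forward pass: earliest finish of a task = its duration plus the latest
# earliest-finish among its predecessors.
earliest_finish = {}

def finish(name):
    if name not in earliest_finish:
        duration, preds = tasks[name]
        earliest_finish[name] = duration + max((finish(p) for p in preds), default=0)
    return earliest_finish[name]

project_length = max(finish(t) for t in tasks)
print(f"Minimum project length: {project_length} days")   # 45 days for this chain
```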
Q.12] What is a timeline chart ? Explain with a suitable example.

ANS :

1. It shows events or milestones on a horizontal axis that represents time.
2. Each event or milestone is represented by a marker on the timeline.
3. The markers are usually labeled with a description of the event or milestone.
4. The timeline chart can be used to visualize historical events, project timelines, or personal
milestones.
5. For example, a project manager might use a timeline chart to track the progress of a project and
identify any delays or missed deadlines.
SOFTWARE ENGINEERING [IMP]

UNIT 4 – DESIGN ENGINEERING


Q.1] What are the software design quality attributes and quality guidelines ?

ANS :

1. Maintainability: The ease with which a software system can be modified or updated over time.
Guideline: Use modular design principles to make changes to specific components easier and limit
the impact on the rest of the system.
2. Scalability: The ability of a software system to handle increasing amounts of data or traffic.
Guideline: Design for horizontal scalability, where additional instances of the system can be added
to handle increased load.
3. Performance: The speed and efficiency with which a software system performs its tasks. Guideline:
Optimize algorithms and data structures, and use appropriate hardware and infrastructure to
improve performance.
4. Security: The ability of a software system to protect data and prevent unauthorized access.
Guideline: Use secure coding practices and implement appropriate security measures such as
encryption and access control.
5. Usability: The ease with which a software system can be used by its intended audience. Guideline:
Design with user-centered principles in mind, conduct user testing, and provide appropriate
documentation and training to improve usability.
Q.2] Explain different design concepts.

ANS :

1. Abstraction: The process of focusing on essential features of a system and ignoring details that are
less relevant. Abstraction allows designers to simplify complex systems, making them easier to
understand and manage.
2. Modularity: The practice of breaking a system down into smaller, self-contained components that
can be developed and tested independently. Modularity helps to improve maintainability,
scalability, and flexibility.
3. Encapsulation: The practice of hiding implementation details and exposing only the necessary
interfaces. Encapsulation provides a way to protect the integrity of a system's data and logic, and
to reduce the impact of changes on other parts of the system.
4. Cohesion: The degree to which the elements of a module or component work together to achieve
a common purpose. Cohesion helps to improve the clarity and simplicity of a system's design, and
to reduce the risk of errors and bugs.
5. Coupling: The degree to which components or modules depend on each other. High coupling can
make a system more difficult to modify or maintain, while low coupling can make it more flexible
and adaptable. Designers aim to reduce coupling between components wherever possible.
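A rough sketch of modularity, encapsulation, high cohesion and low coupling in practice, using hypothetical class
names: each class has a single responsibility, hides its internals, and the two interact only through a narrow
interface.

```python
class OrderRepository:
    """Data access: knows how orders are stored, nothing about formatting."""
    def __init__(self):
        self._orders = {"A-1": 120.0, "A-2": 75.5}   # hidden implementation detail

    def total_value(self):
        return sum(self._orders.values())


class ReportFormatter:
    """Presentation: knows how to format a number, nothing about storage."""
    def format_total(self, total):
        return f"Total order value: {total:.2f}"


# The two modules are coupled only through a simple numeric value, so either
# one can be replaced (e.g. a database-backed repository) without touching
# the other.
repo = OrderRepository()
formatter = ReportFormatter()
print(formatter.format_total(repo.total_value()))
```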
Q.3] Define: Pattern, Information hiding, Architecture, Refinement

ANS :

1. Pattern: In software design, a pattern is a reusable solution to a common problem that occurs
frequently in software development. Design patterns provide a way to standardize and improve
the quality of software design.
2. Information hiding: Information hiding is a design principle that emphasizes the importance of
limiting the visibility of implementation details to other parts of a software system. By hiding
implementation details, changes can be made to the implementation without affecting other parts
of the system.
3. Architecture: In software development, architecture refers to the high-level design of a system,
including its components, modules, and relationships. Software architecture provides a blueprint
for the construction of a system that meets its functional and non-functional requirements.
4. Refinement: Refinement is the process of taking a high-level design and breaking it down into
smaller, more detailed components. During refinement, the design is progressively elaborated to
include more detail and to address issues that may have been overlooked in the initial design
phase. Refinement is an iterative process that continues until the design is sufficiently detailed to
be implemented.
Q.4] What is architecture ?

ANS :

In software development, architecture refers to the high-level design of a software system that defines its
structure, components, modules, and relationships. Software architecture provides a blueprint for the
construction of a system that meets its functional and non-functional requirements. The architecture is a
key factor in determining the overall quality of a software system, including its performance, maintainability,
scalability, and security.

Q.5] What do you mean by the term cohesion in the context of software design ? How
is the concept useful in arriving at a good design of software ?

ANS :

Cohesion is the degree to which the elements of a module or component work together to achieve a
single, well-defined purpose. Striving for high cohesion is useful in arriving at a good design because:

1. Clarity: High cohesion improves the clarity and understandability of a software system's design by
making the purpose and behavior of each module or component clear and easy to understand.
2. Simplicity: High cohesion helps to simplify a software system's design by reducing the number of
interactions and dependencies between modules or components.
3. Maintainability: High cohesion improves the maintainability of a software system by making it
easier to modify or update individual modules or components without affecting other parts of the
system.
4. Reusability: High cohesion promotes reusability by making it easier to reuse individual modules or
components in other software systems.
5. Robustness: High cohesion can improve the robustness of a software system by reducing the risk
of errors and bugs that can occur when different elements of a module or component are not
working together effectively.
Q.6] Why should software design have high cohesion and low coupling ? Justify.

ANS :

1. Maintainability: High cohesion and low coupling make software systems easier to maintain by
reducing the complexity of individual components and the dependencies between them.
2. Flexibility: High cohesion and low coupling make software systems more flexible and adaptable by
making it easier to modify or replace individual components without affecting the rest of the
system.
3. Reusability: High cohesion and low coupling promote reusability by making it easier to reuse
individual components in other software systems.
4. Scalability: High cohesion and low coupling can improve the scalability of software systems by
making it easier to add or remove components as the system grows or changes.
5. Robustness: High cohesion and low coupling can improve the robustness of software systems by
reducing the risk of errors and bugs that can occur when different components interact in
unexpected ways.
Q.7] Abstraction and refinement are complementary concepts. Justify.

ANS :

1. Abstraction provides a high-level view of the system, while refinement provides a detailed view of
the system. Together, they provide a complete picture of the system at different levels of detail.
2. Abstraction helps to simplify the system by focusing on the most important aspects, while
refinement adds detail and complexity as needed to implement the system.
3. Abstraction helps to identify the essential features of the system, while refinement helps to
develop those features in a more concrete and specific way.
4. Abstraction provides a way to manage complexity by reducing the number of details that need to
be considered, while refinement provides a way to handle complexity by breaking down the
system into smaller, more manageable components.
5. Abstraction and refinement are iterative processes that can be used together to refine the design
of a system over time. As the system evolves and new requirements are identified, abstraction and
refinement can be used to adapt the design and ensure that it continues to meet the needs of its
users.
Q.8] What do you mean by refactoring ?

ANS :

1. It involves making changes to the codebase to improve its design and structure while preserving
its functionality.
2. The purpose of refactoring is to reduce technical debt by addressing issues such as code smells,
poor design, and inefficiencies.
3. Refactoring is typically performed on existing code, rather than during the initial development of a
software system.
4. It is an iterative process that involves making small, incremental changes to the codebase over
time.
5. Refactoring is an important part of software maintenance and is often used to improve the quality
and maintainability of legacy systems.
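A very small, hypothetical example of a refactoring: duplicated discount logic is extracted into a single function,
improving the structure of the code while preserving its behaviour.

```python
# Before: the discount rule is duplicated in two places (a "code smell").
def invoice_total_before(prices):
    total = sum(prices)
    if total > 100:
        total = total * 0.9
    return total

def quote_total_before(prices):
    total = sum(prices)
    if total > 100:
        total = total * 0.9
    return total

# After: the shared rule is extracted into one well-named function, so a
# change to the discount policy now happens in exactly one place.
def apply_bulk_discount(total):
    return total * 0.9 if total > 100 else total

def invoice_total(prices):
    return apply_bulk_discount(sum(prices))

def quote_total(prices):
    return apply_bulk_discount(sum(prices))

# Behaviour is preserved by the refactoring.
assert invoice_total([60, 70]) == invoice_total_before([60, 70])
```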
Q.9] What do you understand by refactoring ? Give the importance of refactoring in
improving the quality of software.

ANS :

1. Improves code quality: Refactoring helps to eliminate code smells and other issues that can make code difficult
to understand and maintain, improving its overall quality.
2. Enhances maintainability: By improving code quality, refactoring makes it easier and more efficient to maintain
and update software systems over time.
3. Reduces technical debt: Refactoring helps to address technical debt by reducing the amount of code that needs
to be rewritten or fixed in the future.
4. Increases agility: Refactoring enables software teams to respond more quickly to changes in requirements and
technology by making code more flexible and adaptable.
5. Boosts productivity: Refactoring helps to eliminate unnecessary code and simplify complex systems, allowing
developers to focus on adding new features and functionality to software systems rather than fixing issues with
the existing codebase.
Q.10] Give the importance of refactoring in improving quality of service.

ANS :

1. Improves system performance: Refactoring can help to optimize code and reduce bottlenecks in
the system, leading to faster response times and better overall performance.
2. Enhances scalability: By improving the architecture and design of the system, refactoring can help
to ensure that it can scale to handle increasing levels of traffic and demand.
3. Increases reliability: Refactoring can help to eliminate bugs and other issues that can cause system
failures or downtime, improving the reliability of the system.
4. Enhances security: Refactoring can help to identify and eliminate vulnerabilities in the system,
improving its security and reducing the risk of cyber attacks.
5. Improves user experience: Refactoring can help to simplify and streamline the user interface and
user experience, making it easier and more intuitive for users to interact with the system and
achieve their goals.
Q.11] Explain the architecture context diagram.

ANS :

1. It provides a visual representation of the system and its surrounding context, including the
stakeholders, users, and other systems that interact with it.
2. It highlights the boundaries and interfaces of the system, including its inputs, outputs, and external
dependencies.
3. It helps to identify the key drivers and requirements that influence the design and development of
the system, such as performance, scalability, and security.
4. It provides a shared understanding of the system and its context among stakeholders, helping to
align their expectations and priorities.
5. It is a useful tool for communicating the system architecture to technical and non-technical
stakeholders, facilitating discussion and feedback on the design and requirements.
Q.12] Explain the user Interface design principles.

ANS :

1. Keep it simple and consistent: A simple and consistent interface design can help to reduce
confusion and improve usability. Use a consistent layout, typography, and color scheme
throughout the interface to create a familiar and predictable experience for users.
2. Prioritize user needs: Understand the needs and goals of your users and prioritize their needs
when designing the interface. Make sure that the interface supports their tasks and goals, and that
it is easy to use and navigate.
3. Provide clear and meaningful feedback: Provide clear and meaningful feedback to users when they
interact with the interface, such as confirming their actions or providing status updates. This helps
to reduce confusion and increase user confidence.
4. Use visual hierarchy: Use visual hierarchy to organize information and guide users' attention to
important elements of the interface. This can be achieved through the use of contrast, color,
typography, and other visual elements.
5. Test and iterate: Test the interface with users and gather feedback on its usability and
effectiveness. Use this feedback to refine and improve the design over time, making sure that it
continues to meet the needs and expectations of users.
Q.13] Enlist the golden rules of User Interface Design.

ANS :

1. Strive for consistency: Consistency in the layout, typography, and color scheme of the UI can help
to reduce confusion and make it easier for users to navigate and understand.
2. Keep it simple: A simple and straightforward interface design can help to reduce cognitive load
and improve usability for users.
3. Provide clear feedback: Clear feedback in response to user actions can help to improve the user's
confidence and understanding of the system.
4. Make it visually appealing: A visually appealing interface can help to engage users and make them
more likely to use and enjoy the system.
5. Prioritize user needs: Understanding the needs and goals of users and prioritizing their needs
when designing the interface can help to create a UI that is useful, usable, and relevant to them.
Q.14] Explain the user interface design issues.

ANS :

1. Complexity: Complex interfaces can be difficult to understand and navigate, leading to confusion
and frustration for users. It is important to simplify the UI as much as possible, using clear and
concise language and intuitive navigation.
2. Inconsistency: Inconsistent UI design can cause confusion and make it difficult for users to
understand how to interact with the system. It is important to maintain consistency in the design
of the interface, using a consistent layout, typography, and color scheme throughout.
3. Feedback: Lack of feedback can make it difficult for users to understand if they have completed an
action successfully or not. Providing clear and meaningful feedback is essential to improve user
confidence and understanding of the system.
4. Accessibility: The interface should be accessible to all users, including those with disabilities.
Designing for accessibility includes providing alternative text for images and making sure the
interface is keyboard accessible.
5. User needs: Failing to consider the needs and goals of users can lead to an interface that is not
useful or relevant to them. Understanding user needs and prioritizing their needs when designing
the interface is essential to create a UI that meets their requirements and expectations.
Q.15] Explain object oriented view of component level design with suitable example.

ANS :

1. Objects: In OOP, objects are instances of classes that encapsulate data and behavior. Each software
component is designed as an object that interacts with other objects to perform its tasks.
2. Inheritance: Inheritance is a mechanism that allows objects to inherit properties and behavior from
a parent class. In component level design, this can be used to create a hierarchy of objects that
share common functionality.
3. Polymorphism: Polymorphism allows objects of different types to be treated as if they are the
same type. This can be used in component level design to allow different objects to be used
interchangeably, as long as they implement the same interface.
4. Encapsulation: Encapsulation is the practice of hiding the internal implementation details of an
object and exposing only the necessary information to other objects. This can help to simplify the
interactions between objects and improve the maintainability of the system.
5. Example: An example of object-oriented component level design is a system for managing a
library. Each component in the system, such as books, borrowers, and loans, could be represented
as an object that encapsulates the relevant data and behavior. Inheritance could be used to create
a hierarchy of objects, such as different types of books or borrowers, that share common
functionality. Polymorphism could be used to allow different objects to be used interchangeably,
such as different types of books that implement the same interface. Encapsulation could be used
to hide the implementation details of the objects and expose only the necessary information to
other objects in the system.
Q.16] Explain guidelines for components level design.

ANS :

1. Modular design: The components should be designed to be modular and independent, with a
well-defined interface. This makes it easier to maintain and test the components and allows for
better reuse in other projects.
2. Cohesion and coupling: The components should have high cohesion and low coupling. High
cohesion means that the components should be designed to perform a single, well-defined task.
Low coupling means that the components should have minimal interdependencies and should not
rely on other components to function properly.
3. Scalability: The components should be designed to be scalable, meaning they can be easily
adapted to handle changing requirements and increasing user demand. This requires designing
components that can be easily extended or replaced without disrupting the system as a whole.
4. Consistency: The components should be designed to be consistent with each other and with the
overall architecture of the system. This includes consistent naming conventions, coding styles, and
design patterns.
5. Reusability: The components should be designed with reusability in mind. This means designing
components that can be easily adapted and reused in other projects, which can save time and
resources in the long run. Components should be designed to be generic, flexible, and adaptable
to different use cases.
Q.17] Enlist and explain the Webapp design principles in detail.

ANS :

1. Responsive design: A responsive design means that the web application is designed to adjust to different
screen sizes and device types. This ensures that the application can be easily accessed and used on a variety
of devices, from desktops to mobile phones.
2. Accessibility: The web application should be designed to be accessible to all users, including those with
disabilities. This includes using alt tags for images, providing captions for videos, and ensuring that the
application is navigable using a keyboard.
3. Simple and intuitive user interface: The user interface of the web application should be simple, intuitive, and
easy to use. Users should be able to easily navigate the application, understand what actions they can
perform, and find what they are looking for.
4. Performance: The web application should be designed to be fast and responsive, with minimal loading
times and delays. This requires optimizing the application code and using efficient database and server-side
technologies.
5. Security: The web application should be designed to be secure, with measures in place to prevent
unauthorized access, data breaches, and other security threats. This includes using secure authentication
methods, encrypting sensitive data, and regularly updating software and security protocols.
Q.18] Describe deployment diagram, terms and concepts.

ANS :
1. Nodes: Nodes represent the physical hardware or software environment on which the software
components are deployed. Nodes can be represented by icons such as servers, workstations,
routers, or printers.
2. Components: Components represent the software modules or artifacts that are deployed on the
nodes. Components can be represented by rectangles or ovals, and they are connected to the
nodes by deployment relationships.
3. Deployment relationships: Deployment relationships indicate how the components are deployed
on the nodes. These relationships can be of various types, such as hosted, deployed, or connected,
and they are represented by arrows or lines.
4. Artifacts: Artifacts are files or data objects that are used by the software components. Artifacts can
be represented by icons such as documents, databases, or configuration files.
5. Stereotypes: Stereotypes are custom labels that can be added to nodes, components, or
deployment relationships to indicate additional information or constraints. For example, a
stereotype can be used to indicate that a node is a backup server or that a component is a
database connector.
Q.19] Describe notation used for deployment diagram.

ANS :

1. Nodes: Nodes are drawn as three-dimensional boxes (cubes). The name of the node is written inside
the box, and optional details such as the type of hardware or software can be shown below the name
or as a stereotype.
2. Components: Components are represented by rectangles with two or more compartments. The top
compartment contains the name of the component, and the bottom compartment contains
optional details such as the type of software or version number.
3. Deployment relationships: Deployment relationships are represented by arrows that connect
components to nodes. The arrows indicate the direction of the deployment relationship, and they
can have optional labels that specify the type of relationship, such as "deployed to" or "connected
to."
4. Artifacts: Artifacts are represented by icons such as documents, databases, or configuration files.
The icons are usually placed near the components that use them, and they can have optional
labels that specify the name or type of artifact.
5. Stereotypes: Stereotypes are custom labels that can be added to nodes, components, or
deployment relationships to indicate additional information or constraints. Stereotypes are
represented by labels enclosed in guillemets, such as "<<backup>>" or "<<web server>>."
Q.20] Describe the importance of deployment diagram.

ANS :

1. Visualize system architecture: Deployment diagrams help developers and stakeholders to understand the
physical architecture of a software system, including the hardware and software components, nodes, and
connections.
2. Identify deployment issues: Deployment diagrams can be used to identify potential deployment issues such as
hardware or software dependencies, security concerns, or performance bottlenecks.
3. Optimize deployment process: By visualizing the deployment process, developers can identify opportunities to
optimize the deployment process, reduce deployment time, and ensure that the software system is deployed
efficiently and securely.
4. Facilitate communication: Deployment diagrams provide a common language and visual vocabulary for
developers, testers, and stakeholders to communicate about the software system.
5. Enhance documentation: Deployment diagrams can be used to enhance software documentation and provide a
clear and concise representation of the software system's physical architecture, which can be useful for
maintenance, troubleshooting, and future development.
Q.21] Explain data flow architecture style.

ANS :

1. Emphasis on data flow: The data flow architecture style emphasizes the flow of data between components, with
each component responsible for processing the data in a specific way.
2. Modularity and encapsulation: Components are designed to be modular and encapsulated, meaning that they
can be developed and tested independently and can be easily replaced or updated without affecting other
components.
3. Decoupling of components: The data flow architecture style emphasizes loose coupling between components,
meaning that components are designed to be independent of one another and can be easily replaced or
updated without affecting the system as a whole.
4. Separation of concerns: The data flow architecture style emphasizes the separation of concerns, with each
component responsible for a specific aspect of the system's functionality.
5. Scalability and flexibility: The data flow architecture style is highly scalable and flexible, with the ability to add or
remove components as needed to adapt to changing requirements or workload.
Q.22] Explain layered system architecture.

ANS :

1. Hierarchical organization: The layered system architecture is organized hierarchically, with each layer providing a
specific set of functions or services to the layer above it.
2. Separation of concerns: The layered system architecture emphasizes the separation of concerns, with each layer
responsible for a specific aspect of the system's functionality.
3. Modularity and encapsulation: Components within each layer are designed to be modular and encapsulated,
meaning that they can be developed and tested independently and can be easily replaced or updated without
affecting other layers.
4. Abstraction: The layered system architecture makes use of abstraction, with each layer abstracting away the
details of the layer below it and providing a simplified interface to the layer above it.
5. Scalability and flexibility: The layered system architecture is highly scalable and flexible, with the ability to add or
remove layers as needed to adapt to changing requirements or workload.
Q.23] Explain in detail the Data-centered Architectural Style.

ANS :

1. Emphasis on data: The Data-centered architectural style places a strong emphasis on data, with
data storage and management being the primary concern.
2. Centralized data storage: Data is stored in a centralized location, such as a database or data
warehouse, and accessed by other components as needed.
3. Decoupling of components: Components within the Data-centered architecture are designed to be
loosely coupled, meaning that they can be replaced or updated without affecting other
components.
4. Separation of concerns: The Data-centered architectural style emphasizes the separation of
concerns, with each component responsible for a specific aspect of data storage and management.
5. Scalability and performance: The Data-centered architectural style is highly scalable and
performant, with the ability to handle large amounts of data and support high levels of concurrent
access.
Q.24] Explain in detail the Call and Return Architectural Style.

ANS :
1. Emphasis on functions: The Call and Return architectural style places a strong emphasis on functions, with
functions being the primary building block of the system.
2. Hierarchical organization: Functions are organized hierarchically, with higher-level functions calling lower-level
functions to accomplish a task.
3. Separation of concerns: The Call and Return architectural style emphasizes the separation of concerns, with each
function responsible for a specific aspect of the system's functionality.
4. Modularity and encapsulation: Functions within the Call and Return architecture are designed to be modular and
encapsulated, meaning that they can be developed and tested independently and can be easily replaced or
updated without affecting other functions.
5. Reusability: The Call and Return architectural style promotes reusability, with functions being designed to be
generic and reusable across multiple components of the system.
SOFTWARE ENGINEERING [IMP]
UNIT 5 – RISK AND CONFIGURATION MANAGEMENT
Q.1] What are different types of software Risks ?

ANS :

1. Schedule risks are risks that the software project will not be completed on time. This can be due to
a number of factors, such as underestimating the time it will take to develop the software, changes
in requirements, or unexpected problems.
2. Budget risks are risks that the software project will not be completed within budget. This can be
due to a number of factors, such as underestimating the cost of development, changes in
requirements, or unexpected problems.
3. Operational risks are risks that the software will not be able to operate as expected once it is
deployed. This can be due to a number of factors, such as hardware or software problems,
unexpected changes in the environment, or user errors.
4. Technical risks are risks that the software will not be able to be developed or maintained as
expected. This can be due to a number of factors, such as the complexity of the software, the use
of new or untested technologies, or the lack of skilled developers.
5. External risks are risks that are beyond the control of the software development team. This can
include factors such as changes in the market, changes in the law, or natural disasters.
Q.2] Explain the risks identification Process for software project ?
ANS :

1. Identify the stakeholders. The first step is to identify all of the stakeholders in the software project.
This includes the project manager, developers, testers, users, and anyone else who will be
affected by the project.
2. Gather information. Once you have identified the stakeholders, you need to gather information
about the project. This includes the project scope, the project schedule, the project budget, and
the project risks.
3. Identify the risks. The next step is to identify the risks that could impact the project. This can be
done by brainstorming, interviewing stakeholders, and reviewing historical data.
4. Assess the risks. Once you have identified the risks, you need to assess them. This involves
determining the probability and impact of each risk.
5. Develop risk mitigation plans. Once you have assessed the risks, you need to develop risk
mitigation plans. This involves developing strategies to reduce the probability or impact of each
risk.
Q.3] What is risk identification ? What are the different categories of risks ?
ANS :

1. Risk identification is the systematic process of listing the risks that could threaten a project's
schedule, budget, or quality before they occur, so that they can be analysed and planned for. It is
typically done by brainstorming, interviewing stakeholders, using risk checklists, and reviewing
historical data from past projects.
2. Project risks threaten the project plan itself, for example schedule slippage, budget overruns, or
staffing and resource problems.
3. Technical risks threaten the quality and timeliness of the software, for example design or
implementation difficulties, ambiguous requirements, or the use of new and untested technologies.
4. Business risks threaten the viability of the product, for example building a product that the market
no longer wants or losing management support.
5. External risks are beyond the control of the development team, such as changes in the law, changes
in the market, or natural disasters.
Q.4] What is RMMM ? Write a short note on it ?
ANS :

1. Identify risks. The first step in RMMM is to identify the risks that could impact the project. This can
be done by brainstorming, interviewing stakeholders, and reviewing historical data.
2. Assess risks. Once the risks have been identified, they need to be assessed. This involves
determining the probability and impact of each risk.
3. Mitigate risks. Once the risks have been assessed, they need to be mitigated. This involves
developing strategies to reduce the probability or impact of each risk.
4. Monitor risks. Once the risks have been mitigated, they need to be monitored. This involves
tracking the risks to ensure that they are not increasing in probability or impact.
5. Manage risks. If a risk does increase in probability or impact, it needs to be managed. This may
involve implementing additional risk mitigation strategies or taking other actions to reduce the
impact of the risk.
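One common way to make the assessment step concrete is to compute a risk exposure figure (probability × cost of
impact) for each identified risk and mitigate the highest exposures first; the risks and figures below are invented
for illustration.

```python
# Each risk: (probability of occurring, estimated cost impact in dollars).
risks = {
    "key developer leaves":       (0.30, 40_000),
    "requirements change late":   (0.50, 25_000),
    "third-party API is delayed": (0.20, 15_000),
}

# Risk exposure = probability x cost; higher exposure -> mitigate first.
for name, (probability, cost) in sorted(
        risks.items(), key=lambda item: item[1][0] * item[1][1], reverse=True):
    exposure = probability * cost
    print(f"{name:28s} exposure = ${exposure:,.0f}")
```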
Q.5] Write a short note on RMMM.
ANS :
RMMM stands for Risk Mitigation, Monitoring, and Management. It is a process that helps organizations
identify, assess, and manage risks. RMMM is an important part of any project management process, as it
can help to ensure that projects are completed on time, within budget, and to the required quality
standards.

1. Be proactive. Don't wait for problems to happen. Identify potential risks early on and develop plans
to mitigate them.
2. Get input from others. Don't try to identify risks on your own. Talk to stakeholders, developers, and
other experts to get their input.
3. Be realistic. Don't underestimate the risks. Be honest about the potential problems that could
occur.
4. Be flexible. Things change. Be prepared to adjust your risk mitigation plans as needed.
Q.6] Explain the change control mechanism in SCM ?
ANS :

1. Identify the need for change. The first step is to identify the need for change. This can be done by
stakeholders, developers, or anyone else who is involved in the software development process.
2. Evaluate the impact of change. Once the need for change has been identified, it is important to
evaluate the impact of the change. This involves determining the impact on the project schedule,
budget, and quality.
3. Develop a change plan. Once the impact of the change has been evaluated, it is important to
develop a change plan. This plan should include the steps that need to be taken to implement the
change, as well as the resources that will be needed.
4. Implement the change. Once the change plan has been developed, it is important to implement
the change. This may involve making changes to the software code, documentation, or testing
procedures.
5. Verify the change. Once the change has been implemented, it is important to verify the change.
This involves testing the software to ensure that it meets the requirements and that it does not
introduce any new defects.
SOFTWARE ENGINEERING [IMP]
UNIT 6 – SOFTWARE TESTING
Q.1] What is software testing ?
ANS :

1. Software testing is the process of evaluating software to find errors, gaps, or other defects.
2. The goal of software testing is to ensure that software meets its requirements and works as
expected.
3. Software testing can be performed manually or automatically.
4. There are many different types of software testing, including functional testing, non-functional
testing, and security testing.
5. Software testing is an essential part of the software development process.
Q.2] What are the main objectives of software testing ?
ANS :

1. To find defects. The main objective of software testing is to find defects in the software. Defects
can be anything from small errors to major bugs.
2. To improve quality. By finding and fixing defects, software testing can help to improve the quality
of the software. This can make the software more reliable, efficient, and user-friendly.
3. To meet requirements. Software testing is also important to ensure that the software meets its
requirements. This means that the software should do what it is supposed to do and should meet
the needs of the users.
4. To reduce risk. Software testing can help to reduce the risk of releasing a defective product. This
can save the company time and money in the long run.
5. To gain confidence. Software testing can help to give the developers and users confidence in the
software. This can lead to increased satisfaction and a better user experience.
Q.3] What are the principles of software testing ?
ANS:

1. Testing shows the presence of defects. It does not prove their absence.
2. Exhaustive testing is impossible. There are always too many possible inputs and combinations of
inputs to test every possible scenario.
3. Early testing is better than late testing. Defects found early are easier and less expensive to fix.
4. Defects tend to cluster. Once a defect is found, it is likely that other defects will be found nearby.
5. Testing is context-dependent. The same software can behave differently in different environments
and with different data.
Q.4] Explain the software testing strategies for software development ?
ANS :

1. Planning. The first step is to plan the testing effort. This includes identifying the test cases, the test
environment, and the resources that will be needed.
2. Execution. Once the plan is in place, the next step is to execute the tests. This involves running
the test cases and recording the results.
3. Analysis. After the tests are executed, the results need to be analyzed. This includes identifying
any defects that were found and determining their severity.
4. Reporting. The results of the analysis need to be reported to the stakeholders. This includes
providing information about the defects that were found, their severity, and the steps that will be
taken to fix them.
5. Retesting. Once the defects have been fixed, the software needs to be retested to ensure that
they have been fixed correctly.
Q.5] Differentiate between verification and validation in detail.
ANS :

1. Verification is the process of determining whether the software meets its requirements. This is
done by checking the software against the requirements document.
2. Validation is the process of determining whether the software meets the needs of the users. This
is done by testing the software with users and getting their feedback.
3. Verification is typically done by the developers or quality assurance team.
4. Validation is typically done by users or a representative group of users.
5. Verification is typically done early in the software development life cycle.
6. Validation is typically done late in the software development life cycle.
Q.6] What is the need of stubs and drivers in software testing?
ANS :

1. Stubs and drivers are used to simulate the behavior of other modules: a stub stands in for a lower-
level module that is called by the module under test, while a driver stands in for a higher-level
module that calls it. This is useful when those modules are not yet complete or not available.
2. Stubs and drivers can be used to control the flow of execution. This can be useful for testing
specific scenarios or for testing the behavior of the module under test in isolation.
3. Stubs and drivers can be used to collect data. This data can be used to analyze the behavior of
the module under test or to compare the behavior of the module under test to the behavior of the
expected output.
4. Stubs and drivers can be used to simplify the testing process. By simulating the behavior of other
modules, stubs and drivers can reduce the number of modules that need to be tested and the
amount of time that needs to be spent testing.
5. Stubs and drivers can be used to improve the quality of the software. By finding and fixing defects
early in the development process, stubs and drivers can help to ensure that the software is of high
quality.
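A minimal sketch, with hypothetical module names, of how a stub and a driver are used when testing a module whose
collaborators are not yet available:

```python
# Hypothetical module under test: computes an order total using a tax
# service that has not been implemented yet.
def order_total(prices, tax_service):
    subtotal = sum(prices)
    return subtotal + tax_service.tax_for(subtotal)

# Stub: stands in for the missing lower-level tax module and returns a
# canned, predictable answer.
class TaxServiceStub:
    def tax_for(self, amount):
        return amount * 0.10          # fixed 10% for testing purposes

# Driver: a simple piece of code that calls the module under test, because
# the real caller (e.g. the UI layer) does not exist yet.
def test_driver():
    result = order_total([10.0, 20.0], TaxServiceStub())
    assert abs(result - 33.0) < 1e-9, f"unexpected total: {result}"
    print("order_total passed with stubbed tax service")

test_driver()
```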
Q.7] What do you understand by Integration Testing ? Explain objectives of integration testing.
ANS :

Integration testing is a type of software testing that focuses on testing the interfaces between
modules or components of a software system. The goal of integration testing is to ensure that the
modules or components can work together as expected. Integration testing is typically performed
after unit testing, which is a type of testing that focuses on testing individual modules or
components in isolation.
The objectives of integration testing are:

1. To verify that the modules or components can communicate with each other correctly.
2. To verify that the modules or components can work together to achieve the desired functionality.
3. To identify any defects in the interfaces between the modules or components.
4. To ensure that the modules or components are compatible with each other.
5. To improve the overall quality of the software system.
Q.8] How is top-down integration achieved ?
ANS :

1. Identify the top-level modules. The top-level modules are the modules that are at the highest level
of the software system's hierarchy.
2. Create test cases for the top-level modules. The test cases should test the functionality of the top-
level modules.
3. Execute the test cases. The test cases should be executed to verify that the top-level modules are
working correctly.
4. Integrate the lower level modules. The lower level modules are integrated into the top-level
modules one at a time.
5. Repeat steps 2-4 until all of the modules have been integrated. Once all of the modules have been
integrated, the software system should be tested as a whole to verify that it is working correctly.
Q.9] How is bottom-up integration achieved ?
ANS :

1. Identify the lowest level modules. The lowest level modules are the modules that are at the lowest
level of the software system's hierarchy.
2. Create test cases for the lowest level modules. The test cases should test the functionality of the
lowest level modules.
3. Execute the test cases. The test cases should be executed to verify that the lowest level modules
are working correctly.
4. Integrate the higher level modules. The higher level modules are integrated with the already tested
lower level modules one at a time.
5. Repeat steps 2-4 until all of the modules have been integrated. Once all of the modules have been
integrated, the software system should be tested as a whole to verify that it is working correctly.
Q.10] What do you understand by System testing ?
ANS :

1. System testing is performed on the entire software system, not just individual modules or
components.
2. The goal of system testing is to ensure that the software system meets its requirements and works
as expected.
3. System testing is typically performed by a team of testers, not just a single individual.
4. System testing can be performed manually or using automated tools.
5. System testing is an important part of the software development process and can help to ensure
that the software system is of high quality.
Q.11] What is system testing? Explain any two system testing strategies.
ANS :
System testing is the process of testing a software product as a whole to evaluate its compliance with its
specified requirements. It is performed after unit testing and integration testing, and before acceptance
testing.

1. Black-box testing

Black-box testing is a method of software testing that does not consider the internal structure or
implementation of the software being tested. Instead, it focuses on the external behavior of the
software, as seen by the user.

2. White-box testing

White-box testing is a method of software testing that considers the internal structure or
implementation of the software being tested. This allows the tester to focus on specific areas of
the software that are known to be complex or error-prone.

Q.12] What are the different kinds of system testing that are usually performed on large
software systems ?

ANS :

1. Functional testing: Functional testing is the most basic type of system testing. It is used to verify
that the software system meets its requirements and works as expected. Functional testing can be
performed manually or using automated tools.

2. Non-functional testing: Non-functional testing is used to verify that the software system meets its
non-functional requirements. Non-functional requirements are requirements that do not directly
relate to the functionality of the software system. Examples of non-functional requirements include
performance, reliability, security, and usability.

3. Acceptance testing: Acceptance testing is performed by the customer or end-user to verify that the
software system meets their needs. Acceptance testing can be performed manually or using
automated tools.

4. Regression testing: Regression testing is performed to verify that changes to the software system
have not introduced any new defects. Regression testing is typically performed after each change
to the software system.
5. Performance testing: Performance testing is used to verify that the software system can handle
the expected load. Performance testing can be performed manually or using automated tools.
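As a hedged illustration of how some of these checks can be automated (the apply_discount function and the one-second time budget are hypothetical), the sketch below combines a functional assertion, a regression assertion that is re-run after every change, and a crude performance check:

```python
import time

def apply_discount(price, percent):
    # Hypothetical function under test
    return round(price * (1 - percent / 100), 2)

# Functional testing: verifies a stated requirement
assert apply_discount(200, 10) == 180.0

# Regression testing: behavior that already worked must keep working after changes
assert apply_discount(200, 0) == 200.0

# Performance testing (very simplified): the call must stay within an agreed time budget
start = time.perf_counter()
for _ in range(10_000):
    apply_discount(199.99, 12.5)
assert time.perf_counter() - start < 1.0, "performance requirement violated"
```

In practice such assertions would be collected into a suite that is executed automatically after every change to the system.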
Q.13] Write short note on Object Oriented testing strategies.
ANS :
1. Unit testing is the testing of individual units of code, such as classes or functions. This is the most
basic level of testing and is essential for ensuring that the code is working as intended.
2. Integration testing is the testing of how individual units of code interact with each other. This is
important for ensuring that the different parts of the system work together correctly.
3. System testing is the testing of the entire system as a whole. This is the most comprehensive level
of testing and is essential for ensuring that the system meets all of its requirements.
4. Acceptance testing is the testing of the system by the customer or user. This is important for
ensuring that the system meets the needs of the people who will be using it.
5. Regression testing is the re-testing of the system after changes have been made to ensure that
the changes have not introduced any new defects.
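A brief Python sketch of the first two levels (the PriceList and Cart classes are hypothetical): one test exercises a single class in isolation (unit testing), and the other exercises two classes working together (integration testing).

```python
import unittest

class PriceList:
    """Hypothetical class holding item prices."""
    def __init__(self, prices):
        self._prices = prices

    def price_of(self, item):
        return self._prices[item]

class Cart:
    """Hypothetical class that depends on PriceList."""
    def __init__(self, price_list):
        self._price_list = price_list
        self._items = []

    def add(self, item):
        self._items.append(item)

    def total(self):
        return sum(self._price_list.price_of(i) for i in self._items)

class ObjectOrientedTests(unittest.TestCase):
    def test_price_list_in_isolation(self):
        # Unit testing: a single class tested on its own
        self.assertEqual(PriceList({"pen": 10}).price_of("pen"), 10)

    def test_cart_with_price_list(self):
        # Integration testing: two classes exercised together
        cart = Cart(PriceList({"pen": 10, "book": 50}))
        cart.add("pen")
        cart.add("book")
        self.assertEqual(cart.total(), 60)

if __name__ == "__main__":
    unittest.main()
```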
Q.14] With a suitable example, illustrate the situations in which you would prefer boundary value
analysis over equivalence partitioning.
ANS :
1. When the input domain is small. If the input domain is small, then it is possible to test all possible
values using boundary value analysis. This is not possible with equivalence partitioning, which
requires dividing the input domain into equivalence classes.
2. When the input domain is continuous. If the input domain is continuous, then it is not possible to
test all possible values using equivalence partitioning. This is because there are an infinite number
of possible values in a continuous domain. Boundary value analysis can be used to test the
boundaries of the input domain, which can help to identify errors that occur at these boundaries.
3. When the input domain is complex. If the input domain is complex, then it can be difficult to identify
all possible equivalence classes using equivalence partitioning. Boundary value analysis can be
used to test the boundaries of the input domain, which can help to identify errors that occur at
these boundaries.
4. When the input domain is sensitive to errors. If the input domain is sensitive to errors, then it is
important to test the boundaries of the input domain using boundary value analysis. This can help
to identify errors that occur at these boundaries and prevent them from causing problems in the
system.
5. When time is limited. If time is limited, then boundary value analysis can be a more efficient way to
test the input domain than equivalence partitioning. This is because boundary value analysis only
tests the boundaries of the input domain, while equivalence partitioning requires testing all
possible equivalence classes.
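For example, suppose a field must accept an age between 18 and 60 inclusive (a hypothetical requirement). The sketch below shows how boundary value analysis concentrates the test values at the edges of that range, which is exactly where off-by-one defects tend to hide, whereas equivalence partitioning would pick only one representative value per class.

```python
def is_valid_age(age):
    # Hypothetical validator: accepts ages from 18 to 60 inclusive
    return 18 <= age <= 60

# Boundary value analysis: values at, just below, and just above each boundary
boundary_cases = [(17, False), (18, True), (19, True),
                  (59, True), (60, True), (61, False)]
for age, expected in boundary_cases:
    assert is_valid_age(age) == expected, f"boundary value {age} handled incorrectly"

# Equivalence partitioning would instead test one representative per class,
# e.g. 10 (invalid-low), 35 (valid), 70 (invalid-high), and could miss an
# off-by-one defect such as writing 18 < age instead of 18 <= age.
```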
Q.15] Explain Top-down testing and give its advantages.
ANS :
1. Top-down testing is an integration testing technique that starts with the top-level modules of a
system and works its way down to the lower-level modules.
2. Advantages of top-down testing include:
o It can help to identify errors in the high-level design and control logic early in the development process.
o It provides a working skeleton of the system early, which can be demonstrated to stakeholders.
o It can make fault isolation easier, because a newly found defect usually lies in the most recently
integrated module, which improves the efficiency of the testing process.
3. Top-down testing can be used with any type of software system, but it is most commonly used
with systems that are modular in nature.
4. Top-down testing can be implemented manually or using a variety of automated testing tools.
5. Top-down testing is just one of many integration testing techniques that can be used to ensure the
quality of software systems.
Q.16] What is meant by a stub ?
ANS :
1. A stub is a small, simplified piece of code that stands in for a lower-level module which has not yet
been developed or integrated; it simulates the behavior of the real module, usually by returning fixed,
predictable responses.
2. Stubs are typically used during unit testing and integration testing.
3. Stubs can be implemented manually or using a variety of automated testing tools.
4. Stubs can be used to improve the efficiency of the testing process.
5. Stubs can help to identify errors in the calling module.
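A minimal sketch in Python (the charge_stub and place_order names are hypothetical): the real payment module is not yet available, so a stub with the same interface returns a canned response, allowing the calling module to be tested during top-down integration.

```python
def charge_stub(card_number, amount):
    """Stub: simulates the real payment module by returning a fixed, predictable reply."""
    return {"status": "approved", "amount": amount}

def place_order(card_number, amount, charge=charge_stub):
    # Calling module under test; the stub is injected in place of the real lower-level module
    result = charge(card_number, amount)
    return "ORDER CONFIRMED" if result["status"] == "approved" else "ORDER FAILED"

# The calling module can be tested before the real payment module exists
assert place_order("4111-1111-1111-1111", 250) == "ORDER CONFIRMED"
```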
Q.17] How are test cases designed for object oriented software ?
ANS :
1. Identify the objects and classes in the system. The first step in designing test cases for object
oriented software is to identify the objects and classes in the system. Objects are the basic units of
data in object oriented software. A class is a template that defines the properties and methods shared
by a group of similar objects.
2. Identify the methods and properties of each object and class. Once the objects and classes have
been identified, the next step is to identify the methods and properties of each object and class.
Methods are the actions that can be performed on an object. Properties are the data associated with
an object.
3. Identify the expected behavior of each object and class. The next step is to identify the expected
behavior of each object and class. This can be done by reviewing the system requirements or by
talking to the system users.
4. Design test cases that exercise the expected behavior of each object and class. Once the expected
behavior has been identified, the next step is to design test cases that exercise that behavior. Test
cases should be designed to cover all possible scenarios, including normal, abnormal, and boundary
cases.
5. Execute the test cases and verify the expected behavior. The final step is to execute the test cases
and verify that the expected behavior is observed. If any errors are found, the test cases should be
updated and the system should be retested (a short sketch follows this list).
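A short sketch of the resulting test cases in Python (the BankAccount class is hypothetical), covering a normal case, a boundary case, and an abnormal case for one method:

```python
import unittest

class BankAccount:
    """Hypothetical class under test."""
    def __init__(self, balance=0):
        self.balance = balance

    def withdraw(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

class BankAccountTests(unittest.TestCase):
    def test_normal_withdrawal(self):
        # Normal case
        self.assertEqual(BankAccount(100).withdraw(40), 60)

    def test_withdraw_entire_balance(self):
        # Boundary case: amount equal to the balance
        self.assertEqual(BankAccount(100).withdraw(100), 0)

    def test_overdraw_is_rejected(self):
        # Abnormal case: the expected behavior is a raised error
        with self.assertRaises(ValueError):
            BankAccount(100).withdraw(150)

if __name__ == "__main__":
    unittest.main()
```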
Q.18] Differentiate between alpha and beta testing.
ANS :
1. Alpha testing is conducted by internal testers, while beta testing is conducted by external users.
2. Alpha testing is conducted in a controlled environment, while beta testing is conducted in a real-
world environment.
3. Alpha testing is typically focused on finding bugs and defects, while beta testing is typically
focused on getting feedback from users.
4. Alpha testing is typically conducted before the software is released outside the organization, while
beta testing is conducted on a pre-release version given to a limited group of external users before the
final public release.
5. Alpha testing is typically conducted by a small group of people, while beta testing is typically
conducted by a large group of people.
Q.19] What is GUI testing ? Give advantages and drawbacks of GUI testing.
ANS : GUI testing, also known as User Interface testing, is a type of software testing that focuses
on the graphical user interface (GUI) of an application. The goal of GUI testing is to ensure that
the GUI is easy to use and that it functions as expected.
Advantages of GUI testing:
• It can help to identify usability problems. GUI testing can help to identify problems with the layout
of the GUI, the font size, the color scheme, and other aspects of the GUI that affect usability.
• It can help to identify functional problems. GUI testing can help to identify problems with the
functionality of the GUI, such as buttons that do not work or menus that do not open.
• It can help to identify performance problems. GUI testing can help to identify problems with the
performance of the GUI, such as slow loading times or unresponsiveness.
Drawbacks of GUI testing:
• It can be time-consuming. GUI testing can be time-consuming, as it requires testers to interact
with the GUI in a variety of ways.
• It can be difficult to automate. GUI testing can be difficult to automate, as the GUI is often complex
and changes frequently.
• It can be difficult to test all possible scenarios. There are an infinite number of possible scenarios
that can be tested with a GUI, so it is impossible to test all of them.
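As a hedged illustration of automated GUI testing, the sketch below uses Selenium WebDriver; the page URL, the element IDs (username, password, login-button), and the expected page title are hypothetical, and a matching browser driver must be installed for it to run.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                       # requires a local ChromeDriver
try:
    driver.get("https://example.com/login")       # hypothetical login page
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    # Verify that the GUI behaved as expected after the interaction
    assert "Dashboard" in driver.title, "login did not lead to the dashboard"
finally:
    driver.quit()
```

A script like this still has to be updated whenever the layout or element identifiers change, which is why automating GUI tests is listed above as a drawback.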