ITA End Term
Elastic Scalability:
Cloud Computing: Cloud services provide elastic scalability, allowing enterprises to quickly
adjust their resources based on user demand. It enables enterprises to scale up or down
seamlessly, ensuring optimal performance during peak periods while avoiding overprovisioning
during off-peak times.
Traditional Computing: On-premises infrastructure offers limited flexibility in responding to user demand. Scaling
up or down requires physical hardware procurement and configuration, leading to longer lead
times and potential underutilization or resource constraints.
Price Flexibility:
Cloud Computing: Cloud services offer a pay-as-you-go model, allowing enterprises to align
their expenses with actual usage. This pricing flexibility enables cost optimization, as enterprises
can scale resources according to their budget and adjust usage patterns as needed.
Time-to-Market Agility:
Cloud Computing: Cloud services enable rapid deployment of applications and services,
reducing time-to-market. With preconfigured infrastructure and managed services, enterprises
can focus on development rather than infrastructure setup, allowing for faster innovation and
quicker response to market demands.
Location Flexibility:
Cloud Computing: Cloud services offer location flexibility, allowing enterprises to deploy their
applications and services across multiple regions and data centers. This provides the ability to
serve customers in different geographic locations efficiently and ensures low latency and better
user experience.
Asset Optimization:
Cloud Computing: Cloud services enable resource optimization through dynamic allocation and
efficient utilization of computing resources. Enterprises can scale resources as needed, ensuring
optimal usage and reducing the risk of underutilized or idle hardware.
Infrastructure Layer: This layer forms the foundation of cloud architecture and includes
physical resources such as servers, storage devices, networking equipment, and data centers. It
provides the underlying infrastructure on which the higher layers operate.
Platform Layer: The platform layer, also known as the platform as a service (PaaS) layer,
provides a platform for developing, deploying, and managing applications. It offers tools,
runtime environments, and services that abstract away the complexities of infrastructure
management, enabling developers to focus on application development.
Software Layer: The software layer, also referred to as the software as a service (SaaS) layer,
encompasses the cloud-based applications and services that end-users directly interact with.
These applications are hosted and provided by cloud service providers, who handle maintenance,
updates, and scalability, while users access the software via web browsers or dedicated
interfaces.
Definition:
Public cloud (off-site and remote) describes cloud computing in which resources are dynamically
provisioned on an on-demand, self-service basis over the Internet, via web applications/web
services or open APIs, from a third-party provider who bills on a utility computing basis.
A private cloud environment is often the first step for a corporation before adopting a public cloud
initiative. Corporations have discovered the benefits of consolidating shared services on
virtualized hardware deployed from a primary datacenter to serve local and remote users.
Cost:
Private Cloud: Private clouds involve higher initial capital expenditure as organizations need to
procure and maintain their own infrastructure. The costs also include ongoing management and
maintenance expenses.
Public Cloud: Public clouds operate on a pay-as-you-go model, allowing organizations to pay for
the resources they consume. This eliminates upfront capital costs and provides cost flexibility as
organizations can scale resources according to their needs.
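The capex-versus-pay-as-you-go trade-off above can be sketched numerically. All figures here (capex, maintenance, and hourly rate) are illustrative assumptions, not real pricing:

```python
# Hypothetical cost comparison: private cloud (upfront capex plus yearly
# maintenance) vs public cloud (pay-as-you-go hourly billing).
# Every number below is an assumed, illustrative figure.

def private_cloud_cost(years, capex=100_000, annual_maintenance=20_000):
    """Total cost of owning infrastructure over a number of years."""
    return capex + annual_maintenance * years

def public_cloud_cost(years, hours_per_year=8760, rate_per_hour=4.0):
    """Total cost of renting equivalent capacity on demand."""
    return hours_per_year * rate_per_hour * years

# With these assumed numbers, public cloud avoids the upfront cost,
# while private cloud amortizes its capex over longer horizons.
for y in (1, 3, 5, 10):
    print(y, private_cloud_cost(y), public_cloud_cost(y))
```

Under these assumptions the public option is cheaper in year one, and the private option only pulls ahead once the capex has been amortized over several years, which mirrors the trade-off described above.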
Customization and Flexibility:
Private Cloud: Private clouds offer greater customization options, allowing organizations to
tailor the infrastructure and services to their specific needs and requirements.
Public Cloud: Public clouds provide standardized services and limited customization options.
While organizations have flexibility in selecting available services, they may have limited
control over the underlying infrastructure and configurations.
Big data refers to extremely large and complex datasets that cannot be effectively processed
using traditional data processing techniques. These datasets typically involve massive volumes of
structured, semi-structured, and unstructured data, generated from various sources such as social
media, sensors, transaction records, and more. Big data is characterized by the three V's: volume
(large amount of data), velocity (high speed at which data is generated and processed), and
variety (diverse types of data).
Hadoop Distributed File System (HDFS): HDFS is a distributed file system that stores data
across multiple machines in a cluster. It breaks data into blocks and replicates them for fault
tolerance. HDFS enables high-throughput access to large datasets and provides reliability and
scalability.
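The block-splitting and replication idea can be sketched in a few lines. This is a toy single-process illustration, not real HDFS code; the tiny block size and the round-robin placement are assumptions (real HDFS defaults to 128 MB blocks and a replication factor of 3, with rack-aware placement):

```python
# Minimal sketch of HDFS-style storage: split a file into fixed-size
# blocks, then replicate each block across several DataNodes.

def split_into_blocks(data: bytes, block_size: int):
    """Break the file contents into fixed-size blocks."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(num_blocks, datanodes, replication=3):
    """Assign each block to `replication` distinct DataNodes (round-robin)."""
    placement = {}
    for b in range(num_blocks):
        placement[b] = [datanodes[(b + r) % len(datanodes)]
                        for r in range(replication)]
    return placement

blocks = split_into_blocks(b"x" * 1000, block_size=256)   # 4 blocks
placement = place_replicas(len(blocks), ["dn1", "dn2", "dn3", "dn4"])
```

Because every block lives on three distinct nodes, losing any single DataNode leaves two copies of each of its blocks available, which is the fault-tolerance property described above.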
Architecture:
NameNode: Stores metadata for the files, like the directory structure of a typical filesystem.
The server holding the NameNode instance is quite crucial, as there is only one per cluster.
DataNode: Stores the actual data in HDFS. It can run on any underlying filesystem (ext3/4,
NTFS, etc.) and notifies the NameNode of which blocks it holds.
MapReduce: MapReduce is a programming model and processing paradigm used for distributed
data processing in Hadoop. It allows parallel execution of tasks across the nodes in a cluster. The
MapReduce framework divides a task into smaller subtasks, performs mapping and reducing
functions, and aggregates the results.
Architecture: Distributed, with some centralization. Most of the computational power and
storage of the system lies on the worker nodes of the cluster. These nodes run a TaskTracker to
accept and reply to MapReduce tasks, and a DataNode to store the needed blocks as close to the
computation as possible. A central control node runs the NameNode to keep track of HDFS
directories and files, and the JobTracker to dispatch compute tasks to the TaskTrackers. Hadoop
is written in Java, and MapReduce jobs can also be written in languages such as Python and Ruby.
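The map/shuffle/reduce phases described above can be illustrated with the classic word-count example. This is a single-process sketch of the programming model only; a real Hadoop job would distribute these phases across the cluster:

```python
# Toy word count in the MapReduce style: the map phase emits (word, 1)
# pairs, a shuffle groups pairs by key, and the reduce phase sums counts.

from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by their key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: aggregate the grouped values per key."""
    return {key: sum(values) for key, values in grouped.items()}

docs = ["big data big cluster", "data node"]
counts = reduce_phase(shuffle(map_phase(docs)))
# counts == {"big": 2, "data": 2, "cluster": 1, "node": 1}
```

The point of the split is that the map and reduce functions are independent per key, so the framework can run them in parallel across many nodes and merge the results.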
Hadoop architecture also includes additional components such as YARN (Yet Another Resource
Negotiator) for resource management and job scheduling, and various ecosystem tools like Hive,
Pig, Spark, and HBase that extend the capabilities of Hadoop for data processing, querying,
analytics, and real-time applications. Overall, Hadoop architecture provides a scalable, fault-
tolerant, and cost-effective solution for processing and analyzing Big Data, enabling
organizations to harness the power of massive datasets for valuable insights and decision-
making.
Q5. What are SaaS and PaaS? Give two business deployment examples.
Software as a Service (SaaS) and Platform as a Service (PaaS) are two prominent models in
cloud computing that have transformed the way businesses access and utilize software
applications and development platforms.
SaaS eliminates the need for organizations to install and maintain software locally, as it is hosted
and managed by a third-party provider. Users can access the software through web browsers,
making it convenient and accessible from anywhere. SaaS allows businesses to focus on their
core operations while relying on reliable and up-to-date software applications. One exemplary
SaaS platform is Salesforce, a leading customer relationship management (CRM) solution.
Salesforce offers a comprehensive suite of CRM tools, including sales, marketing, and customer
support functionalities. By leveraging Salesforce as a SaaS solution, businesses can efficiently
manage customer interactions, track sales activities, and optimize marketing campaigns. The
cloud-based infrastructure of Salesforce ensures seamless updates and maintenance, providing
organizations with the latest features and security enhancements.
Another notable SaaS example
is Workday, an enterprise resource planning (ERP) system. Workday streamlines various aspects
of business operations, such as finance, human resources, and supply chain management. It
offers a user-friendly interface and robust analytics capabilities, enabling organizations to
improve efficiency, streamline processes, and make data-driven decisions. With Workday as a
SaaS solution, businesses can leverage a scalable and flexible ERP system without the need for
extensive infrastructure management.
IoT (Internet of Things): The Internet of Things (IoT) refers to a network of interconnected
physical devices, vehicles, appliances, and other objects embedded with sensors, software, and
network connectivity that enables them to collect and exchange data. IoT enables these devices
to interact and communicate with each other and with the cloud, allowing for seamless
integration of the physical and digital worlds. IoT Architecture: The architecture of IoT involves
multiple layers that work together to enable the communication, data processing, and
functionality of IoT systems. Here is a high-level overview of the IoT architecture:
Perception Layer: The perception layer consists of physical devices or sensors that gather data
from the environment. These devices can include sensors, actuators, cameras, RFID (Radio
Frequency Identification) tags, and more. They collect data such as temperature, humidity,
pressure, location, and other relevant information.
Network Layer: The network layer is responsible for transmitting the data collected by the
perception layer to the appropriate destinations. It involves various network technologies such as
Wi-Fi, Bluetooth, Zigbee, cellular networks, or even satellite communication. This layer ensures
the connectivity and communication between the devices and the Internet.
Middleware Layer: The middleware layer acts as a bridge between the devices/sensors and the
application layer. It handles tasks such as data filtering, data aggregation, protocol translation,
and device management. The middleware layer enables seamless integration of diverse devices
and protocols into a unified system.
Application Layer: The application layer comprises the software applications and services that
process and analyze the data collected from the devices. It involves data storage, real-time
analytics, machine learning algorithms, and visualization tools. The application layer provides
actionable insights and enables decision-making based on the collected data.
Business Layer: The business layer represents the domain-specific applications and services
built on top of the IoT infrastructure. It encompasses various use cases and applications, such as
smart homes, industrial automation, healthcare monitoring, transportation systems, and more.
The business layer leverages the capabilities of the underlying IoT architecture to deliver specific
functionalities and value to end users.
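The layers above can be walked through with a toy pipeline: a perception-layer sensor produces a reading, the network layer carries it as a serialized payload, a middleware step aggregates readings, and the application layer turns the aggregate into an actionable decision. The field names and the temperature threshold are illustrative assumptions:

```python
# Toy end-to-end walk through the IoT layers described above,
# simulated in a single process for illustration.

import json
import statistics

def sense(sensor_id, temperature):
    """Perception layer: a sensor produces a raw reading."""
    return {"sensor": sensor_id, "temp_c": temperature}

def transmit(reading):
    """Network layer: serialize the reading for transport."""
    return json.dumps(reading)

def aggregate(payloads):
    """Middleware layer: parse payloads and aggregate into one value."""
    readings = [json.loads(p) for p in payloads]
    return statistics.mean(r["temp_c"] for r in readings)

def decide(mean_temp, threshold=30.0):
    """Application layer: turn the aggregate into an actionable status."""
    return "ALERT" if mean_temp > threshold else "OK"

payloads = [transmit(sense(i, t)) for i, t in enumerate([28.5, 31.0, 33.5])]
status = decide(aggregate(payloads))   # mean is 31.0, above the threshold
```

Each function corresponds to one layer, which is the main architectural point: every layer exposes a narrow interface so devices, transports, and applications can be swapped independently.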
Q7. Draw Application Architecture of AI in Healthcare.
In the context of AI in healthcare, the application architecture typically involves multiple layers
and components that work together to enable various AI capabilities. Here's a general overview
of the architecture:
Data Collection: The architecture starts with data collection from various sources such as
electronic health records (EHRs), medical devices, wearables, and imaging systems. These data
sources provide the foundation for AI models and algorithms.
Data Preprocessing: Once the data is collected, it needs to be preprocessed and cleaned to ensure
accuracy and consistency. Preprocessing steps may include data normalization, handling missing
values, removing outliers, and anonymization to protect patient privacy.
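The preprocessing steps just listed can be sketched on a toy record set: impute missing values with the column mean, min-max normalize, and drop direct identifiers for anonymization. The field names and values are invented for illustration:

```python
# Sketch of the preprocessing stage: mean imputation, min-max
# normalization, and dropping identifiers (anonymization).

def preprocess(records, field="heart_rate"):
    values = [r[field] for r in records if r[field] is not None]
    mean = sum(values) / len(values)
    lo, hi = min(values), max(values)
    cleaned = []
    for r in records:
        v = r[field] if r[field] is not None else mean   # impute missing
        cleaned.append({
            field: (v - lo) / (hi - lo),                 # min-max normalize
            # the "name" identifier is deliberately dropped (anonymization)
        })
    return cleaned

records = [
    {"name": "A", "heart_rate": 60},
    {"name": "B", "heart_rate": None},   # missing value
    {"name": "C", "heart_rate": 100},
]
clean = preprocess(records)
```

A real pipeline would add outlier handling and far more careful de-identification, but the shape is the same: raw records in, clean anonymized feature values out.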
Data Storage: Processed data is then stored in secure and scalable data repositories such as data
lakes or data warehouses. These repositories enable efficient data management and retrieval for
AI processing.
AI Model Development: This layer involves the development and training of AI models and
algorithms. Techniques like machine learning, deep learning, natural language processing, and
computer vision are employed to create models that can extract insights, make predictions, and
assist in clinical decision-making.
Model Training and Validation: The developed AI models are trained using the collected and
preprocessed data. Training involves feeding the model with labeled examples to learn patterns
and make predictions. The trained models are then validated using separate datasets to ensure
their accuracy, performance, and generalizability.
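The train/validate split described above can be shown with a deliberately tiny model: fit a one-feature threshold classifier on a training split, then measure accuracy on held-out validation examples. The data and the model are toy assumptions, standing in for the real labeled clinical datasets and learning algorithms:

```python
# Minimal train/validation illustration with a toy threshold classifier.

def split(data, train_frac=0.75):
    """Hold out the tail of the dataset for validation."""
    cut = int(len(data) * train_frac)
    return data[:cut], data[cut:]

def fit_threshold(train):
    """'Train': pick the midpoint between the class means of the feature."""
    pos = [x for x, y in train if y == 1]
    neg = [x for x, y in train if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(threshold, data):
    """'Validate': fraction of held-out examples classified correctly."""
    correct = sum(1 for x, y in data if (x > threshold) == (y == 1))
    return correct / len(data)

data = [(1, 0), (2, 0), (3, 0), (4, 0), (7, 1), (8, 1), (9, 1), (10, 1)]
train, val = split(data)              # 6 training, 2 validation examples
threshold = fit_threshold(train)      # learned cut point: 5.0
val_acc = accuracy(threshold, val)
```

The key idea carries over directly to real models: the validation examples are never seen during fitting, so the validation accuracy is an honest estimate of how the model generalizes.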
Inference and Decision Support: Once trained and validated, the AI models are deployed for
real-time inference. They analyze new data, provide predictions, and offer decision support to
healthcare professionals. This layer includes integrating AI capabilities into existing healthcare
systems and workflows.
User Interface: The final layer of the architecture encompasses the user interface, which enables
healthcare professionals to interact with the AI system. This interface can take the form of a
web-based application, a mobile app, or integration into existing clinical systems. It allows users
to input data, view AI-generated insights, and access recommendations for diagnosis, treatment
planning, or patient monitoring.
BLOCKCHAIN: Foundation, Application and Future:
Foundations: The foundation of blockchain lies in its key features. First, it operates on a
decentralized network of computers, known as nodes, which collectively maintain and validate the
blockchain. This decentralized nature eliminates the need for intermediaries and enhances trust
and security. Each node in the network holds a copy of the entire blockchain, ensuring redundancy
and resilience. Second, blockchain relies on cryptographic techniques to ensure data integrity,
immutability, and privacy. Transactions recorded on the blockchain are cryptographically linked
and stored in blocks, forming a chain of blocks that cannot be altered without consensus from the
network. Cryptographic hash functions ensure the integrity of the data, making it virtually
impossible to tamper with the recorded transactions. Third, blockchain utilizes consensus
algorithms, such as Proof of Work (PoW) or Proof of Stake (PoS), to validate and agree on the
order of transactions, ensuring the integrity of the ledger. In PoW, miners compete to solve
complex mathematical puzzles, while in PoS, validators are selected based on the amount of
cryptocurrency they hold, reducing energy consumption and increasing scalability.
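Two of the foundations above, hash-linked blocks and Proof of Work, can be sketched together. This is a toy single-machine illustration: the difficulty is deliberately tiny so the search finishes instantly, whereas real networks use vastly harder targets, and a real block would carry many transactions plus a timestamp:

```python
# Toy sketch of blockchain foundations: each block's hash covers the
# previous block's hash (linking), and mining searches for a nonce
# whose hash meets a difficulty target (Proof of Work).

import hashlib

def block_hash(prev_hash, data, nonce):
    """Hash the block contents together with the previous block's hash."""
    payload = f"{prev_hash}{data}{nonce}".encode()
    return hashlib.sha256(payload).hexdigest()

def mine(prev_hash, data, difficulty=2):
    """Find a nonce so the hash starts with `difficulty` zeros (PoW)."""
    nonce = 0
    while True:
        h = block_hash(prev_hash, data, nonce)
        if h.startswith("0" * difficulty):
            return nonce, h
        nonce += 1

genesis = "0" * 64
nonce, h1 = mine(genesis, "tx: alice->bob 5")
# Tampering with the data changes the hash, breaking the chain link:
assert block_hash(genesis, "tx: alice->bob 50", nonce) != h1
```

The two properties the notes describe fall straight out of this: altering any recorded transaction changes its block's hash, which invalidates every later block's link, and the PoW search makes producing a valid replacement chain computationally expensive.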