DP-900 ExamTopics Questions 70 to 200


Questions from 70 to 100

1. Which command-line tool can you use to query Azure SQL databases?

 A. sqlcmd Most Voted

 B. bcp

 C. azdata

 D. Azure CLI

Explanation: The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command prompt.
Incorrect answers:
B: The bulk copy program utility (bcp) bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format.
D: The Azure CLI is the de facto cross-platform command-line tool for building and managing Azure resources; it is not a tool for querying databases.
Reference:
https://docs.microsoft.com/en-us/sql/tools/overview-sql-tools?view=sql-server-
ver15
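
For example, a minimal sqlcmd invocation against an Azure SQL database (the server, database, and credentials below are placeholders):
sqlcmd -S myserver.database.windows.net -d MyDb -U sqladmin -P '<password>' -Q "SELECT TOP 5 name FROM sys.tables;"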

2.

a. All Yes

i. Azure Defender provides security alerts and advanced threat protection for virtual machines, SQL databases, containers, web applications, your network, and more.
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/sql-database-
paas-overview https://azure.microsoft.com/en-us/blog/announcing-sql-atp-
and-sql-vulnerability-assessment-general-availability/

https://docs.microsoft.com/en-us/azure/security-center/azure-defender

3.

a. A) Yes. From https://docs.microsoft.com/en-us/sql/azure-data-studio/what-is?view=sql-server-ver15
Use Azure Data Studio if you are connecting to a SQL Server 2019 big data cluster.

B) Yes. From https://docs.microsoft.com/en-us/learn/modules/query-azure-sql-data-warehouse/4-query-dw-using-ssms

C) Yes. From https://docs.microsoft.com/en-us/azure/mariadb/connect-workbench
4.

a. https://docs.microsoft.com/en-us/azure/azure-sql/database/sql-database-paas-overview

b. https://azure.microsoft.com/en-gb/blog/hot-patching-sql-server-engine-in-azure-sql-database/

c. https://azure.microsoft.com/en-us/services/sql-database/#product-overview

5. You are creating Power BI reports.


You need to choose which filters you can use for reports.
Which three types of filters can you use? Each correct answer presents a complete
solution.
NOTE: Each correct selection is worth one point.
 A. drill-down Most Voted

 B. automatic Most Voted

 C. database

 D. manual Most Voted

 E. external

A. Drill-down filters: These filters allow users to navigate from a higher-level summary to detailed data. By drilling down, users can explore data at different levels of granularity, such as moving from yearly data to monthly or daily data. Drill-down filters enhance interactivity and provide deeper insights into the data.

D. Manual filters: Manual filters are user-defined filters that allow report
viewers to select specific values or ranges. Users can manually choose which
data they want to include or exclude in the report. These filters are flexible
and customizable, enabling users to focus on relevant information.

E. External filters: External filters are applied from external sources or parameters. For example, you can pass filter values from a URL parameter, another report, or an application. External filters allow seamless integration with other systems and enable dynamic filtering based on context.

6. When you create an Azure SQL database, which account can always connect to the database?

 A. the Azure Active Directory (Azure AD) account that created the database

 B. the server admin login account of the logical server Most Voted

 C. the Azure Active Directory (Azure AD) administrator account

 D. the sa account

a. When you first deploy Azure SQL, you specify an admin login and an associated
password for that login. This administrative account is called Server admin.
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/single-database-create-
quickstart
6. You need to reduce the amount of time that the IT team spends on user support.
What are three possible ways to achieve this goal? Each correct answer presents a
complete solution.
NOTE: Each correct selection is worth one point.

 A. Enable Microsoft Office 365 Customer Lockbox

 B. Upgrade all client devices to Windows 10 Most Voted

 C. Use Windows AutoPilot to deploy devices Most Voted

 D. Deploy Microsoft MyAnalytics to devices

 E. Deploy Microsoft Office 365 Professional Plus to all client devices Most Voted

a. Reference:
https://social.technet.microsoft.com/wiki/contents/articles/35748.office-365-what-is-
customer-lockbox-and-how-to-enable-it.aspx
https://docs.microsoft.com/en-us/windows/deployment/windows-autopilot/windows-
autopilot
https://www.microsoft.com/en-us/microsoft-365/blog/2015/03/19/office-365-proplus-it-
control-and-management-update/
7.

Box 1: No -
Microsoft Sentinel data connectors are available for non-Microsoft services like Amazon Web
Services.

Box 2: Yes -
Once you have connected your data sources to Microsoft Sentinel, you can visualize and monitor
the data using the Microsoft Sentinel adoption of Azure Monitor
Workbooks, which provides versatility in creating custom dashboards. While the Workbooks are
displayed differently in Microsoft Sentinel, it may be useful for you to see how to create
interactive reports with Azure Monitor Workbooks. Microsoft Sentinel allows you to create
custom workbooks across your data, and also comes with built-in workbook templates to allow
you to quickly gain insights across your data as soon as you connect a data source.
Box 3: Yes -
To help security analysts look proactively for new anomalies that weren't detected by your
security apps or even by your scheduled analytics rules, Microsoft
Sentinel's built-in hunting queries guide you into asking the right questions to find issues in the
data you already have on your network.
Reference:
https://docs.microsoft.com/en-us/azure/sentinel/data-connectors-reference
https://docs.microsoft.com/en-us/azure/sentinel/monitor-your-data
https://docs.microsoft.com/en-us/azure/sentinel/hunting
8.

Transparent data encryption (TDE) helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics against the threat of malicious offline activity by encrypting data at rest.
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/transparent-data-encryption-tde-overview?tabs=azure-portal
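
As a quick check, the is_encrypted column of sys.databases reports whether TDE is enabled. A minimal sketch using pyodbc; the server name and credentials are placeholders:

import pyodbc

# Placeholder connection values; TDE status is exposed per database in sys.databases.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=master;"
    "UID=sqladmin;PWD=<password>"
)
for name, is_encrypted in conn.execute("SELECT name, is_encrypted FROM sys.databases;"):
    print(name, "TDE on" if is_encrypted else "TDE off")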

9. You need to ensure that users use multi-factor authentication (MFA) when connecting to an Azure SQL database.
Which type of authentication should you use?

 A. service principal authentication

 B. Azure Active Directory (Azure AD) authentication Most Voted

 C. SQL authentication

 D. certificate authentication

Reference:

https://docs.microsoft.com/en-us/azure/azure-sql/database/authentication-mfa-ssms-
overview
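
A minimal connection sketch, assuming the Microsoft ODBC driver's Azure AD interactive flow (which shows the Azure AD sign-in dialog and therefore enforces MFA when the tenant requires it); all names are placeholders:

import pyodbc

# Authentication=ActiveDirectoryInteractive opens the Azure AD sign-in dialog,
# so conditional access / MFA policies apply. Names below are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=MyDb;"
    "Authentication=ActiveDirectoryInteractive;UID=user@contoso.com"
)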

10. What is a benefit of hosting a database on Azure SQL managed instance as compared to an
Azure SQL database?
A. built-in high availability
B. native support for cross-database queries and transactions Most Voted
C. system-initiated automatic backups
D. support for encryption at rest
https://docs.microsoft.com/en-us/azure/azure-sql/database/features-comparison
10.

Box 1: Yes -
The MailItemsAccessed event is a mailbox auditing action and is triggered when mail data is
accessed by mail protocols and mail clients.

Box 2: No -
Basic Audit retains audit records for 90 days.
Advanced Audit retains all Exchange, SharePoint, and Azure Active Directory audit records for
one year. This is accomplished by a default audit log retention policy that retains any audit record
that contains the value of Exchange, SharePoint, or AzureActiveDirectory for the Workload
property (which indicates the service in which the activity occurred) for one year.

Box 3: yes -
Advanced Audit in Microsoft 365 provides high-bandwidth access to the Office 365 Management
Activity API.
Note: The answers are correct, but the products have been rebranded as Microsoft Purview Audit (Standard) and Microsoft Purview Audit (Premium).
Reference:
https://docs.microsoft.com/en-us/microsoft-365/compliance/advanced-audit?view=o365-worldwide
https://docs.microsoft.com/en-us/microsoft-365/compliance/auditing-solutions-overview?view=o365-worldwide#licensing-requirements
https://docs.microsoft.com/en-us/office365/servicedescriptions/microsoft-365-service-descriptions/microsoft-365-tenantlevel-services-licensing-guidance/microsoft-365-security-compliance-licensing-guidance#advanced-audit
11. You need to design and model a database by using a graphical tool that supports project-
oriented offline database development.
What should you use?

A. Microsoft SQL Server Data Tools (SSDT) Most Voted


B. Microsoft SQL Server Management Studio (SSMS)
C. Azure Databricks
D. Azure Data Studio
Reference:
https://docs.microsoft.com/en-us/sql/ssdt/project-oriented-offline-database-development?
view=sql-server-ver15

11. You have a transactional application that stores data in an Azure SQL managed instance.
When should you implement a read-only database replica?

 A. You need to generate reports without affecting the transactional workload. Most Voted

 B. You need to audit the transactional application.

 C. You need to implement high availability in the event of a regional outage.

 D. You need to improve the recovery point objective (RPO).

Use read-only replicas to offload read-only query workloads.
The correct answer is A. Implement a read-only database replica when you need to generate reports without affecting the transactional workload: the reporting and analytical queries are offloaded to the replica, so the performance and responsiveness of the primary transactional database are preserved.
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/read-scale-out
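
Read scale-out is selected in the connection string. A minimal sketch (placeholder names): ApplicationIntent=ReadOnly routes the session to a readable secondary replica instead of the primary:

import pyodbc

# ApplicationIntent=ReadOnly asks the gateway to route this connection to a
# read-only replica, keeping reporting load off the primary.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=MyDb;"
    "UID=report_reader;PWD=<password>;ApplicationIntent=ReadOnly"
)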

12.

13. You need to query a table named Products in an Azure SQL database.
Which three requirements must be met to query the table from the internet? Each correct
answer presents part of the solution.
NOTE: Each correct selection is worth one point.

 A. You must be assigned the Reader role for the resource group that contains the
database.

 B. You must have SELECT access to the Products table. Most Voted

 C. You must have a user in the database. Most Voted

 D. You must be assigned the Contributor role for the resource group that contains the
database.

 E. Your IP address must be allowed to connect to the database. Most Voted

Reference:
https://docs.microsoft.com/en-us/sql/relational-databases/security/authentication-access/getting-
started-with-database-engine-permissions?view=sql-server-ver15
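
A sketch that ties the three requirements together (all names, the firewall rule, and the credentials are placeholders):

import pyodbc

# Prerequisites, set up beforehand by an administrator:
#   1. A server-level firewall rule that allows this client's IP address (E).
#   2. A user in the database (C), e.g.: CREATE USER report_reader WITH PASSWORD = '<password>';
#   3. SELECT permission on the table (B), e.g.: GRANT SELECT ON dbo.Products TO report_reader;
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=MyDb;"
    "UID=report_reader;PWD=<password>"
)
for row in conn.execute("SELECT TOP 10 * FROM dbo.Products;"):
    print(row)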

14.

Box 1: Azure SQL Database single database


Serverless is a compute tier for single databases in Azure SQL Database that automatically scales
compute based on workload demand and bills for the amount of compute used per second. The
serverless compute tier also automatically pauses databases during inactive periods when only
storage is billed and automatically resumes databases when activity returns.
Scenarios well suited for serverless compute
Single databases with intermittent, unpredictable usage patterns interspersed with periods of
inactivity, and lower average compute utilization over time.
Single databases in the provisioned compute tier that are frequently rescaled and customers who
prefer to delegate compute rescaling to the service.
New single databases without usage history where compute sizing is difficult or not possible to
estimate prior to deployment in SQL Database.
Box 2: Azure SQL Managed Instance
Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the
broadest SQL Server database engine compatibility with all the benefits of a fully managed and
evergreen platform as a service.
Box 3: Azure SQL Database elastic pool
Azure SQL Database elastic pools are a simple, cost-effective solution for managing and scaling
multiple databases that have varying and unpredictable usage demands. The databases in an
elastic pool are on a single server and share a set number of resources at a set price. Elastic pools
in SQL Database enable software as a service (SaaS) developers to optimize the price
performance for a group of databases within a prescribed budget while delivering performance
elasticity for each database.
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview?
view=azuresql
https://docs.microsoft.com/en-us/azure/azure-sql/database/elastic-pool-overview?view=azuresql
https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/sql-managed-instance-paas-
overview

15. Which T-SQL statement should be used to instruct a database management system to use
an index instead of performing a full table scan?

 A. SELECT

 B. WHERE Most Voted (check if correct)

 C. JOIN

Reference:
https://docs.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table

Questions from 101 to 140


Azure Synapse Analytics is a platform as a service (PaaS) offering that combines data integration, warehousing, and analytics into one solution.
Azure Synapse Analytics offers cloud-based relational data warehousing services, massively parallel processing (MPP) scale-out technology, and enough computational power to efficiently manage petabytes of data.
Reference:
https://cswsolutions.com/blog/posts/2021/august/what-is-azure-synapse-analytics/
https://www.integrate.io/blog/what-is-azure-synapse-analytics/

Which Azure service provides the highest compatibility for databases migrated from Microsoft
SQL Server 2019 Enterprise edition?

 A. Azure SQL Database

 B. Azure Database for MySQL

 C. Azure SQL Managed Instance Most Voted

 D. an Azure SQL Database elastic pool

SQL Managed Instance has near 100% compatibility with the latest SQL Server (Enterprise Edition)
database engine, providing a native virtual network (VNet) implementation that addresses common
security concerns, and a business model favorable for existing SQL Server customers.

Note: Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the
broadest SQL Server database engine compatibility with all the benefits of a fully managed and evergreen
platform as a service.
Reference:
https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/sql-managed-instance-paas-
overview?view=azuresql

Box 1: Key/value -
A key/value store associates each data value with a unique key.

Box 2: Object -
Object storage is optimized for storing and retrieving large binary objects (images, files, video and audio
streams, large application data objects and documents, virtual machine disk images).
Box 3: Graph -
A graph database stores two types of information, nodes and edges. Edges specify relationships between
nodes. Nodes and edges can have properties that provide information about that node or edge, like
columns in a table. Edges can also have a direction indicating the nature of the relationship.
Reference:
https://docs.microsoft.com/en-us/azure/architecture/guide/technology-choices/data-store-overview

You have an Azure Cosmos DB account that uses the Core (SQL) API.
Which two settings can you configure at the container level? Each correct answer presents a
complete solution.
NOTE: Each correct selection is worth one point.

 A. the throughput Most Voted

 B. the read region

 C. the partition key Most Voted

 D. the API

https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-manage-database-account
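
A minimal azure-cosmos sketch (account URL, key, and names are placeholders) showing that the partition key and the throughput are both set when the container is created, while the API and regions are account-level choices:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
db = client.create_database_if_not_exists("appdb")
# Partition key and throughput are container-level settings.
container = db.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=400,
)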

Which storage solution supports role-based access control (RBAC) at the file and folder level?

 A. Azure Disk Storage

 B. Azure Data Lake Storage Most Voted

 C. Azure Blob storage

 D. Azure Queue storage

https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-access-control

You need to store data in Azure Blob storage for seven years to meet your company's compliance
requirements. The retrieval time of the data is unimportant. The solution must minimize storage
costs.
Which storage tier should you use?

 A. Archive

 B. Hot

 C. Cool
The answer is correct. Hot: optimized for storing data that is accessed frequently. Cool: optimized for storing data that is infrequently accessed and stored for at least 30 days. Archive: optimized for storing data that is rarely accessed and stored for at least 180 days, with flexible latency requirements (on the order of hours).

https://cloud.netapp.com/blog/azure-blob-storage-pricing-the-complete-guide-azure-cvo-blg#H1_4

Which type of non-relational data store supports a flexible schema, stores data as JSON files, and stores all the data for an entity in the same document?

 A. document Most Voted

 B. columnar

 C. graph

 D. time series

Document is correct
https://docs.microsoft.com/en-us/azure/architecture/guide/technology-choices/data-
store-overview#column-family-databases

https://docs.microsoft.com/en-us/azure/cosmos-db/faq
A key mechanism that allows Azure Data Lake Storage Gen2 to provide file system performance at object
storage scale and prices is the addition of a hierarchical namespace. This allows the collection of
objects/files within an account to be organized into a hierarchy of directories and nested subdirectories in
the same way that the file system on your computer is organized. With a hierarchical namespace enabled,
a storage account becomes capable of providing the scalability and cost-effectiveness of object storage,
with file system semantics that are familiar to analytics engines and frameworks.
One advantage of hierarchical namespace is atomic directory manipulation: Object stores approximate a
directory hierarchy by adopting a convention of embedding slashes (/) in the object name to denote path
segments.
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-namespace
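
For reference, the hierarchical namespace is opted into when the storage account is created; a hedged example with the Azure CLI (resource names and location are placeholders):
az storage account create --name mydatalake --resource-group myrg --location eastus --sku Standard_LRS --kind StorageV2 --hns true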

WRONG

Correct: By using the Gremlin API, you can query a graph database in Azure Cosmos DB using the Gremlin query language. Gremlin is a graph traversal language and the primary query language for graph databases. To query a graph database in Azure Cosmos DB using the Gremlin API, you would typically use a Gremlin console or a Gremlin client library. A general outline:
1. Connect to the Cosmos DB account: provide the connection details for your Azure Cosmos DB account, including the URI and authentication keys or tokens.
2. Instantiate a Gremlin client: use a Gremlin client library for your programming language of choice (e.g., Java, Python, .NET) to establish a connection to the Cosmos DB account.
3. Execute Gremlin queries: use the methods provided by the Gremlin client library to execute Gremlin queries against the graph database. These queries can traverse the graph, retrieve vertices (nodes) and edges (relationships), and perform various graph operations.
4. Process the results returned by the Gremlin queries as needed for your application logic.
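
A minimal sketch of that outline using the gremlinpython client library (account, database, graph, and key are placeholders):

from gremlin_python.driver import client, serializer

# Cosmos DB Gremlin endpoint; the username encodes the database and graph names.
gremlin = client.Client(
    "wss://myaccount.gremlin.cosmos.azure.com:443/",
    "g",
    username="/dbs/graphdb/colls/people",
    password="<key>",
    message_serializer=serializer.GraphSONSerializersV2d0(),
)
# Count the 'person' vertices; .all().result() blocks until the traversal finishes.
count = gremlin.submit("g.V().hasLabel('person').count()").all().result()
print(count)
gremlin.close()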

When provisioning an Azure Cosmos DB account, which feature provides redundancy within an
Azure region?

 A. multi-master replication

 B. Availability Zones Most Voted

 C. the strong consistency level


 D. automatic failover

With Availability Zone (AZ) support, Azure Cosmos DB will ensure replicas are placed across multiple
zones within a given region to provide high availability and resiliency to zonal failures.
Note: Azure Cosmos DB provides high availability in two primary ways. First, Azure Cosmos DB
replicates data across regions configured within a Cosmos account. Second, Azure Cosmos DB maintains
4 replicas of data within a region.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/high-availability

Your company needs to design a database that shows how changes in network traffic in one area
of a network affect network traffic in other areas of the network.
Which type of data store should you use?

 A. graph Most Voted

 B. key/value

 C. document

 D. columnar

Data as it appears in the real world is naturally connected. Traditional data modeling focuses on defining
entities separately and computing their relationships at runtime. While this model has its advantages,
highly connected data can be challenging to manage under its constraints.
A graph database approach relies on persisting relationships in the storage layer instead, which leads to
highly efficient graph retrieval operations. Azure Cosmos
DB's Gremlin API supports the property graph model.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/graph-introduction#introduction-to-graph-databases
Box 1: Yes -
Azure Databricks can consume data from SQL Databases using JDBC and from SQL Databases using the
Apache Spark connector.
The Apache Spark connector for Azure SQL Database and SQL Server enables these databases to act as
input data sources and output data sinks for Apache
Spark jobs.

Box 2: Yes -
You can stream data into Azure Databricks using Event Hubs.

Box 3: Yes -
You can run Spark jobs with data stored in Azure Cosmos DB using the Cosmos DB Spark connector.
Cosmos can be used for batch and stream processing, and as a serving layer for low latency access.
You can use the connector with Azure Databricks or Azure HDInsight, which provide managed Spark
clusters on Azure.
Reference:
https://docs.microsoft.com/en-us/azure/databricks/data/data-sources/sql-databases-azure
https://docs.microsoft.com/en-us/azure/databricks/scenarios/databricks-stream-from-eventhubs

Box 1: Azure Cosmos DB -


In Azure Cosmos DB's SQL (Core) API, items are stored as JSON. The type system and expressions are
restricted to deal only with JSON types.

Box 2: Azure Files -


Azure Files offers native cloud file sharing services based on the SMB protocol.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql-query-working-with-json
https://cloud.netapp.com/blog/azure-smb-server-message-block-in-the-cloud-for-azure-files

You need to store data by using Azure Table storage.


What should you create first?

 A. an Azure Cosmos DB instance

 B. a storage account Most Voted

 C. a blob container
 D. a table

First create an Azure storage account, then use Table service in the Azure portal to create a table.
Note: An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, and
tables.
Reference:
https://docs.microsoft.com/en-us/azure/storage/tables/table-storage-quickstart-portal
https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create
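
A minimal sketch with the azure-data-tables SDK (the connection string and names are placeholders); the storage account must exist before the table can be created inside it:

from azure.data.tables import TableServiceClient

# The storage account comes first; tables live inside it.
service = TableServiceClient.from_connection_string("<storage-account-connection-string>")
table = service.create_table_if_not_exists("Products")
# Every entity needs a PartitionKey and a RowKey.
table.create_entity({"PartitionKey": "hardware", "RowKey": "1001", "Name": "Widget"})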

You need to recommend a data store service that meets the following requirements:
✑ Native SQL API access
✑ Configurable indexes
What should you recommend?

 A. Azure Files

 B. Azure Blob storage

 C. Azure Table storage

 D. Azure Cosmos DB Most Voted

Azure Cosmos DB comes with native Core (SQL) API support.


In Azure Cosmos DB, data is indexed following indexing policies that are defined for each container. The
default indexing policy for newly created containers enforces range indexes for any string or number. This
policy can be overridden with your own custom indexing policy.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-manage-indexing-policy
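
A sketch of overriding the default indexing policy when a container is created (names and paths are placeholders): only /name is indexed and every other path is excluded:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
db = client.create_database_if_not_exists("appdb")
# Custom indexing policy: range-index /name only, exclude all other paths.
container = db.create_container_if_not_exists(
    id="items",
    partition_key=PartitionKey(path="/id"),
    indexing_policy={
        "indexingMode": "consistent",
        "includedPaths": [{"path": "/name/?"}],
        "excludedPaths": [{"path": "/*"}],
    },
)
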
Box 1: Yes -
For read access to the secondary region, configure your storage account to use read-access geo-redundant
storage (RA-GRS) or read-access geo-zone- redundant storage (RA-GZRS).

Box 2: No -

Box 3: Yes -

Box 4: Yes -
Azure Cosmos DB supports multi-region writes.
Reference:
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy
https://manojchoudhari.wordpress.com/2019/12/16/azure-cosmos-db-enable-multi-region-writes

A key mechanism that allows Azure Data Lake Storage Gen2 to provide file system performance at object
storage scale and prices is the addition of a hierarchical namespace. This allows the collection of
objects/files within an account to be organized into a hierarchy of directories and nested subdirectories in
the same way that the file system on your computer is organized. With a hierarchical namespace enabled,
a storage account becomes capable of providing the scalability and cost-effectiveness of object storage,
with file system semantics that are familiar to analytics engines and frameworks.
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-namespace

You manage an application that stores data in a shared folder on a Windows server.
You need to move the shared folder to Azure Storage.
Which type of Azure Storage should you use?

 A. queue

 B. blob

 C. file

 D. table
Azure file shares can be mounted concurrently by cloud or on-premises deployments of Windows, Linux,
and macOS. Azure file shares can also be cached on
Windows Servers with Azure File Sync for fast access near where the data is being used.
Reference:
https://azure.microsoft.com/en-us/services/storage/files/

You have an application that runs on Windows and requires access to a mapped drive.
Which Azure service should you use?

 A. Azure Files

 B. Azure Blob storage

 C. Azure Cosmos DB

 D. Azure Table storage

Azure Files is Microsoft's easy-to-use cloud file system. Azure file shares can be seamlessly used in
Windows and Windows Server.
To use an Azure file share with Windows, you must either mount it, which means assigning it a drive
letter or mount point path, or access it via its UNC path.
Reference:
https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows

https://docs.microsoft.com/en-us/azure/storage/files/storage-files-introduction It's correct.

CHECK THE ANSWER


Box 1: No -
The API determines the type of account to create. Azure Cosmos DB provides five APIs: Core (SQL) and
MongoDB for document data, Gremlin for graph data,
Azure Table, and Cassandra. Currently, you must create a separate account for each API.
Box 2: Yes -
Azure Cosmos DB uses partitioning to scale individual containers in a database to meet the performance
needs of your application. In partitioning, the items in a container are divided into distinct subsets called
logical partitions. Logical partitions are formed based on the value of a partition key that is associated
with each item in a container.

Box 3: No -
Logical partitions are formed based on the value of a partition key that is associated with each item in a
container.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/partitioning-overview

Question #: 132 CHECK

Your company is designing an application that will write a high volume of JSON data and will
have an application-defined schema.
Which type of data store should you use?

 A. columnar

 B. key/value

 C. document Most Voted

 D. graph

You need to recommend a non-relational data store that is optimized for storing and retrieving
text files, videos, audio streams, and virtual disk images. The data store must store data, some
metadata, and a unique ID for each file.
Which type of data store should you recommend?

 A. key/value

 B. columnar

 C. object

 D. document

Object storage is optimized for storing and retrieving large binary objects (images, files, video
and audio streams, large application data objects and documents, virtual machine disk images).
Large data files are also popularly used in this model, for example, delimiter file (CSV), parquet,
and ORC. Object stores can manage extremely large amounts of unstructured data.
Reference:
https://docs.microsoft.com/en-us/azure/architecture/guide/technology-choices/data-store-
overview

Question #: 134

Your company is designing a data store for internet-connected temperature sensors.


The collected data will be used to analyze temperature trends.
Which type of data store should you use?

 A. relational

 B. time series

 C. graph

 D. columnar

Time series data is a set of values organized by time. Time series databases typically collect
large amounts of data in real time from a large number of sources.
Updates are rare, and deletes are often done as bulk operations. Although the records written to a
time-series database are generally small, there are often a large number of records, and total data
size can grow rapidly.
Reference:
https://docs.microsoft.com/en-us/azure/architecture/guide/technology-choices/data-store-
overview

Question #: 135

Yes: You implement ADLS as a Storage Account. Azure Data Lake Storage is a type of
storage account in Azure, specifically optimized for big data analytics workloads. When
you create an Azure Data Lake Storage Gen2 account, you are essentially creating a
specialized type of Storage Account with additional capabilities tailored for data lake
scenarios. When you create an ADLS Gen2 account, it is provisioned under the hood as a
hierarchical namespace on top of Blob storage, which is part of Azure Storage. This means
that you can use your Azure Data Lake Storage account to store and analyze large
amounts of structured and unstructured data, leveraging features such as fine-grained
access control, hierarchical namespaces, and integration with big data analytics services
like Azure Databricks, Azure HDInsight, and Azure Synapse Analytics.

Reference:
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-get-started-portal
https://docs.microsoft.com/en-us/azure/storage/common/storage-account-overview
https://azure.microsoft.com/en-us/pricing/details/bandwidth/

Question #: 136

Reference: https://azure.microsoft.com/en-us/blog/a-technical-overview-of-azure-cosmos-db/
API: Gremlin. Container is projected as: Graph. Item is projected as: Nodes and Edges.

Question #: 137

At which two levels can you set the throughput for an Azure Cosmos DB account? Each correct
answer presents a complete solution.
NOTE: Each correct selection is worth one point.

 A. database Most Voted

 B. item

 C. container Most Voted

 D. partition
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput
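
A sketch of the two levels with the azure-cosmos SDK (placeholder names): throughput can be provisioned on the database, shared by its containers, and/or on an individual container:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
# Database-level throughput, shared across the containers inside it.
db = client.create_database_if_not_exists("appdb", offer_throughput=400)
# Container-level throughput, dedicated to this container.
container = db.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),
    offer_throughput=400,
)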

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/high-availability

Question #: 139

What is a characteristic of non-relational data?

 A. no indexes

 B. self-describing entities Most Voted

 C. a well-defined schema

 D. no unique key values

B. self-describing entities. One of the main characteristics of non-relational data is that it is self-describing, meaning that each data item includes information about its own structure and schema. This is in contrast to relational data, which requires a well-defined schema and predefined relationships between tables. Note that indexes and unique key values are not necessarily absent in non-relational data: many non-relational databases do support indexing and can enforce uniqueness constraints, but these features are often implemented differently than in relational databases.

You need to gather real-time telemetry data from a mobile application.


Which type of workload describes this scenario?

 A. Online Transaction Processing (OLTP)

 B. batch
 C. massively parallel processing (MPP)

 D. streaming

Reference:
https://docs.microsoft.com/en-in/azure/azure-monitor/overview

Questions from 141 to 200

Question #: 141
You need to gather real-time telemetry data from a mobile application.
Which type of workload describes this scenario?

 A. Online Transaction Processing (OLTP)

 B. batch

 C. massively parallel processing (MPP)

 D. streaming
Reference:
https://docs.microsoft.com/en-in/azure/azure-monitor/overview

Question #: 142
You have a dedicated SQL pool in Azure Synapse Analytics that is only used actively every
night for eight hours.
You need to minimize the cost of the dedicated SQL pool as much as possible during idle times.
The solution must ensure that the data remains intact.
What should you do on the dedicated SQL pool?

 A. Scale down the data warehouse units (DWUs).

 B. Pause the pool. Most Voted

 C. Create a user-defined restore point.

 D. Delete the pool

Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-
warehouse-manage-compute-overview
https://learn.microsoft.com/en-us/azure/synapse-analytics/plan-manage-costs
Dedicated SQL pool You can control costs for a dedicated SQL pool by pausing the resource
when it is not is use. For example, if you won't be using the database during the night and on
weekends, you can pause it during those times, and resume it during the day. For more
information, see Pause and resume compute in dedicated SQL pool via the Azure portal.
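
For example, the pool can also be paused and resumed from the Azure CLI (names are placeholders):
az synapse sql pool pause --name mypool --workspace-name myworkspace --resource-group myrg
az synapse sql pool resume --name mypool --workspace-name myworkspace --resource-group myrg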

Question #: 143
Which Azure Data Factory component initiates the execution of a pipeline?

 A. a control flow

 B. a trigger

 C. a parameter

 D. an activity

Reference:
https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipeline-execution-
triggers#trigger-execution
https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipeline-execution-triggers
Pipeline runs are typically instantiated by passing arguments to parameters that you define in the
pipeline. You can execute a pipeline either manually or by using a trigger.

Question #: 144
Your company has a reporting solution that has paginated reports. The reports query a
dimensional model in a data warehouse.
Which type of processing does the reporting solution use?

 A. stream processing

 B. batch processing

 C. Online Analytical Processing (OLAP)

 D. Online Transaction Processing (OLTP)

Reference:
https://datawarehouseinfo.com/how-does-oltp-differ-from-olap-database/

Question #: 146
What are three characteristics of an Online Transaction Processing (OLTP) workload? Each
correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

 A. denormalized data

 B. heavy writes and moderate reads Most Voted

 C. light writes and heavy reads

 D. schema on write Most Voted

 E. schema on read

 F. normalized data Most Voted

Reference:
https://docs.microsoft.com/en-us/azure/architecture/data-guide/relational-data/online-transaction-
processing
B: Transactional data tends to be heavy writes, moderate reads.
D: Typical traits of transactional data include schema on write, strongly enforced. What is schema on write? Schema on write is defined as creating a schema for data before writing it into the database. If you have done any kind of development with a database, you understand the structured nature of a relational database (RDBMS) because you have used Structured Query Language (SQL) to read data from the database.
F: Transactional data tends to be highly normalized.
Reference: Online transaction processing (OLTP)
https://learn.microsoft.com/en-us/azure/architecture/data-guide/relational-data/online-transaction-processing

Question #: 148

Answer is Control flow (https://docs.microsoft.com/en-us/azure/data-factory/introduction).
Control flow is an orchestration of pipeline activities that includes chaining activities in a sequence, branching, defining parameters at the pipeline level, and passing arguments while invoking the pipeline on-demand or from a trigger. It also includes custom-state passing and looping containers, that is, For-each iterators.

Question #: 149

You need to develop a solution to provide data to executives. The solution must provide an
interactive graphical interface, depict various key performance indicators, and support data
exploration by using drill down.
What should you use in Microsoft Power BI?

 A. a view

 B. a report
 C. a dataflow

 D. Microsoft Power Apps

https://docs.microsoft.com/en-us/power-bi/consumer/end-user-dashboards
https://docs.microsoft.com/en-us/power-bi/visuals/power-bi-visualization-kpi
https://docs.microsoft.com/en-us/power-bi/consumer/end-user-drill
https://learn.microsoft.com/en-us/power-bi/create-reports/service-dashboards#dashboards-
versus-reports

Question #: 150
Which two Azure services can be used to provision Apache Spark clusters? Each correct answer
presents a complete solution.
NOTE: Each correct selection is worth one point.

 A. Azure Time Series Insights

 B. Azure HDInsight

 C. Azure Databricks

 D. Azure Log Analytics

Correct answers:
1. Azure Synapse Analytics
2. Azure HDInsight
3. Azure Databricks
https://www.sqlshack.com/a-beginners-guide-to-azure-databricks/
https://docs.microsoft.com/en-us/azure/hdinsight/spark/apache-spark-overview
Question #: 151
You have a quality assurance application that reads data from a data warehouse.
Which type of processing does the application use?

 A. Online Transaction Processing (OLTP)

 B. batch processing

 C. Online Analytical Processing (OLAP) Most Voted

 D. stream processing
The quality assurance application that reads data from a data warehouse typically uses Online
Analytical Processing (OLAP) because it involves querying and analyzing historical data for
reporting and analysis purposes. OLAP is optimized for complex queries and aggregations on
large volumes of data, making it suitable for tasks like data analysis and business intelligence,
which align with quality assurance activities. So, the correct answer is: C. Online Analytical
Processing (OLAP)
Question #: 147
Which two activities can be performed entirely by using the Microsoft Power BI service without
relying on Power BI Desktop? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

 A. report and dashboard creation Most Voted

 B. report sharing and distribution Most Voted

 C. data modeling

 D. data acquisition and preparation

https://docs.microsoft.com/en-us/power-bi/fundamentals/service-service-vs-desktop

Question #: 152
Which three objects can be added to a Microsoft Power BI dashboard? Each correct answer
presents a complete solution.
NOTE: Each correct selection is worth one point.

 A. a report page Most Voted

 B. a Microsoft PowerPoint slide

 C. a visualization from a report Most Voted

 D. a dataflow

 E. a text box Most Voted

A,C,E
https://docs.microsoft.com/en-us/power-bi/create-reports/service-dashboard-pin-live-tile-from-
report
https://docs.microsoft.com/en-us/power-bi/create-reports/service-dashboard-add-widget
Question #: 153
Right answer is Yes, No, Yes.
A dashboard is associated with a single workspace.
A dashboard can show visualizations from different datasets and reports.
A dashboard can show visualizations from other tools, for example Excel.
https://docs.microsoft.com/en-us/power-bi/fundamentals/service-basic-concepts#dashboards
Question #: 155

Paginated reports in Power BI allow users to generate fixed-layout documents optimized for printing and archiving, such as PDF and Word files. These document-style reports give additional control over visualizations, such as which tables expand horizontally and vertically to display all their data and continue from page to page as needed.
Reference:
https://powerbi.microsoft.com/en-us/blog/announcing-paginated-reports-in-power-bi-general-
availability/
Question #: 158
Yes, No, Yes. The first one is Yes: you can copy a dashboard between Microsoft Power BI workspaces. Here's how: open the dashboard you want to copy, click the 'File' menu in the upper-left corner of the screen, select 'Save As' from the dropdown menu, provide a name for the new dashboard copy, choose the destination workspace where you want the copy to be saved, and click 'Save'.
https://learn.microsoft.com/en-us/power-bi/connect-data/service-datasets-copy-reports
What should you use to build a Microsoft Power BI paginated report?

 A. Charticulator

 B. Power BI Desktop

 C. the Power BI service

 D. Power BI Report Builder

Power BI Report Builder is the standalone tool for authoring paginated reports for the Power BI
service.
Reference:
https://docs.microsoft.com/en-us/power-bi/paginated-reports/paginated-reports-report-builder-
power-bi
Question #: 154
Which Azure Data Factory component provides the compute environment for activities?

 A. SSIS packages

 B. an integration runtime

 C. a control flow

 D. a pipeline

The answer is correct: an integration runtime (IR) is a compute infrastructure that provides the data integration capabilities for Azure Data Factory. The integration runtime provides the compute environment for executing activities, which are the building blocks of data pipelines in Azure Data Factory.
SSIS packages are a type of data integration solution that can be run on premises or in the cloud.
They can be integrated with Azure Data Factory, but they are not a component of the Data
Factory compute environment. A control flow is a logical representation of the steps that are
required to execute a workflow, and a pipeline is a collection of activities that are organized into
a workflow. While both components are important for building data pipelines in Azure Data
Factory, they do not provide the compute environment for executing activities.
https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime
Question #: 156
What are two uses of data visualization? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

 A. Represent trends and patterns over time

 B. Implement machine learning to predict future values

 C. Communicate the significance of data

 D. Enforce business logic across reports

Data visualization is a key component in being able to gain insight into your data. It helps make
big and small data easier for humans to understand. It also makes it easier to detect patterns,
trends, and outliers in groups of data.
Data visualization helps you find key business insights quickly and effectively.
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-data-visualization
Question #: 157
You need to use Transact-SQL to query files in Azure Data Lake Storage Gen 2 from an Azure
Synapse Analytics data warehouse.
What should you use to query the files?

 A. Azure Functions

 B. Microsoft SQL Server Integration Services (SSIS)

 C. PolyBase

 D. Azure Data Factory

PolyBase enables your SQL Server instance to process Transact-SQL queries that read data from
external data sources. SQL Server 2016 and higher can access external data in Hadoop and Azure
Blob Storage. Starting in SQL Server 2019, you can now use PolyBase to access external data in
SQL Server, Oracle, Teradata, and MongoDB.
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/load-data-overview
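
A hedged PolyBase sketch run against the dedicated SQL pool with pyodbc (the workspace, storage account, paths, and columns are all placeholders, and authentication to the storage account, e.g. a database scoped credential or managed identity, is omitted): an external table is defined over Parquet files in the lake and then queried with ordinary T-SQL:

import pyodbc

# Placeholder connection to the dedicated SQL pool.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=MyPool;"
    "UID=sqladmin;PWD=<password>",
    autocommit=True,
)
cur = conn.cursor()
# The PolyBase building blocks: file format, external data source, external table.
cur.execute("CREATE EXTERNAL FILE FORMAT ParquetFmt WITH (FORMAT_TYPE = PARQUET);")
cur.execute(
    "CREATE EXTERNAL DATA SOURCE LakeSrc "
    "WITH (LOCATION = 'abfss://data@myaccount.dfs.core.windows.net');"
)
cur.execute(
    "CREATE EXTERNAL TABLE dbo.SalesExt (SaleId INT, Amount DECIMAL(10, 2)) "
    "WITH (LOCATION = '/sales/', DATA_SOURCE = LakeSrc, FILE_FORMAT = ParquetFmt);"
)
# The external table now behaves like a regular table in T-SQL queries.
for row in cur.execute("SELECT TOP 10 * FROM dbo.SalesExt;"):
    print(row)
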
Question #: 159
What are three characteristics of an Online Transaction Processing (OLTP) workload? Each
correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

 A. denormalized data

 B. heavy writes and moderate reads Most Voted


 C. light writes and heavy reads

 D. schema defined in a database Most Voted

 E. schema defined when reading unstructured data from a database

 F. normalized data Most Voted

B: Transactional data tends to be heavy writes, moderate reads.


D: Typical traits of transactional data include schema on write, strongly enforced. The schema is
defined in a database.
F: Transactional data tends to be highly normalized.
Reference:
https://docs.microsoft.com/en-us/azure/architecture/data-guide/relational-data/online-transaction-
processing
Question #: 160
What is the primary purpose of a data warehouse?

 A. to provide answers to complex queries that rely on data from multiple sources Most
Voted

 B. to provide transformation services between source and target data stores

 C. to provide read-only storage of relational and non-relational historical data

 D. to provide storage for transactional line-of-business (LOB) applications

The answer should be A. In the textbook DP-900T00-A, Microsoft Azure Data Fundamentals, Module 4, Describe modern data warehousing: "A data warehouse gathers data from many different sources within an organization... The focus of a data warehouse is to provide answers to complex queries..."
Question #: 161

Match the Azure services to the appropriate locations in the architecture.


To answer, drag the appropriate service from the column on the left to its location on the right.
Each service may be used once, more than once, or not at all.
NOTE: Each correct match is worth one point.
Select and Place:
Box Ingest: Azure Data Factory -
You can build a data ingestion pipeline with Azure Data Factory (ADF).
Box Preprocess & model: Azure Synapse Analytics
Use Azure Synapse Analytics to preprocess data and deploy machine learning models.
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-data-ingest-adf
https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/sqldw-
walkthrough
Question #: 162

Box 1: Batch -
The batch processing model requires a set of data that is collected over time, while the stream processing model requires data to be fed into an analytics tool, often in micro-batches, and in real time.
The batch processing model handles a large batch of data, while the stream processing model handles individual records or micro-batches of a few records.
Batch processing operates over all or most of the data, but stream processing operates over a rolling window or the most recent record.

Box 2: Batch -

Box 3: Streaming -
Reference:
https://k21academy.com/microsoft-azure/dp-200/batch-processing-vs-stream-processing
Question #: 163
Note: The data warehouse workload encompasses:
✑ The entire process of loading data into the warehouse
✑ Performing data warehouse analysis and reporting
✑ Managing data in the data warehouse
✑ Exporting data from the data warehouse
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-
warehouse-workload-management
Question #: 164

Box 1: No -
A pipeline is a logical grouping of activities that together perform a task.

Box 2: Yes -
You can construct pipeline hierarchies with data factory.

Box 3: Yes -
A pipeline is a logical grouping of activities that together perform a task.
Reference:
https://mrpaulandrew.com/2019/09/25/azure-data-factory-pipeline-hierarchies-generation-
control/
Azure Data Factory has four key components:
1. Datasets: represent data structures within the data stores. An input dataset represents the input for an activity in the pipeline; an output dataset represents the output for the activity.
2. Pipeline: a group of activities, used to group activities into a unit that together performs a task.
3. Activities: the actions to perform on your data. Azure Data Factory supports two types of activities: data movement and data transformation.
4. Linked Services: the information needed for ADF to connect to external resources. For example, an Azure Storage linked service specifies a connection string to connect to the Azure Storage account.
Question #: 165

Answer should be:
Azure Synapse Analytics: output data to Parquet format.
Azure Data Lake Storage: store data that is in Parquet format.
Azure SQL Database: persist a tabular representation of data that is stored in Parquet format.
Question #: 166

Box 1: Yes -
Compute is separate from storage, which enables you to scale compute independently of the data
in your system.

Box 2: Yes -
You can use the Azure portal to pause and resume the dedicated SQL pool compute resources.
Pausing the data warehouse pauses compute. If your data warehouse was paused for the entire
hour, you will not be charged compute during that hour.

Box 3: No -
Storage is sold in 1 TB allocations. If you grow beyond 1 TB of storage, your storage account
will automatically grow to 2 TBs.
Reference:
https://azure.microsoft.com/en-us/pricing/details/synapse-analytics/
Question #: 168
Box 1: Azure Data factory -
Relevant Azure service for the three ETL phases are Azure Data Factory and SQL Server
Integration Services (SSIS).

Box 2: Azure Synapse Analytics -


You can copy and transform data in Azure Synapse Analytics by using Azure Data Factory
Note: Azure Synapse Analytics connector is supported for the following activities:
✑ Copy activity with supported source/sink matrix table
✑ Mapping data flow
✑ Lookup activity
✑ GetMetadata activity
Reference:
https://docs.microsoft.com/en-us/azure/architecture/data-guide/relational-data/etl
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse
Question #: 169
Reference:
https://docs.microsoft.com/en-us/azure/databricks/scenarios/what-is-azure-databricks
https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-overview
https://docs.microsoft.com/en-us/azure/data-factory/introduction
Question #: 170
Which scenario is an example of a streaming workload?

 A. sending transactions that are older than a month to an archive

 B. sending transactions daily from point of sale (POS) devices

 C. sending telemetry data from edge devices Most Voted

 D. sending cloud infrastructure metadata every 30 minutes

C. sending telemetry data from edge devices.


A streaming workload typically involves the continuous and real-time ingestion and processing
of data as it is generated. In the given scenario, sending telemetry data from edge devices
involves the continuous flow of data from these devices to a central system for immediate
analysis and processing. This aligns with the characteristics of a streaming workload where data
is ingested in real-time and processed as it arrives.
Question #: 171

Don't get tricked with this one:


A) Doesn't make sense, as there is no data processing in the background. Batch means processing a big volume of data at a particular time of the day (e.g., at night with idle VMs).
B) You can run once per day if you define it as such, but in any case you are limited by "one day". So it's no.
C) Nonsense. Batch processing is not designed for real-time/streaming data over the air; you would use a different solution, such as Databricks. That is the reason the Databricks service exists.
D) Correct. You can define your condition; once it's met, the batch executes.
Question #: 172

Question #: 173
A bar chart showing year-to-date sales by region is an example of which type of analytics?

 A. predictive

 B. prescriptive

 C. descriptive Most Voted (correct)

 D. diagnostic

Question #: 174
Yes - Stream processing has access to the most recent data received or data within a rolling time
window. Stream processing operates on data in near real-time, allowing for analysis and
processing of data as it is received or within a defined time window.

No - Batch processing is not required to occur immediately and can have higher latency. Batch
processing typically operates on larger volumes of data and is often performed at regular
intervals or in scheduled batches, which can have latency in the order of minutes, hours, or even
days.

Yes - Stream processing is commonly used for simple response functions, aggregates, or
calculations such as rolling averages. It enables real-time data analysis and enables quick
calculations and aggregations on streaming data as it arrives.
Question #: 175
You need to perform hybrid transactional and analytical processing (HTAP) queries against
Azure Cosmos DB data sources by using Azure Synapse Analytics.
What should you use?

 A. Synapse pipelines

 B. a Synapse SQL pool

 C. Synapse Link

 D. Synapse Studio

Synapse Link is a feature of Azure Cosmos DB that allows you to enable real-time analytics on
your Cosmos DB data, by creating a seamless connection between Azure Cosmos DB and Azure
Synapse Analytics. It enables you to run analytical queries against your Cosmos DB data using
Synapse SQL pool, and perform complex joins and aggregations across data stored in both
Cosmos DB and other data sources.
Hybrid Transactional and Analytical Processing (HTAP) is a technique for near real time
analytics without a complex ETL solution. In Azure Synapse Analytics, HTAP is supported
through Azure Synapse Link.
https://learn.microsoft.com/en-us/training/paths/work-with-hybrid-transactional-analytical-processing-solutions/
Question #: 176
You need to create a visualization of running sales totals per quarter as shown in the following
exhibit.

What should you create in Power BI Desktop?

 A. a waterfall chart

 B. a ribbon chart

 C. a bar chart

 D. a decomposition tree

Reference:
https://docs.microsoft.com/en-us/power-bi/visuals/power-bi-visualization-types-for-reports-and-
q-and-a
Question #: 181
You have an on-premises Microsoft SQL Server database.
You need to migrate the database to the cloud. The solution must meet the following
requirements:
* Minimize maintenance effort.
* Support the Database Mail and Service Broker features.
What should you include in the solution?

 A. Azure SQL Database single database

 B. an Azure SQL Database elastic pool

 C. Azure SQL Managed Instance Most Voted

 D. SQL Server on Azure virtual machines

Azure SQL Database does not support Service Broker or Database Mail; you need the managed instance.
https://docs.microsoft.com/en-us/azure/azure-sql/database/features-comparison?view=azuresql
Question #: 184
Which two features distinguish Delta Lake from Azure Data Lake Storage? Each correct answer
presents a complete solution.

NOTE: Each correct selection is worth one point.

 A. support for batch data

 B. schema enforcement

 C. support for an Apache Spark runtime

 D. transactional consistency

 E. support for streaming data

Question #: 185
A company plans to use Power Apps to connect to a series of custom services. There are no
connectors available for the custom services.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Yes - Custom connectors for customer-specific services are only available to the users within the
same tenant or environment as the creator of the custom connector. They do not need to go
through the review and certification process by Microsoft, which is only required for custom
connectors that are intended to be published and shared publicly.

Yes - Custom connectors are reusable components that can be used across Power Apps and
Power Automate. You can build a custom connector once and use it in multiple apps and flows if
you have the appropriate permissions and licenses.

No- Custom connectors that are meant to be publicly available for all Power Platform users need
to be certified by Microsoft before they can be published in the connector gallery.
Question #: 186
What is a function of a modern data warehouse?

 A. supports batch processing only

 B. supports real-time and batch processing Most Voted

 C. provides built-in or native online analytical processing

 D. stores raw data only

B. supports real-time and batch processing.


A modern data warehouse is designed to support both real-time and batch processing. It allows
organizations to ingest, process, and analyze data in real-time as it's generated, while also
supporting traditional batch processing for historical analysis. This flexibility enables businesses
to gain insights from their data in a timely manner and make informed decisions based on both
current and historical data.
Question #: 187

Hierarchical namespace
https://docs.microsoft.com/en-us/azure/storage/blobs/create-data-lake-storage-account
Question #: 188
What can be used with native notebook support to query and visualize data by using a web-based
interface?

 A. Azure Databricks

 B. pgAdmin

 C. Microsoft Power BI

Notebooks are a common tool in data science and machine learning for developing code and
presenting results. In Azure Databricks, notebooks are the primary tool for creating data science
and machine learning workflows and collaborating with colleagues. Databricks notebooks
provide real-time coauthoring in multiple languages, automatic versioning, and built-in data
visualizations.
Reference
Introduction to Databricks notebooks
https://learn.microsoft.com/en-us/azure/databricks/notebooks/
Question #: 189
Which format was used?

 A. XML

 B. HTML

 C. YAML

 D. JSON Most Voted

Question #: 190

Which format was used?

 A. JSON

 B. YAML

 C. HTML

 D. XML Most Voted

Question #: 191
Which database transaction property ensures that transactional changes to a database are
preserved during unexpected operating system restarts?

 A. consistency

 B. atomicity

 C. durability Most Voted


 D. isolation

C is correct. Durability: once a transaction has been committed, it remains committed and is saved to disk; after a system reboot or restart, the data is loaded back from disk, so no data loss occurs.
Question #: 194
Which database transaction property ensures that individual transactions are executed only once
and either succeed in their entirety or roll back?

 A. atomicity Most Voted

 B. durability

 C. isolation

 D. consistency

Atomicity is the right answer. An atomic transaction is an indivisible and irreducible series of
database operations such that either all occurs, or nothing occurs. A guarantee of atomicity
prevents updates to the database occurring only partially, which can cause greater problems than
rejecting the whole series outright. As a consequence, the transaction cannot be observed to be in
progress by another database client.
Question #: 196
Which Azure Storage service implements the key/value model?

 A. Azure Queue

 B. Azure Files

 C. Azure Table Most Voted

 D. Azure Blob

Has to be C: Azure Table storage is a service that stores non-relational structured data (also known as structured NoSQL data) in the cloud, providing a key/attribute store with a schemaless design. Because Table storage is schemaless, it's easy to adapt your data as the needs of your application evolve. Access to Table storage data is fast and cost-effective for many types of applications, and is typically lower in cost than traditional SQL for similar volumes of data.
Question #: 197
Answer: Semi-structured
Semi-structured data is data that does not conform to a strict relational model but still has some
organizational structure, often through key-value pairs or graph nodes and edges. In a graph
database, relationships between social media users and their followers are represented as edges
connecting different nodes (users), providing a form of structure but not as rigid as in traditional
relational databases.
Question #: 199
Which node in the Azure portal should you use to assign a user the Reader role for a resource
group? To answer, select the node in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Box 1: Overview -
When you assign roles, you must specify a scope. Scope is the set of resources the access applies
to. In Azure, you can specify a scope at four levels from broad to narrow: management group,
subscription, resource group, and resource.
1. Sign in to the Azure portal.
2. In the Search box at the top, search for the scope you want to grant access to. For example,
search for Management groups, Subscriptions, Resource groups, or a specific resource.
3. Click the specific resource for that scope.
4. The following shows an example resource group.
Box 2: Access control (IAM)
Access control (IAM) is the page that you typically use to assign roles to grant access to Azure
resources. It's also known as identity and access management
(IAM) and appears in several locations in the Azure portal.
1. Click Access control (IAM).
The following shows an example of the Access control (IAM) page for a resource group.

2. Click the Role assignments tab to view the role assignments at this scope.
3. Click Add > Add role assignment.
If you don't have permissions to assign roles, the Add role assignment option will be disabled.
4. The Add role assignment page opens.
Reference:
https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal
Question #: 200
You plan to deploy an app. The app requires a nonrelational data service that will provide latency
guarantees of less than 10-ms for reads and writes.
What should you include in the solution?
 A. Azure Blob storage

 B. Azure Files

 C. Azure Table storage

 D. Azure Cosmos DB Most Voted

D. Azure Cosmos DB The read latency for all consistency levels is always guaranteed to be less
than 10 milliseconds at the 99th percentile. The average read latency, at the 50th percentile, is
typically 4 milliseconds or less. The write latency for all consistency levels is always guaranteed
to be less than 10 milliseconds at the 99th percentile. The average write latency, at the 50th
percentile, is usually 5 milliseconds or less. Azure Cosmos DB accounts that span several regions
and are configured with strong consistency are an exception to this guarantee.
https://learn.microsoft.com/en-us/azure/cosmos-db/consistency-levels
