Power Query

Power Query documentation


Power Query is the data connectivity and data preparation technology that enables end
users to seamlessly import and reshape data from within a wide range of Microsoft
products, including Excel, Power BI, Analysis Services, Dataverse, and more.

Power Query Overview

OVERVIEW

What is Power Query?

What are dataflows?

Using Power Query

HOW-TO GUIDE

Use Power Query

Get data

Transform data

Working with dataflows

CONCEPT

Create and use dataflows in the Power Platform

Dataflow licenses

Dataflow best practices

Advanced Power Query articles

CONCEPT

Query folding
Query diagnostics

Error handling

Develop your custom connector

HOW-TO GUIDE

Custom connector development

Custom connector certification

Reference content

REFERENCE

Connectors in Power Query

M language reference
What is Power Query?
Article • 06/28/2023

Power Query is a data transformation and data preparation engine. Power Query comes
with a graphical interface for getting data from sources and a Power Query Editor for
applying transformations. Because the engine is available in many products and services,
the destination where the data will be stored depends on where Power Query was used.
Using Power Query, you can perform the extract, transform, and load (ETL) processing of
data.

How Power Query helps with data acquisition


Business users spend up to 80 percent of their time on data preparation, which delays
the work of analysis and decision-making. Several challenges contribute to this situation,
and Power Query helps address many of them.

Existing challenge: Finding and connecting to data is too difficult.
How Power Query helps: Power Query enables connectivity to a wide range of data sources, including data of all sizes and shapes.

Existing challenge: Experiences for data connectivity are too fragmented.
How Power Query helps: Consistency of experience, and parity of query capabilities over all data sources.

Existing challenge: Data often needs to be reshaped before consumption.
How Power Query helps: Highly interactive and intuitive experience for rapidly and iteratively building queries over any data source, of any size.

Existing challenge: Any shaping is one-off and not repeatable.
How Power Query helps: When using Power Query to access and transform data, you define a repeatable process (query) that can be easily refreshed in the future to get up-to-date data. In the event that you need to modify the process or query to account for underlying data or schema changes, you can use the same interactive and intuitive experience you used when you initially defined the query.

Existing challenge: Volume (data sizes), velocity (rate of change), and variety (breadth of data sources and data shapes).
How Power Query helps: Power Query offers the ability to work against a subset of the entire dataset to define the required data transformations, allowing you to easily filter down and transform your data to a manageable size. Power Query queries can be refreshed manually or by taking advantage of scheduled refresh capabilities in specific products (such as Power BI) or even programmatically (by using the Excel object model). Because Power Query provides connectivity to hundreds of data sources and over 350 different types of data transformations for each of these sources, you can work with data from any source and in any shape.

Power Query experiences


The Power Query user experience is provided through the Power Query Editor user
interface. The goal of this interface is to help you apply the transformations you need
simply by interacting with a user-friendly set of ribbons, menus, buttons, and other
interactive components.

The Power Query Editor is the primary data preparation experience, where you can
connect to a wide range of data sources and apply hundreds of different data
transformations by previewing data and selecting transformations from the UI. These
data transformation capabilities are common across all data sources, whatever the
underlying data source limitations.

When you create a new transformation step by interacting with the components of the
Power Query interface, Power Query automatically creates the M code required to do
the transformation so you don't need to write any code.

Currently, two Power Query experiences are available:

Power Query Online: Found in integrations such as Power BI dataflows, Microsoft Power Platform dataflows, Azure Data Factory wrangling dataflows, and many more that provide the experience through an online webpage.
Power Query for Desktop: Found in integrations such as Power Query for Excel and Power BI Desktop.

Note

Although two Power Query experiences exist, they both provide almost the same
user experience in every scenario.

Transformations
The transformation engine in Power Query includes many prebuilt transformation
functions that can be used through the graphical interface of the Power Query Editor.
These transformations can be as simple as removing a column or filtering rows, or as
common as using the first row as a table header. There are also advanced
transformation options such as merge, append, group by, pivot, and unpivot.

All these transformations are made possible by choosing the transformation option in the menu, and then applying the options required for that transformation. A few of the transformations available in the Power Query Editor are shown in the sketch below.
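This is a rough, hedged sketch, not taken from the article: it chains a few common transformations over a small made-up table, and the table contents, column names, and step names are assumptions.

Power Query M

let
    // Hypothetical in-memory table; in practice this step is generated by the connector you choose
    Source = #table(
        {"Region", "Product", "Units", "Obsolete"},
        {{"West", "Widget", 12, "x"}, {"East", "Widget", 7, "x"}, {"West", "Gadget", 3, "x"}}
    ),
    // Remove a column
    RemovedObsolete = Table.RemoveColumns(Source, {"Obsolete"}),
    // Filter rows
    FilteredRows = Table.SelectRows(RemovedObsolete, each [Units] > 5),
    // Group by, one of the more advanced transformations mentioned above
    Grouped = Table.Group(FilteredRows, {"Region"}, {{"Total Units", each List.Sum([Units]), Int64.Type}})
in
    Grouped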

More information: Quickstart: Using Power Query in Power BI

Dataflows
Power Query can be used in many products, such as Power BI and Excel. However, using
Power Query within a product limits its usage to only that specific product. Dataflows
are a product-agnostic service version of the Power Query experience that runs in the
cloud. Using dataflows, you can get data and transform data in the same way, but
instead of sending the output to Power BI or Excel, you can store the output in other
storage options such as Dataverse or Azure Data Lake Storage. This way, you can use the
output of dataflows in other products and services.

More information: What are dataflows?

Power Query M formula language


In any data transformation scenario, there are some transformations that can't be done
in the best way by using the graphical editor. Some of these transformations might
require special configurations and settings that the graphical interface doesn't currently
support. The Power Query engine uses a scripting language behind the scenes for all
Power Query transformations: the Power Query M formula language, also known as M.

The M language is the data transformation language of Power Query. Anything that
happens in the query is ultimately written in M. If you want to do advanced
transformations using the Power Query engine, you can use the Advanced Editor to
access the script of the query and modify it as you want. If you find that the user
interface functions and transformations won't perform the exact changes you need, use
the Advanced Editor and the M language to fine-tune your functions and
transformations.

Power Query M

let
    Source = Exchange.Contents("[email protected]"),
    Mail1 = Source{[Name="Mail"]}[Data],
    #"Expanded Sender" = Table.ExpandRecordColumn(Mail1, "Sender", {"Name"}, {"Name"}),
    #"Filtered Rows" = Table.SelectRows(#"Expanded Sender", each ([HasAttachments] = true)),
    #"Filtered Rows1" = Table.SelectRows(#"Filtered Rows", each ([Subject] = "sample files for email PQ test") and ([Folder Path] = "\Inbox\")),
    #"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows1", {"Attachments"}),
    #"Expanded Attachments" = Table.ExpandTableColumn(#"Removed Other Columns", "Attachments", {"Name", "AttachmentContent"}, {"Name", "AttachmentContent"}),
    #"Filtered Hidden Files1" = Table.SelectRows(#"Expanded Attachments", each [Attributes]?[Hidden]? <> true),
    #"Invoke Custom Function1" = Table.AddColumn(#"Filtered Hidden Files1", "Transform File from Mail", each #"Transform File from Mail"([AttachmentContent])),
    #"Removed Other Columns1" = Table.SelectColumns(#"Invoke Custom Function1", {"Transform File from Mail"}),
    #"Expanded Table Column1" = Table.ExpandTableColumn(#"Removed Other Columns1", "Transform File from Mail", Table.ColumnNames(#"Transform File from Mail"(#"Sample File"))),
    #"Changed Type" = Table.TransformColumnTypes(#"Expanded Table Column1", {{"Column1", type text}, {"Column2", type text}, {"Column3", type text}, {"Column4", type text}, {"Column5", type text}, {"Column6", type text}, {"Column7", type text}, {"Column8", type text}, {"Column9", type text}, {"Column10", type text}})
in
    #"Changed Type"

More information: Power Query M formula language

Where can you use Power Query?


The following table lists Microsoft products and services where Power Query can be
found.

Product                            M engine (1)   Power Query Desktop (2)   Power Query Online (3)   Dataflows (4)

Excel for Windows                  Yes            Yes                       No                       No
Excel for Mac                      Yes            Yes                       No                       No
Power BI                           Yes            Yes                       Yes                      Yes
Power Apps                         Yes            No                        Yes                      Yes
Power Automate                     Yes            No                        Yes                      No
Power BI Report Server             Yes            Yes                       No                       No
Azure Data Factory                 Yes            No                        Yes                      Yes
Data Factory in Microsoft Fabric   Yes            No                        Yes                      Yes
SQL Server Integration Services    Yes            No                        No                       No
SQL Server Analysis Services       Yes            Yes                       No                       No
Dynamics 365 Customer Insights     Yes            No                        Yes                      Yes

(1) M engine: The underlying query execution engine that runs queries expressed in the Power Query formula language ("M").

(2) Power Query Desktop: The Power Query experience found in desktop applications.

(3) Power Query Online: The Power Query experience found in web browser applications.

(4) Dataflows: Power Query as a service that runs in the cloud and is product-agnostic. The stored result can be used in other applications and services.

See also
Data sources in Power Query
Getting data
Power Query quickstart
Shape and combine data using Power Query
What are dataflows
Getting data overview
Article • 04/10/2023

Power Query can connect to many different data sources so you can work with the data
you need. This article walks you through the steps for bringing in data to Power Query
either in Power Query Desktop or Power Query Online.

Connecting to a data source with Power Query follows a standard set of stages before
landing the data at a destination. This article describes each of these stages.

Important

In some cases, a connector might have all stages of the get data experience, and in other cases a connector might have just a few of them. For more information about the experience of a specific connector, go to the documentation available for that connector by searching the Connectors in Power Query article.

Power Query Desktop experience


The stages for getting data in the Power Query Desktop experiences are:

1. Connection settings

2. Authentication

3. Data preview

4. Query destination

1. Connection settings
Most connectors initially require at least one parameter to initialize a connection to the
data source. For example, the SQL Server connector requires at least the host name to
establish a connection to the SQL Server database.
In comparison, when trying to connect to an Excel file, Power Query requires that you
use the file path to find the file you want to connect to.

The connector parameters are commonly used to establish a connection to a data source, and they, in conjunction with the connector used, define what's called a data source path.
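As a hedged sketch of how those parameters surface in M (the server and database names below are placeholders, not values from the article), the SQL Server example corresponds to a query whose first step calls the connector function with those parameters; together with the function, they form the data source path.

Power Query M

let
    // "myserver.contoso.com" and "AdventureWorks" are hypothetical values
    Source = Sql.Database("myserver.contoso.com", "AdventureWorks")
in
    Source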

Note

Some connectors don't require you to enter any parameters at all. These are called
singleton connectors and will only have one data source path available per
environment. Some examples are Adobe Analytics, MailChimp, and Google
Analytics.

2. Authentication
Every single connection that's made in Power Query has to be authenticated. The
authentication methods vary from connector to connector, and some connectors might
offer multiple methods of authentication.

The currently available methods of authentication for Power Query are:

Anonymous: Commonly used when connecting to a data source that doesn't require user authentication, such as a webpage or a file available over public HTTP.
Basic: A username and password sent in base64 encoding are accepted for authentication.
API Key: A single API key is accepted for authentication.
Organizational account or Microsoft account: This method is also known as OAuth 2.0.
Windows: Can be implicit or explicit.
Database: This is only available in some database connectors.

For example, the available authentication methods for the SQL Server database
connector are Windows, Database, and Microsoft account.

3. Data preview
The goal of the data preview stage is to provide you with a user-friendly way to preview
and select your data.

Depending on the connector that you're using, you can preview data by using either:

Navigator window
Table preview dialog box

Navigator window (navigation table)


The Navigator window consists of two main sections:

The object selection pane is displayed on the left side of the window. The user can
interact with and select these objects.

Note

For Power Query in Excel, select the Select multiple items option from the
upper-left corner of the navigation window to select more than one object at
a time in the object selection pane.
Note

The list of objects in Power Query Desktop is limited to 10,000 items. This limit
does not exist in Power Query Online. For a workaround in Power Query
Desktop, see Object limitation workaround.

The data preview pane on the right side of the window shows a preview of the
data from the object you selected.

Object limitation workaround

There’s a fixed limit of 10,000 objects in the Navigator in Power Query Desktop. This
limit doesn’t occur in Power Query Online. Eventually, the Power Query Online UI will
replace the one in the desktop.

In the interim, you can use the following workaround:

1. Right-click on the root node of the Navigator, and then select Transform Data.
2. Power Query Editor then opens with the full navigation table in the table preview
area. This view doesn't have a limit on the number of objects, and you can use
filters or any other Power Query transforms to explore the list and find the rows
you want (for example, based on the Name column).

3. Upon finding the item you want, you can get at the contents by selecting the data link (such as a Table link) in its row.
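As a hedged sketch (the server name and the filter text are assumptions), the kind of query this workaround produces for a SQL Server source might look like the following, where the full navigation table is filtered on the Name column.

Power Query M

let
    // Hypothetical server; Sql.Databases returns the full navigation table
    Source = Sql.Databases("myserver.contoso.com"),
    // Filter the navigation table instead of scrolling through the Navigator list
    FilteredRows = Table.SelectRows(Source, each Text.Contains([Name], "Sales"))
in
    FilteredRows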

Table preview dialog box


The table preview dialog box consists of only one section for the data preview. An
example of a connector that provides this experience and window is the Folder
connector.

4. Query destination
This is the stage in which you specify where to load the query. The options vary from
integration to integration, but the one option that's always available is loading data to
the Power Query editor to further transform and enrich the query.
Power Query Online experience
The stages for getting data in Power Query Online are:

1. Connection settings and Authentication

2. Data preview

3. Query editor

1. Connection settings and authentication


In the Power Query Online experience, you begin with the Connect to data source page
where you enter values in two distinct sections:

Connection settings

Connection credentials

Connection settings
In the connection settings section, you define the information needed to establish a
connection to your data source. Depending on your connector, that could be the name
of the server, the name of a database, a folder path, a file path, or other information
required by the connector to establish a connection to your data source. Some
connectors also enable specific subsections or advanced options to give you more
control and options when connecting to your data source.
Connection credentials
The first time that you use Power Query to connect to a specific data source, you're
required to create a new connection associated with that data source. A connection is
the full definition of the gateway, credentials, privacy levels, and other connector-
specific fields that make up the connection credentials required to establish a
connection to your data source.

Note

Some connectors offer specific fields inside the connection credentials section to
enable or define any sort of security related to the connection that needs to be
established. For example, the SQL Server connector offers the Use Encrypted
Connection field.
The primary information required by all connectors to define a connection are:

Connection name: This is the name that you can define to uniquely identify your
connections. Note that you can't duplicate the name of a connection in your
environment.
Data gateway: If your data source requires a data gateway, select the gateway
using the dropdown list from this field.
Authentication kind & credentials: Depending on the connector, you're presented
with multiple authentication kind options that are available to establish a
connection, as well as fields where you enter your credentials. For this example, the
Windows authentication kind has been selected and you can see the Username
and Password fields that need to be filled in to establish a connection.
Privacy level: You can define the privacy level for your data source to be either
None, Private, Organizational, or Public.
Note

To learn more about what data gateways are and how to register a new gateway for
your environment or tenant, go to Using on-premises data gateway.

Important

Some Power Query integrations don't currently enable a defined connection or a privacy level. But, all Power Query Online experiences do provide a way to define the data gateway, authentication kind, and the credentials needed to establish a connection with your data source.

Once you've defined a connection in Power Query Online, you can reuse the same
connection later on without reentering all this information again. The Connection field
offers a dropdown menu where you select your already defined connections. Once
you've selected your already defined connection, you don't need to enter any other
details before selecting Next.

After you select a connection from this menu, you can also make changes to the
credentials, privacy level, and other connector-specific fields for your data source in your
project. Select Edit connection, and then change any of the provided fields.
2. Data preview
The goal of the data preview stage is to provide you with a user-friendly way to preview
and select your data.

Depending on the connector that you're using, you can preview data by using either:

Navigator window
Table preview dialog box

Navigator window (navigation table) in Power Query Online


The Navigator window consists of two main sections:

The object selection pane is displayed on the left side of the window. The user can
interact with and select these objects.

The data preview pane on the right side of the window shows a preview of the
data from the object you selected.

Table preview dialog box in Power Query Online


The table preview dialog box consists of only one section for the data preview. An
example of a connector that provides this experience and window is the Folder
connector.

3. Query editor
For Power Query Online, you're required to load the data into the Power Query editor
where you can further transform and enrich the query if you choose to do so.

Additional information
To better understand how to get data using the different product integrations of Power
Query, go to Where to get data.
Where to get data
Article • 07/28/2023

Getting data from available data sources is usually the first encounter you have with
Power Query. This article provides basic steps for getting data from each of the
Microsoft products that include Power Query.

Note

Each of these Power Query get data experiences contains a different feature set.
More information: Where can you use Power Query?

Get data in Power BI Desktop


The procedures for where to start getting data in Power BI Desktop are described in
Data sources in Power BI Desktop.

In Power BI Desktop, you can also directly select an Excel worksheet, a Power BI dataset,
a SQL Server database, or Dataverse data without using the Get data option. You can
also enter data directly in a table, or select from a data source that was recently used.

Get data in Excel (Windows or Mac)


To get data in Excel for Windows or Excel for Mac:

1. From the Data ribbon, select Get Data.

2. Scroll through the category selections in the get data context menu, and select the
connector you want to use.
You'll then be asked to fill out information that's required for you to access the data. Go
to the individual connector articles for more information about this required
information.

In Excel, you can also directly select Text/CSV, Web, and Excel worksheet data without
using the Get Data option. You can also select from a data source that was recently used
and from existing connections or tables.

Note

Not all Excel versions support all of the same Power Query connectors. For a
complete list of the Power Query connectors supported by all versions of Excel for
Windows and Excel for Mac, go to Power Query data sources in Excel versions .
Get data in Power BI service
To get data in Power BI service:

1. On the left side of Power BI service, select Workspaces (but not My Workspace).

2. From the Workspace context menu, select the workspace you want to use.

3. From the workspace (in this example, TestWorkspace01), select the context menu
next to +New.

4. From the New context menu, select Dataflow.

5. From Define new tables, select Add new tables.

6. In the Choose data source page, use Search to search for the name of the connector, or select View more on the right-hand side to see a list of all the connectors available in the Power BI service.

7. If you choose to view more connectors, you can still use Search to search for the
name of the connector, or choose a category to see a list of connectors associated
with that category.

Get data in Power Apps


There are several places in Power Apps where you can get data, either by importing data
to a table or by creating a dataflow.

To import data to a new table in Power Apps:

1. On the left side of Power Apps, select Dataverse > Tables.


2. In the Tables pane, at the top left, select Import > Import data.

You can also choose to get data directly from an Excel worksheet without using the
Import data option.

3. In the Choose data source page, use Search to search for the name of the connector, or select View more on the right-hand side to see a list of all the available connectors.

4. If you choose to view more connectors, you can still use Search to search for the
name of the connector, or choose a category to see a list of connectors associated
with that category.

To import data to an existing table in Power Apps:

1. On the left side of Power Apps, select Dataverse > Tables.

2. In the Tables pane, either:

Select a table in the Tables pane that you want to import data to, and then
select Import > Import data.
Open the table to its individual pane, and then select Import > Import data.

In either case, you can also choose to get data from an Excel worksheet without
using the Import data option.

3. Select the connector from the list of data sources.

To get data in Power Apps when creating a dataflow:

1. On the left side of Power Apps, select Dataverse > Dataflows.


2. If a dataflow already exists:
a. Double-click on the dataflow.
b. From the Power Query editor, select Get data.
c. Select the connector from the list of data sources.

3. If no dataflow exists and you want to create a new dataflow:

a. Select New dataflow.

b. In the New dataflow dialog box, enter a name for your new dataflow.

c. Select Create.

d. Select the connector from the list of data sources.

Get data in Dynamics 365 Customer Insights


To get data in Customer Insights:

1. On the left side of Customer Insights, select Data > Data sources.

2. In the Data sources pane, select Add data source.

3. In Choose your import method, choose Microsoft Power Query.

4. In Save data source as, enter a name for your data source.

5. Select Next.

6. In the Choose data source page, use Search to search for the name of the connector, or select View more on the right-hand side to see a list of all the available connectors.

7. If you choose to view more connectors, you can still use Search to search for the
name of the connector, or choose a category to see a list of connectors associated
with that category.

Get data from Data Factory in Microsoft Fabric (Preview)

To get data in Data Factory:

1. On the left side of Data Factory, select Workspaces (but not My Workspace).

2. From your Data Factory workspace, select New > Dataflow Gen2 (Preview) to
create a new dataflow.

3. In Power Query, either select Get data in the ribbon or select Get data from
another source in the current view.

4. In the Choose data source page, use Search to search for the name of the connector, or select View more on the right-hand side to see a list of all the available connectors.

5. If you choose to view more connectors, you can still use Search to search for the
name of the connector, or choose a category to see a list of connectors associated
with that category.

Get data in Analysis Services


With Analysis Services, a solution is developed in Visual Studio with the Analysis Services
projects extension. The solution is then deployed to SQL Server Analysis Services, Azure
Analysis Services, or a Power BI Premium workspace.

The Analysis Services documentation contains the following information that describes
the process for getting data:
To set up a Visual Studio solution with the Analysis Services projects extension:
Create a tabular model project

To get data: Get data

Get data in Power Automate


There are a couple of places in Power Automate where you can get data, either by importing tables or in process advisor.

To get data by importing tables:

1. On the left side of Power Automate, select Data > Tables. At this point, a new tab
with Power Apps will open in your browser.

2. In the Power Apps tab, follow the instructions for importing data to either a new
table or to an existing table in the Power Apps section.

For information about how to get data in process advisor, go to Connect to a data
source.

Get data in Azure Data Factory


Azure Data Factory doesn't use the same Power Query get data interface as the other
apps described in this article. Instead, Azure Data Factory uses its own built-in
functionality to import data.

However, Azure Data Factory does use Power Query to transform data in data wrangling.
The following Azure Data Factory articles describe how to use Power Query for data
wrangling:

What is data wrangling?


Power Query activity in Azure Data Factory
Prepare data with data wrangling

Get data in SQL Server Integration Services


SQL Server Integration Services (SSIS) doesn't use the same Power Query get data
interface as the other apps described in this article. Instead, SSIS uses its own built-in
functionality to import data.
For information on installing, configuring, and using the Power Query Source for SSIS,
go to Power Query Source.
Get data experience
Article • 09/22/2023

Getting data from available data sources is usually the first encounter you have with
Power Query. This article provides an explanation of the different modules in the
modern get data experience.

Note

Each of these Power Query get data experiences contains a different feature set.
More information: Where can you use Power Query?

Modern get data experience


The get data experience allows you to discover and connect to your data.

The procedures for where to start getting data in Power BI Desktop are described in
Data sources in Power BI Desktop.

The new modular experience in Power Query Online is separated into different modules
located on the left side navigation bar. These modules include:

Home (all)
Templates (Power BI service only)
OneLake Data Hub (Fabric only)
New (all)
Upload (all)
Blank Table (all)
Blank Query (all)

Home
The home page acts as a summary of all the modules and presents you with different options to expedite the process and get you closer to your data. Typically, this module contains any existing data sources and gives you the option to use a new data source, create a blank table, or upload files. From the home page, you can select View more on the right side of the New sources and OneLake data hub sections to visit those modules.

Templates
A dataflow template provides a predefined set of entities and field mappings to enable
flow of data from your source to the destination, in the Common Data Model. A
dataflow template commoditizes the movement of data, which in turn reduces overall
burden and cost for a business user. It provides you with a head start to ingest data
wherein you don’t need to worry about knowing and mapping the source and
destination entities and fields—we do it for you, through dataflow templates. For more
information about templates, go to Introducing dataflow templates; A quick and
efficient way to build your sales leaderboard and get visibility over your sales pipeline .

OneLake data hub


The OneLake data hub makes it easy to find, explore, and use the Fabric data items in
your organization that you have access to. It provides information about the items and
entry points for working with them. For more information on the OneLake data hub, go
to OneLake Data Hub.

New
The new module provides a full list of connectors that you can select from. On this page,
you can search for a connector across all categories by using the search bar at the top of
page. You can also navigate across the categories to find a specific connector to
integrate with. Selecting a connector here opens the connection settings window, which
begins the process of connecting. For more information on using connectors, go to
Getting data overview.

Upload
The Upload module lets you upload your files directly. The following connectors support
this capability:

Excel
JSON
PDF
Text/CSV
XML

This module is an extension of this capability and lets you select the browse button to
upload a local file, or even drag and drop a file. For more information on uploading files,
go to Upload a file.

Blank table
The Blank table module provides a quick start in creating a table in a dataflow.

Blank query
The Blank query module lets you write or paste your own M script to create a new
query.
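For instance, a short M script you might paste into a blank query could look like the following sketch; the table contents are made up purely for illustration.

Power Query M

let
    // Build a small in-memory table and sort it; the data is illustrative only
    Source = #table({"Product", "Units"}, {{"Widget", 10}, {"Gadget", 4}, {"Gizmo", 7}}),
    Sorted = Table.Sort(Source, {{"Units", Order.Descending}})
in
    Sorted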
Authentication with a data source
Article • 02/17/2023

When you attempt to connect to a data source using a new connector for the first time, you might be asked to select the authentication method to use when accessing the data. After you've selected the authentication method, you won't be asked to select an authentication method again for that connector when you use the specified connection parameters. However, if you need to change the authentication method later, you can do so.

Select an authentication method


Different connectors show different authentication methods. For example, the OData
Feed connector in Power BI Desktop and Excel displays the following authentication
method dialog box.

If you're using a connector from an online app, such as the Power BI service or Power
Apps, you'll see an authentication method dialog box for the OData Feed connector that
looks something like the following image.
As you can see, a different selection of authentication methods is presented from an
online app. Also, some connectors might ask you to enter the name of an on-premises
data gateway to be able to connect to your data.

Set the level of the authentication method


In connectors that require you to enter a URL, you'll be asked to select the level to which
the authentication method will be applied. For example, if you select the Web connector
with a URL of https://contoso.com/2020data/List_of_countries_and_capitals , the
default level setting for your authentication method will be https://contoso.com .

The level you select for the authentication method you chose for this connector
determines what part of a URL will have the authentication method applied to it. If you
select the top-level web address, the authentication method you select for this
connector will be used for that URL address or any subaddress within that address.
However, you might not want to set the top-level address to a specific authentication
method because different subaddresses can require different authentication methods.
One example might be if you were accessing two separate folders of a single SharePoint
site and wanted to use different Microsoft accounts to access each one.

After you've set the authentication method for a connector's specific address, you won't
need to select the authentication method for that connector using that URL address or
any subaddress again. For example, let's say you select the https://contoso.com/
address as the level you want the Web connector URL settings to apply to. Whenever
you use a Web connector to access any webpage that begins with this address, you
won't be required to select the authentication method again.
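As a hedged illustration of how that scoping can be kept explicit in M (the split of the URL below is an assumption, not part of the article), a Web query can pass the top-level address as the data source and the rest as a relative path, so the stored credential stays associated with https://contoso.com.

Power Query M

let
    // The credential is stored against https://contoso.com, so other queries
    // under that base address reuse the same authentication method
    Source = Web.Contents(
        "https://contoso.com",
        [RelativePath = "2020data/List_of_countries_and_capitals"]
    )
in
    Source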

Change the authentication method


In some cases, you might need to change the authentication method you use in a
connector to access a specific data source.

To edit the authentication method in Power BI Desktop or Excel

1. Do one of the following:

In Power BI Desktop, on the File tab, select Options and settings > Data
source settings.
In Excel, on the Data tab, select Get Data > Data Source Settings.

2. In the Data source settings dialog box, select Global permissions, choose the
website where you want to change the permission setting, and then select Edit
Permissions.

3. In the Edit Permissions dialog box, under Credentials, select Edit.


4. Change the credentials to the type required by the website, select Save, and then
select OK.

You can also delete the credentials for a particular website in step 3 by selecting Clear
Permissions for a selected website, or by selecting Clear All Permissions for all of the
listed websites.

To edit the authentication method in online services, such as for dataflows in the
Power BI service and Microsoft Power Platform

1. Select the connector, and then select Edit connection.

2. Make the required changes, and then select Next.


Connecting with Azure Active Directory using
the Web and OData connectors
When connecting to data sources and services that require authentication through
OAuth or Azure Active Directory-based authentication, in certain cases where the service
is configured correctly, you can use the built-in Web or OData connectors to
authenticate and connect to data without requiring a service-specific or custom
connector.

This section outlines connection symptoms when the service isn't configured properly. It
also provides information on how Power Query interacts with the service when it's
properly configured.

Symptoms when the service isn't configured properly


If you run into the error We were unable to connect because this credential type isn't supported for this resource. Please choose another credential type, it means that your service doesn't support that authentication type.

One example of this is the Northwind OData service.

1. Enter the Northwind endpoint in the "Get Data" experience using the OData
connector.

2. Select OK to enter the authentication experience. Normally, because Northwind isn't an authenticated service, you would just use Anonymous. To demonstrate lack of support for Azure Active Directory, choose Organizational account, and then select Sign in.
3. You'll encounter the error, indicating that OAuth or Azure Active Directory
authentication isn't supported in the service.

Supported workflow
One example of a supported service working properly with OAuth is CRM, for example,
https://*.crm.dynamics.com/api/data/v8.2 .

1. Enter the URL in the "Get Data" experience using the OData connector.
2. Select Organizational Account, and then select Sign-in to proceed to connect
using OAuth.

3. The request succeeds and the OAuth flow continues to allow you to authenticate
successfully.
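Expressed in M, such a connection is typically just an OData.Feed call against the service URL; in the hedged sketch below, the organization name is a placeholder, and the Organizational account credential is supplied through the authentication dialog rather than in the query itself.

Power Query M

let
    // Placeholder Dynamics organization; authentication is configured in the
    // credential dialog (Organizational account), not in the query text
    Source = OData.Feed("https://contoso.crm.dynamics.com/api/data/v8.2")
in
    Source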

When you select Sign-in in Step 2 above, Power Query sends a request to the provided
URL endpoint with an Authorization header with an empty bearer token.

GET https://*.crm.dynamics.com/api/data/v8.2 HTTP/1.1


Authorization: Bearer
User-Agent: Microsoft.Data.Mashup (https://go.microsoft.com/fwlink/?
LinkID=304225)
Host: pbi.crm.dynamics.com
Connection: Keep-Alive

The service is then expected to respond with a 401 response with a WWW-Authenticate header indicating the Azure AD authorization URI to use. This response should include the tenant to sign into, or /common/ if the resource isn't associated with a specific tenant.
HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html
Server:
WWW-Authenticate: Bearer
authorization_uri=https://login.microsoftonline.com/3df2eaf6-33d0-4a10-8ce8-
7e596000ebe7/oauth2/authorize
Date: Wed, 15 Aug 2018 15:02:04 GMT
Content-Length: 49

Power Query can then initiate the OAuth flow against the authorization_uri. Power
Query requests an Azure AD Resource or Audience value equal to the domain of the
URL being requested. This value would be the value you use for your Azure Application
ID URL value in your API/service registration. For example, if accessing
https://api.myservice.com/path/to/data/api , Power Query would expect your

Application ID URL value to be equal to https://api.myservice.com .

The following Azure Active Directory client IDs are used by Power Query.

Client ID                              Title                           Description
a672d62c-fc7b-4e81-a576-e60dc46e951d   Power Query for Excel           Public client, used in Power BI Desktop and Gateway.
b52893c8-bc2e-47fc-918b-77022b299bbc   Power BI Data Refresh           Confidential client, used in Power BI service.
7ab7862c-4c57-491e-8a45-d52a7e023983   Power Apps and Power Automate   Confidential client, used in Power Apps and Power Automate.

You might need to explicitly allow these client IDs to access your service and API, depending on your overall Azure Active Directory settings. Go to step 8 of Add a scope for more details.

If you need more control over the OAuth flow (for example, if your service must respond with a 302 rather than a 401), or if your application's Application ID URL or Azure AD Resource value doesn't match the URL of your service, then you'd need to use a custom connector. For more information about using our built-in Azure AD flow, go to Azure Active Directory authentication.
Connections and authentication in
Power Query Online
Article • 02/17/2023

In Power Query Online, a connection refers to the unique identifier and associated
credentials used to establish a connection to a particular data source. One convenient
feature of connections in Power Query is that you can create multiple connections
against the same data source with different credentials.

There are two categories of connections:

Connection category: Cloud
Description: Any connection for a data source that's cloud-based and doesn't use a gateway.

Connection category: On-premises
Description: Any connection that requires a gateway.

Creating a connection
During the get data experience in Power Query Online, you'll find a dialog where you
enter information to create and establish a connection to your data source. The process
is standard for all connectors in Power Query, but some connectors might require more
information in order to create a connection.

Creating a cloud connection


Using the Azure SQL Server database connector as an example, you can enter the
connector settings to establish a connection. For the Azure SQL Server connector, the
required setting is just the server name, but you can also enter the name of the
database and select any other advanced options available for the connector.

After entering the values for the connector settings in the Connection settings section,
you can proceed with the Connection credentials section. In this section, you can create
a connection specific to the connection settings you previously entered.
The following table contains the fields and values used in the Connection settings section.

Field: Connection Name
Description: The name you can enter for your new connection.
Sample value: Sample Connection

Field: Data Gateway
Description: An optional field that lets you bind a gateway to your connection. For cloud connections, there's no gateway binding to the connection.
Sample value: none

Field: Authentication Kind
Description: The authentication kind you select to use that's supported by the connector.
Sample value: Organizational account

Field: Credentials
Description: Depending on the authentication kind you select, a contextual set of fields is available to input your credentials, a button to launch an OAuth2 flow, or even no fields at all for an authentication kind such as Anonymous.
Sample value: Derived from an OAuth2 flow, shown as a Sign in button.
Note

By default, the field for the connection name tries to provide a default name when
you create a connection using the information from the connection settings.

After finishing the Connection settings section, you select the Next button to move
forward in the get data experience.

Tip

Some connectors provide an auto sign in experience. To learn more about this
feature, go to auto sign in.

Creating an on-premises connection


As an alternative, you can also bind a gateway to your connection, which converts your
connection from a cloud connection to an on-premises connection.

Note

To create a gateway you can read the article on using an on-premises data
gateway in dataflows.

Using a local SQL Server database as an example, you enter the connector settings to
establish a connection. For the SQL Server connector, the required setting is just the
server name, but you can also enter the name of the database and select any other
advanced options available for the connector. For demonstration purposes, both the
server name and database have been entered.
After entering the values for the connector in Connection settings, you can proceed
with the Connection credentials section. In this section, you can create a connection
specific to the connection settings you previously entered.

The following table contains the fields and values used in the Connection settings
section.

Field: Connection Name
Description: The name you can enter for your new connection.
Sample value: localhost;AdventureWorks2019

Field: Data Gateway
Description: An optional field that lets you bind a gateway to your connection.
Sample value: Mike Test

Field: Authentication Kind
Description: The authentication kind you select to use that's supported by the connector.
Sample value: Basic

Field: Credentials
Description: Depending on the authentication kind selected, a contextual set of fields is available to input your credentials, a button to launch an OAuth2 flow, or even no fields at all for an authentication kind such as Anonymous.
Sample value: Username and password

After finishing the Connection settings section, you select the Next button to move
forward within the get data experience.

Components of a connection
Each connection is made up of a set of components. The following table contains more
information for each component.

Component name: Data Source kind
Required or optional: Required
Description: The data source for which the connection is being established.
Sample: SQL Server, File, Folder, Azure Data Lake Storage

Component name: Data Source path
Required or optional: Required
Description: A string that represents the values or parameters used to establish a connection to your data source.
Sample: Server Name, Database Name

Component name: Authentication
Required or optional: Required
Description: Encompasses both the authentication method and the credentials used for authentication.
Sample: Windows, OAuth2, Anonymous

Component name: Gateway
Required or optional: Optional
Description: Used when a gateway is needed to establish the connection and execute any query evaluation.
Sample: Any gateway

Component name: Privacy level
Required or optional: Optional
Description: Establishes the security for each connection, which is taken into consideration when queries from different connections are combined.
Sample: None, Public, Organizational, Private

Important

Currently, the privacy level is set to None for any new connections created. When
you try to combine multiple data sources, a new dialog prompts you to define the
data privacy levels of the data sources that you want to combine.
Known connections
When Power Query recognizes a set of connection settings, it tries to look up in its
respective credentials storage to see if there's a connection that matches those settings
and, if so, automatically selects that connection.

To override this behavior, you can take either of the following two actions:

Display the dropdown menu to scan a list of available connections for the given
connection settings. You can then select the one that you'd like to use or create a
new one.
Select Edit connection to modify the existing connection or select Create new
connection from the dropdown menu to create a new named connection.
More resources
List of connectors in Power Query
On-premises data gateways documentation
Change the gateway used in a dataflow
Troubleshooting dataflow issues: Connection to the data source
Auto sign in for Azure Active Directory
data sources
Article • 10/06/2022

The auto sign-in feature attempts to automatically sign you in as the current user when
connecting to data sources in Power Query that use Azure Active Directory as one of
their authentication kinds. It does this auto sign-in to expedite the authentication
process and minimize the time it takes to start working with your data.

More technically, the auto sign-in feature for Azure Active Directory data sources uses
the information derived from the currently authenticated user in the Power Query
Online experience. This information is then used to request a new access token for a
selected data source during the connection settings and authentication steps of the get
data process.

Note

This functionality is currently only available in Power Query Online and is enabled
by default for a select set of connectors. No configuration is needed to enable this
feature.

When selecting a connector that has this capability, it automatically signs you in with
Organizational account set as the authentication kind.
Tip

If you'd like to authenticate with a different account, select the Switch account link
shown in the dialog.

Further reading
Authentication in Power Query Online
Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow
Upload a file (Preview)
Article • 02/17/2023

You can upload files to your Power Query project when using Power Query Online.

The following connectors currently support the upload a file feature.

Excel
JSON
PDF
Text / CSV
XML

Note

Only files with the following extensions are supported for upload: .csv, .json, .pdf,
.prn, .tsv, .txt, .xl, .xls, .xlsb, .xlsm, .xlsw, .xlsx, .xml.

Upload a file experience


When you select a connector that offers the Upload file experience, you will notice a
radio button on the top that enables this feature.
When you select Upload file, the option underneath opens a simple dialog to let you
either drag a file or browse in your local file system to upload a file.

After you've selected your file, a progress bar shows you how the upload process is
going. Once the upload process is finished, you'll be able to see a green check mark
underneath your file name, with the message Upload successful and the file size right
next to it.
Note

The files that are uploaded through this feature are stored in your personal
Microsoft OneDrive for Business account.

Before you select the next button, you need to change the authentication kind from
Anonymous to Organizational account and go through the authentication process.
Start this process by selecting Sign in.

After going through the authentication process, a You are currently signed in message underneath the Authentication Kind selection lets you know that you've successfully signed in. After you've signed in, select Next. The file is then stored in your personal
Microsoft OneDrive for Business account, and a new query is created from the file that
you've uploaded.
Drag and drop experience in the query editor
When using the Power Query editor, you can drop a file on either the diagram view or
the queries pane to upload a file.

When dropping the file on either of the previously mentioned sections, a dialog with the
appropriate connector settings page will be shown, based on the file extension of the
file that's being uploaded.
SharePoint and OneDrive for Business
files import
Article • 02/17/2023

Power Query offers a series of ways to gain access to files that are hosted on either
SharePoint or OneDrive for Business.

Browse files

Note

Currently, you can only browse for OneDrive for Business files of the authenticated
user inside of Power Query Online for PowerApps.

Warning

This feature requires your browser to allow third party cookies. If your browser has
blocked third party cookies, the Browse dialog will appear but it'll be completely
blank with no option to close the dialog.

Power Query provides a Browse OneDrive button next to the File path or URL text box
when you create a dataflow in PowerApps using any of these connectors:

Excel
JSON
PDF
XML
TXT/CSV
When you select this button, you'll be prompted to go through the authentication
process. After completing this process, a new window appears with all the files inside the
OneDrive for Business of the authenticated user.

You can select the file of your choice, and then select the Open button. After selecting
Open, you'll be taken back to the initial connection settings page where you'll see that
the File path or URL text box now holds the exact URL to the file you've selected from
OneDrive for Business.
You can select the Next button at the bottom-right corner of the window to continue
the process and get your data.

From the Web connector using a file URL


1. Navigate to your OneDrive for Business location using a browser. Right-click the
file you want to use, and select Open in Excel.

Note

Your browser interface might not look exactly like the following image. There
are many ways to select Open in Excel for files in your OneDrive for Business
browser interface. You can use any option that allows you to open the file in
Excel.
2. In Excel, select File > Info, and then select the Copy path button.

To use the link you just copied in Power Query, take the following steps:

1. Select Get Data > Web.


2. In the From Web dialog box, select the Basic option and paste the link in URL.

3. Remove the ?web=1 string at the end of the link so that Power Query can properly
navigate to your file, and then select OK.
4. If Power Query prompts you for credentials, choose either Windows (for on-premises SharePoint sites) or Organizational Account (for Microsoft 365 or OneDrive for Business sites). Then select Connect.

Caution

When working with files hosted on OneDrive for Home, the file that you want
to connect to needs to be publicly available. When setting the authentication
method for this connection, select the Anonymous option.

When the Navigator dialog box appears, you can select from the list of tables, sheets,
and ranges found in the Excel workbook. From there, you can use the OneDrive for
Business file just like any other Excel file. You can create reports and use it in datasets
like you would with any other data source.
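As a hedged sketch of the query those steps produce (the workbook URL and sheet name are placeholders), the copied link, with ?web=1 removed, ends up inside Web.Contents wrapped by Excel.Workbook.

Power Query M

let
    // Placeholder OneDrive for Business file URL with the ?web=1 suffix removed
    Source = Excel.Workbook(
        Web.Contents("https://contoso-my.sharepoint.com/personal/user123_contoso_com/Documents/Book1.xlsx"),
        null,
        true
    ),
    // Navigate to a sheet from the table that Excel.Workbook returns
    Sheet1 = Source{[Item = "Sheet1", Kind = "Sheet"]}[Data]
in
    Sheet1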

Note

To use a OneDrive for Business file as a data source in the Power BI service, with
Service Refresh enabled for that file, make sure you select OAuth2 as the
Authentication method when configuring your refresh settings. Otherwise, you
may encounter an error (such as, Failed to update data source credentials) when you
attempt to connect or to refresh. Selecting OAuth2 as the authentication method
remedies that credentials error.

SharePoint folder connector


You can read a detailed step-by-step guide on how to connect to the files hosted on
your SharePoint site in the SharePoint folder article.

After successfully establishing the connection, you'll be prompted with a table preview
that shows the files in your SharePoint site. Select the Transform data button at the
bottom right of the window.

Selecting the Transform Data button will take you to a view of the data called the File
system view. Each of the rows in this table represents a file that was found in your
SharePoint site.

The table has a column named Content that contains your file in a binary format. The
values in the Content column have a different color than the rest of the values in the
other columns of the table, which indicates that they're selectable.

By selecting a Binary value in the Content column, Power Query will automatically add a
series of steps in your query to navigate to the file and interpret its contents where
possible.

For example, from the table shown in the previous image, you can select the second row
where the Name field has a value of 02-February.csv. Power Query will automatically
create a series of steps to navigate and interpret the contents of the file as a CSV file.
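A hedged sketch of the kind of steps Power Query generates in that case (the site URL is a placeholder and the CSV options are assumptions) might look like this:

Power Query M

let
    // Placeholder SharePoint site; SharePoint.Files lists every file on the site
    Source = SharePoint.Files("https://contoso.sharepoint.com/marketing/data"),
    // Pick the row for the file of interest and take its binary Content value
    FileContent = Table.SelectRows(Source, each [Name] = "02-February.csv"){0}[Content],
    // Interpret the binary as CSV and promote the first row to headers
    Imported = Csv.Document(FileContent, [Delimiter = ",", Encoding = 65001]),
    PromotedHeaders = Table.PromoteHeaders(Imported, [PromoteAllScalars = true])
in
    PromotedHeaders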

Note

You can interact with the table by applying filters, sorting, and other transforms before navigating to the file of your choice. Once you've finished these transforms, select the Binary value you want to view.

OneDrive for Business experience


The SharePoint folder connector and its experience also work for files hosted on
OneDrive for Business. However, the URL that you need to use is different from the one
you would use for a SharePoint site. To locate your unique URL, go to your OneDrive for
Business portal and copy the URL from your browser. This URL may look similar to the
following example:

https://contoso-my.sharepoint.com/personal/user123_contoso_com/_layouts/15/onedrive.aspx

You don't need the full URL, but only the first few parts. The URL you need to use in Power Query will have the following format:

https://<unique_tenant_name>.sharepoint.com/personal/<user_identifier>

For example:

https://contoso-my.sharepoint.com/personal/user123_contoso_com

SharePoint.Contents function
While the SharePoint folder connector offers you an experience where you can see all
the files available in your SharePoint or OneDrive for Business site at once, you can also
opt for a different experience. In this experience, you can navigate through your
SharePoint or OneDrive for Business folders and reach the folder or file(s) that you're
interested in.

This experience is provided through the SharePoint.Contents function. Take the following steps to use this function:

1. Create a Blank Query.

2. Change the code in the formula bar to be SharePoint.Contents("url") where url is the same format used for the SharePoint folder connector. For example:

   SharePoint.Contents("https://contoso.sharepoint.com/marketing/data")

   Note

   By default, this function tries to use SharePoint API Version 14 to connect. If you aren't certain of the API version being used by your SharePoint site, you might want to try using the following example code: SharePoint.Contents("https://contoso.sharepoint.com/marketing/data", [ApiVersion="Auto"]).

3. Power Query will request that you add an authentication method for your
connection. Use the same authentication method that you'd use for the SharePoint
files connector.

4. Navigate through the different documents to the specific folder or file(s) that
you're interested in.

For example, imagine a SharePoint site with a Shared Documents folder. You can
select the Table value in the Content column for that folder and navigate directly
to that folder.
Inside this Shared Documents folder there's a folder where the company stores all
the sales reports. This folder is named Sales Reports. You can select the Table value
on the Content column for that row.

With all the files inside the Sales Reports folder, you could select the Combine files
button (see Combine files overview) to combine the data from all the files in this
folder to a single table. Or you could navigate directly to a single file of your
choice by selecting the Binary value from the Content column.
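A hedged sketch of the query that this navigation produces (the site URL and folder names follow the example above; the step names are assumptions):

Power Query M

let
    Source = SharePoint.Contents("https://contoso.sharepoint.com/marketing/data", [ApiVersion = "Auto"]),
    // Navigate into the Shared Documents folder, then into the Sales Reports folder
    SharedDocuments = Source{[Name = "Shared Documents"]}[Content],
    SalesReports = SharedDocuments{[Name = "Sales Reports"]}[Content]
in
    SalesReports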

Note

The experience provided by the SharePoint.Contents function is optimal for SharePoint and OneDrive for Business environments with a large number of files.
Lack of Support for Microsoft Graph in
Power Query
Article • 11/18/2022

Connecting to Microsoft Graph REST APIs from Power Query isn't recommended or
supported. Instead, we recommend users explore alternative solutions for retrieving
analytics data based on Graph, such as Microsoft Graph data connect.

You might find you can make certain REST calls to Microsoft Graph API endpoints work
through the Web.Contents or OData.Feed functions, but these approaches aren't reliable
as long-term solutions.

This article outlines the issues associated with Microsoft Graph connectivity from Power
Query and explains why it isn't recommended.

Authentication
The built-in Organizational Account authentication flow for Power Query's Web.Contents
and OData.Feed functions isn't compatible with most Graph endpoints. Specifically,
Power Query's Azure Active Directory (Azure AD) client requests the user_impersonation
scope, which isn't compatible with Graph's security model. Graph uses a rich set of
permissions that aren't available through our generic Web and OData connectors.

Implementing your own Azure AD credential retrieval flows directly from your query, or
using hardcoded or embedded credentials, also isn't recommended for security reasons.

OData libraries' incompatibility


Certain Graph endpoints and extensions to Graph may require the use of OData libraries
and features that aren't supported by Power Query's built-in OData.Feed function
because Graph and Power Query might be using two different versions of OData
libraries. These issues generally result in errors retrieving the service's $metadata
document. You might discover common guidance related to passing the Implementation
= "2.0" option to the OData.Feed function call to ensure the latest supported OData
libraries are used. While this approach does resolve certain OData incompatibilities, you
might still encounter errors over time as Graph and Power Query adopt new versions of
the OData libraries at different times.
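
For reference, that guidance usually amounts to a call like the following sketch. The service URL here is only an illustrative placeholder, and, as described in this article, this remains an unsupported approach that can still break as library versions drift.

Power Query M

let
    // Illustrative only: pass Implementation = "2.0" as an option to OData.Feed
    // (the second argument is the optional headers record, left as null here)
    Source = OData.Feed("https://services.example.com/odata/", null, [Implementation = "2.0"])
in
    Source
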
Performance
The Microsoft Graph API is designed to support many application scenarios, but is
suboptimal for the large-scale data retrieval required for most analytics scenarios. If you
try to retrieve large amounts of data from Graph APIs, you might encounter
performance issues. Details around scenario applicability can be found in the Graph
documentation.

Using a custom connector


Some Power Query users have enabled Graph connectivity through custom connectors,
limiting their functionality to certain parts of the Graph API. This approach allows
connector developers to resolve general authentication issues by defining their own
Azure AD client with Graph-specific permissions. Some custom connectors work around
OData challenges by using Web.Contents and simulating OData support within their
connector logic. However, this approach isn't recommended as users frequently hit the
performance and scalability issues described above. Developers who take this route
should continue with these limitations in mind.
Alternatives to out-of-box connectivity
in Power BI Desktop
Article • 02/17/2023

While Power BI Desktop offers out-of-box connectivity to over 150 data sources, there
might be cases where you want to connect to a data source for which no out-of-box
connector is available.

Connectivity through generic interfaces


It might be possible to connect to certain data sources without a built-in out-of-box
connector by using generic interface connectors.

For example, the ODBC connector can connect to services with ODBC interfaces, and the
Web connector can connect to services with REST API interfaces.

Using the generic interface connectors available out-of-box in Power BI to connect through
interfaces that the end data source supports lets you reach many more data sources on the
internet than the ones that have specific out-of-box connectors.

To learn more about connectivity through generic interfaces, go to Connect to data
using generic interfaces.

Connectivity through a custom connector


The Power Query SDK lets you create custom connectors to unblock connectivity
scenarios to Power BI Desktop. You can create and distribute custom connectors to end
services and data sources they can authenticate to.

Community members and organizations can also share custom connectors that they've
created. While Microsoft doesn't offer any support, ownership, or guarantees for these
custom connectors, you might be able to use them for your scenarios. The Power BI
Partner Program also includes many partners that can build custom connectors. To learn
more about the program or find a partner, go to Contact a Power BI Partner .

Users who own an end service or data source can create a custom connector and might
be eligible to certify the connector to have it made available publicly out-of-box within
Power BI Desktop.
Request the data source owner to build and
certify a connector
As only the data source owner or an approved third party can build and certify a custom
connector for any service, end users are encouraged to share the demand for a
connector directly with the data source owner to encourage investment into creating
and certifying one.

Request in Power BI Ideas forum


In addition to directly engaging the data source owner, you should also create or vote
on ideas in the Power BI Ideas Forum to demonstrate the need and demand for a
connector. This feedback also helps encourage the data source owner to invest in a
certified connector that's available for users out-of-box in Power BI Desktop.
Connect to data using generic interfaces
Article • 04/06/2023

You can connect to a multitude of different data sources using built-in connectors that
range from Access databases to Zendesk resources. You can also connect to all sorts of
other data sources to further expand your connectivity options, by using the generic
interfaces (such as ODBC or REST APIs) built into Power Query Desktop and Power
Query Online.

Power Query data interfaces


Power Query includes an ever-growing collection of connectors that are built to connect
to a specific data source. For example, the SharePoint list connector provides specific
fields and supporting information during the connection sequence that are designed for
SharePoint lists. Other data sources found in the window that appears when you choose
to get data (in Power Query Desktop) or choose a data source (in Power Query Online)
work in the same way.

In addition, you can also connect to data sources that aren't identified in the get data
and choose data source lists by using one of the following generic data interfaces:

ODBC
OLE DB
OData
REST APIs
R Scripts

By providing the appropriate parameters in the connection windows that these generic
interfaces provide, the world of data sources you can access and use in Power Query
grows significantly.

In the following sections, you can find lists of data sources that can be accessed by
these generic interfaces.

Data sources accessible through ODBC


The ODBC connector in Power Query lets you import data from any third-party ODBC
driver simply by specifying a Data Source Name (DSN) or a connection string. As an
option, you can also specify a SQL statement to execute against the ODBC driver.
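
In M, this typically comes down to a single data source function call. The following is a minimal sketch; the DSN name and SQL statement are placeholders for whatever your driver exposes.

Power Query M

let
    // Connect through a DSN; the SQL text is executed by the ODBC driver
    // (Odbc.DataSource("dsn=MyDataSourceName") can be used instead when no SQL statement is needed)
    Source = Odbc.Query("dsn=MyDataSourceName", "SELECT * FROM Sales.Orders")
in
    Source
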
Data sources accessible through OLE DB
The OLE DB connector in Power Query lets you import data from any third-party OLE DB
driver simply by specifying a connection string. As an option, you can also specify a SQL
statement to execute against the OLE DB driver.
The following list details a few examples of the data sources that Power Query can
connect to using the generic OLE DB interface.

Power Query generic connector External data source Link for more information

OLE DB SAS OLE DB SAS provider for OLE DB

OLE DB Sybase OLE DB Sybase provider for OLE DB
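
A minimal sketch of the equivalent M call is shown below; the provider and connection string values are placeholders and depend entirely on the OLE DB driver you use.

Power Query M

let
    // Supply the connection string your OLE DB provider expects; the SQL statement is optional
    Source = OleDb.Query("Provider=MyOleDbProvider; Data Source=MyServer", "SELECT * FROM MyLibrary.MyTable")
in
    Source
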

Data sources accessible through OData


The OData feed connector in Power Query lets you import data from any OData URL
simply by typing in or pasting the OData URL. You can add multiple URL parts by typing
or pasting those links in the text boxes provided in the OData feed window.
The following list details a few examples of the data sources that Power Query can
connect to using the generic OData interface.

Power Query generic connector External data source Link for more information

OData Coming soon Check back soon for OData data sources

Data sources accessible through REST APIs


You can connect to data sources using the REST APIs and thereby use data from all sorts
of data sources that support REST.
The following list details a few examples of the data sources that Power BI Desktop can
connect to using the generic REST APIs interface.

Power BI Desktop generic connector External data source Link for more information

REST APIs Couchbase DB Couchbase REST API information
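
A typical pattern in M is to call Web.Contents and parse the response, as in the following sketch. The URL is a placeholder, and many REST APIs also require authentication headers or query parameters beyond what is shown here.

Power Query M

let
    // Call a REST endpoint (placeholder URL) and parse the JSON response
    Response = Web.Contents("https://api.example.com/v1/items"),
    Parsed = Json.Document(Response)
in
    Parsed
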

Data sources accessible through R Script


You can use R scripts to access data sources, and use that data in Power BI Desktop.
The following list details a few examples of the data sources that Power BI Desktop can
connect to using the generic R scripts interface.

Power BI Desktop generic connector External data source Link for more information

R Script SAS Files R script guidance from CRAN

R Script SPSS Files R script guidance from CRAN

R Script R Statistical Files R script guidance from CRAN


Manage connections (Preview)
Article • 09/18/2023

7 Note

This feature is currently available only in Power Query Online and is in public
preview.

With Power Query, you can connect to a multitude of data sources. When you connect
to a data source, you fundamentally create a connection to it. This connection consists
of your data source, credentials, and more information, such as privacy levels and
optional parameters for each data source. The Manage connections dialog is one
centralized way in your Power Query project to manage the connections that are being
referenced by your project.

The entry to the Manage connections dialog is available in the Power Query Home tab
in the ribbon's Data sources group.

Manage connections displays a list of all connections being referenced in your Power
Query project. It also lets you unlink or edit any of the connections in your project.

Unlink a connection
Right next to the name of the connection, and before the Source type column, there's an
icon to unlink the connection.

When you unlink a connection, you're simply removing the reference to the connection
from your project. The connection definition isn't removed from the back end, but the
connection can no longer be used in your project.

A new prompt then asks you to confirm that you want to unlink the connection.

7 Note

If you want to delete a connection from within a specific product integration, such
as Microsoft Power BI or Microsoft Fabric, be sure to check out the documentation
for each product on how a centralized connections portal can be used.

Edit a connection
Right next to the name of the connection, and before the disconnect icon, there's an icon to
edit the connection.

The edit functionality allows you to change the following values:

Data gateway: If your data source uses a data gateway, you can modify the
gateway using the dropdown list from this field.
Authentication kind & credentials: Depending on the connector, you're presented
with multiple authentication kind options that are available to establish a
connection, and fields where you enter your credentials.
More Resources
Get Data experience in Power Query
Connectors available in Power Query
Change the gateway used in a dataflow
project
Article • 05/25/2023

When creating a new dataflow project in Power Query Online, you can select the on-
premises data gateway used for your specific data sources during the get data
experience. This article showcases how you can modify or assign a gateway to an
existing dataflow project.

7 Note

Before being able to change a gateway, make sure that you have the needed
gateways already registered under your tenant and with access for the authors of
the dataflow project. You can learn more about data gateways from Using an on-
premises data gateway in Power Platform dataflows.

Changing the gateway


To showcase how to change the gateway in a dataflow project, this article uses a query
that connects to a local folder as an example.

This query previously used a gateway named "Gateway A" to connect to the folder. But
"Gateway A" no longer has access to the folder due to new company policies. A new
gateway named "Gateway B" has been registered and now has access to the folder that
the query requires. The goal is to change the gateway used in this dataflow project so it
uses the new "Gateway B".

To change the gateway:

1. From the Home tab in Power Query, select Options.


2. In the Options dialog box, select Data load, and then select the gateway to use for
your project, in this case, Gateway B.

 Tip

If there were recent changes to your gateways, select the small refresh icon to
the right of the drop-down menu to update the list of available gateways.
3. After selecting the correct gateway for the project, in this case Gateway B, select
OK to go back to the Power Query editor.

Validate your data source path


After changing your data gateway, we recommend that you check your data source
paths to make sure that your queries are connecting to your data sources as expected.

7 Note

The M engine identifies a data source using a combination of its kind and path.

The kind defines what connector or data source function is being used, such as SQL
Server, folder, Excel workbook, or others.

The path value is derived from the required parameters of your data source
function and, for this example, that would be the folder path.

The best way to validate the data source path is to go into the query where your data
source function is being used and check the parameters being used for it. For this
example, there's only one query that connects to a folder and this query has the Source
step with the data source path defined in it. You can double-click the Source step to get
the dialog that indicates the parameters used for your data source function. Make sure
that the folder path, or the correct parameters for your data source function, is the
correct one in relation to the gateway being used.
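
As a sketch, for this folder example the Source step contains something like the following call, where the kind is Folder and the required path parameter defines the data source path that the selected gateway must be able to reach. The path shown is a placeholder.

Power Query M

let
    // The data source kind is Folder; the path parameter is the data source path the gateway needs access to
    Source = Folder.Files("C:\SalesReports")
in
    Source
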

Modify authentication
To modify the credentials used against your data source, select Get data in the Power
Query editor ribbon to launch the Choose data source dialog box, then define a new or
existing connection to your data source. In this example, the connector that's used is the
Folder connector.
Once in Connection settings, create a new connection or select or modify a different
connection for your data source.
After defining the connection details, select Next at the bottom-right corner and
validate that your query is loading in the Power Query editor.

7 Note

This process is the same as if you were to connect again to your data source. But by
doing the process again, you're effectively re-defining what authentication method
and credentials to use against your data source.
The Power Query user interface
Article • 04/10/2023

With Power Query, you can connect to many different data sources and transform the
data into the shape you want.

In this article, you'll learn how to create queries with Power Query by discovering:

How the "Get Data" experience works in Power Query.


How to use and take advantage of the Power Query user interface.
How to perform common transformations like grouping and merging data.

If you're new to Power Query, you can sign up for a free trial of Power BI before you
begin. You can use Power BI dataflows to try out the Power Query Online experiences
described in this article.

You can also download Power BI Desktop for free .

Examples in this article connect to and use the Northwind OData feed .

https://services.odata.org/V4/Northwind/Northwind.svc/

Connect to an OData feed

7 Note

To learn more about where to get data from each of the Microsoft products that
include Power Query, go to Where to get data.

To start, locate the OData feed connector from the "Get Data" experience. You can select
the Other category from the top, or search for OData in the search bar in the top-right
corner.
Once you select this connector, the screen displays the connection settings and
credentials.

For URL, enter the URL to the Northwind OData feed shown in the previous
section.
For On-premises data gateway, leave as none.
For Authentication kind, leave as anonymous.

Select the Next button.

The Navigator now opens, where you select the tables you want to connect to from the
data source. Select the Customers table to load a preview of the data, and then select
Transform data.
The dialog then loads the data from the Customers table into the Power Query editor.

The above experience of connecting to your data, specifying the authentication method,
and selecting the specific object or table to connect to is called the get data experience
and is documented with further detail in the Getting data article.
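
The steps created by this get data experience correspond to M code roughly like the following sketch; the exact generated step names and options can vary by product.

Power Query M

let
    // Connect to the Northwind OData feed used throughout this article
    Source = OData.Feed("https://services.odata.org/V4/Northwind/Northwind.svc/"),
    // Navigate to the Customers table selected in the Navigator
    Customers = Source{[Name = "Customers"]}[Data]
in
    Customers
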

7 Note

To learn more about the OData feed connector, go to OData feed.

The Power Query editor user experience


The Power Query editor represents the Power Query user interface, where you can add
or modify queries, manage queries by grouping or adding descriptions to query steps,
or visualize your queries and their structure with different views. The Power Query user
interface has five distinct components.

1. Ribbon: the ribbon navigation experience, which provides multiple tabs to add
transforms, select options for your query, and access different ribbon buttons to
complete various tasks.
2. Queries pane: a view of all your available queries.
3. Current view: your main working view, which by default displays a preview of the
data for your query. You can also enable the diagram view along with the data
preview view. You can also switch between the schema view and the data preview
view while maintaining the diagram view.
4. Query settings: a view of the currently selected query with relevant information,
such as query name, query steps, and various indicators.
5. Status bar: a bar displaying relevant information about your query, such
as execution time, total columns and rows, and processing status. This bar also
contains buttons to change your current view.

7 Note

The schema and diagram view are currently only available in Power Query Online.

Using the Power Query editor


In this section, you'll begin transforming your data using Power Query. But before you
start working on transforming the data, we'll discuss some of the UI panes that can be
expanded or collapsed depending on their context. Selecting the appropriate panes lets
you focus on the view that matters the most to you. We'll also discuss the different
views that are available in the Power Query UI.

The ribbon
The ribbon is the component where you'll find most of the transforms and actions that
you can do in the Power Query editor. It has multiple tabs, whose values depend on the
product integration. Each of the tabs provides specific buttons and options, some of
which might be redundant across the whole Power Query experience. These buttons and
options provide you with easy access to the transforms and actions that you may need.

The Power Query interface is responsive and tries to adjust to your screen resolution to
show you the best experience. In scenarios where you'd like to use a compact version of
the ribbon, there's also a collapse button at the bottom-right corner of the ribbon to
help you switch to the compact ribbon.

You can switch back to the standard ribbon view by selecting the expand icon at the
bottom-right corner of the ribbon.

Expand and collapse panes


You'll notice that throughout the Power Query user interface there are icons that help
you collapse or expand certain views or sections. For example, there's an icon on the top
right-hand corner of the Queries pane that collapses the queries pane when selected,
and expands the pane when selected again.

Switch between views


Apart from being able to collapse certain panes and sections in the Power Query user
interface, you can also switch what views are displayed. To switch views, go to the View
tab in the ribbon and you'll find the Preview and Layout groups, which control how the
Power Query user interface will look.

You're encouraged to try all of these options to find the view and layout that you feel
most comfortable working with. As an example, select Schema view from the ribbon.

The right side of the status bar also contains icons for the diagram, data, and schema
views. You can use these icons to change between views. You can also use these icons to
enable or disable the view of your choice.

What is schema view


The schema view offers you a quick and straightforward way to interact only with the
components of the schema for your table, such as the column names and data types.
We recommend the schema view when you want to do schema-related actions, such as
removing columns, renaming columns, changing column data types, reordering
columns, or duplicating columns.

7 Note

To learn more about schema view, go to Using Schema view.

For example, in schema view, select the check mark next to the Orders and
CustomerDemographics columns, and from the ribbon select the Remove columns
action. This selection applies a transformation to remove these columns from your data.

What is diagram view


You can now switch back to the data preview view and enable diagram view to use a
more visual perspective of your data and query.
The diagram view helps you visualize how your query is structured and how it might
interact with other queries in your project. Each step in your query has a distinct icon to
help you recognize the transform that was used. There are also lines that connect steps
to illustrate dependencies. Since both data preview view and diagram view are enabled,
the diagram view displays on top of the data preview.

7 Note

To learn more about diagram view, go to Diagram view.

Begin transforming your data


With diagram view enabled, select the plus sign. You can search for a new transform to
add to your query. Search for Group by and select the transform.
The Group by dialog then appears. You can set the Group by operation to group by the
country and count the number of customer rows per country.

1. Keep the Basic radio button selected.


2. Select Country to group by.
3. Select Customers and Count rows as the column name and operation respectively.

Select OK to perform the operation. Your data preview refreshes to show the total
number of customers by country.

An alternative way to launch the Group by dialog would be to use the Group by button
in the ribbon or by right-clicking the Country column.

For convenience, transforms in Power Query can often be accessed from multiple places,
so users can opt to use the experience they prefer.
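
In M, this grouping operation corresponds to a Table.Group step. The following is a sketch of what the generated code roughly looks like for the Customers query; the step names your editor generates can differ.

Power Query M

let
    Source = OData.Feed("https://services.odata.org/V4/Northwind/Northwind.svc/"),
    Customers = Source{[Name = "Customers"]}[Data],
    // Group by Country and count the number of customer rows per country
    #"Grouped rows" = Table.Group(Customers, {"Country"}, {{"Customers", each Table.RowCount(_), Int64.Type}})
in
    #"Grouped rows"
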

Adding a new query


Now that you have a query that provides the number of customers per country, you can
add context to this data by finding the total number of suppliers for each territory.

First, you'll need to add the Suppliers data. Select Get Data, and from the drop-down
menu, select OData.

The OData connection experience reappears. Enter the connection settings as described
in Connect to an OData feed to connect to the Northwind OData feed. In the Navigator
experience, search for and select the Suppliers table.

Select Create to add the new query to the Power Query editor. The queries pane should
now display both the Customers and the Suppliers query.

Open the Group by dialog again, this time by selecting the Group by button on the
ribbon under the Transform tab.
In the Group by dialog, set the Group by operation to group by the country and count
the number of supplier rows per country.

1. Keep the Basic radio button selected.


2. Select Country to group by.
3. Select Suppliers and Count rows as the column name and operation respectively.

7 Note

To learn more about the Group by transform, go to Grouping or summarizing


rows.

Referencing queries
Now that you have a query for customers and a query for suppliers, your next goal is to
combine these queries into one. There are many ways to accomplish this, including
using the Merge option in the Customers table, duplicating a query, or referencing a
query. For this example, you'll create a reference by right-clicking the Customers table
and selecting Reference, which effectively creates a new query that references the
Customers query.

After creating this new query, change the name of the query to Country Analysis and
disable the load of the Customers table by unmarking the Enable load option from the
Suppliers query.

Merging queries
A merge queries operation joins two existing tables together based on matching values
from one or multiple columns. In this example, the goal is to join both the Customers
and Suppliers tables into one table only for the countries that have both Customers and
Suppliers.

Inside the Country Analysis query, select the Merge queries option from the Home tab
in the ribbon.

A new dialog for the Merge operation appears. You can then select the query to merge
with your current query. Select the Suppliers query and select the Country field from
both queries. Finally, select the Inner join kind, as you only want the countries where
you have Customers and Suppliers for this analysis.
After selecting the OK button, a new column is added to your Country Analysis query
that contains the data from the Suppliers query. Select the icon next to the Suppliers
field, which displays a menu where you can select which fields you want to expand.
Select only the Suppliers field, and then select the OK button.
The result of this expand operation is a table with only 12 rows. Rename the
Suppliers.Suppliers field to just Suppliers by double-clicking the field name and
entering the new name.
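
Expressed in M, the merge and expand steps look roughly like the following sketch. Here Customers and Suppliers stand for the two grouped queries built earlier, each with a Country column and a count column; the generated step names in your editor may differ.

Power Query M

let
    // Inner join the two grouped queries on Country; the matches are stored in a nested table column
    #"Merge queries" = Table.NestedJoin(Customers, {"Country"}, Suppliers, {"Country"}, "Suppliers", JoinKind.Inner),
    // Expand only the supplier count from the nested table, then rename the expanded column
    #"Expanded Suppliers" = Table.ExpandTableColumn(#"Merge queries", "Suppliers", {"Suppliers"}, {"Suppliers.Suppliers"}),
    #"Renamed columns" = Table.RenameColumns(#"Expanded Suppliers", {{"Suppliers.Suppliers", "Suppliers"}})
in
    #"Renamed columns"
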

7 Note

To learn more about the Merge queries feature, go to Merge queries overview.

Applied steps
Every transformation that is applied to your query is saved as a step in the Applied
steps section of the query settings pane. If you ever need to check how your query is
transformed from step to step, you can select a step and preview how your query
resolves at that specific point.
You can also right-click a step and select the Properties option to change the name of
the step or add a description for it. For example, right-click the Merge queries
step from the Country Analysis query and change the name of the step to be Merge
with Suppliers and the description to be Getting data from the Suppliers query for
Suppliers by Country.

This change adds a new icon next to your step that you can hover over to read its
description.

7 Note

To learn more about Applied steps, go to Using the Applied Steps list.

Before moving on to the next section, disable the Diagram view to only use the Data
preview.
Adding a new column
With the data for customers and suppliers in a single table, you can now calculate the
ratio of customers-to-suppliers for each country. Select the last step of the Country
Analysis query, and then select both the Customers and Suppliers columns. In the Add
column tab in the ribbon and inside the From number group, select Standard, and then
Divide (Integer) from the dropdown.

This change creates a new column called Integer-division that you can rename to Ratio.
This change is the final step of your query, and provides the customer-to-supplier ratio
for the countries where the data has customers and suppliers.
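
A sketch of the M for this last step is shown below, continuing from the previous sketch (the #"Renamed columns" step name is a placeholder for whatever the last step in your Country Analysis query is called). The column is created here directly with the final Ratio name rather than being renamed afterwards.

Power Query M

// Number.IntegerDivide performs the Divide (Integer) operation on the two count columns
#"Inserted integer division" = Table.AddColumn(#"Renamed columns", "Ratio", each Number.IntegerDivide([Customers], [Suppliers]), Int64.Type)
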

Data profiling
Another Power Query feature that can help you better understand your data is Data
Profiling. By enabling the data profiling features, you'll get feedback about the data
inside your query fields, such as value distribution, column quality, and more.

We recommend that you use this feature throughout the development of your
queries, but you can always enable and disable the feature at your convenience. The
following image shows all the data profiling tools enabled for your Country Analysis
query.
7 Note

To learn more about Data profiling, go to Using the data profiling tools.

The advanced editor


If you want to observe the code that the Power Query editor is creating with each step,
or want to create your own shaping code, you can use the advanced editor. To open the
advanced editor, select the View tab on the ribbon, and then select Advanced Editor. A
window appears, showing the existing query code.
You can directly edit the code in the Advanced Editor window. The editor indicates if
your code is free of syntax errors. To close the window, select the Done or Cancel
button.

Accessing Power Query help


There are various levels of help that can be accessed in the Microsoft apps that use
Power Query. This section describes these help levels.

Power Query help in Excel


There are a couple of ways to access Power Query help information that specifically
applies to Power Query in Excel. One way to access the online Excel Power Query
documentation is to select File > Help > Help in the Power Query editor.

To access the inline Power Query help information in Excel, select the Help tab on the
Excel ribbon, and then enter Power Query in the search text box.

Power Query help in Power BI Desktop


There's a Help tab in the Power Query editor running on Power BI Desktop, but at this
time all of the options go to locations that supply Power BI information. You can also
reach this same Power BI Help information in the Power Query editor in Power BI
Desktop by selecting File > Help, and then selecting one of the help options.

Power Query help in Power BI service, Power Apps,


Customer Insights, and Azure Data Factory
The Power Query Help tab in Power BI service, Power Apps, Dynamics 365 Customer
Insights, and Azure Data Factory contains links to important sources of information
about Power Query.

The options in the help tab include links to:

Power Query documentation: Links to the Power Query documentation


Keyboard shortcuts: Links to the keyboard shortcuts article in the Power Query
documentation.
Support: Links to the support website of the Microsoft product that Power Query is
currently running in.
Power Query website: Links to the Power Query website.
Power Query blog: Links to the Power Query blog.
Power Query community: Links to the Power Query community.
Submit an idea: Links to the ideas website of the Microsoft product that Power
Query is currently running in.
Send feedback: Opens a window in Power Query that asks you to rate your
experience with Power Query, and to provide any additional information you
would like to supply.

7 Note

Currently, Azure Analysis Services doesn't contain any inline Power Query help links.
However, you can get help for Power Query M functions. More information is
contained in the next section.

Help for Power Query M functions


In addition to the Power Query help functions listed in the previous sections, you can
also get help for individual M functions from inside the Power Query editor. The
following steps describe how to get to this inline help.

1. With the Power Query editor open, select the insert step button.
2. In the formula bar, enter the name of a function you want to check.
a. If you are using Power Query Desktop, enter an equal sign, a space, and the
name of a function.
b. If you are using Power Query Online, enter the name of a function.
3. Select the properties of the function.
a. If you are using Power Query Desktop, in the Query Settings pane, under
Properties, select All properties.
b. If you are using Power Query Online, in the Query Settings pane, select
Properties.

These steps will open the inline help information for your selected function, and let you
enter individual properties used by the function.

Summary
In this article, you created a series of queries with Power Query that provides a
customer-to-supplier ratio analysis at the country level for the Northwind corporation.

You learned the components of the Power Query user interface, how to create new
queries inside the query editor, reference queries, merge queries, understand the
applied steps section, add new columns, and how to use the data profiling tools to
better understand your data.
Power Query is a powerful tool used to connect to many different data sources and
transform the data into the shape you want. The scenarios outlined in this article are
examples to show you how you can use Power Query to transform raw data into
important actionable business insights.
Using the Applied Steps list
Article • 08/07/2023

The Applied steps list is part of the Query settings pane in Power Query. Any
transformations to your data are displayed in the Applied steps list. For instance, if you
change the first column name, the new column name is displayed in the Applied steps
list as Renamed columns.

Selecting any step displays the results of that particular step, so you can see exactly how
your data changes as you add steps to the query.

Access the Applied steps list


In most cases, the Applied steps list is displayed automatically when you connect to
your data source and open the Power Query editor. However, there may be times when
the Query settings section has been closed, and you need to reopen it.

If you're using Power Query Desktop (Excel, Power BI Desktop, Analysis Services) and the
Query Settings pane is closed, select the View tab from the ribbon, and then select
Query Settings.
The Query Settings pane then opens on the right side with the Applied Steps list.

If you're using Power Query Online (Power BI service, Power Apps, Data Factory
(preview), Microsoft 365 Customer Insights) and the Query settings pane is closed,
select the < icon above Query settings to open the pane.
The Query settings pane then opens on the right side with the Applied steps list.
The following image shows the different parts of the Applied steps list. Currently, not all
of these parts are found in Power Query Desktop. The Applied steps list in Power Query
Desktop only contains the delete step, step name, step description, and step settings
elements. The step icon and query folding indicator are found only in Power Query
Online.

The full list of the step parts are:


Delete step: When a step is selected, the delete icon appears on the left side of the
Applied steps list. If you select the icon, the step is deleted. More information:
Delete step
Step icon: Automatically assigned to the step, based on its type, and can't be
changed by the user. The step icon isn't available in Power Query Desktop.
Step name: A name automatically assigned to the step. The step name can be
changed using the step menu. More information: Rename step
Step description: A description of the step. Typically, no description is added when
the step is created. However, the step description can be added using the step
menu. More information: Edit step names and their descriptions
Step settings: Change any settings that are specific to the selected step. Not all
steps have settings. More information: Edit settings
Query folding indicator: Provides a graphic representation of the query folding
status of the selected step. This indicator isn't available in Power Query Desktop.
More information: Query folding indicators

In Power Query Online, if you hover the mouse cursor over one of the applied steps, an
informational display opens, listing the step name, step label, step description, step
settings, information about the step query folding, and a learn more link to the Query
folding indicators article. For more information about query folding, go to Power Query
query folding. The step label is automatically generated when the step is created, and
indicates the step type, as does the step icon. The step label and the step icon can't be
changed.

Step menu selections


When you right-click on a step in the applied steps list, a step menu pops up. The
following sections describe each of the options available in the step menu.
Edit settings
The Edit settings option lets you change specific settings for each individual step in the
Applied steps list. Not all steps have settings that can be changed. If no settings are
available, this option is grayed out in the step menu. For the steps that do have settings
that can be changed, selecting the Edit settings option brings up the settings page for
that particular step. For example, if the step is for an OData source, the OData source
page is displayed, where you can enter the OData source URL. As another example, if
the step is for a fuzzy cluster column, the Cluster values page is displayed, where you
can change the column to create the fuzzy clusters from.

You can also edit the settings for any step that contains the step settings icon. The two
places where the icon appears are in the Applied steps list and in the informational
display for the step. Just select the icon, and the settings page for that particular step is
displayed.

Rename step
To rename a step, right-click the step and select Rename.
Enter the name you want, and then either press Enter or select away from the step.

Delete step
To delete a step, right-click the step and select Delete.
Alternatively, select the X next to the step.
Delete until end
To delete a series of steps, right-click the first step of the series and select Delete until
end. This action deletes the selected step and all the subsequent steps.
Select Delete in the new window.

The following image shows the Applied steps list after using the Delete until end.
Insert step after
To add a new step, right-click on the last step in the list and select Insert step after.
To insert a new intermediate step, right-click on a step and select Insert step after. Then
select Insert on the new window.

To set a transformation for the new step, select the new step in the list and make the
change to the data. It automatically links the transformation to the selected step.

Move step
To move a step up one position in the list, right-click the step and select Move before.
To move a step down one position in the list, right-click the step and select Move after.
Alternatively, or to move more than a single position, drag and drop the step to the
desired location.
Extract the previous steps into query
You can also separate a series of transformations into a different query. This allows the
query to be referenced for other sources, which can be helpful if you're trying to apply
the same transformation to multiple datasets. To extract all the previous steps into a
new query, right-click the first step you do not want to include in the query and select
Extract Previous.
Name the new query and select OK. To access the new query, navigate to the Queries
pane on the left side of the screen.

Edit step names and their descriptions


To edit the step, right-click the step and select Properties.
In the window, change the step name and description, then select OK to save the
changes.
Adding a description to a step adds a small icon next to the step to denote that the step
has a description. In Power Query Desktop, you can hover over this icon to display the
description as a tooltip.
In Power Query Online, the step description is included in the informational display that
appears when you hover over any of the steps in the applied steps list.


Global search box (Preview)
Article • 07/30/2022

The global search box offers you the ability to search for:

Queries found in your project.


Actions available in your version of Power Query that are commonly found in the
ribbon.
Get data connectors that can also be found through the Get Data dialog box.

The global search box is located at the top center of the Power Query editor. The search
box follows the same design principles that you find in Microsoft Search in Office , but
contextualized to Power Query.

Search results
To make use of the global search box, select the search box or press Alt + Q. Before you
enter anything, you'll be presented with some default options to choose from.
When you start entering something to search for, the results will be updated in real
time, displaying queries, actions, and get data connectors that match the text that
you've entered.

For scenarios where you'd like to see all available options for a given search query, you
can also select the See more results for option. This option is positioned as the last
result of the search box query when there are multiple matches to your query.

Overview of query evaluation and query
folding in Power Query
Article • 02/17/2023

This article provides a basic overview of how M queries are processed and turned into
data source requests.

Power Query M script


Any query, whether created by Power Query, manually written by you in the advanced
editor, or entered using a blank document, consists of functions and syntax from the
Power Query M formula language. This query gets interpreted and evaluated by the
Power Query engine to output its results. The M script serves as the set of instructions
needed to evaluate the query.

 Tip

You can think of the M script as a recipe that describes how to prepare your data.

The most common way to create an M script is by using the Power Query editor. For
example, when you connect to a data source, such as a SQL Server database, you'll
notice on the right-hand side of your screen that there's a section called applied steps.
This section displays all the steps or transforms used in your query. In this sense, the
Power Query editor serves as an interface to help you create the appropriate M script for
the transforms that you're after, and ensures that the code you use is valid.

7 Note

The M script is used in the Power Query editor to:

Display the query as a series of steps and allow the creation or modification of
new steps.
Display a diagram view.
The previous image emphasizes the applied steps section, which contains the following
steps:

Source: Makes the connection to the data source. In this case, it's a connection to a
SQL Server database.
Navigation: Navigates to a specific table in the database.
Removed other columns: Selects which columns from the table to keep.
Sorted rows: Sorts the table using one or more columns.
Kept top rows: Filters the table to only keep a certain number of rows from the top
of the table.

This set of step names is a friendly way to view the M script that Power Query has
created for you. There are several ways to view the full M script. In Power Query, you can
select Advanced Editor in the View tab. You can also select Advanced Editor from the
Query group in the Home tab. In some versions of Power Query, you can also change
the view of the formula bar to show the query script by going into the View tab and
from the Layout group, select Script view > Query script.
Most of the names found in the Applied steps pane are also used as-is in the M
script. Steps of a query are named using something called identifiers in the M language.
Sometimes extra characters are wrapped around step names in M, but these characters
aren’t shown in the applied steps. An example is #"Kept top rows" , which is categorized
as a quoted identifier because of these extra characters. A quoted identifier can be used
to allow any sequence of zero or more Unicode characters to be used as an identifier,
including keywords, whitespace, comments, operators, and punctuators. To learn more
about identifiers in the M language, go to lexical structure.

Any changes that you make to your query through the Power Query editor will
automatically update the M script for your query. For example, using the previous image
as the starting point, if you change the Kept top rows step name to be Top 20 rows, this
change will automatically be updated in the script view.
While we recommend that you use the Power Query editor to create all or most of the
M script for you, you can manually add or modify pieces of your M script. To learn more
about the M language, go to the official docs site for the M language.

7 Note

M script, also referred to as M code, is a term used for any code that uses the M
language. In the context of this article, M script also refers to the code found inside
a Power Query query and accessible through the advanced editor window or
through the script view in the formula bar.

Query evaluation in Power Query


The following diagram explores the process that occurs when a query is evaluated in
Power Query.
1. The M script, found inside the advanced editor, is submitted to the Power Query
engine. Other important information is also included, such as credentials and data
source privacy levels.
2. Power Query determines what data needs to be extracted from the data source
and submits a request to the data source.
3. The data source responds to the request from Power Query by transferring the
requested data to Power Query.
4. Power Query receives the incoming data from the data source and does any
transformations using the Power Query engine if necessary.
5. The results derived from the previous point are loaded to a destination.

7 Note

While this example showcases a query with a SQL Database as a data source, the
concept applies to queries with or without a data source.

When Power Query reads your M script, it runs the script through an optimization
process to more efficiently evaluate your query. In this process, it determines which
steps (transforms) from your query can be offloaded to your data source. It also
determines which other steps need to be evaluated using the Power Query engine. This
optimization process is called query folding, where Power Query tries to push as much of
the execution as possible to the data source to optimize your query's execution.
) Important

All rules from the Power Query M formula language (also known as the M
language) are followed. Most notably, lazy evaluation plays an important role
during the optimization process. In this process, Power Query understands what
specific transforms from your query need to be evaluated. Power Query also
understands what other transforms don't need to be evaluated because they're not
needed in the output of your query.

Furthermore, when multiple sources are involved, the data privacy level of each
data source is taken into consideration when evaluating the query. More
information: Behind the scenes of the Data Privacy Firewall

The following diagram demonstrates the steps that take place in this optimization
process.

1. The M script, found inside the advanced editor, is submitted to the Power Query
engine. Other important information is also supplied, such as credentials and data
source privacy levels.
2. The Query folding mechanism submits metadata requests to the data source to
determine the capabilities of the data source, table schemas, relationships between
different entities at the data source, and more.
3. Based on the metadata received, the query folding mechanism determines what
information to extract from the data source and what set of transformations need
to happen inside the Power Query engine. It sends the instructions to two other
components that take care of retrieving the data from the data source and
transforming the incoming data in the Power Query engine if necessary.
4. Once the instructions have been received by the internal components of Power
Query, Power Query sends a request to the data source using a data source query.
5. The data source receives the request from Power Query and transfers the data to
the Power Query engine.
6. Once the data is inside Power Query, the transformation engine inside Power
Query (also known as mashup engine) does the transformations that couldn't be
folded back or offloaded to the data source.
7. The results derived from the previous point are loaded to a destination.

7 Note

Depending on the transformations and data source used in the M script, Power
Query determines if it will stream or buffer the incoming data.

Query folding overview


The goal of query folding is to offload or push as much of the evaluation of a query to a
data source that can compute the transformations of your query.

The query folding mechanism accomplishes this goal by translating your M script to a
language that can be interpreted and executed by your data source. It then pushes the
evaluation to your data source and sends the result of that evaluation to Power Query.

This operation often provides a much faster query execution than extracting all the
required data from your data source and running all transforms required in the Power
Query engine.

When you use the get data experience, Power Query guides you through the process
that ultimately lets you connect to your data source. When doing so, Power Query uses
a series of functions in the M language categorized as accessing data functions. These
specific functions use mechanisms and protocols to connect to your data source using a
language that your data source can understand.

However, the steps that follow in your query are the steps or transforms that the query
folding mechanism attempts to optimize. It then checks if they can be offloaded to your
data source instead of being processed using the Power Query engine.

) Important
All data source functions, commonly shown as the Source step of a query, query
the data at the data source in its native language. The query folding mechanism is
applied to all transforms that come after your data source function, so that they can
be translated and combined into a single data source query, or into as many
transforms as can be offloaded to the data source.

Depending on how the query is structured, there could be three possible outcomes to
the query folding mechanism:

Full query folding: When all of your query transformations get pushed back to the
data source and minimal processing occurs at the Power Query engine.
Partial query folding: When only a few transformations in your query, and not all,
can be pushed back to the data source. In this case, only a subset of your
transformations is done at your data source and the rest of your query
transformations occur in the Power Query engine.
No query folding: When the query contains transformations that can't be
translated to the native query language of your data source, either because the
transformations aren't supported or the connector doesn't support query folding.
For this case, Power Query gets the raw data from your data source and uses the
Power Query engine to achieve the output you want by processing the required
transforms at the Power Query engine level.

7 Note

The query folding mechanism is primarily available in connectors for structured
data sources such as, but not limited to, Microsoft SQL Server and OData Feed.
During the optimization phase, the engine might sometimes reorder steps in the
query.

Leveraging a data source that has more processing resources and has query folding
capabilities can expedite your query loading times as the processing occurs at the
data source and not at the Power Query engine.

Next steps
For detailed examples of the three possible outcomes of the query folding mechanism,
go to Query folding examples.

For information about query folding indicators found in the Applied Steps pane, go to
Query folding indicators
Power Query query folding
Article • 08/31/2022

This article targets data modelers developing models in Power Pivot or Power BI
Desktop. It describes what Power Query query folding is, and why it's important in your
data model designs. This article also describes the data sources and transformations that
can achieve query folding, and how to determine that your Power Query queries can be
folded—whether fully or partially.

Query folding is the ability for a Power Query query to generate a single query
statement to retrieve and transform source data. The Power Query mashup engine
strives to achieve query folding whenever possible for reasons of efficiency.

Query folding is an important topic for data modeling for several reasons:

Import model tables: Data refresh will take place efficiently for Import model
tables (Power Pivot or Power BI Desktop), in terms of resource utilization and
refresh duration.
DirectQuery and Dual storage mode tables: Each DirectQuery and Dual storage
mode table (Power BI only) must be based on a Power Query query that can be
folded.
Incremental refresh: Incremental data refresh (Power BI only) will be efficient, in
terms of resource utilization and refresh duration. In fact, the Power BI Incremental
Refresh configuration window will notify you of a warning should it determine that
query folding for the table can't be achieved. If it can't be achieved, the goal of
incremental refresh is defeated. The mashup engine would then be required to
retrieve all source rows, and then apply filters to determine incremental changes.

Query folding may occur for an entire Power Query query, or for a subset of its steps.
When query folding cannot be achieved—either partially or fully—the Power Query
mashup engine must compensate by processing data transformations itself. This process
can involve retrieving source query results, which for large datasets is very resource
intensive and slow.

We recommend that you strive to achieve efficiency in your model designs by ensuring
query folding occurs whenever possible.

Sources that support folding


Most data sources that have the concept of a query language support query folding.
These data sources can include relational databases, OData feeds (including SharePoint
lists), Exchange, and Active Directory. However, data sources like flat files, blobs, and
web typically do not.

Transformations that can achieve folding


Relational data source transformations that can be query folded are those that can be
written as a single SELECT statement. A SELECT statement can be constructed with
appropriate WHERE, GROUP BY, and JOIN clauses. It can also contain column
expressions (calculations) that use common built-in functions supported by SQL
databases.

Generally, the following list describes transformations that can be query folded.

Removing columns.

Renaming columns (SELECT column aliases).

Filtering rows, with static values or Power Query parameters (WHERE clause
predicates).

Grouping and summarizing (GROUP BY clause).

Expanding record columns (source foreign key columns) to achieve a join of two
source tables (JOIN clause).

Non-fuzzy merging of fold-able queries based on the same source (JOIN clause).

Appending fold-able queries based on the same source (UNION ALL operator).

Adding custom columns with simple logic (SELECT column expressions). Simple
logic implies uncomplicated operations, possibly including the use of M functions
that have equivalent functions in the SQL data source, like mathematic or text
manipulation functions. For example, the following expressions return the year
component of the OrderDate column value (to return a numeric value).

Power Query M

Date.Year([OrderDate])

Pivoting and unpivoting (PIVOT and UNPIVOT operators).

Transformations that prevent folding


Generally, the following list describes transformations that prevent query folding. This
list isn't intended to be an exhaustive list.

Merging queries based on different sources.

Appending (union-ing) queries based on different sources.

Adding custom columns with complex logic. Complex logic implies the use of M
functions that have no equivalent functions in the data source. For example, the
following expressions format the OrderDate column value (to return a text value).

Power Query M

Date.ToText([OrderDate], "yyyy")

Adding index columns.

Note that when a Power Query query encompasses multiple data sources,
incompatibility of data source privacy levels can prevent query folding from taking
place. For more information, see the Power BI Desktop privacy levels article.

Determine when a query can be folded


In the Power Query Editor window, it's possible to determine when a Power Query query
can be folded. In the Query Settings pane, when you right-click the last applied step, if
the View Native Query option is enabled (not greyed out), then the entire query can be
folded.
7 Note

The View Native Query option is only available for certain relational DB/SQL
generating connectors. It doesn't work for OData based connectors, for example,
even though there is folding occurring on the backend. The Query Diagnostics
feature is the best way to see what folding has occurred for non-SQL connectors
(although the steps that fold aren't explicitly called out—you just see the resulting
URL that was generated).

To view the folded query, select the View Native Query option. You're then
presented with the native query that Power Query will use to source the data.
If the View Native Query option isn't enabled (greyed out), this is evidence that not all
query steps can be folded. However, it could mean that a subset of steps can still be
folded. Working backwards from the last step, you can check each step to see if the
View Native Query option is enabled. If so, then you've learned where in the sequence
of steps query folding could no longer be achieved.
Next steps
For more information about query folding and related articles, check out the following
resources:

Overview of query evaluation and query folding in Power Query


Query folding guidance in Power BI Desktop
Query folding examples
Query folding indicators
Query plan for Power Query
Query folding on native queries
Query folding examples
Article • 08/31/2022

This article provides some example scenarios for each of the three possible outcomes
for query folding. It also includes some suggestions on how to get the most out of the
query folding mechanism, and the effect that it can have in your queries.

The scenario
Imagine a scenario where, using the Wide World Importers database for Azure Synapse
Analytics SQL database, you're tasked with creating a query in Power Query that
connects to the fact_Sale table and retrieves the last 10 sales with only the following
fields:

Sale Key
Customer Key
Invoice Date Key
Description
Quantity

7 Note

For demonstration purposes, this article uses the database outlined on the tutorial
on loading the Wide World Importers database into Azure Synapse Analytics. The
main difference in this article is the fact_Sale table only holds data for the year
2000, with a total of 3,644,356 rows.

While the results might not exactly match the results that you get by following the
tutorial from the Azure Synapse Analytics documentation, the goal of this article is
to showcase the core concepts and impact that query folding can have in your
queries.
This article showcases three ways to achieve the same output with different levels of
query folding:

No query folding
Partial query folding
Full query folding

No query folding example

) Important

Queries that rely solely on unstructured data sources or that don't have a compute
engine, such as CSV or Excel files, don't have query folding capabilities. This means
that Power Query evaluates all the required data transformations using the Power
Query engine.

After connecting to your database and navigating to the fact_Sale table, you select the
Keep bottom rows transform found inside the Reduce rows group of the Home tab.

After selecting this transform, a new dialog appears. In this new dialog, you can enter
the number of rows that you'd like to keep. For this case, enter the value 10, and then
select OK.
 Tip

For this case, performing this operation yields the result of the last ten sales. In
most scenarios, we recommend that you provide a more explicit logic that defines
which rows are considered last by applying a sort operation on the table.

Next, select the Choose columns transform found inside the Manage columns group of
the Home tab. You can then select the columns you want to keep from your table and
remove the rest.

Lastly, inside the Choose columns dialog, select the Sale Key , Customer Key , Invoice
Date Key , Description , and Quantity columns, and then select OK.
The following code sample is the full M script for the query you created:

Power Query M

let
Source = Sql.Database(ServerName, DatabaseName),
Navigation = Source{[Schema = "wwi", Item = "fact_Sale"]}[Data],
#"Kept bottom rows" = Table.LastN(Navigation, 10),
#"Choose columns" = Table.SelectColumns(#"Kept bottom rows", {"Sale Key",
"Customer Key", "Invoice Date Key", "Description", "Quantity"})
in
#"Choose columns""

No query folding: Understanding the query evaluation


Under Applied steps in the Power Query editor, you’ll notice that the query folding
indicators for Kept bottom rows and Choose columns are marked as steps that will be
evaluated outside the data source or, in other words, by the Power Query engine.
You can right-click the last step of your query, the one named Choose columns, and
select the option that reads View Query plan. The goal of the query plan is to provide
you with a detailed view of how your query is run. To learn more about this feature, go
to Query plan.

Each box in the previous image is called a node. A node represents the operation
breakdown to fulfill this query. Nodes that represent data sources, such as SQL Server in
the example above and the Value.NativeQuery node, represent which part of the query
is offloaded to the data source. The rest of the nodes, in this case Table.LastN and
Table.SelectColumns highlighted in the rectangle in the previous image, are evaluated
by the Power Query engine. These two nodes represent the two transforms that you
added, Kept bottom rows and Choose columns. The rest of the nodes represent
operations that happen at your data source level.

To see the exact request that is sent to your data source, select View details in the
Value.NativeQuery node.
This data source request is in the native language of your data source. For this case, that
language is SQL and this statement represents a request for all the rows and fields from
the fact_Sale table.

Consulting this data source request can help you better understand the story that the
query plan tries to convey:

Sql.Database : This node represents the data source access. Connects to the
database and sends metadata requests to understand its capabilities.
Value.NativeQuery : Represents the request that was generated by Power Query to
fulfill the query. Power Query submits the data requests in a native SQL statement
to the data source. In this case, that represents all records and fields (columns)
from the fact_Sale table. For this scenario, this is undesirable, as the table
contains millions of rows and the interest is only in the last 10.
Table.LastN : Once Power Query receives all records from the fact_Sale table, it
uses the Power Query engine to filter the table and keep only the last 10 rows.
Table.SelectColumns : Power Query will use the output of the Table.LastN node
and apply a new transform called Table.SelectColumns , which selects the specific
columns that you want to keep from a table.

For its evaluation, this query had to download all rows and fields from the fact_Sale
table. This query took an average of 6 minutes and 1 second to be processed in a
standard instance of Power BI dataflows (which accounts for the evaluation and loading
of data to dataflows).

Partial query folding example


After connecting to the database and navigating to the fact_Sale table, you start by
selecting the columns that you want to keep from your table. Select the Choose
columns transform found inside the Manage columns group from the Home tab. This
transform helps you to explicitly select the columns that you want to keep from your
table and remove the rest.

Inside the Choose columns dialog, select the Sale Key , Customer Key , Invoice Date
Key , Description , and Quantity columns and then select OK.
You now create logic that will sort the table to have the last sales at the bottom of the
table. Select the Sale Key column, which is the primary key and incremental sequence
or index of the table. Sort the table using only this field in ascending order from the
context menu for the column.

Next, select the table contextual menu and choose the Keep bottom rows transform.
In Keep bottom rows, enter the value 10, and then select OK.

The following code sample is the full M script for the query you created:

Power Query M

let
Source = Sql.Database(ServerName, DatabaseName),
Navigation = Source{[Schema = "wwi", Item = "fact_Sale"]}[Data],
#"Choose columns" = Table.SelectColumns(Navigation, {"Sale Key", "Customer
Key", "Invoice Date Key", "Description", "Quantity"}),
#"Sorted rows" = Table.Sort(#"Choose columns", {{"Sale Key",
Order.Ascending}}),
#"Kept bottom rows" = Table.LastN(#"Sorted rows", 10)
in
#"Kept bottom rows"

Partial query folding example: Understanding the query evaluation

Checking the applied steps pane, you notice that the query folding indicators are
showing that the last transform that you added, Kept bottom rows , is marked as a step
that will be evaluated outside the data source or, in other words, by the Power Query
engine.

You can right-click the last step of your query, the one named Kept bottom rows , and
select the Query plan option to better understand how your query might be evaluated.
Each box in the previous image is called a node. A node represents every process that
needs to happen (from left to right) in order for your query to be evaluated. Some of
these nodes can be evaluated at your data source while others, like the node for
Table.LastN, represented by the Kept bottom rows step, are evaluated using the Power Query engine.

To see the exact request that is sent to your data source, select View details in the
Value.NativeQuery node.
This request is in the native language of your data source. For this case, that language is
SQL and this statement represents a request for all the rows, with only the requested
fields from the fact_Sale table ordered by the Sale Key field.

Consulting this data source request can help you better understand the story that the
full query plan tries to convey. The order of the nodes is a sequential process that starts
by requesting the data from your data source:

Sql.Database : Connects to the database and sends metadata requests to understand its capabilities.
Value.NativeQuery : Represents the request that was generated by Power Query to
fulfill the query. Power Query submits the data requests in a native SQL statement
to the data source. For this case, that represents all records, with only the
requested fields from the fact_Sale table in the database sorted in ascending
order by the Sale Key field.
Table.LastN : Once Power Query receives all records from the fact_Sale table, it
uses the Power Query engine to filter the table and keep only the last 10 rows.

For its evaluation, this query had to download all rows and only the required fields from
the fact_Sale table. It took an average of 3 minutes and 4 seconds to be processed in a
standard instance of Power BI dataflows (which accounts for the evaluation and loading
of data to dataflows).
Full query folding example
After connecting to the database and navigating to the fact_Sale table, start by
selecting the columns that you want to keep from your table. Select the Choose
columns transform found inside the Manage columns group from the Home tab. This
transform helps you to explicitly select the columns that you want to keep from your
table and remove the rest.

In Choose columns, select the Sale Key , Customer Key , Invoice Date Key , Description ,
and Quantity columns, and then select OK.
You now create logic that will sort the table to have the last sales at the top of the table.
Select the Sale Key column, which is the primary key and incremental sequence or
index of the table. Sort the table only using this field in descending order from the
context menu for the column.

Next, select the table contextual menu and choose the Keep top rows transform.
In Keep top rows, enter the value 10, and then select OK.

The following code sample is the full M script for the query you created:

Power Query M
let
Source = Sql.Database(ServerName, DatabaseName),
Navigation = Source{[Schema = "wwi", Item = "fact_Sale"]}[Data],
#"Choose columns" = Table.SelectColumns(Navigation, {"Sale Key", "Customer
Key", "Invoice Date Key", "Description", "Quantity"}),
#"Sorted rows" = Table.Sort(#"Choose columns", {{"Sale Key",
Order.Descending}}),
#"Kept top rows" = Table.FirstN(#"Sorted rows", 10)
in
#"Kept top rows"

Full query folding example: Understanding the query evaluation

When checking the applied steps pane, you'll notice that the query folding indicators
are showing that the transforms that you added, Choose columns, Sorted rows, and
Kept top rows, are marked as steps that will be evaluated at the data source.

You can right-click the last step of your query, the one named Kept top rows, and select
the option that reads Query plan. To see the exact request that is sent to your data
source, select View details in the Value.NativeQuery node.

This request is in the native language of your data source. For this case, that language is
SQL, and this statement represents a request for only the top 10 rows of the fact_Sale
table, with only the requested fields, sorted in descending order by the Sale Key field.

Consulting this data source query can help you better understand the story that the full
query plan tries to convey:

Sql.Database : Connects to the database and sends metadata requests to understand its capabilities.


Value.NativeQuery : Represents the request that was generated by Power Query to
fulfill the query. Power Query submits the data requests in a native SQL statement
to the data source. For this case, that represents a request for only the top 10
records of the fact_Sale table, with only the required fields after being sorted in
descending order using the Sale Key field.

7 Note

While there's no clause that can be used to SELECT the bottom rows of a table in
the T-SQL language, there's a TOP clause that retrieves the top rows of a table.
For its evaluation, this query only downloads 10 rows, with only the fields that you
requested from the fact_Sale table. This query took an average of 31 seconds to be
processed in a standard instance of Power BI dataflows (which accounts for the
evaluation and loading of data to dataflows).

Performance comparison
To better understand the effect that query folding has on these queries, you can refresh
your queries, record the time it takes to fully refresh each query, and compare them. For
simplicity, this article provides the average refresh timings captured using the Power BI
dataflows refresh mechanism while connecting to a dedicated Azure Synapse Analytics
environment with DW2000c as the service level.

The refresh time for each query was as follows:

Example Label Time in seconds

No query folding None 361

Partial query folding Partial 184

Full query folding Full 31

A query that fully folds back to the data source often outperforms similar queries that
don't completely fold back to the data source. There can be many reasons for this,
ranging from the complexity of the transforms that your query performs, to the query
optimizations implemented at your data source (such as indexes and dedicated compute),
to network resources. Still, there are two specific processes whose effect query folding
tries to minimize in Power Query:

Data in transit
Transforms executed by the Power Query engine

The following sections explain the effect that these two processes have on the previously
mentioned queries.

Data in transit
When a query gets executed, it tries to fetch the data from the data source as one of its
first steps. What data is fetched from the data source is defined by the query folding
mechanism. This mechanism identifies the steps from the query that can be offloaded to
the data source.

The following table lists the number of rows requested from the fact_Sale table of the
database. The table also includes a brief description of the SQL statement sent to
request such data from the data source.

Example                 Label    Rows requested  Description

No query folding        None     3,644,356       Request for all fields and all records from the fact_Sale table

Partial query folding   Partial  3,644,356       Request for all records, but only the required fields, from the fact_Sale table after it was sorted by the Sale Key field

Full query folding      Full     10              Request for only the required fields and the TOP 10 records of the fact_Sale table after being sorted in descending order by the Sale Key field
When requesting data from a data source, the data source needs to compute the results
for the request and then send the data to the requestor. The computing resources have
already been mentioned, but moving the data from the data source to Power Query over
the network, and then having Power Query receive the data and prepare it for the
transforms that happen locally, can also take considerable time depending on the size of
the data.

For the showcased examples, Power Query had to request over 3.6 million rows from the
data source for the no query folding and partial query folding examples. For the full
query folding example, it only requested 10 rows. For the fields requested, the no query
folding example requested all the available fields from the table. Both the partial query
folding and the full query folding examples only submitted a request for exactly the
fields that they needed.

U Caution

We recommend that you implement incremental refresh solutions that leverage
query folding for queries or entities with large amounts of data. Different product
integrations of Power Query implement timeouts to terminate long running
queries. Some data sources also implement timeouts on long running sessions,
trying to execute expensive queries against their servers. More information: Using
incremental refresh with dataflows and Incremental refresh for datasets

Transforms executed by the Power Query engine


This article showcased how you can use the Query plan to better understand how your
query might be evaluated. Inside the query plan, you can see the exact nodes of the
transform operations that will be performed by the Power Query engine.

The following table showcases the nodes from the query plans of the previous queries
that would have been evaluated by the Power Query engine.

Example Label Power Query engine transform nodes

No query folding None Table.LastN , Table.SelectColumns

Partial query folding Partial Table.LastN

Full query folding Full —

For the examples showcased in this article, the full query folding example doesn't
require any transforms to happen inside the Power Query engine as the required output
table comes directly from the data source. In contrast, the other two queries required
some computation to happen at the Power Query engine. Because of the amount of
data that needs to be processed by these two queries, the process for these examples
takes more time than the full query folding example.

Transforms can be grouped into the following categories:

Type of operator   Description

Remote             Operators that are data source nodes. The evaluation of these operators occurs outside of Power Query.

Streaming          Operators that are pass-through operators. For example, Table.SelectRows with a simple filter can usually filter the results as they pass through the operator, and won't need to gather all rows before moving the data. Table.SelectColumns and Table.ReorderColumns are other examples of this sort of operator.

Full scan          Operators that need to gather all the rows before the data can move on to the next operator in the chain. For example, to sort data, Power Query needs to gather all the data. Other examples of full scan operators are Table.Group, Table.NestedJoin, and Table.Pivot.
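
To make the distinction more concrete, the following is a minimal, self-contained sketch that uses an in-memory table (not a folding data source) purely to contrast a streaming operator with a full scan operator; the table contents are made up for the example.

Power Query M

let
    // Small in-memory table, used only to illustrate operator behavior
    Source = #table(
        type table [#"Sale Key" = Int64.Type, Quantity = Int64.Type],
        {{1, 5}, {2, 0}, {3, 7}}),
    // Streaming: a simple filter can pass rows through as they arrive
    #"Filtered rows" = Table.SelectRows(Source, each [Quantity] > 0),
    // Full scan: sorting must gather every row before it can emit the first one
    #"Sorted rows" = Table.Sort(#"Filtered rows", {{"Sale Key", Order.Descending}})
in
    #"Sorted rows"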

 Tip

While not every transform is the same from a performance standpoint, in most
cases, having fewer transforms is better.

Considerations and suggestions


Follow the best practices when creating a new query, as stated in Best practices in
Power Query.
Use the query folding indicators to check which steps are preventing your query
from folding. Reorder them if necessary to increase folding.
Use the query plan to determine which transforms are happening at the Power
Query engine for a particular step. Consider modifying your existing query by re-
arranging your steps. Then check the query plan of the last step of your query
again and see if the query plan looks better than the previous one. For example,
the new query plan might have fewer nodes than the previous one, with most of the
nodes being "Streaming" nodes rather than "Full scan" nodes. For data sources that support folding,
any nodes in the query plan other than Value.NativeQuery and data source access
nodes represent transforms that didn’t fold.
When available, you can use the View Native Query (or View data source query)
option to ensure that your query can be folded back to the data source. If this
option is disabled for your step, and you're using a source that normally enables it,
you've created a step that stops query folding. If you're using a source that doesn't
support this option, you can rely on the query folding indicators and query plan.
Use the query diagnostics tools to better understand the requests being sent to
your data source when query folding capabilities are available for the connector.
When combining data from multiple connectors, Power Query tries to push as
much work as possible to each of the data sources while complying with the
privacy levels defined for each data source.
Read the article on privacy levels to protect your queries from running into a
Data Privacy Firewall error.
Use other tools to check query folding from the perspective of the request being
received by the data source. Based on the example in this article, you can use the
Microsoft SQL Server Profiler to check the requests being sent by Power Query and
received by the Microsoft SQL Server.
If you add a new step to a fully folded query and the new step also folds, Power
Query might send a new request to the data source instead of using a cached
version of the previous result. In practice, this process can result in seemingly
simple operations on a small amount of data taking longer to refresh in the
preview than expected. This longer refresh is due to Power Query requerying the
data source rather than working off a local copy of the data.
Query folding indicators
Article • 02/17/2023

7 Note

Before reading this article, we recommended that you read Overview of query
evaluation and query folding in Power Query to better understand how folding
works in Power Query.

Query folding indicators help you understand the steps that fold or don't fold.

With query folding indicators, it becomes obvious when you make a change that breaks
folding. This feature helps you resolve issues more quickly, avoid performance
issues in the first place, and gain better insight into your queries. In most cases you run
into, steps will either fold or not fold. But there are many cases where the outcome isn't as
obvious, and these cases are discussed in Step diagnostics indicators (Dynamic, Opaque,
and Unknown).

7 Note

The query folding indicators feature is available only for Power Query Online.

Interpreting query folding diagnostics


When you examine the query folding indicator next to a step, the most important thing
to understand is that the diagnostic state isn't sequential. In other words, the indicator
for that step describes whether the query as a whole, up to that point, folds or not. If
you have an indicator that shows that the query doesn't fold, followed by an indicator
that shows it does fold, it means that your query up to that point does fold.

This interpretation works even with a simple query against a SQL source. For example,
using the AdventureWorks sample database, connect to the Production.Product table
and load data. Loading this sample through the Power Query navigator gives the
following query:

Power Query M

let
Source = Sql.Database("ServerName", "AdventureWorks"),
Navigation = Source{[Schema = "Production", Item = "Product"]}[Data]
in
Navigation

If you examine how this code shows up in query folding indicators, you'll note that the
first step is inconclusive. But the second step does fold, which means that the query up
to that point does fold.

In this example, the initial steps can't be confirmed to fold (they're inconclusive), but the final
step generated when you load data initially does fold. How the first steps (Source, and
sometimes other Navigation steps) are handled depends on the connector. With SQL,
for example, it's handled as a catalog table value, which doesn't fold. However, as soon
as you select data for that connector, it will fold.
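
For instance, extending the query above with a simple row filter typically keeps the whole query folding. This is only a sketch; it assumes the Production.Product table exposes a ListPrice column, as it does in the AdventureWorks sample.

Power Query M

let
    Source = Sql.Database("ServerName", "AdventureWorks"),
    Navigation = Source{[Schema = "Production", Item = "Product"]}[Data],
    // A simple filter against a relational source normally folds to a WHERE clause
    #"Filtered rows" = Table.SelectRows(Navigation, each [ListPrice] > 0)
in
    #"Filtered rows"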

Conversely, this can also mean that your query folds up to a point and then stops
folding. Unlike in the case where you have a folding indicator for the step that shows
that everything folds, when you have a not-folding indicator it doesn't mean that
everything doesn't fold. Instead, it means that "not everything" folds. Generally,
everything up to the last folding indicator will fold, with more operations happening
after.

Modifying the example from above, you can add a transform that never folds—
Capitalize Each Word.

Power Query M

let
Source = Sql.Database("ServerName", "AdventureWorks"),
Navigation = Source{[Schema = "Production", Item = "Product"]}[Data],
#"Capitalized each word" = Table.TransformColumns(Navigation, {{"Name",
each Text.Proper(_), type text}})
in
#"Capitalized each word"

In the query folding indicators, you have the same indicators as above, except the final
step doesn't fold. Everything up to this final step will be performed on the data source,
while the final step will be performed locally.
Step diagnostics indicators
Query folding indicators use an underlying query plan, and require it to be able to get
information about the query to report it. Currently the query plan only supports tables,
so some cases (lists, records, primitives) won't report as folding or not. Similarly,
constant tables report as opaque.

Indicator     Description

Folding       The folding indicator tells you that the query up to this step will be evaluated by the data source.

Not folding   The not-folding indicator tells you that some part of the query up to this step will be evaluated outside the data source. You can compare it with the last folding indicator, if there is one, to see if you can rearrange your query to be more performant.

Might fold    Might fold indicators are uncommon. They mean that a query "might" fold. They indicate either that folding or not folding will be determined at runtime, when pulling results from the query, and that the query plan is dynamic. These indicators will likely only appear with ODBC or OData connections.

Opaque        Opaque indicators tell you that the resulting query plan is inconclusive for some reason. It generally indicates that there's a true "constant" table, or that the transform or connector isn't supported by the indicators and query plan tool.

Unknown       Unknown indicators represent an absence of a query plan, either due to an error or from attempting to run the query plan evaluation on something other than a table (such as a record, list, or primitive).

Example analysis
For an example analysis, start by connecting to the Production.Product table in
Adventure Works (SQL). The initial load, similar to the example above, looks like the
following image.
Adding more steps that fold will extend that green line on the right side. This extension
occurs because this step also folds.

Adding a step that doesn't fold displays a different indicator. For example, Capitalize
each word never folds. The indicator changes, showing that as of this step, it's stopped
folding. As mentioned earlier, the previous steps will still fold.

Adding more steps downstream that depend on the Capitalize each word step will continue to
not fold.

However, if you remove the column you applied the capitalization to, so that the
optimized query plan can fold once more, you'll get a result like the following image.
Something like this is uncommon, but it illustrates that it's not just the order of steps
that matters, but also the actual transformations that are applied.
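
As a rough sketch of that last scenario, removing the transformed column might look like the following; the step name #"Removed columns" is simply what the editor would typically generate.

Power Query M

let
    Source = Sql.Database("ServerName", "AdventureWorks"),
    Navigation = Source{[Schema = "Production", Item = "Product"]}[Data],
    #"Capitalized each word" = Table.TransformColumns(Navigation, {{"Name", each Text.Proper(_), type text}}),
    // Removing the column that was transformed lets the optimized plan fold again
    #"Removed columns" = Table.RemoveColumns(#"Capitalized each word", {"Name"})
in
    #"Removed columns"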
Query plan for Power Query (Preview)
Article • 02/17/2023

Query plan for Power Query is a feature that provides a better view of your query's
evaluation. It's useful to help determine why a particular query might not fold at a
particular step.

Through a practical example, this article will demonstrate the main use case and
potential benefits of using the query plan feature to review your query steps. The
examples used in this article have been created using the AdventureWorksLT sample
database for Azure SQL Server, which you can download from AdventureWorks sample
databases.

7 Note

The query plan feature for Power Query is only available in Power Query Online.

This article has been divided into a series of recommended steps in order to interpret the
query plan. These steps are:

1. Review the query folding indicators.
2. Select the query step to review its query plan.
3. Implement changes to your query.

Use the following steps to create the query in your own Power Query Online
environment.

1. From Power Query - Choose data source, select Blank query.

2. Replace the blank query's script with the following query.

Power Query M
let
Source = Sql.Database("servername", "database"),
Navigation = Source{[Schema = "Sales", Item = "SalesOrderHeader"]}
[Data],
#"Removed other columns" = Table.SelectColumns(Navigation,
{"SalesOrderID", "OrderDate", "SalesOrderNumber",
"PurchaseOrderNumber", "AccountNumber", "CustomerID", "TotalDue"}),
#"Filtered rows" = Table.SelectRows(#"Removed other columns", each
[TotalDue] > 1000),
#"Kept bottom rows" = Table.LastN(#"Filtered rows", 5)
in
#"Kept bottom rows"

3. Change servername and database with the correct names for your own
environment.

4. (Optional) If you're trying to connect to a server and database for an on-premises environment, be sure to configure a gateway for that environment.

5. Select Next.

6. In the Power Query Editor, select Configure connection and provide the
credentials to your data source.

7 Note

For more information about connecting to a SQL Server, go to SQL Server database.

After following these steps, your query will look like the one in the following image.

This query connects to the SalesOrderHeader table, and selects a few columns from the
last five orders with a TotalDue value above 1000.
7 Note

This article uses a simplified example to showcase this feature, but the concepts
described in this article apply to all queries. We recommend that you have a good
knowledge of query folding before reading the query plan. To learn more about
query folding, go to Query folding basics.

1. Review the query folding indicators

7 Note

Before reading this section, we recommend that you review the article on Query
folding indicators.

Your first step in this process is to review your query and pay close attention to the
query folding indicators. The goal is to review the steps that are marked as not folded.
Then you can see if making changes to the overall query could make those
transformations fold completely.

For this example, the only step that can't be folded is Kept bottom rows, which is easy
to identify through the not folded step indicator. This step is also the last step of the
query.

The goal now is to review this step and understand what's being folded back to the data
source and what can't be folded.

2. Select the query step to review its query plan


You've identified the Kept bottom rows step as a step of interest since it doesn't fold
back to the data source. Right-click the step and select the View Query plan option. This
action displays a new dialog that contains a diagram for the query plan of the selected
step.

Power Query tries to optimize your query by taking advantage of lazy evaluation and
query folding, as mentioned in Query folding basics. This query plan represents the
optimized translation of your M query into the native query that's sent to the data
source. It also includes any transforms that are performed by the Power Query Engine.
The order in which the nodes appear follows the order of your query starting from the
last step or output of your query, which is represented on the far left of the diagram and
in this case is the Table.LastN node that represents the Kept bottom rows step.

At the bottom of the dialog, there's a bar with icons that help you zoom in or out of the
query plan view, and other buttons to help you manage the view. For the previous
image, the Fit to view option from this bar was used to better appreciate the nodes.

7 Note

The query plan represents the optimized plan. When the engine is evaluating a
query, it tries to fold all operators into a data source. In some cases, it might even
do some internal reordering of the steps to maximize folding. With this in mind, the
nodes/operators left in this optimized query plan typically contain the "folded" data
source query and any operators that couldn't be folded and are evaluated locally.

Identify folded nodes from other nodes


You can identify the nodes in this diagram as two groups:

Folded nodes: These nodes can be either Value.NativeQuery or "data source" nodes
such as Sql.Database. They can also be identified with the label remote under
their function name.
Non-folded nodes: Other table operators, such as Table.SelectRows,
Table.SelectColumns, and other functions that couldn't be folded. They can also
be identified with the labels Full scan and Streaming.

The following image shows the folded nodes inside the red rectangle. The rest of the
nodes couldn't be folded back to the data source. You'll need to review the rest of the
nodes since the goal is to attempt to have those nodes fold back to the data source.

You can select View details at the bottom of some nodes to display extended
information. For example, the details of the Value.NativeQuery node show the native
query (in SQL) that will be sent to the data source.


The query shown here might not be exactly the same query sent to the data source, but
it's a good approximation. For this case, it tells you exactly what columns will be queried
from the SalesOrderHeader table and then how it will filter that table using the TotalDue
field to only get rows where the value for that field is larger than 1000. The node next to
it, Table.LastN, is calculated locally by the Power Query engine, as it can't be folded.

7 Note

The operators might not exactly match the functions used in the query's script.

Review non-folded nodes and consider actions to make your transform fold

You've now determined which nodes couldn't be folded and will be evaluated locally.
This case only has the Table.LastN node, but in other scenarios it could have many
more.

The goal is to apply changes to your query so that the step can be folded. Some of the
changes you might implement could range from rearranging your steps to applying an
alternative logic to your query that's more explicit to the data source. This doesn't mean
that all queries and all operations are foldable by applying some changes. But it's a
good practice to determine through trial and error if your query could be folded back.

Since the data source is a SQL Server database, if the goal is to retrieve the last five
orders from the table, then a good alternative would be to take advantage of the TOP
and ORDER BY clauses in SQL. Since there's no BOTTOM clause in SQL, the Table.LastN
transform in Power Query can't be translated into SQL. You could remove the
Table.LastN step and replace it with:

A sort descending step by the SalesOrderID column in the table, since this column
determines which order goes first and which has been entered last.
A keep top five rows step. Since the table has been sorted, this transform
accomplishes the same result as a Kept bottom rows ( Table.LastN ) step.

This alternative is equivalent to the original query. While it seems good in theory, you
need to make the changes to see whether this alternative will make this node fully
fold back to the data source.

3. Implement changes to your query


Implement the alternative discussed in the previous section:

1. Close the query plan dialog and go back to the Power Query Editor.

2. Remove the Kept bottom rows step.

3. Sort the SalesOrderID column in descending order.

4. Select the table icon on the top-left corner of the data preview view and select the
option that reads Keep top rows. In the dialog, enter the number five as the
argument, and then select OK.

After implementing the changes, check the query folding indicators again and see if it's
giving you a folded indicator.
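
For reference, the full script might now look like the following sketch. The step names are the ones the editor typically generates and might differ slightly in your environment; servername and database remain the placeholders used earlier.

Power Query M

let
    Source = Sql.Database("servername", "database"),
    Navigation = Source{[Schema = "Sales", Item = "SalesOrderHeader"]}[Data],
    #"Removed other columns" = Table.SelectColumns(Navigation, {"SalesOrderID", "OrderDate", "SalesOrderNumber", "PurchaseOrderNumber", "AccountNumber", "CustomerID", "TotalDue"}),
    #"Filtered rows" = Table.SelectRows(#"Removed other columns", each [TotalDue] > 1000),
    // Sort descending so the most recent orders come first
    #"Sorted rows" = Table.Sort(#"Filtered rows", {{"SalesOrderID", Order.Descending}}),
    // Keep top rows folds to a TOP clause, unlike Table.LastN
    #"Kept top rows" = Table.FirstN(#"Sorted rows", 5)
in
    #"Kept top rows"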


Now it's time to review the query plan of the last step, which is now Keep top rows.
Now there are only folded nodes. Select View details under Value.NativeQuery to verify
which query is being sent to the database.

While this article is suggesting what alternative to apply, the main goal is for you to
learn how to use the query plan to investigate query folding. This article also provides
visibility of what's being sent to your data source and what transforms will be done
locally.

You can adjust your code to see the impact that it has in your query. By using the query
folding indicators, you'll also have a better idea of which steps are preventing your
query from folding.
Query folding on native queries
Article • 02/17/2023

In Power Query, you're able to define a native query and run it against your data source.
The Import data from a database using native database query article explains how to do
this process with multiple data sources. But, by using the process described in that
article, your query won't take advantage of any query folding from subsequent query
steps.

This article showcases an alternative method to create native queries against your data
source using the Value.NativeQuery function and keep the query folding mechanism
active for subsequent steps of your query.

7 Note

We recommend that you read the documentation on query folding and the query
folding indicators to better understand the concepts used throughout this article.

Supported data connectors


The method described in the next sections applies to the following data connectors:

Amazon Redshift
Dataverse (when using enhanced compute)
Google BigQuery
PostgreSQL
SAP HANA
Snowflake
SQL Server

Connect to target from data source

7 Note

To showcase this process, this article uses the SQL Server connector and the
AdventureWorks2019 sample database. The experience may vary from connector
to connector, but this article showcases the fundamentals on how to enable query
folding capabilities over native queries for the supported connectors.
When connecting to the data source, it's important that you connect to the node or
level where you want to execute your native query. For the example in this article, that
node will be the database level inside the server.

After defining the connection settings and supplying the credentials for your
connection, you'll be taken to the navigation dialog for your data source. In that dialog,
you'll see all the available objects that you can connect to.

From this list, you need to select the object where the native query is run (also known as
the target). For this example, that object is the database level.

At the navigator window in Power Query, right-click the database node in the navigator
window and select the Transform Data option. Selecting this option creates a new query
of the overall view of your database, which is the target you need to run your native
query.
Once your query lands in the Power Query editor, only the Source step should show in
the Applied steps pane. This step contains a table with all the available objects in your
database, similar to how they were displayed in the Navigator window.

Use Value.NativeQuery function


The goal of this process is to execute the following SQL code, and to apply more
transformations with Power Query that can be folded back to the source.

SQL

SELECT DepartmentID, Name FROM HumanResources.Department WHERE GroupName = 'Research and Development'

The first step was to define the correct target, which in this case is the database where
the SQL code will be run. Once a step has the correct target, you can select that step—in
this case, Source in Applied Steps—and then select the fx button in the formula bar to
add a custom step. In this example, replace the Source formula with the following
formula:

Power Query M

Value.NativeQuery(Source, "SELECT DepartmentID, Name FROM


HumanResources.Department WHERE GroupName = 'Research and Development' ",
null, [EnableFolding = true])

The most important component of this formula is the use of the optional record for the
fourth parameter of the function, which has the EnableFolding record field set to true.

7 Note

You can read more about the Value.NativeQuery function from the official
documentation article.

After you enter the formula, a warning is shown that requires you to enable native
queries to run for your specific step. Select Continue for this step to be evaluated.

This SQL statement yields a table with only three rows and two columns.
Test query folding
To test the query folding of your query, you can try to apply a filter to any of your
columns and see if the query folding indicator in the applied steps section shows the
step as folded. For this case, you can filter the DepartmentID column to have values that
are not equal to two.

After adding this filter, you can check that the query folding indicators still show the
query folding happening at this new step.
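
For reference, the full query might now look like the following sketch. The server name is a placeholder, and the step names are the ones the editor typically generates:

Power Query M

let
    Source = Sql.Database("ServerName", "AdventureWorks2019"),
    // Native query with folding enabled for subsequent steps
    #"Run native query" = Value.NativeQuery(
        Source,
        "SELECT DepartmentID, Name FROM HumanResources.Department WHERE GroupName = 'Research and Development'",
        null,
        [EnableFolding = true]),
    // This filter folds into the outer SELECT that wraps the native query
    #"Filtered rows" = Table.SelectRows(#"Run native query", each [DepartmentID] <> 2)
in
    #"Filtered rows"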
To further validate what query is being sent to the data source, you can right-click the
Filtered rows step and select the option that reads View query plan to check the query
plan for that step.

In the query plan view, there's a node named Value.NativeQuery at the left side of the
screen that has a hyperlink that reads View details. You can select this hyperlink to view
the exact query that is being sent to the SQL Server database.

The native query is wrapped in another SELECT statement to create a subquery of
the original. Power Query will do its best to create the most optimal query given the
transforms used and the native query provided.
 Tip

For scenarios where you get errors because query folding wasn't possible, it is
recommended that you try validating your steps as a subquery of your original
native query to check if there might be any syntax or context conflicts.
Using the data profiling tools
Article • 08/14/2023

The data profiling tools provide new and intuitive ways to clean, transform, and
understand data in Power Query Editor. They include:

Column quality

Column distribution

Column profile

To enable the data profiling tools, go to the View tab on the ribbon. In Power Query
Desktop, enable the options you want in the Data preview group, as shown in the
following image.

In Power Query Online, select Data view, then enable the options you want in the drop-
down list.
After you enable the options, you'll see something like the following image in Power
Query Editor.
7 Note

By default, Power Query performs this data profiling over the first 1,000 rows of
your data. To have it operate over the entire dataset, select the Column profiling
based on top 1000 rows message in the lower-left corner of your editor window to
change column profiling to Column profiling based on entire dataset.

Column quality
The column quality feature labels values in rows in five categories:

Valid, shown in green.

Error, shown in red.

Empty, shown in dark grey.

Unknown, shown in dashed green. Indicates that when there are errors in a column, the
quality of the remaining data is unknown.

Unexpected error, shown in dashed red.

These indicators are displayed directly underneath the name of the column as part of a
small bar chart, as shown in the following image.
The number of records in each column quality category is also displayed as a
percentage.

By hovering over any of the columns, you are presented with the numerical distribution
of the quality of values throughout the column. Additionally, selecting the ellipsis button
(...) opens some quick action buttons for operations on the values.

Column distribution
This feature provides a set of visuals underneath the names of the columns that
showcase the frequency and distribution of the values in each of the columns. The data
in these visualizations is sorted in descending order from the value with the highest
frequency.

By hovering over the distribution data in any of the columns, you get information about
the overall data in the column (with distinct count and unique values). You can also
select the ellipsis button and choose from a menu of available operations.
Column profile
This feature provides a more in-depth look at the data in a column. Apart from the
column distribution chart, it contains a column statistics chart. This information is
displayed underneath the data preview section, as shown in the following image.
Filter by value
You can interact with the value distribution chart on the right side and select any of the
bars by hovering over the parts of the chart.

Right-click to display a set of available transformations for that value.

Copy data
In the upper-right corner of both the column statistics and value distribution sections,
you can select the ellipsis button (...) to display a Copy shortcut menu. Select it to copy
the data displayed in either section to the clipboard.

Group by value
When you select the ellipsis button (...) in the upper-right corner of the value
distribution chart, in addition to Copy you can select Group by. This feature groups the
values in your chart by a set of available options.

The image below shows a column of product names that have been grouped by text
length. After the values have been grouped in the chart, you can interact with individual
values in the chart as described in Filter by value.
Using the Queries pane
Article • 02/17/2023

In Power Query, you'll be creating many different queries. Whether it be from getting
data from many tables or from duplicating the original query, the number of queries will
increase.

You'll be using the Queries pane to navigate through the queries.

Navigating with the Queries pane


The most basic usage of the Queries pane is to navigate to all of the queries. The
navigation is similar to a file explorer. To switch between the queries, just select the
query you want to go to.

7 Note

Some actions in the Power Query Online editor may be different than actions in the
Power Query Desktop editor. These differences will be noted in this article.

Basic actions in the Query pane


Similar to features throughout the ribbon and the editor, the context menu of a query
lets you make transformations directly onto the query.
To reach these actions, open the context menu (the right-click menu) in the Query pane.

Differences between online and desktop:

Power Query Online

Power Query Desktop


To be more comprehensive, we'll be touching on all of the context menu actions that are
relevant for either.

Rename a query
To directly change the name of the query, double-click the name of the query. This
action lets you immediately change the name.

Other options to rename the query are:

Go to the context menu and select Rename.

Go to Query Settings and enter in a different name in the Name input field.

Delete a query
To delete a query, open the context menu on the query and select Delete. There will be
an additional pop-up confirming the deletion. To complete the deletion, select the
Delete button.

Duplicating a query
Duplicating a query will create a copy of the query you're selecting.

To duplicate your query, open the context menu on the query and select Duplicate. A
new duplicate query will pop up on the side of the query pane.
Referencing a query
Referencing a query will create a new query. The new query uses the steps of a previous
query without having to duplicate the query. Additionally, any changes on the original
query will transfer down to the referenced query.

To reference your query, open the context menu on the query and select Reference. A
new referenced query will pop up on the side of the query pane.
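
Behind the scenes, a referenced query is just a new query whose only step points at the original query by name. A minimal sketch, assuming an original query named Sales (the name is hypothetical):

Power Query M

let
    Source = Sales
in
    Source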

Copy and paste


Copy and paste can be used when you have a copied query to place in the Power Query
editor.

7 Note

To learn more about how to copy and paste queries in Power Query, go to Sharing
a query.

Context pane options in the Queries pane


There are some additional context pane options in the Queries pane that you can use.
These options are New query, Merge queries, New parameter, and New group.
To reach these options, open the context menu (the right-click menu) in the Queries
pane.

Differences between online and desktop:

Power Query Online

Power Query Desktop

For the sake of being more comprehensive, we'll once again describe all of the context
menu actions that are relevant for either.

New query
You can import data into the Power Query editor as an option from the context menu.

This option functions the same as the Get Data feature.


7 Note

To learn about how to get data into Power Query, go to Getting data

Merge queries
When you select the Merge queries option from the context menu, the Merge queries
input screen opens.

This option functions the same as the Merge queries feature located on the ribbon and
in other areas of the editor.

7 Note

To learn more about how to use the Merge queries feature, go to Merge queries
overview.

New parameter
When you select the New parameter option from the context menu, the New
parameter input screen opens.

This option functions the same as the New parameter feature located on the ribbon.

7 Note

To learn more about Parameters in Power Query, go to Using parameters.


New group
You can make folders and move the queries into and out of the folders for
organizational purposes. These folders are called groups.

To move the query into a group, open the context menu on the specific query.

In the menu, select Move to group.

Then, select the group you want to put the query in.

The move will look like the following image. Using the same steps as above, you can
also move the query out of the group by selecting Queries (root) or another group.
In desktop versions of Power Query, you can also drag and drop the queries into the
folders.
Diagram view
Article • 02/17/2023

Diagram view offers a visual way to prepare data in the Power Query editor. With this
interface, you can easily create queries and visualize the data preparation process.
Diagram view simplifies the experience of getting started with data wrangling. It speeds
up the data preparation process and helps you quickly understand the dataflow, both
the "big picture view" of how queries are related and the "detailed view" of the specific
data preparation steps in a query.

This article provides an overview of the capabilities provided by diagram view.

This feature is enabled by selecting Diagram view in the View tab on the ribbon. With
diagram view enabled, the steps pane and queries pane will be collapsed.

7 Note

Currently, diagram view is only available in Power Query Online.


Authoring queries using diagram view
Diagram view provides you with a visual interface to create, view, or modify your
queries. In diagram view, you can connect to many different types of data sources using
the get data experience.

Diagram view is also connected to the Data Preview and the ribbon so that you can
select columns in the Data Preview.

You can add a new step within a query, after the currently selected step, by selecting the
+ button, and then either search for the transform or choose the item from the shortcut
menu. These are the same transforms you'll find in the Power Query editor ribbon.

By searching and selecting the transform from the shortcut menu, the step gets added
to the query, as shown in the following image.

7 Note

To learn more about how to author queries in the Query editor using the Power
Query editor ribbon or data preview, go to Power Query Quickstart.
Query level actions
You can perform two quick actions on a query—expand/collapse a query and highlight
related queries. These quick actions show up on an active selected query or when
hovering over a query.

You can perform more query level actions such as duplicate, reference, and so on, by
selecting the query level context menu (the three vertical dots). You can also right-click
in the query and get to the same context menu.

Expand or collapse query


To expand or collapse a query, right-click in the query and select Expand/Collapse from
the query's context menu. You can also double-click in the query to expand or collapse a
query.
Highlight related queries
To view all the related queries for a given query, right-click in a query and select
Highlight related queries. You can also select the highlight related queries button on
the top-right of a query.
For example, if you select the highlight related queries button in the Top US Customers
query, you can see that the Customers and Orders queries are highlighted.

Delete query
To delete a query, right-click in a query and select Delete from the context menu. There
will be an additional pop-up to confirm the deletion.
Rename query
To rename a query, right-click in a query and select Rename from the context menu.
Enable load
To ensure that the results provided by the query are available for downstream use such
as report building, by default Enable load is set to true. In case you need to disable load
for a given query, right-click in a query and select Enable load. The queries where
Enable load is set to false will be displayed with a grey outline.
Duplicate
To create a copy of a given query, right-click in the query and select Duplicate. A new
duplicate query will appear in the diagram view.
Reference
Referencing a query will create a new query. The new query will use the steps of the
previous query without having to duplicate the query. Additionally, any changes on the
original query will transfer down to the referenced query. To reference a query, right-
click in the query and select Reference.
Move to group
You can make folders and move the queries into these folders for organizational
purposes. These folders are called groups. To move a given query to a Query group,
right-click in a query and select Move to group. You can choose to move the queries to
an existing group or create a new query group.
You can view the query groups above the query box in the diagram view.

Create function
When you need to apply the same set of transformations in different queries or values,
creating custom Power Query functions can be valuable. To learn more about custom
functions, go to Using custom functions. To convert a query into a reusable function,
right-click in a given query and select Create function.
Convert to parameter
A parameter provides the flexibility to dynamically change the output of your queries
depending on their value and promotes reusability. To convert a non-structured value
such as date, text, number, and so on, right-click in the query and select Convert to
Parameter.
7 Note

To learn more about parameters, go to Power Query parameters.

Advanced editor
With the advanced editor, you can see the code that Power Query editor is creating with
each step. To view the code for a given query, right-click in the query and select
Advanced editor.
7 Note

To learn more about the code used in the advanced editor, go to Power Query M
language specification.

Edit query name and description


To edit the name of a query or add a description, right-click in a query and select
Properties.
This action will open a dialog box where you can edit the name of the query or add to
or modify the query description.
Queries with query description will have an affordance (i icon). You can view the query
description by hovering near the query name.

Append queries/Append queries as new


To append or perform a UNION of queries, right-click in a query and select Append
queries. This action will display the Append dialog box where you can add more tables
to the current query. Append queries as new will also display the Append dialog box,
but will allow you to append multiple tables into a new query.

7 Note

To learn more about how to append queries in Power Query, go to Append


queries.

Merge queries/Merge queries as new


To merge or JOIN queries, right-click in a query and select Merge queries. This action
will display the Merge dialog box, with the selected query as the left table of the merge
operation. Merge queries as new will also display the Merge dialog box but will allow
you to merge two tables into a new query.

7 Note

To learn more about how to merge queries in Power Query, go to Merge queries
overview.

Step level actions


By right-clicking a step, you can perform step level actions such as Edit settings, Rename,
and so on.
You can also perform step level actions by hovering over the step and selecting the
ellipsis (three vertical dots).

Edit settings
To edit the step level settings, right-click the step and choose Edit settings. Alternatively,
you can double-click the step (that has step settings) and go directly to the settings dialog
box. In the settings dialog box, you can view or change the step level settings. For
example, the following image shows the settings dialog box for the Split column step.
Rename step
To rename a step, right-click the step and select Rename. This action opens the Step
properties dialog. Enter the name you want, and then select OK.

Delete step
To delete a step, right-click the step and select Delete. To delete a series of steps until
the end, right-click the step and select Delete until end.

Move before/Move after


To move a step one position before, right-click a step and select Move before. To move
a step one position after, right-click a step and select Move after.
Extract previous
To extract all previous steps into a new query, right-click the first step that you do not
want to include in the query and then select Extract previous.

Edit step name and description


To add step descriptions, right-click a step in a query and then choose Properties.
You can also get to the step level context menu by hovering over the step and selecting
the ellipsis (three vertical dots).

This action will open a dialog box where you can add the step description. This step
description will come in handy when you come back to the same query after a few days or
when you share your queries or dataflows with other users.
By hovering over each step, you can view a callout that shows the step label, step name,
and any step descriptions that were added.

By selecting each step, you can see the corresponding data preview for that step.

Expand and collapse queries


To ensure that you can view your queries in the diagram view, you can collapse the ones
that you aren't actively working on and expand the ones that you care about. Expand or
collapse queries by selecting the Expand/Collapse button on the top-right of a query.
Alternatively, double-clicking an expanded query will collapse the query and vice-versa.

You can also expand or collapse a query by selecting the query level actions from the
query's context menu.
To expand all or collapse all queries, select the Expand all/Collapse all button next to
the layout options in the diagram view pane.

You can also right-click any empty space in the diagram view pane and see a context
menu to expand all or collapse all queries.

In the collapsed mode, you can quickly look at the steps in the query by hovering over
the number of steps in the query. You can select these steps to navigate to that specific
step within the query.
Layout Options
The following layout options are available in the diagram view: zoom out, zoom in, mini-map, full screen, fit to view, and reset.

Zoom out/zoom in
With this option, you can adjust the zoom level and zoom out or zoom in to view all the
queries in the diagram view.

Mini-map
With this option, you can turn the diagram view mini-map on or off. More information:
Show mini-map

Full screen
With this option, you can view all the queries and their relationships through the Full
screen mode. The diagram view pane expands to full screen and the data preview pane,
queries pane, and steps pane remain collapsed.

Fit to view
With this option, you can adjust the zoom level so that all the queries and their
relationships can be fully viewed in the diagram view.

Reset
With this option, you can reset the zoom level back to 100% and also reset the pane to
the top-left corner.

View query relationships


To view all the related queries for a given query, select the Highlight related queries
button. For instance, by selecting the highlight related queries button in the Top US
Customers query, the Customers and Orders queries are highlighted, as shown in the
following image.
You can also select the dongle on the left of a given query to see the direct and indirect
referenced queries.

Similarly, you can select the right dongle to view direct and indirect dependent queries.

You can also hover on the link icon below a step to view a callout that shows the query
relationships.

Diagram view settings


There are two ways to modify diagram view settings. The first way is to select the lower
half of the Diagram View button inside the View tab in the ribbon.

The second way to modify diagram view settings is to right-click over a blank part of the
diagram view background.

Step labels and step names


We show step labels by default within the diagram view.

You can change diagram view settings to show step names to match the applied steps
within the query settings pane.

Auto-highlight related queries


By selecting Auto-highlight related queries within diagram view settings, related
queries are always highlighted so that you can better see the query dependencies visually.

Compact view
When you have queries with multiple steps, it can be challenging to scroll horizontally to
view all your steps within the viewport.

To address this, diagram view offers Compact view, which compresses the steps from
top to bottom instead of left to right. This view can be especially useful when you have
queries with multiple steps, so that you can see as many queries as possible within the
viewport.

To enable this view, go to the diagram view settings (in the View tab on the ribbon) and select Compact view.

Show mini-map
Once the number of queries begins to overflow the diagram view, you can use the scroll
bars at the bottom and right side of the diagram view to scroll through the queries. Another
way to scroll is to use the diagram view mini-map control. The mini-map
control lets you keep track of the overall dataflow "map" and quickly navigate, while
looking at a specific area of the map in the main diagram view area.

To open the mini-map, either select Show mini-map from the diagram view menu or
select the mini-map button in the layout options.
Click and hold the rectangle on the mini-map, then drag it to move
around in the diagram view.

Show animations
When the Show animations menu item is selected, the transitions of the sizes and
positions of the queries are animated. These transitions are easiest to see when collapsing
or expanding the queries or when changing the dependencies of existing queries. When
cleared, the transitions are immediate. Animations are turned on by default.

Maximize data preview


You may want to see more data within the data preview to understand and analyze the
data. To do so, expand the data preview so that you can see as much data as you need
within the data preview without leaving diagram view.

Expand or collapse all queries


By default, the queries within diagram view are collapsed. There are options to expand
or collapse each query with a single click.

You can also expand or collapse related queries from the query level context menu.

Multi-select queries
You can select multiple queries within the diagram view by holding down the Ctrl key and
clicking queries. Once you multi-select, right-clicking will show a context menu that
allows performing operations such as merge, append, move to group, expand/collapse
and more.

Inline rename
You can double-click the query name to rename the query.

Double-clicking the step name allows you to rename the step, provided the diagram
view setting is showing step names.

When step labels are displayed in diagram view, double-clicking the step label shows
the dialog box to rename the step name and provide a description.
Accessibility
Diagram view supports accessibility features such as keyboard navigation, high-contrast
mode, and screen reader support. The following table describes the keyboard shortcuts
that are available within diagram view. To learn more about keyboard shortcuts available
within Power Query Online, go to keyboard shortcuts in Power Query.

| Action | Keyboard shortcut |
| --- | --- |
| Expand selected query | Ctrl+Right arrow key |
| Collapse selected query | Ctrl+Left arrow key |
| Move focus from query level to step level | Alt+Down arrow key |
| Move focus from step level to query level | Esc |
| Expand all queries | Ctrl+Shift+Right arrow key |
| Collapse all queries | Ctrl+Shift+Left arrow key |
| Insert new step using the + button (after selected step) | Ctrl+Alt+N |
| Highlight related queries | Ctrl+Alt+R |
| Select all queries | Ctrl+A |
| Copy queries | Ctrl+C |
| Paste queries | Ctrl+V |


Using Schema view (Preview)
Article • 02/17/2023

Schema view is designed to optimize your flow when working on schema level
operations by putting your query's column information front and center. Schema view
provides contextual interactions to shape your data structure, and lower-latency
operations, because it only requires the column metadata to be computed and not the
complete data results.

This article walks you through schema view and the capabilities it offers.

7 Note

The Schema view feature is available only for Power Query Online.

Overview
When working on data sets with many columns, simple tasks can become incredibly
cumbersome because even finding the right column by horizontally scrolling and
parsing through all the data is inefficient. Schema view displays your column
information in a list that's easy to parse and interact with, making it easier than ever to
work on your schema.
In addition to an optimized column management experience, another key benefit of
schema view is that transforms tend to yield results faster. These results are faster
because this view only requires the column information to be computed instead of a
preview of the data. So even long-running queries with a few columns benefit from
using schema view.

You can turn on schema view by selecting Schema view in the View tab. When you're
ready to work on your data again, you can select Data view to go back.

Reordering columns
One common task when working on your schema is reordering columns. In schema
view, you can do this by dragging columns in the list and dropping them in the right
location until you achieve the desired column order.

Applying transforms
For more advanced changes to your schema, you can find the most used column-level
transforms right at your fingertips directly in the list and in the Schema tools tab. Plus,
you can also use transforms available in other tabs on the ribbon.
Share a query
Article • 12/17/2022

You can use Power Query to extract and transform data from external data sources.
These extraction and transformation steps are represented as queries. Queries created
with Power Query are expressed using the M language and executed through the M
Engine.

You can easily share and reuse your queries across projects, and also across Power
Query product integrations. This article covers the general mechanisms to share a query
in Power Query.

Copy / Paste
In the queries pane, right-click the query you want to copy. From the dropdown menu,
select the Copy option. The query and its definition will be added to your clipboard.

7 Note

The copy feature is currently not available in Power Query Online instances.
To paste the query from your clipboard, go to the queries pane and right-click on any
empty space in it. From the menu, select Paste.

When pasting this query on an instance that already has the same query name, the
pasted query will have a suffix added with the format (#) , where the pound sign is
replaced with a number to distinguish the pasted queries.

You can also paste queries between multiple instances and product integrations. For
example, you can copy the query from Power BI Desktop, as shown in the previous
images, and paste it in Power Query for Excel as shown in the following image.
2 Warning

Copying and pasting queries between product integrations doesn't guarantee that
all functions and functionality found in the pasted query will work on the
destination. Some functionality might only be available in the origin product
integration.

Copy the M code


You can also copy the full code of your query.
1. Select the query that you want to share.
2. In the ribbon, select the View tab and then select Advanced Editor.
3. In the Advanced Editor window, select all the code and copy it.

With the code of your query in your clipboard, you can share this query through the
means of your choice. The recipient of this code needs to create a blank query and
follow the same steps as described above. But instead of copying the code, the recipient
will replace the code found in their blank query with the code that you provided.

7 Note

To create a blank query, go to the Get Data window and select Blank query from
the options.

Product-specific query sharing capabilities


Some Power Query product integrations might offer more ways to share queries such as
but not limited to:

In Microsoft Excel—Creating an Office Data Connection (.odc) to share with other users.
In Power BI Desktop—Creating a Power BI Template (.pbit) to share with other users.
We recommend that you read the documentation of the product integration that you're
interested in to learn more about the query sharing capabilities found in those products.
Using custom functions
Article • 02/17/2023

If you find yourself in a situation where you need to apply the same set of
transformations to different queries or values, creating a Power Query custom function
that can be reused as many times as you need could be beneficial. A Power Query
custom function is a mapping from a set of input values to a single output value, and is
created from native M functions and operators.

While you can manually create your own Power Query custom function using code as
shown in Understanding Power Query M functions, the Power Query user interface
offers you features to speed up, simplify, and enhance the process of creating and
managing a custom function.

This article focuses on this experience, provided only through the Power Query user
interface, and how to get the most out of it.

) Important

This article outlines how to create a custom function with Power Query using
common transforms accessible in the Power Query user interface. It focuses on the
core concepts to create custom functions, and links to additional articles in Power
Query documentation for more information on specific transforms that are
referenced in this article.

Create a custom function from a table reference

7 Note

The following example was created using the desktop experience found in Power BI
Desktop and can also be followed using the Power Query experience found in Excel
for Windows.

You can follow along with this example by downloading the sample files used in this
article from the following download link . For simplicity, this article will be using the
Folder connector. To learn more about the Folder connector, go to Folder. The goal of
this example is to create a custom function that can be applied to all the files in that
folder before combining all of the data from all files into a single table.

Start by using the Folder connector experience to navigate to the folder where your files
are located and select Transform Data or Edit. This will take you to the Power Query
experience. Right-click on the Binary value of your choice from the Content field and
select the Add as New Query option. For this example, you'll see that the selection was
made for the first file from the list, which happens to be the file April 2019.csv.

This option will effectively create a new query with a navigation step directly to that file
as a Binary, and the name of this new query will be the file path of the selected file.
Rename this query to be Sample File.

Create a new parameter with the name File Parameter. Use the Sample File query as the
Current Value, as shown in the following image.
7 Note

We recommend that you read the article on Parameters to better understand how
to create and manage parameters in Power Query.

Custom functions can be created using any parameter type. There's no requirement for any custom function to have a binary as a parameter.

The binary parameter type is only displayed inside the Parameters dialog Type dropdown menu when you have a query that evaluates to a binary.

It's possible to create a custom function without a parameter. This is commonly seen in scenarios where an input can be inferred from the environment where the function is being invoked. For example, a function that takes the environment's current date and time, and creates a specific text string from those values.
Right-click File Parameter from the Queries pane. Select the Reference option.

Rename the newly created query from File Parameter (2) to Transform Sample file.

Right-click this new Transform Sample file query and select the Create Function option.
This operation will effectively create a new function that will be linked with the
Transform Sample file query. Any changes that you make to the Transform Sample file
query will be automatically replicated to your custom function. During the creation of
this new function, use Transform file as the Function name.

After creating the function, you'll notice that a new group will be created for you with
the name of your function. This new group will contain:

All parameters that were referenced in your Transform Sample file query.
Your Transform Sample file query, commonly known as the sample query.
Your newly created function, in this case Transform file.

Applying transformations to a sample query


With your new function created, select the query with the name Transform Sample file.
This query is now linked with the Transform file function, so any changes made to this
query will be reflected in the function. This is known as a sample query linked to a function.

The first transformation that needs to happen to this query is one that will interpret the
binary. You can right-click the binary from the preview pane and select the CSV option
to interpret the binary as a CSV file.
The format of all the CSV files in the folder is the same. They all have a header that
spans the first four rows. The column headers are located in row five and the data
starts from row six downwards, as shown in the next image.

The next set of transformation steps that need to be applied to the Transform Sample
file are:

1. Remove the top four rows—This action will get rid of the rows that are considered
part of the header section of the file.
7 Note

To learn more about how to remove rows or filter a table by row position, go
to Filter by row position.

2. Promote headers—The headers for your final table are now in the first row of the
table. You can promote them as shown in the next image.
By default, Power Query automatically adds a new Changed Type step after
promoting your column headers, which detects the data types for each
column. Your Transform Sample file query will look like the next image.
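
At this point, the M behind the Transform Sample file query looks roughly like the following sketch. The column names and types in the last step are illustrative only; yours will depend on the contents of your files.

Power Query M

// Interpret the binary as CSV, drop the four header rows, promote the
// new first row to column headers, and set data types.
let
    Source = Csv.Document(#"File Parameter"),
    RemovedTopRows = Table.Skip(Source, 4),
    PromotedHeaders = Table.PromoteHeaders(RemovedTopRows, [PromoteAllScalars = true]),
    // Illustrative column names and types; adjust to match your files.
    ChangedType = Table.TransformColumnTypes(PromotedHeaders, {{"Date", type date}, {"Country", type text}, {"Units", Int64.Type}})
in
    ChangedType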

7 Note

To learn more about how to promote and demote headers, go to Promote or


demote column headers.
U Caution

Your Transform file function relies on the steps performed in the Transform
Sample file query. However, if you try to manually modify the code for the
Transform file function, you'll be greeted with a warning that reads: The definition of the function 'Transform file' is updated whenever query 'Transform Sample file' is updated. However, updates will stop if you directly modify function 'Transform file'.

Invoke a custom function as a new column


With the custom function now created and all the transformation steps incorporated,
you can go back to the original query where you have the list of files from the folder.
Inside the Add Column tab in the ribbon, select Invoke Custom Function from the
General group. Inside the Invoke Custom Function window, enter Output Table as the
New column name. Select the name of your function, Transform file, from the Function
query dropdown. After selecting the function from the dropdown menu, the parameter
for the function will be displayed and you can select which column from the table to use
as the argument for this function. Select the Content column as the value / argument to
be passed for the File Parameter.
After you select OK, a new column with the name Output Table will be created. This
column has Table values in its cells, as shown in the next image. For simplicity, remove
all columns from this table except Name and Output Table.

7 Note
To learn more about how to choose or remove columns from a table, go to Choose
or remove columns.
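
The step that Invoke Custom Function adds is a Table.AddColumn call. A minimal sketch, assuming the previous step is named Source:

Power Query M

// Call the Transform file function once per row, passing the Content
// column (a binary) as the argument for File Parameter.
InvokedCustomFunction = Table.AddColumn(Source, "Output Table", each #"Transform file"([Content]))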

Your function was applied to every single row from the table using the values from the
Content column as the argument for your function. Now that the data has been
transformed into the shape that you're looking for, you can expand the Output Table
column, as shown in the image below, without using any prefix for the expanded
columns.

You can verify that you have data from all files in the folder by checking the values in the
Name or Date column. For this case, you can check the values from the Date column, as
each file only contains data for a single month from a given year. If you see more than
one month, it means that you've successfully combined data from multiple files into a single
table.

7 Note
What you've read so far is fundamentally the same process that happens during the
Combine files experience, but done manually.

We recommend that you also read the article on Combine files overview and
Combine CSV files to further understand how the combine files experience works
in Power Query and the role that custom functions play.

Add new parameter to existing custom function


Imagine that there's a new requirement on top of what you've built. It requires that,
before you combine the files, you filter the data inside them
to only get the rows where the Country field equals Panama.

To make this requirement happen, create a new parameter called Market with the text
data type. For the Current Value, enter the value Panama.
With this new parameter, select the Transform Sample file query and filter the Country
field using the value from the Market parameter.
7 Note

To learn more about how to filter columns by values, go to Filter values.

Applying this new step to your query will automatically update the Transform file
function, which will now require two parameters based on the two parameters that your
Transform Sample file uses.

But the CSV files query has a warning sign next to it. Now that your function has been
updated, it requires two parameters. So the step where you invoke the function results
in error values, since only one of the arguments was passed to the Transform file
function during the Invoked Custom Function step.

To fix the errors, double-click Invoked Custom Function in the Applied Steps to open
the Invoke Custom Function window. In the Market parameter, manually enter the
value Panama.
You can now check your query to validate that only rows where Country is equal to
Panama show up in the final result set of the CSV Files query.

Create a custom function from a reusable piece of logic
If you have multiple queries or values that require the same set of transformations, you
could create a custom function that acts as a reusable piece of logic. Later, this custom
function can be invoked against the queries or values of your choice. This custom
function could save you time and help you in managing your set of transformations in a
central location, which you can modify at any moment.

For example, imagine a query that has several codes as a text string and you want to
create a function that will decode those values, as in the following sample table:

code
PTY-CM1090-LAX
LAX-CM701-PTY
PTY-CM4441-MIA
MIA-UA1257-LAX
LAX-XY2842-MIA

You start by having a parameter that has a value that serves as an example. For this case,
it will be the value PTY-CM1090-LAX.
From that parameter, you create a new query where you apply the transformations that
you need. For this case, you want to split the code PTY-CM1090-LAX into multiple
components:

Origin = PTY
Destination = LAX
Airline = CM
FlightID = 1090
The M code for that set of transformations is shown below.

Power Query M

let
Source = code,
SplitValues = Text.Split( Source, "-"),
CreateRow = [Origin= SplitValues{0}, Destination= SplitValues{2},
Airline=Text.Start( SplitValues{1},2), FlightID= Text.End( SplitValues{1},
Text.Length( SplitValues{1} ) - 2) ],
RowToTable = Table.FromRecords( { CreateRow } ),
#"Changed Type" = Table.TransformColumnTypes(RowToTable,{{"Origin", type
text}, {"Destination", type text}, {"Airline", type text}, {"FlightID", type
text}})
in
#"Changed Type"

7 Note

To learn more about the Power Query M formula language, go to Power Query M
formula language.

You can then transform that query into a function by doing a right-click on the query
and selecting Create Function. Finally, you can invoke your custom function into any of
your queries or values, as shown in the next image.
After a few more transformations, you can see that you've reached your desired output
and leveraged the logic for such a transformation from a custom function.
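
After Create Function, the generated function has roughly the following shape: the code parameter becomes the function argument, and the body stays in sync with the sample query. The text type annotation shown here is illustrative.

Power Query M

// Sketch of the generated function. Invoking it with "PTY-CM1090-LAX"
// returns a one-row table with Origin, Destination, Airline, and FlightID.
(code as text) =>
let
    Source = code,
    SplitValues = Text.Split(Source, "-"),
    CreateRow = [Origin = SplitValues{0}, Destination = SplitValues{2}, Airline = Text.Start(SplitValues{1}, 2), FlightID = Text.End(SplitValues{1}, Text.Length(SplitValues{1}) - 2)],
    RowToTable = Table.FromRecords({CreateRow}),
    #"Changed Type" = Table.TransformColumnTypes(RowToTable, {{"Origin", type text}, {"Destination", type text}, {"Airline", type text}, {"FlightID", type text}})
in
    #"Changed Type"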


Promote or demote column headers
Article • 12/17/2022

When creating a new query from unstructured data sources such as text files, Power
Query analyzes the contents of the file. If Power Query identifies a different pattern for
the first row, it will try to promote the first row of data to be the column headings for
your table. However, Power Query might not identify the pattern correctly 100 percent
of the time, so this article explains how you can manually promote or demote column
headers from rows.

To promote rows to column headers


In the following example, Power Query wasn't able to determine the column headers for
the table because the table contains a set of header rows for the first three rows. The
actual column headers for the table are contained in row 5.

Before you can promote the headers, you need to remove the first four rows of the
table. To make that happen, select the table menu in the upper-left corner of the
preview window, and then select Remove top rows.
In the Remove top rows window, enter 4 in the Number of rows box.

7 Note

To learn more about Remove top rows and other table operations, go to Filter by
row position.

The result of that operation will leave the headers as the first row of your table.
Locations of the promote headers operation
From here, you have a number of places where you can select the promote headers
operation:

On the Home tab, in the Transform group.

On the Transform tab, in the Table group.

On the table menu.


After you do the promote headers operation, your table will look like the following
image.

7 Note

Table column names must be unique. If the row you want to promote to a header
row contains multiple instances of the same text string, Power Query will
disambiguate the column headings by adding a numeric suffix preceded by a dot
to every text string that isn't unique.
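
In M, the two steps described above correspond to the Table.Skip and Table.PromoteHeaders functions. A minimal sketch, assuming the previous step is named Source:

Power Query M

// Remove the first four rows, then promote the new first row to headers.
// Table.DemoteHeaders performs the reverse operation.
let
    RemovedTopRows = Table.Skip(Source, 4),
    PromotedHeaders = Table.PromoteHeaders(RemovedTopRows, [PromoteAllScalars = true])
in
    PromotedHeaders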
To demote column headers to rows
In the following example, the column headers are incorrect: they're actually part of the
table's data. You need to demote the headers to be part of the rows of the table.

Locations of the demote headers operation


You have a number of places where you can select the demote headers operation:

On the Home tab, in the Transform group.

On the Transform tab, in the Table group.

After you do this operation, your table will look like the following image.
As a last step, select each column and type a new name for it. The end result will
resemble the following image.

See also
Filter by row position
Filter a table by row position
Article • 12/17/2022

Power Query has multiple options to filter a table based on the positions of its rows,
either by keeping or removing those rows. This article covers all the available methods.

Keep rows
The keep rows set of functions will select a set of rows from the table and remove any
other rows that don't meet the criteria.

There are two places where you can find the Keep rows buttons:

On the Home tab, in the Reduce Rows group.

On the table menu.


7 Note

In the data preview section in the middle of the Power Query window, you can see
the position of your rows on the left side of the table. Each row position is
represented by a number. The top row starts with position 1.

Keep top rows


Imagine the following table that comes out of a system with a fixed layout.
This report always contains seven rows of data, and below the data it has a section for
comments with an unknown number of rows. In this example, you only want to keep the
first seven rows of data. To do that, select Keep top rows from the table menu. In the
Keep top rows dialog box, enter 7 in the Number of rows box.

The result of that change will give you the output table you're looking for. After you set
the data types for your columns, your table will look like the following image.
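
The Keep top rows command typically generates a Table.FirstN step. A minimal sketch, assuming the previous step is named Source:

Power Query M

// Keep only the first seven rows of the table.
// (Keep bottom rows generates Table.LastN(Source, n) in the same way.)
KeptFirstRows = Table.FirstN(Source, 7)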
Keep bottom rows
Imagine the following table that comes out of a system with a fixed layout.

This report always contains seven rows of data at the end of the report page. Above the
data, the report has a section for comments with an unknown number of rows. In this
example, you only want to keep those last seven rows of data and the header row.

To do that, select Keep bottom rows from the table menu. In the Keep bottom rows
dialog box, enter 8 in the Number of rows box.
The result of that operation will give you eight rows, but now your header row is part of
the table.

You need to promote the column headers from the first row of your table. To do this,
select Use first row as headers from the table menu. After you define data types for
your columns, you'll create a table that looks like the following image.

More information: Promote or demote column headers

Keep a range of rows


Imagine the following table that comes out of a system with a fixed layout.
This report always contains five rows for the header, one row of column headers below
the header, seven rows of data below the column headers, and then an unknown
number of rows for its comments section. In this example, you want to get the eight
rows after the header section of the report, and only those eight rows.

To do that, select Keep range of rows from the table menu. In the Keep range of rows
dialog box, enter 6 in the First row box and 8 in the Number of rows box.

Similar to the previous example for keeping bottom rows, the result of this operation
gives you eight rows with your column headers as part of the table. Any rows above the
First row that you defined (row 6) are removed.
You can perform the same operation as described in Keep bottom rows to promote the
column headers from the first row of your table. After you set data types for your
columns, your table will look like the following image.
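
The Keep range of rows command typically generates a Table.Range step. Note that the offset argument is zero-based, so a First row value of 6 becomes an offset of 5. A minimal sketch, assuming the previous step is named Source:

Power Query M

// Skip the first five rows, then keep the next eight rows.
KeptRange = Table.Range(Source, 5, 8)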

Remove rows
This set of functions will select a set of rows from the table, remove them, and keep the
rest of the rows in the table.

There are two places where you can find the Remove rows buttons:

On the Home tab, in the Reduce Rows group.

On the table menu.


Remove top rows
Imagine the following table that comes out of a system with a fixed layout.
This report always contains a fixed header from row 1 to row 5 of the table. In this
example, you want to remove these first five rows and keep the rest of the data.

To do that, select Remove top rows from the table menu. In the Remove top rows
dialog box, enter 5 in the Number of rows box.

In the same way as the previous examples for "Keep bottom rows" and "Keep a range of
rows," the result of this operation gives you eight rows with your column headers as part
of the table.

You can perform the same operation as described in previous examples to promote the
column headers from the first row of your table. After you set data types for your
columns, your table will look like the following image.
Remove bottom rows
Imagine the following table that comes out of a system with a fixed layout.

This report always contains a fixed section or footer that occupies the last five rows of
the table. In this example, you want to remove those last five rows and keep the rest of
the data.

To do that, select Remove bottom rows from the table menu. In the Remove bottom rows
dialog box, enter 5 in the Number of rows box.

The result of that change will give you the output table that you're looking for. After you
set data types for your columns, your table will look like the following image.
Remove alternate rows
Imagine the following table that comes out of a system with a dynamic layout.

The way this report is structured is that you have elements in pairs of rows. Every odd
row (1, 3, 5...) contains the data that you need. Every even row, directly underneath each
odd row, contains comments about each of those records. You don't need the
comments, and you want to remove all of them.

To do that, select Remove alternate rows from the table menu. In the Remove alternate
rows dialog box, enter the following values:

In the First row to remove box, enter 2.


You want to start counting from the second row. Any rows above this First row to
remove will be kept.

In the Number of rows to remove box, enter 1.

Here you start defining the pattern for removing rows. After you find the second
row, you only want to remove that specific row, so you specify that you only need
to remove one row.

In the Number of rows to keep box, enter 1.

After you remove one row, you keep the next row. The process starts again for the
next row.

The result of that selection will give you the output table that you're looking for. After
you set the data types to your columns, your table will look like the following image.
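
The Remove alternate rows command typically generates a Table.AlternateRows step. The offset is the number of initial rows to keep (First row to remove minus 1), followed by the skip and take counts. A minimal sketch, assuming the previous step is named Source:

Power Query M

// Keep the first row, then repeatedly remove one row and keep one row.
RemovedAlternateRows = Table.AlternateRows(Source, 1, 1, 1)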
Filter by values in a column
Article • 12/17/2022

In Power Query, you can include or exclude rows according to a specific value in a
column. You can choose from three methods to filter the values in your column:

Sort and filter menu
Cell shortcut menu
Type-specific filter

After you apply a filter to a column, a small filter icon appears in the column heading, as
shown in the following illustration.

Sort and filter menu


In the column header, you'll see an icon with an inverse triangle. When you select this
icon, the sort and filter menu is displayed. With this menu, you can apply or remove any
filters to or from your column.
7 Note

In this article, we'll focus on aspects related to filtering data. To learn more about
the sort options and how to sort columns in Power Query, go to Sort columns.

Remove empty
The Remove empty command applies two filter rules to your column. The first rule gets
rid of any null values. The second rule gets rid of any blank values. For example, imagine
a table with just one text column with five rows, where you have one null value and one
blank cell.
7 Note

A null value is a specific value in the Power Query language that represents no
value.

You then select Remove empty from the sort and filter menu, as shown in the following
image.

You can also select this option from the Home tab in the Reduce Rows group in the
Remove Rows drop-down options, as shown in the next image.
The result of the Remove empty operation gives you the same table without the empty
values.

Clear filter
When a filter is applied to a column, the Clear filter command appears on the sort and
filter menu.
Auto filter
The list in the sort and filter menu is called the auto filter list, which shows the unique
values in your column. You can manually select or deselect which values to include in the
list. Any selected values will be taken into consideration by the filter; any values that
aren't selected will be ignored.

This auto filter section also has a search bar to help you find any values from your list.
7 Note

When you load the auto filter list, only the top 1,000 distinct values in the column
are loaded. If there are more than 1,000 distinct values in the column that
you're filtering, a message will appear indicating that the list of values in the filter
list might be incomplete, and the Load more link appears. Select the Load more
link to load another 1,000 distinct values.

If exactly 1,000 distinct values are found again, the list is displayed with a
message stating that the list might still be incomplete.
If fewer than 1,000 distinct values are found, the full list of values is shown.

Cell shortcut menu


You can right-click a particular cell in a column to open the shortcut menu for that value.
Point to the small filter icon, and then select the filter option you want to use.
7 Note

Power Query displays a type-specific filter based on the data type of the column.

Type-specific filters
Depending on the data type of your column, you'll see different commands in the sort
and filter menu. The following images show examples for date, text, and numeric
columns.
Filter rows
When selecting any of the type-specific filters, you'll use the Filter rows dialog box to
specify filter rules for the column. This dialog box is shown in the following image.

The Filter rows dialog box has two modes: Basic and Advanced.

Basic

With basic mode, you can implement up to two filter rules based on type-specific filters.
In the preceding image, notice that the name of the selected column is displayed after
the label Keep rows where, to let you know which column these filter rules are being
implemented on.
For example, imagine that in the following table, you want to filter the Account Code by
all values that start with either PA or PTY.

To do that, you can go to the Filter rows dialog box for the Account Code column and
specify the set of filter rules you want.

In this example, first select the Basic button. Then under Keep rows where "Account
Code", select begins with, and then enter PA. Then select the or button. Under the or
button, select begins with, and then enter PTY. Then select OK.

The result of that operation will give you the set of rows that you're looking for.
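
Value filters like this one are expressed with the Table.SelectRows function. A minimal sketch of the filter above, assuming the previous step is named Source:

Power Query M

// Keep rows where Account Code begins with "PA" or "PTY".
FilteredRows = Table.SelectRows(Source, each Text.StartsWith([Account Code], "PA") or Text.StartsWith([Account Code], "PTY"))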

Advanced
With advanced mode, you can implement as many type-specific filters as necessary from
all the columns in the table.

For example, imagine that instead of applying the previous filter in basic mode, you
wanted to implement a filter to Account Code to show all values that end with 4. Also,
you want to show values over $100 in the Sales column.

In this example, first select the Advanced button. In the first row, select Account Code
under Column name, ends with under Operator, and select 4 for the Value. In the
second row, select and, and then select Sales under Column Name, is greater than
under Operator, and 100 under Value. Then select OK.

The result of that operation will give you just one row that meets both criteria.

7 Note

You can add as many clauses as you'd like by selecting Add clause. All clauses act
at the same level, so you might want to consider creating multiple filter steps if you
need to implement filters that rely on other filters.
Choose or remove columns
Article • 12/17/2022

Choose columns and Remove columns are operations that help you define what
columns your table needs to keep and which ones it needs to remove. This article will
showcase how to use the Choose columns and Remove columns commands by using
the following sample table for both operations.

The goal is to create a table that looks like the following image.

Choose columns
On the Home tab, in the Manage columns group, select Choose columns.

The Choose columns dialog box appears, containing all the available columns in your
table. You can select all the fields that you want to keep and remove specific fields by
clearing their associated check box. For this example, you want to remove the GUID and
Report created by columns, so you clear the check boxes for those fields.
After selecting OK, you'll create a table that only contains the Date, Product,
SalesPerson, and Units columns.
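
The Choose columns command typically generates a Table.SelectColumns step that lists the columns to keep; the Remove columns command generates Table.RemoveColumns instead. A minimal sketch, assuming the previous step is named Source:

Power Query M

// Keep only the listed columns; every other column is dropped.
SelectedColumns = Table.SelectColumns(Source, {"Date", "Product", "SalesPerson", "Units"})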

Remove columns
When you select Remove columns from the Home tab, you have two options:

Remove columns: Removes the selected columns.
Remove other columns: Removes all columns from the table except the selected ones.
Remove selected columns
Starting from the sample table, select the GUID and the Report created columns. Right-click
any of the selected column headings. A new shortcut menu appears, where you
can select the Remove columns command.

After selecting Remove columns, you'll create a table that only contains the Date,
Product, SalesPerson, and Units columns.

Remove other columns


Starting from the sample table, select all the columns from the table except GUID and
Report created. On the Home tab, select Remove columns > Remove other columns.
After selecting Remove other columns, you'll create a table that only contains the Date,
Product, SalesPerson, and Units columns.
Grouping or summarizing rows
Article • 03/21/2023

In Power Query, you can group values in various rows into a single value by grouping
the rows according to the values in one or more columns. You can choose from two
types of grouping operations:

Column groupings.

Row groupings.

For this tutorial, you'll be using the following sample table.

Where to find the Group by button


You can find the Group by button in three places:

On the Home tab, in the Transform group.

On the Transform tab, in the Table group.


On the shortcut menu when you right-click to select columns.

Use an aggregate function to group by one or more columns
In this example, your goal is to summarize the total units sold at the country and sales
channel level. You'll use the Country and Sales Channel columns to perform the group
by operation.

1. Select Group by on the Home tab.


2. Select the Advanced option, so you can select multiple columns to group by.
3. Select the Country column.
4. Select Add grouping.
5. Select the Sales Channel column.
6. In New column name, enter Total units, in Operation, select Sum, and in Column,
select Units.
7. Select OK.
This operation gives you the following table.
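
This group-by operation is expressed with the Table.Group function. A minimal sketch, assuming the previous step is named Source:

Power Query M

// Group by Country and Sales Channel, summing the Units column into
// a new Total units column for each group.
GroupedRows = Table.Group(Source, {"Country", "Sales Channel"}, {{"Total units", each List.Sum([Units]), type nullable number}})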

Operations available
With the Group by feature, the available operations can be categorized in two ways:

Row level operation


Column level operation

The following table describes each of these operations.

| Operation name | Category | Description |
| --- | --- | --- |
| Sum | Column operation | Sums up all values from a column |
| Average | Column operation | Calculates the average value from a column |
| Median | Column operation | Calculates the median from a column |
| Min | Column operation | Calculates the minimum value from a column |
| Max | Column operation | Calculates the maximum value from a column |
| Percentile | Column operation | Calculates the percentile, using an input value from 0 to 100, from a column |
| Count distinct values | Column operation | Calculates the number of distinct values from a column |
| Count rows | Row operation | Calculates the total number of rows from a given group |
| Count distinct rows | Row operation | Calculates the number of distinct rows from a given group |
| All rows | Row operation | Outputs all grouped rows in a table value with no aggregations |

7 Note

The Count distinct values and Percentile operations are only available in Power
Query Online.

Perform an operation to group by one or more columns
Starting from the original sample, in this example you'll create a column containing the
total units and two other columns that give you the name and units sold for the top-
performing product, summarized at the country and sales channel level.
1. Use the following columns as Group by columns:

Country
Sales Channel

2. Create two new columns by following these steps:


a. Aggregate the Units column by using the Sum operation. Name this column
Total units.
b. Add a new Products column by using the All rows operation.

After that operation is complete, notice how the Products column has [Table] values
inside each cell. Each [Table] value contains all the rows that were grouped by the
Country and Sales Channel columns from your original table. You can select the white
space inside the cell to see a preview of the contents of the table at the bottom of the
dialog box.
7 Note

The details preview pane might not show all the rows that were used for the group-
by operation. You can select the [Table] value to see all rows pertaining to the
corresponding group-by operation.

Next, you need to extract the row that has the highest value in the Units column of the
tables inside the new Products column, and call that new column Top performer
product.

Extract the top performer product information


With the new Products column with [Table] values, you create a new custom column by
going to the Add Column tab on the ribbon and selecting Custom column from the
General group.

Name your new column Top performer product. Enter the formula
Table.Max([Products], "Units" ) under Custom column formula.
The result of that formula creates a new column with [Record] values. These record
values are essentially a table with just one row. These records contain the row with the
maximum value for the Units column of each [Table] value in the Products column.

With this new Top performer product column that contains [Record] values, you can
select the expand icon, select the Product and Units fields, and then select OK.
After removing your Products column and setting the data type for both newly
expanded columns, your result will resemble the following image.

Fuzzy grouping

7 Note

The following feature is only available in Power Query Online.

To demonstrate how to do "fuzzy grouping," consider the sample table shown in the
following image.
The goal of fuzzy grouping is to do a group-by operation that uses an approximate
match algorithm for text strings. Power Query uses the Jaccard similarity algorithm to
measure the similarity between pairs of instances. Then it applies agglomerative
hierarchical clustering to group instances together. The following image shows the
output that you expect, where the table will be grouped by the Person column.

To do the fuzzy grouping, you perform the same steps previously described in this
article. The only difference is that this time, in the Group by dialog box, you select the
Use fuzzy grouping check box.
For each group of rows, Power Query will pick the most frequent instance as the
"canonical" instance. If multiple instances occur with the same frequency, Power Query
will pick the first one. After you select OK in the Group by dialog box, you'll get the
result that you were expecting.

However, you have more control over the fuzzy grouping operation by expanding Fuzzy
group options.
The following options are available for fuzzy grouping:

Similarity threshold (optional): This option indicates how similar two values must
be to be grouped together. The minimum setting of 0 will cause all values to be
grouped together. The maximum setting of 1 will only allow values that match
exactly to be grouped together. The default is 0.8.
Ignore case: When comparing text strings, case will be ignored. This option is
enabled by default.
Group by combining text parts: The algorithm will try to combine text parts (such
as combining Micro and soft into Microsoft) to group values.
Show similarity scores: Show similarity scores between the input values and the
computed representative values after fuzzy grouping. Requires the addition of an
operation such as All rows to showcase this information on a row-by-row level.
Transformation table (optional): You can select a transformation table that will
map values (such as mapping MSFT to Microsoft) to group them together.

For this example, a transformation table will be used to demonstrate how values can be
mapped. The transformation table has two columns:

From: The text string to look for in your table.


To: The text string to use to replace the text string in the From column.
The following image shows the transformation table used in this example.

) Important

It's important that the transformation table has the same columns and column
names as shown above (they have to be "From" and "To"); otherwise, Power Query
won't recognize them.

Return to the Group by dialog box, expand Fuzzy group options, change the operation
from Count rows to All rows, enable the Show similarity scores option, and then select
the Transformation table drop-down menu.

After you select the transformation table, select OK. The result of that operation gives
you the following information:
In this example, the Ignore case option was enabled, so the values in the From column
of the Transformation table are used to look for the text string without considering the
case of the string. This transformation operation occurs first, and then the fuzzy
grouping operation is performed.

The similarity score is also shown in the table value next to the person column, which
reflects exactly how the values were grouped and their respective similarity scores. You
can expand this column if needed or use the values from the new Frequency columns
for other sorts of transformations.

7 Note

When grouping by multiple columns, the transformation table performs the replace
operation in all columns if replacing the value increases the similarity score.

See also
Add a custom column
Remove duplicates
Unpivot columns
Article • 12/17/2022

In Power Query, you can transform columns into attribute-value pairs, where columns
become rows.

For example, given a table like the following, where country rows and date columns
create a matrix of values, it's difficult to analyze the data in a scalable way.

Instead, you can transform the table into a table with unpivoted columns, as shown in
the following image. In the transformed table, it's easier to use the date as an attribute
to filter on.
The key in this transformation is that you have a set of dates in the table that should all
be part of a single column. The respective value for each date and country should be in
a different column, effectively creating an attribute-value pair.

Power Query will always create the attribute-value pair by using two columns:

Attribute: The name of the column headings that were unpivoted.
Value: The values that were underneath each of the unpivoted column headings.

There are multiple places in the user interface where you can find Unpivot columns. You
can right-click the columns that you want to unpivot, or you can select the command
from the Transform tab in the ribbon.
There are three ways that you can unpivot columns from a table:

Unpivot columns
Unpivot other columns
Unpivot only selected columns

Unpivot columns
For the scenario described above, you first need to select the columns you want to
unpivot. You can hold down Ctrl as you select as many columns as you need. For this
scenario, you want to select all the columns except the one named Country. After
selecting the columns, right-click any of the selected columns, and then select Unpivot
columns.
The result of that operation will yield the result shown in the following image.
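
When you unpivot selected columns this way, Power Query typically generates a Table.UnpivotOtherColumns step that lists the columns to keep as-is (the Unpivot only selected columns command uses Table.Unpivot instead). A minimal sketch, assuming the previous step is named Source:

Power Query M

// Unpivot every column except Country into Attribute/Value pairs.
UnpivotedColumns = Table.UnpivotOtherColumns(Source, {"Country"}, "Attribute", "Value")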

Special considerations
After creating your query from the steps above, imagine that your initial table gets
updated to look like the following screenshot.

Notice that you've added a new column for the date 9/1/2020 (September 1, 2020), and
two new rows for the countries UK and Mexico.

If you refresh your query, you'll notice that the unpivot operation is also applied to the newly
added column, but doesn't affect the column that wasn't originally selected (Country, in this
example). This means that any new column that's added to the source table will be
unpivoted as well.

The following image shows what your query will look like after the refresh with the new
updated source table.
Unpivot other columns
You can also select the columns that you don't want to unpivot and unpivot the rest of
the columns in the table. This operation is where Unpivot other columns comes into
play.
The result of that operation will yield exactly the same result as the one you got from
Unpivot columns.
7 Note

This transformation is crucial for queries that have an unknown number of columns.
The operation will unpivot all columns from your table except the ones that you've
selected. This is an ideal solution if the data source of your scenario got new date
columns in a refresh, because those will get picked up and unpivoted.

Special considerations
Similar to the Unpivot columns operation, if your query is refreshed and more data is
picked up from the data source, all the columns will be unpivoted except the ones that
were previously selected.

To illustrate this, say that you have a new table like the one in the following image.

You can select the Country column, and then select Unpivot other columns, which will
yield the following result.
Unpivot only selected columns
The purpose of this last option is to only unpivot specific columns from your table. This
is important for scenarios where you're dealing with an unknown number of columns
from your data source and you only want to unpivot the selected columns.

To perform this operation, select the columns to unpivot, which in this example is all the
columns except the Country column. Then right-click any of the columns you selected,
and then select Unpivot only selected columns.
Notice how this operation will yield the same output as the previous examples.

Special considerations
After doing a refresh, if our source table changes to have a new 9/1/2020 column and
new rows for UK and Mexico, the output of the query will be different from the previous
examples. Say that our source table, after a refresh, changes to the table in the following
image.
The output of our query will look like the following image.

It looks like this because the unpivot operation was applied only on the 6/1/2020,
7/1/2020, and 8/1/2020 columns, so the column with the header 9/1/2020 remains
unchanged.
Pivot columns
Article • 12/17/2022

In Power Query, you can create a table that contains an aggregate value for each unique
value in a column. Power Query groups each unique value, does an aggregate
calculation for each value, and pivots the column into a new table.

Imagine a table like the one in the following image.

This table contains values by country and date in a simple table. In this example, you
want to transform this table into the one where the date column is pivoted, as shown in
the following image.

7 Note
During the pivot columns operation, Power Query will sort the table based on the
values found on the first column—at the left side of the table—in ascending order.

To pivot a column

1. Select the column that you want to pivot.

2. On the Transform tab in the Any column group, select Pivot column.

3. In the Pivot column dialog box, in the Value column list, select Value.

By default, Power Query will try to do a sum as the aggregation, but you can select
the Advanced option to see other available aggregations.
The available options are:

Don't aggregate
Count (all)
Count (not blank)
Minimum
Maximum
Median
Sum
Average

Pivoting columns that can't be aggregated


You can pivot columns without aggregating when you're working with columns that
can't be aggregated, or aggregation isn't required for what you're trying to do. For
example, imagine a table like the following image, that has Country, Position, and
Product as fields.
Let's say you want to pivot the Position column in this table so you can have its values
as new columns. For the values of these new columns, you'll use the values from the
Product column. Select the Position column, and then select Pivot column to pivot that
column.

In the Pivot column dialog box, select the Product column as the value column. Select
the Advanced option button in the Pivot columns dialog box, and then select Don't
aggregate.
The result of this operation will yield the result shown in the following image.
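
The pivot operation is expressed with the Table.Pivot function. A minimal sketch of the Don't aggregate example above, assuming the previous step is named Source:

Power Query M

// Create one column per distinct Position value, filled with the
// corresponding Product values. Pass an aggregation function such as
// List.Sum as a fifth argument when you do want to aggregate.
PivotedColumn = Table.Pivot(Source, List.Distinct(Source[Position]), "Position", "Product")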

Errors when using the Don't aggregate option


The way the Don't aggregate option works is that it grabs a single value for the pivot
operation to be placed as the value for the intersection of the column and row pair. For
example, let's say you have a table like the one in the following image.

You want to pivot that table by using the Date column, and you want to use the values
from the Value column. Because this pivot would make your table have just the Country
values on rows and the Dates as columns, you'd get an error for every single cell value
because there are multiple rows for every combination of Country and Date. The
outcome of the Pivot column operation will yield the results shown in the following
image.

Notice the error message "Expression.Error: There were too many elements in the
enumeration to complete the operation." This error occurs because the Don't aggregate
operation only expects a single value for the country and date combination.
Transpose a table
Article • 12/17/2022

The transpose table operation in Power Query rotates your table 90 degrees, turning
your rows into columns and your columns into rows.

Imagine a table like the one in the following image, with three rows and four columns.

The goal of this example is to transpose that table so you end up with four rows and
three columns.

On the Transform tab in the ribbon, select Transpose.

The result of that operation will look like the following image.
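
In M, the transpose operation is a single Table.Transpose step. A minimal sketch, assuming the previous step is named Source, including the header promotion described next:

Power Query M

// Turn rows into columns, then promote the first row to column headers.
let
    TransposedTable = Table.Transpose(Source),
    PromotedHeaders = Table.PromoteHeaders(TransposedTable, [PromoteAllScalars = true])
in
    PromotedHeaders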
7 Note

Only the contents of the table will be transposed during the transpose operation;
the column headers of the initial table will be lost. The new columns will have the
name Column followed by a sequential number.

The headers you need in this example are in the first row of the table. To promote the
first row to headers, select the table icon in the upper-left corner of the data preview,
and then select Use first row as headers.
The result of that operation will give you the output that you're looking for.
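
Both steps can also be expressed directly in M. A minimal sketch, assuming the previous
step is named Source:

Power Query M

// Rotate the table 90 degrees, then promote the first row to column headers
Table.PromoteHeaders(Table.Transpose(Source))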

7 Note

To learn more about the promote headers operation, also known as Use first row
as headers, go to Promote or demote column headers.
Reverse rows
Article • 12/17/2022

With Power Query, it's possible to reverse the order of rows in a table.

Imagine a table with two columns, ID and Country, as shown in the following image.

On the Transform tab, select Reverse rows.

The result of this operation will look as follows.
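
This command corresponds to the Table.ReverseRows function in M. A minimal sketch,
assuming the previous step is named Source:

Power Query M

// Return the rows of the table in reverse order
Table.ReverseRows(Source)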


Data types in Power Query
Article • 12/17/2022

Data types in Power Query are used to classify values to have a more structured dataset.
Data types are defined at the field level—values inside a field are set to conform to the
data type of the field.

The data type of a column is displayed on the left side of the column heading with an
icon that symbolizes the data type.

7 Note

Power Query provides a set of contextual transformations and options based on
the data type of the column. For example, when you select a column with a data
type of Date, you get transformations and options that apply to that specific data
type. These transformations and options occur throughout the Power Query
interface, such as on the Transform and Add column tabs and the smart filter
options.

The most common data types used in Power Query are listed in the following table.
Although beyond the scope of this article, you can find the complete list of data types in
the Power Query M formula language Types article.

Text: A Unicode character data string. Can be strings, numbers, or dates represented in a text format. Maximum string length is 268,435,456 Unicode characters (where each Unicode character is two bytes) or 536,870,912 bytes.

True/False: A Boolean value of either True or False.

Decimal number: Represents a 64-bit (eight-byte) floating-point number. It's the most common number type, and corresponds to numbers as you usually think of them. Although designed to handle numbers with fractional values, it also handles whole numbers. The Decimal Number type can handle negative values from –1.79E+308 through –2.23E–308, 0, and positive values from 2.23E–308 through 1.79E+308. For example, numbers like 34, 34.01, and 34.000367063 are valid decimal numbers. The largest precision that can be represented in a Decimal Number type is 15 digits long. The decimal separator can occur anywhere in the number. The Decimal Number type corresponds to how Excel stores its numbers. Note that a binary floating-point number can't represent all numbers within its supported range with 100% accuracy. Thus, minor differences in precision might occur when representing certain decimal numbers.

Fixed decimal number: Also known as the Currency type, this data type has a fixed location for the decimal separator. The decimal separator always has four digits to its right and allows for 19 digits of significance. The largest value it can represent is 922,337,203,685,477.5807 (positive or negative). Unlike Decimal Number, the Fixed Decimal Number type is always precise and is thus useful in cases where the imprecision of floating-point notation might introduce errors.

Whole number: Represents a 64-bit (eight-byte) integer value. Because it's an integer, it has no digits to the right of the decimal place. It allows for 19 digits; positive or negative whole numbers between –9,223,372,036,854,775,807 (–2^63+1) and 9,223,372,036,854,775,806 (2^63–2). It can represent the largest possible precision of the various numeric data types. As with the Fixed Decimal Number type, the Whole Number type can be useful in cases where you need to control rounding.

Percentage: Fundamentally the same as a Decimal Number type, but it has a mask to format the values in the column as a percentage in the Power Query Editor window.

Date/Time: Represents both a date and time value. Underneath the covers, the Date/Time value is stored as a Decimal Number type, so you can actually convert between the two. The time portion of a date is stored as a fraction to whole multiples of 1/300 seconds (3.33 ms). Dates between the years 1900 and 9999 are supported.

Date: Represents just a date (no time portion). When converted into the model, a Date is the same as a Date/Time value with zero for the fractional value.

Time: Represents just time (no date portion). When converted into the model, a Time value is the same as a Date/Time value with no digits to the left of the decimal place.

Date/Time/Timezone: Represents a UTC Date/Time with a time-zone offset. It's converted into Date/Time when loaded into the model.

Duration: Represents a length of time, which is converted into a Decimal Number type when loaded into the model. As a Decimal Number type, it can be added or subtracted from a Date/Time field with correct results. Because it's a Decimal Number type, you can easily use it in visualizations that show magnitude.

Binary: The Binary data type can be used to represent any other data with a binary format.

Any: The Any data type is the status given to a column that doesn't have an explicit data type definition. Any is the data type that classifies all values. We recommend that you always explicitly define the column data types for your queries from unstructured sources, and avoid having any columns with the Any data type as the output of your query.

Data type detection


Data type detection occurs automatically when connecting to:

Structured data sources such as databases: Power Query reads the table schema
from the data source and automatically displays the data by using the correct data
type for each column.

Unstructured sources such as Excel, CSV, and text files: Power Query automatically
detects data types by inspecting the values in the table. By default, automatic data
type detection is enabled in Power Query for unstructured sources.

You can also use the Detect data type command in the Any column group on the
Transform tab to automatically detect the data types of the columns in your table.
How to define a column data type
You can define or change the data type of a column in any of four places:

On the Home tab, in the Transform group, on the Data type drop-down menu.

On the Transform tab, in the Any column group, on the Data type drop-down
menu.

By selecting the icon on the left side of the column heading.


On the column shortcut menu, under Change Type.
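
Whichever entry point you use, the editor records a Changed type step that calls the
Table.TransformColumnTypes function. A minimal sketch, assuming the previous step is
named Source and that the table has Date and Value columns (hypothetical names used
only for illustration):

Power Query M

// Set the Date column to the date type and the Value column to a 64-bit integer
Table.TransformColumnTypes(Source, {{"Date", type date}, {"Value", Int64.Type}})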
Automatic detection of column data type and headers
This setting is specifically for unstructured sources. It helps you by automatically
inspecting and detecting column types and headers based on the first 200 rows of your
table. When this setting is enabled, Power Query automatically adds two steps to your
query:

Promote column headers: Promotes the first row of the table to be the column
header.
Changed type: Converts the values from the Any data type to a data type based
on the inspection of the values from each column.

By default, this setting is enabled. To disable or enable this setting, follow the steps that
apply to your Power Query experience.

To configure automatic data type detection in Power Query Online


On the Home tab, select Options, and then select Project options. In the Project
options window, select the Automatically detect column types and headers for
unstructured sources check box.

To configure automatic data type detection in Power Query for Desktop

You can define this behavior both at the global and per-file level in the Options window
(in the Power Query Editor, on the File tab, select Options and settings > Options).

Global: On the left pane under Global, select Data load. On the right pane under
Type detection, you can select any of three type detection configurations that will
be applied to every new file created in your application:
Always detect column types and headers for unstructured sources
Detect column types and headers for unstructured sources according to each
file's setting
Never detect column types and headers for unstructured sources
Current file: On the left pane under Current file, select Data load. On the right
pane under Type detection, select whether you want to enable or disable type
detection for the current file.
Document or project locale
Power Query handles two distinct components that manage the way that things look
and are interpreted:

Localization: the component that tells Power Query in what language it should be
displayed.
Globalization: the component that handles the formatting of the values, in addition
to the interpretation of text values.

Locale is a single value that holds both the localization and globalization components.
Locale is used to interpret text values and convert them into other data types. For
example, the locale English (United States) means that the localization is in United
States English and the globalization, or format of the value, is based on the standards
used in the United States.

When Power Query defines a column data type or converts from one data type to
another, it has to interpret the values to be converted before it can transform them to a
different data type.

In Power Query Online, this interpretation is defined in Project options, under
Locale.

In Power Query for Desktop, Power Query automatically recognizes your operating
system regional format and uses that to interpret the values for data type
conversion. To override this locale configuration, open the query Options window,
and in the left pane under Current file, select Regional settings. From here, you
can change the locale to the setting you want.
This locale setting is important for interpreting text values into a specific data type. For
example, imagine that you have your locale set as English (United States), but a column
in one of your CSV files has dates formatted in the United Kingdom format of
day/month/year.

When you try setting the data type of the Date column to be Date, you get error values.
These errors occur because the locale being used is trying to interpret the date in the
English (United States) format, which is month/day/year. Because there's no month 22 in
the calendar, it causes an error.

Instead of trying to just select the Date data type, you can right-click the column
heading, select Change type, and then select Using locale.
In the Change column type with locale dialog box, you select the data type that you
want to set, but you also select which locale to use, which in this case needs to be
English (United Kingdom).
Using this locale, Power Query will be able to interpret values correctly and convert
those values to the right data type.
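
In M, choosing Using locale adds the culture as the optional third argument of
Table.TransformColumnTypes. A minimal sketch, assuming the previous step is named
Source:

Power Query M

// Interpret the text values in the Date column as dates using the
// English (United Kingdom) locale, so day/month/year strings parse correctly
Table.TransformColumnTypes(Source, {{"Date", type date}}, "en-GB")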

To verify final date values

The formatting of the values is driven by the globalization value. If you have any doubts
about the value displayed by Power Query, you can verify the conversion of date values
by adding new columns for the day, month, and year from the value. To do this, select
the Date column and go to the Add column tab on the ribbon. In the Date and time
column group, you'll see the options for a date column.
From here, you can extract parts of the date value, such as the year number, the month
number, the day number, or even more columns extracted from the Date column.

By using these columns, you can verify that your date value has been converted
correctly.

Data type conversion matrix


The following matrix is designed to give you a quick look at the feasibility of data type
conversion of a value from one data type to another.
7 Note

Conversion in this matrix starts with the original data type in the Data types
column. Each result of a conversion to the new type is shown in the original data
type’s row.

The matrix covers conversions between the following data types: Decimal number,
Currency, Whole number, Percentage, Date/Time, Date, Time, Date/Time/Timezone,
Duration, Text, and True/False. Each cell in the matrix indicates one of four outcomes
for the conversion: possible, not possible, possible but it adds values to the original
value, or possible but it truncates the original value.


Dealing with errors in Power Query
Article • 12/17/2022

In Power Query, you can encounter two types of errors:

Step-level errors
Cell-level errors

This article provides suggestions for how to fix the most common errors you might find
at each level, and describes the error reason, error message, and error detail for each.

Step-level error
A step-level error prevents the query from loading and displays the error components in
a yellow pane.

Error reason: The first section before the colon. In the example above, the error
reason is Expression.Error.
Error message: The section directly after the reason. In the example above, the
error message is The column 'Column' of the table wasn't found.
Error detail: The section directly after the Details: string. In the example above, the
error detail is Column.

Common step-level errors


In all cases, we recommend that you take a close look at the error reason, error
message, and error detail to understand what's causing the error. You can select the Go
to error button, if available, to view the first step where the error occurred.

Can't find the source - DataSource.Error


This error commonly occurs when the data source is inaccessible by the user, the user
doesn't have the correct credentials to access the data source, or the source has been
moved to a different place.

Example: You have a query from a text file that was located in drive D and created by
user A. User A shares the query with user B, who doesn't have access to drive D. When
this person tries to execute the query, they get a DataSource.Error because there's no
drive D in their environment.

Possible solutions: You can change the file path of the text file to a path that both users
have access to. As user B, you can change the file path to be a local copy of the same
text file. If the Edit settings button is available in the error pane, you can select it and
change the file path.

The column of the table wasn't found


This error is commonly triggered when a step makes a direct reference to a column
name that doesn't exist in the query.

Example: You have a query from a text file where one of the column names was Column.
In your query, you have a step that renames that column to Date. But there was a
change in the original text file, and it no longer has a column heading with the name
Column because it was manually changed to Date. Power Query is unable to find a
column heading named Column, so it can't rename any columns. It displays the error
shown in the following image.

Possible solutions: There are multiple solutions for this case, but they all depend on
what you'd like to do. For this example, because the correct Date column header already
comes from your text file, you can just remove the step that renames the column. This
will allow your query to run without this error.

Other common step-level errors


When combining or merging data between multiple data sources, you might get a
Formula.Firewall error such as the one shown in the following image.

This error can be caused by a number of reasons, such as the data privacy levels
between data sources or the way that these data sources are being combined or
merged. For more information about how to diagnose this issue, go to Data privacy
firewall.

Cell-level error
A cell-level error won't prevent the query from loading, but displays error values as Error
in the cell. Selecting the white space in the cell displays the error pane underneath the
data preview.

7 Note

The data profiling tools can help you more easily identify cell-level errors with the
column quality feature. More information: Data profiling tools

Handling errors at the cell level


When encountering any cell-level errors, Power Query provides a set of functions to
handle them either by removing, replacing, or keeping the errors.
For the next sections, the provided examples will be using the same sample query as the
start point. In this query, you have a Sales column that has one cell with an error caused
by a conversion error. The value inside that cell was NA, but when you transformed that
column to a whole number Power Query couldn't convert NA to a number, so it displays
the following error.

Remove errors
To remove rows with errors in Power Query, first select the column that contains errors.
On the Home tab, in the Reduce rows group, select Remove rows. From the drop-down
menu, select Remove errors.

The result of that operation will give you the table that you're looking for.
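
The Remove errors command corresponds to the Table.RemoveRowsWithErrors function.
A minimal sketch, assuming the previous step is named Source:

Power Query M

// Remove any row that has an error value in the Sales column
Table.RemoveRowsWithErrors(Source, {"Sales"})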
Replace errors
If instead of removing rows with errors, you want to replace the errors with a fixed value,
you can do so as well. To replace rows that have errors, first select the column that
contains errors. On the Transform tab, in the Any column group, select Replace values.
From the drop-down menu, select Replace errors.

In the Replace errors dialog box, enter the value 10 because you want to replace all
errors with the value 10.

The result of that operation will give you the table that you're looking for.
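
Replace errors corresponds to the Table.ReplaceErrorValues function. A minimal sketch,
assuming the previous step is named Source:

Power Query M

// Replace every error in the Sales column with the value 10
Table.ReplaceErrorValues(Source, {{"Sales", 10}})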

Keep errors
Power Query can serve as a good auditing tool to identify any rows with errors even if
you don't fix the errors. This is where Keep errors can be helpful. To keep rows that have
errors, first select the column that contains errors. On the Home tab, in the Reduce rows
group, select Keep rows. From the drop-down menu, select Keep errors.

The result of that operation will give you the table that you're looking for.
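
Keep errors corresponds to the Table.SelectRowsWithErrors function. A minimal sketch,
assuming the previous step is named Source:

Power Query M

// Keep only the rows that have an error value in the Sales column
Table.SelectRowsWithErrors(Source, {"Sales"})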

Common cell-level errors


As with any step-level error, we recommend that you take a close look at the error
reasons, error messages, and error details provided at the cell level to understand what's
causing the errors. The following sections discuss some of the most frequent cell-level
errors in Power Query.

Data type conversion errors

Commonly triggered when changing the data type of a column in a table. Some values
found in the column could not be converted to the desired data type.

Example: You have a query that includes a column named Sales. One cell in that column
has NA as a cell value, while the rest have whole numbers as values. You decide to
convert the data type of the column from text to whole number, but the cell with the
NA value causes an error.
Possible solutions: After identifying the row with the error, you can either modify the
data source to reflect the correct value rather than NA, or you can apply a Replace error
operation to provide a value for any NA values that cause an error.

Operation errors

When trying to apply an operation that isn't supported, such as multiplying a text value
by a numeric value, an error occurs.

Example: You want to create a custom column for your query by creating a text string
that contains the phrase "Total Sales: " concatenated with the value from the Sales
column. An error occurs because the concatenation operation only supports text
columns and not numeric ones.
Possible solutions: Before creating this custom column, change the data type of the
Sales column to be text.

Nested values shown as errors

When working with data that contains nested structured values (such as tables, lists, or
records), you may sometimes encounter the following error:
Expression.Error: We cannot return a value of type {value} in this context

Details: In the past we would have returned a text value of {value}, but we
now return this error. Please see
https://go.microsoft.com/fwlink/?linkid=2099726 for more information.

These errors usually occur for two reasons:

When the Data Privacy Firewall buffers a data source, nested non-scalar values are
automatically converted to errors.
When a column defined with the Any data type contains non-scalar values, such
values will be reported as errors during load (such as in a Workbook in Excel or the
data model in Power BI Desktop).

Possible solutions:

Remove the column that contains the error, or set a non-Any data type for such a
column.
Change the privacy levels of the data sources involved to one that allows them to
be combined without being buffered.
Flatten the tables before doing a merge to eliminate columns that contain nested
structured values (such as table, record, or list).
Working with duplicate values
Article • 12/17/2022

You can work with duplicate sets of values through transformations that can remove
duplicates from your data or filter your data to show duplicates only, so you can focus
on them.

2 Warning

Power Query is case-sensitive. When working with duplicate values, Power Query
considers the case of the text, which might lead to undesired results. As a
workaround, users can apply an uppercase or lowercase transform prior to
removing duplicates.

For this article, the examples use the following table with id, Category, and Total
columns.

Remove duplicates
One of the operations that you can perform is to remove duplicate values from your
table.

1. Select the columns that contain duplicate values.

2. Go to the Home tab.

3. In the Reduce rows group, select Remove rows.

4. From the drop-down menu, select Remove duplicates.


2 Warning

There's no guarantee that the first instance in a set of duplicates will be chosen
when duplicates are removed. To learn more about how to preserve sorting, go to
Preserve sort.

Remove duplicates from multiple columns


In this example, you want to identify and remove the duplicates by using all of the
columns from your table.

You have four rows that are duplicates. Your goal is to remove those duplicate rows so
there are only unique rows in your table. Select all columns from your table, and then
select Remove duplicates.

The result of that operation will give you the table that you're looking for.
7 Note

This operation can also be performed with a subset of columns.

Remove duplicates from a single column


In this example, you want to identify and remove the duplicates by using only the
Category column from your table.

You want to remove those duplicates and only keep unique values. To remove duplicates
from the Category column, select it, and then select Remove duplicates.

The result of that operation will give you the table that you're looking for.
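
Remove duplicates corresponds to the Table.Distinct function. The following sketch,
assuming the previous step is named Source, shows two alternative expressions; the
second one is one way to address the case-sensitivity warning earlier in this article by
supplying a case-insensitive comparer:

Power Query M

// Keep only the first row found for each distinct value in the Category column
Table.Distinct(Source, {"Category"})

// Same operation, but treating "a" and "A" as the same value
Table.Distinct(Source, {{"Category", Comparer.OrdinalIgnoreCase}})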
Keep duplicates
Another operation you can perform with duplicates is to keep only the duplicates found
in your table.

1. Select the columns that contain duplicate values.

2. Go to the Home tab.

3. In the Reduce rows group, select Keep rows.

4. From the drop-down menu, select Keep duplicates.

Keep duplicates from multiple columns


In this example, you want to identify and keep the duplicates by using all of the columns
from your table.

You have four rows that are duplicates. Your goal in this example is to keep only the
rows that are duplicated in your table. Select all the columns in your table, and then
select Keep duplicates.

The result of that operation will give you the table that you're looking for.

Keep duplicates from a single column


In this example, you want to identify and keep the duplicates by using only the id
column from your table.

In this example, you have multiple duplicates and you want to keep only those
duplicates from your table. To keep duplicates from the id column, select the id column,
and then select Keep duplicates.

The result of that operation will give you the table that you're looking for.

See also
Data profiling tools
Fill values in a column
Article • 12/17/2022

You can use fill up and fill down to replace null values with the last non-empty value in a
column. For example, imagine the following table where you'd like to fill down in the
Date column and fill up in the Comments column.

The outcome that you'd expect is shown in the following image.

Fill down
The fill down operation takes a column and traverses through the values in it to fill any
null values in the next rows until it finds a new value. This process continues on a row-
by-row basis until there are no more values in that column.
In the following example, you want to fill down on the Date column. To do that, you can
right-click to select the Date column, and then select Fill > Down.

The result of that operation will look like the following image.
Fill up
In the same way as the fill down operation, fill up works on a column. But by contrast, fill
up finds the last value of the column and fills any null values in the previous rows until it
finds a new value. Then the same process occurs for that value. This process continues
until there are no more values in that column.

In the following example, you want to fill the Comments column from the bottom up.
You'll notice that your Comments column doesn't have null values. Instead it has what
appears to be empty cells. Before you can do the fill up operation, you need to
transform those empty cells into null values: select the column, go to the Transform tab,
and then select Replace values.

In the Replace values dialog box, leave Value to find blank. For Replace with, enter null.

More information: Replace values

After all empty cells are replaced with null, select the Comments column, go to the
Transform tab, and then select Fill > Up.
The result of that operation will look like the following image.
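
Both operations have direct M equivalents. A minimal sketch, assuming the previous step
is named Source:

Power Query M

let
    // Fill null values in the Date column downward
    FilledDown = Table.FillDown(Source, {"Date"}),
    // Fill null values in the Comments column upward
    FilledUp = Table.FillUp(FilledDown, {"Comments"})
in
    FilledUp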

Cleaning up your table


1. Filter the Units column to show only rows that aren't equal to null.
2. Rename the Comments column as Sales Person.

3. Remove the Sales Person: values from the Sales Person column so you only get
the names of the salespeople.

Now you should have exactly the table you were looking for.

See also
Replace values
Sort columns
Article • 08/09/2023

You can sort a table in Power Query by one column or multiple columns. For example,
take the following table with the columns named Competition, Competitor, and
Position.

For this example, the goal is to sort this table by the Competition and Position fields in
ascending order.

Sort ascending sorts alphabetical rows in a column from A to Z, then a to z. Sort
descending sorts alphabetical rows in a column from z to a, then Z to A. For example,
examine the following unsorted column:
When sorted using sort ascending, an alphabetical column is sorted in the following
way:

When sorted using sort descending, an alphabetical column is sorted in the following
way:
To sort a table by using columns
To sort the table, first select the column to be sorted. After the column has been
selected, you can select the sort operation from one of two places:

On the Home tab, in the Sort group, there are icons to sort your column in either
ascending or descending order.

From the column heading drop-down menu. Next to the name of the column
there's a drop-down menu indicator . When you select the icon, you'll see the
option to sort the column.
In this example, first you need to sort the Competition column. You'll perform the
operation by using the buttons in the Sort group on the Home tab. This action creates a
new step in the Applied steps section named Sorted rows.

A visual indicator, displayed as an arrow pointing up, gets added to the Competition
drop-down menu icon to show that the column is being sorted in ascending order.

Now you'll sort the Position field in ascending order as well, but this time you'll use the
Position column heading drop-down menu.

Notice that this action doesn't create a new Sorted rows step, but modifies it to perform
both sort operations in one step. When you sort multiple columns, the order that the
columns are sorted in is based on the order the columns were selected in. A visual
indicator, displayed as a number to the left of the drop-down menu indicator, shows the
place each column occupies in the sort order.
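
The single Sorted rows step that results from both actions corresponds to one Table.Sort
call. A minimal sketch, assuming the previous step is named Source:

Power Query M

// Sort by Competition first, then by Position, both in ascending order
Table.Sort(Source, {{"Competition", Order.Ascending}, {"Position", Order.Ascending}})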

To clear a sort operation from a column


Do one of the following actions:

Select the down arrow next to the column heading, and then select Clear sort.
In Applied steps on the Query Settings pane, delete the Sorted rows step.
Rename columns
Article • 12/17/2022

In Power Query, you can rename columns to format the dataset in a clear and concise
way.

As an example, let's start with a dataset that has two columns.

Column 1 Column 2

Panama Panama

USA New York

Canada Toronto

The column headers are Column 1 and Column 2, but you want to change those names
to more friendly names for your columns.

Column 1 becomes Country


Column 2 becomes City

The end result that you want in Power Query looks like the following table.

How to rename a column


There are three ways to rename a column in Power Query.

Double-click the column header: The double-click action immediately lets you
rename the column.
Right-click the column of your choice: A contextual menu is displayed and you
can select the Rename option to rename the selected column.

Rename option in the Transform tab: In the Transform tab, under the Any column
group, select the Rename option.
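
All three entry points produce a Renamed columns step that calls the Table.RenameColumns
function. A minimal sketch, assuming the previous step is named Source:

Power Query M

// Rename Column 1 to Country and Column 2 to City
Table.RenameColumns(Source, {{"Column 1", "Country"}, {"Column 2", "City"}})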

Avoiding duplicate column names


Power Query requires table column names to be unique across all columns. This means
that if you try to rename a column to a column name that already exists in the table, an
error with the message Column Name Conflict appears. You'll have to rename the
column to something else.

For example, for the first sample table provided in this article, imagine that you try to
rename both Column 1 and Column 2 to "Geography". An error message pops up that
prevents you from renaming the second column to "Geography".

Column name disambiguation


With many actions performed in Power Query that might result in a Column Name
Conflict, Power Query tries to disambiguate by renaming all duplicate instances of the
same column name. The way that Power Query renames these columns is by adding a
suffix to the original column name that has a separator (commonly a dot or an
underscore), and then a number that represents the instance of the duplicated column
name in the order that it was found by Power Query. This renaming can often happen
with actions such as, but not limited to:

Promoting your column headers from your first row: For example, if you tried
promoting the first row of the sample table in this article, Power Query renames
the columns to Panama and Panama_1.

7 Note
To learn more about how to promote headers from your first row, go
to Promote or demote column headers.

Expanding a column with a field name that also exists in the current table: This
can happen, for example, when you perform a merge operation and the column
with the merged table has field names that also exist in the table. When you try to
expand the fields from that column, Power Query automatically tries to
disambiguate to prevent Column Name Conflict errors.
Move columns
Article • 02/17/2023

A common process when preparing data is to move columns in the dataset.

To accomplish this move, you can either select the Move option or drag and drop the
column.

Move option
The following example shows the different ways of moving columns. This example
focuses on moving the Contact Name column.

You can move the column by using the Move option, which is located in the Any column
group on the Transform tab. In the Move option, the available choices are:

Before
After
To beginning
To end

You can also find this option when you right-click a column.
If you want to move one column to the left, then select Before.

The new location of the column is now one column to the left of its original location.
If you want to move one column to the right, then select After.

The new location of the column is now one column to the right of its original location.

If you want to move the column to the leftmost position of the dataset, then select To
beginning.
The new location of the column is now on the far left side of the table.

If you want to move the column to the rightmost position of the dataset, then select To
end.

The new location of the column is now on the far right side of the table.
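
Both the Move option and drag and drop produce a step that calls Table.ReorderColumns
with the full list of column names in their new order. A minimal sketch, assuming the
previous step is named Source and that the table has the hypothetical columns Contact
Name, Company, and Phone:

Power Query M

// Place Contact Name first, followed by the remaining columns in the desired order
Table.ReorderColumns(Source, {"Contact Name", "Company", "Phone"})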

Drag and drop


Another way to move a column through the dataset is to drag and drop it: drag the
column to the position where you'd like it to go.
Go to column feature
If you want to find a specific column, then go to the View tab in the ribbon and select
Go to column.

From there, you can specifically select the column you would like to view, which is
especially useful if there are many columns.
Replace values and errors
Article • 12/17/2022

With Power Query, you can replace one value with another value wherever that value is
found in a column. The Replace values command can be found:

On the cell shortcut menu. Right-click the cell to replace the selected value in the
column with another value.

On the column shortcut menu.

On the Home tab, in the Transform group.


On the Transform tab, in the Any column group.

The replace values operation has two modes:

Replace entire cell contents: This is the default behavior for non-text columns,
where Power Query searches for and replaces the full contents of a cell. You can
enable this mode for text columns by selecting Advanced options, and then
selecting the Match entire cell contents check box.
Replace instances of a text string: This is the default behavior for text columns,
where Power Query will search for a specific text string in all rows of a column and
replace as many instances of the text string that it finds.

Advanced options are only available in columns of the Text data type. Within that set of
options is the Replace using special characters option.
Replace entire cell contents
Imagine a table like the following, where you have columns for Account ID, Category
Name, and Sales Goal.

The value of -1 in the Sales Goal column is an error in the source and needs to be
replaced with the standard sales goal defined by the business for these instances, which
is 250,000. To do that, right-click the -1 value, and then select Replace values. This action
will bring up the Replace values dialog box with Value to find set to -1. Now all you
need to do is enter 250000 in the Replace with box.

The outcome of that operation will give you the result that you're looking for.
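
Replacing entire cell contents corresponds to the Table.ReplaceValue function with the
Replacer.ReplaceValue replacer. A minimal sketch, assuming the previous step is named
Source:

Power Query M

// Replace -1 with 250000 wherever it appears in the Sales Goal column
Table.ReplaceValue(Source, -1, 250000, Replacer.ReplaceValue, {"Sales Goal"})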

Replace instances of a text string


Continuing with the previous table, let's say you want to remove the text string
"Category Name: " from the Category Name column. To do that, go to the Transform
group on the Home tab, and select Replace values. In the Replace values dialog box,
enter the text string Category Name: (followed by a space) in the Value to find box,
leave the Replace with box empty, and then select OK.
The result of that operation gives you the table in the following image.
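
Replacing instances of a text string corresponds to Table.ReplaceValue with the
Replacer.ReplaceText replacer. A minimal sketch, assuming the previous step is named
Source:

Power Query M

// Remove the "Category Name: " prefix from every value in the Category Name column
Table.ReplaceValue(Source, "Category Name: ", "", Replacer.ReplaceText, {"Category Name"})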
Parse text as JSON or XML
Article • 12/17/2022

In Power Query, you can parse the contents of a column with text strings by identifying
the contents as either a JSON or XML text string.

You can perform this parse operation by selecting the Parse button found inside the
following places in the Power Query Editor:

Transform tab—This button will transform the existing column by parsing its
contents.

Add column tab—This button will add a new column to the table parsing the
contents of the selected column.

For this article, you'll use a sample table that contains the following columns that you
need to parse:

SalesPerson—Contains unparsed JSON text strings with information about the
FirstName and LastName of the sales person, as in the following example.

JSON

{
"id" : 249319,
"FirstName": "Lesa",
"LastName": "Byrd"
}
Country—Contains unparsed XML text strings with information about the Country
and the Division that the account has been assigned to, as in the following
example.

XML

<root>
<id>1</id>
<Country>USA</Country>
<Division>BI-3316</Division>
</root>

The sample table looks as follows.

The goal is to parse the columns mentioned above and expand their contents to get this
output.

As JSON
Select the SalesPerson column. Then select JSON from the Parse dropdown menu inside
the Transform tab. These steps will transform the SalesPerson column from having text
strings to having Record values, as shown in the next image. You can select anywhere in
the whitespace inside the cell of the Record value to get a detailed preview of the
record contents at the bottom of the screen.
Select the expand icon next to the SalesPerson column header. From the expand
columns menu, select only the FirstName and LastName fields, as shown in the
following image.

The result of that operation will give you the following table.
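
The parse and expand steps for the JSON column correspond roughly to the following M
sketch, assuming the previous step is named Source:

Power Query M

let
    // Parse each JSON text string in SalesPerson into a record
    Parsed = Table.TransformColumns(Source, {{"SalesPerson", Json.Document}}),
    // Expand only the FirstName and LastName fields of the parsed record
    Expanded = Table.ExpandRecordColumn(Parsed, "SalesPerson", {"FirstName", "LastName"})
in
    Expanded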

As XML
Select the Country column. Then select the XML button from the Parse dropdown menu
inside the Transform tab. These steps will transform the Country column from having
text strings to having Table values as shown in the next image. You can select anywhere
in the whitespace inside the cell of the Table value to get a detailed preview of the
contents of the table at the bottom of the screen.

Select the expand icon next to the Country column header. From the expand columns
menu, select only the Country and Division fields, as shown in the following image.

You can define all the new columns as text columns. The result of that operation will
give you the output table that you're looking for.
Add a column from examples
Article • 12/17/2022

When you add columns from examples, you can quickly and easily create new columns
that meet your needs. This is useful for the following situations:

You know the data you want in your new column, but you're not sure which
transformation, or collection of transformations, will get you there.
You already know which transformations you need, but you're not sure what to
select in the UI to make them happen.
You know all about the transformations you need by using a custom column
expression in the M language, but one or more of those transformations aren't
available in the UI.

The Column from examples command is located on the Add column tab, in the
General group.

Add a new column from examples


In this example, you start with the table shown in the following image.
Your goal in this example is to create two new columns:

Range: Create bins for the Monthly Income column in discrete increments of
5,000.
Full Name: Concatenate the Last Name and First Name columns to a single
column.

Column from examples, from selected columns


One of the options you have when creating your new column is to select which columns
will be used in your calculations. For this example, you'll be creating the Range column
from the values in the Monthly Income column.

To do this, select the Monthly Income column, select the Column from examples
command, and then select From selection.
The preview pane displays a new, editable column where you can enter your examples.
For the first example, the value from the selected column is 19500. So in your new
column, enter the text 15000 to 20000, which is the bin where that value falls.

When Power Query finds a matching transformation, it fills the transformation results
into the remaining rows using light-colored text. You can also see the M formula text for
the transformation above the table preview.

After you select OK, you'll see your new column as part of your query. You'll also see a
new step added to your query.

Column from examples, from all columns


The next goal is to create a Full Name column by using the values from the First Name
and Last Name columns.

To do this, select the Column from examples command, and then select From all
columns.
Now you'll enter your first Full Name example as Enders, Maria.

After you select OK, you'll see your new column as part of your query. You'll also see a
new step added to your query.

Your last step is to remove the First Name, Last Name, and Monthly Income columns.
Your final table now contains the Range and Full Name columns with all the data you
produced in the previous steps.
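
Column from examples writes an ordinary Added column step behind the scenes. The exact
formula it infers can vary, but for the Full Name example it produces something along the
lines of the following sketch (illustrative only), assuming the previous step is named
Source:

Power Query M

// Concatenate Last Name and First Name, separated by a comma and a space
Table.AddColumn(Source, "Full Name",
    each Text.Combine({[Last Name], [First Name]}, ", "), type text)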
Tips and considerations
When providing examples, Power Query offers a helpful list of available fields, values,
and suggested transformations for the selected columns. You can view this list by
selecting any cell of the new column.

It's important to note that the Column from examples experience works only on the top
100 rows of your data preview. You can apply steps before the Column from examples
step to create your own data sample. After the Column from examples column has
been created, you can delete those prior steps; the newly created column won't be
affected.

List of supported transformations


Many, but not all, transformations are available when you use Column from examples.
The following list shows the supported transformations.

General
Conditional Column

Reference

Reference to a specific column, including trim, clean, and case transformations

Text transformations

Combine (supports combination of literal strings and entire column values)


Replace
Length
Extract
First Characters
Last Characters
Range
Text before Delimiter
Text after Delimiter
Text between Delimiters
Length
Remove Characters
Keep Characters

7 Note

All Text transformations take into account the potential need to trim, clean, or apply
a case transformation to the column value.

Date transformations

Day
Day of Week
Day of Week Name
Day of Year
Month
Month Name
Quarter of Year
Week of Month
Week of Year
Year
Age
Start of Year
End of Year
Start of Month
End of Month
Start of Quarter
Days in Month
End of Quarter
Start of Week
End of Week
Day of Month
Start of Day
End of Day

Time transformations

Hour
Minute
Second
To Local Time

7 Note

All Date and Time transformations take into account the potential need to convert
the column value to Date, Time, or DateTime.

Number transformations

Absolute Value
Arccosine
Arcsine
Arctangent
Convert to Number
Cosine
Cube
Divide
Exponent
Factorial
Integer Divide
Is Even
Is Odd
Ln
Base-10 Logarithm
Modulo
Multiply
Round Down
Round Up
Sign
Sine
Square Root
Square
Subtract
Sum
Tangent
Bucketing/Ranges
Add an index column
Article • 12/17/2022

The Index column command adds a new column to the table with explicit position
values, and is usually created to support other transformation patterns.

By default, the index starts from the value 0 and has an increment of 1 per row.
You can also configure the behavior of this step by selecting the Custom option and
configuring two parameters:

Starting index: Specifies the initial index value.


Increment: Specifies how much to increment each index value.

For the example in this article, you start with the following table that has only one
column, but notice the data pattern in the column.

Let's say that your goal is to transform that table into the one shown in the following
image, with the columns Date, Account, and Sale.

Step 1. Add an index column


You first need to add a new Index column to your table that starts from 0.

Step 2. Add a modulo column from the index column
The next step is to select the newly added index column, and then on the Add column
tab, select Standard > Modulo.

In the Modulo dialog box, enter the number from which to find the remainder for each
value in the column. In this case, your pattern repeats itself every three rows, so you'll
enter 3.
The result of that operation will give you a new column named Modulo.

Step 3. Add an integer-divide column from the index column
Select the Index column, go to the Add column tab, and then select Standard > Divide
(Integer).
In the Integer-divide dialog box, enter a number by which to divide each value in the
column. In this case, your pattern repeats itself every three rows, so enter the value 3.

Remove the Index column, because you no longer need it. Your table now looks like the
following image.
Step 4. Pivot a column
Your table now has three columns where:

Column1 contains the values that should be in the final table.


Modulo provides the column position of the value (similar to the y coordinates of
an xy chart).
Integer-division provides the row position of the value (similar to the x
coordinates of an xy chart).

To achieve the table you want, you need to pivot the Modulo column by using the
values from Column1 where these values don't get aggregated. On the Transform tab,
select the Modulo column, and then select Pivot column from the Any column group.
In the Pivot column dialog box, select the Advanced option button. Make sure Value
column is set to Column1 and Aggregate values function is set to Don't aggregate.

More information: Pivot columns

The result of that operation will give you a table with four columns, as shown in the
following image.

Step 5. Clean the table


You can now delete the Integer-division column and promote the first row of the table
to become the headers of your table. More information: Promote or demote column
headers

After defining the correct data types for your columns, you'll create a table that looks
like the following table, with exactly the three columns that you needed and the shape
that you were looking for.
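
The first three steps of this walkthrough can also be expressed in M. This is a sketch,
assuming the previous step is named Source; the pivot and header promotion that follow
are described in steps 4 and 5 and in the Pivot columns article.

Power Query M

let
    // Step 1: add a zero-based index
    AddedIndex = Table.AddIndexColumn(Source, "Index", 0, 1),
    // Step 2: the remainder of the index divided by 3 gives the column position
    AddedModulo = Table.AddColumn(AddedIndex, "Modulo", each Number.Mod([Index], 3)),
    // Step 3: integer division of the index by 3 gives the row position
    AddedDivision = Table.AddColumn(AddedModulo, "Integer-division",
        each Number.IntegerDivide([Index], 3)),
    // The Index column is no longer needed
    RemovedIndex = Table.RemoveColumns(AddedDivision, {"Index"})
in
    RemovedIndex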
Add a custom column
Article • 12/17/2022

If you need more flexibility for adding new columns than the ones provided out of the
box in Power Query, you can create your own custom column using the Power Query M
formula language.

Imagine that you have a table with the following set of columns.

Using the Units, Unit Price, and Discount columns, you'd like to create two new
columns:

Total Sale before Discount: Calculated by multiplying the Units column times the
Unit Price column.
Total Sale after Discount: Calculated by multiplying the Total Sale before Discount
column by the net percentage value (one minus the discount value).

The goal is to create a table with new columns that contain the total sales before the
discount and the total sales after the discount.

Create a custom column


On the Add column tab, select Custom column.
The Custom column dialog box appears. This dialog box is where you define the
formula to create your column.

The Custom column dialog box contains:

The initial name of your custom column in the New column name box. You can
rename this column.

A dropdown menu where you can select the data type for your new column.

An Available columns list on the right underneath the Data type selection.

A Custom column formula box where you can enter a Power Query M formula.

To add a new custom column, select a column from the Available columns list. Then,
select the Insert column button below the list to add it to the custom column formula.
You can also add a column by selecting it in the list. Alternatively, you can write your
own formula by using the Power Query M formula language in Custom column formula.
7 Note

If a syntax error occurs when you create your custom column, you'll see a yellow
warning icon, along with an error message and reason.

Adding the Total Sale before Discount column


The formula you can use to create the Total Sale before Discount column is
[Units] * [Unit Price]. In Data type, select the Currency data type.

The result of that operation adds a new Total Sale before Discount column to your
table.

7 Note
If you're using Power Query Desktop, you'll notice that the Data type field isn't
available in Custom column. This means that you'll need to define a data type for
any custom columns after creating the columns. More information: Data types in
Power Query

Adding the Total Sale after Discount column


The formula that you can use to create the Total Sale after Discount column is
[Total Sale before Discount] * (1 - [Discount]). Set the data type of this new column to Currency.

The result of that operation adds a new Total Sale after Discount column to your table.
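
Behind the scenes, each custom column becomes a Table.AddColumn step. A minimal sketch
of both columns, assuming the previous step is named Source:

Power Query M

let
    // Multiply Units by Unit Price to get the sale amount before any discount
    BeforeDiscount = Table.AddColumn(Source, "Total Sale before Discount",
        each [Units] * [Unit Price], Currency.Type),
    // Apply the net percentage (one minus the discount) to the previous column
    AfterDiscount = Table.AddColumn(BeforeDiscount, "Total Sale after Discount",
        each [Total Sale before Discount] * (1 - [Discount]), Currency.Type)
in
    AfterDiscount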

Modify an existing custom column


Power Query adds your custom column to the table and adds the Added custom step to
the Applied steps list in Query settings.
To modify your custom column, select the Added custom step in the Applied steps list.

The Custom column dialog box appears with the custom column formula you created.

7 Note

Depending on the formula you've used for your custom column, Power Query
changes the settings behavior of your step for a more simplified and native
experience. For this example, the Added custom step changed its behavior from a
standard custom column step to a Multiplication experience because the formula
from that step only multiplies the values from two columns.

Next steps
You can create a custom column in other ways, such as creating a column based on
examples you provide to Power Query Editor. More information: Add a column
from an example
For Power Query M reference information, go to Power Query M function
reference.
Add a conditional column
Article • 12/17/2022

With Power Query, you can create new columns whose values will be based on one or
more conditions applied to other columns in your table.

The Conditional column command is located on the Add column tab, in the General
group.

Adding a conditional column


In this example, you'll be using the table shown in the following image.

In this table, you have a field that gives you the CustomerGroup. You also have different
prices applicable to that customer in the Tier 1 Price, Tier 2 Price, and Tier 3 Price fields.
In this example, your goal is to create a new column with the name Final Price based on
the value found in the CustomerGroup field. If the value in the CustomerGroup field is
equal to 1, you'll want to use the value from the Tier 1 Price field; otherwise, you'll use
the value from the Tier 3 Price field.

To add this conditional column, select Conditional column. In the Add conditional
column dialog box, you can define three sections numbered in the following image.
1. New column name: You can define the name of your new column. In this example,
you'll use the name Final Price.
2. Conditional clauses: Here you define your conditional clauses. You can add more
clauses by selecting Add clause. Each conditional clause is tested in the order
shown in the dialog box, from top to bottom. Each clause has four parts:

Column name: In the drop-down list, select the column to use for the
conditional test. For this example, select CustomerGroup.
Operator: Select the type of test or operator for the conditional test. In this
example, the value from the CustomerGroup column has to be equal to 1, so
select equals.
Value: You can enter a value or select a column to be used for the conditional
test. For this example, enter 1.
Output: If the test is positive, the value entered here or the column selected
will be the output. For this example, if the CustomerGroup value is equal to 1,
your Output value should be the value from the Tier 1 Price column.

3. Final Else clause: If none of the clauses above yield a positive test, the output of
this operation will be the one defined here, as a manually entered value or a value
from a column. In this case, the output will be the value from the Tier 3 Price
column.

The result of that operation will give you a new Final Price column.

7 Note
New conditional columns won't have a data type defined. You can add a new step
to define a data type for this newly created column by following the steps
described in Data types in Power Query.

Adding and organizing multiple clauses


For this example, let's change your goal. Your new conditional clauses are:

If the value from the CustomerGroup column is equal to 1, the Output will be the
value from the Tier 1 Price column.
If the value from the CustomerGroup column is equal to 2, the Output will be the
value from the Tier 2 Price column.
If none of the previous tests are positive, the Output will be the value from the Tier
3 Price column.

7 Note

At the end of each clause, you can select the ellipsis button (...) to delete, move up,
or move down the clause.

The result of that operation will give you the result that you're looking for.
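
The conditional column is written as an if ... then ... else expression inside a
Table.AddColumn step. A minimal sketch of the three-clause version, assuming the previous
step is named Source:

Power Query M

// Pick the price tier that matches the CustomerGroup value
Table.AddColumn(Source, "Final Price", each
    if [CustomerGroup] = 1 then [Tier 1 Price]
    else if [CustomerGroup] = 2 then [Tier 2 Price]
    else [Tier 3 Price])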
Rank column (Preview)
Article • 07/30/2022

The Rank column command adds a new column to a table with the ranking defined by
one or more other columns from the table. A Rank method option can be used to
define how ties should be handled.

7 Note

Currently, the rank column feature is only available in Power Query Online.

Adding a rank column


The following table has fields for Team, Total Points, and Bonus modifier. Imagine that
you're tasked with the creation of a new column that effectively ranks the teams using
the values from the Total Points and Bonus modifier fields.

Team Total Points Bonus modifier

A 20 0.5

B 30 0.8

C 40 0.2

D 10 0.45

E 20 0.75

The teams have shared a list of ways that they want to rank each other:

Using only the values from the Total Points field where higher values rank higher
using standard competition as the rank method
Using only the values from the Total Points field where higher values rank higher
using dense as the rank method
Ranking first by the Total Points and then by Bonus modifier where higher values
rank higher using the standard competition as rank method

Single column rank


For the first requirement shared by the teams of using only the values from the Total
Points field where higher values rank higher using standard competition as the rank
method, use the following steps:

1. With the original table already in Power Query, select the Total Points column.
Then from the Power Query Add column tab, select Rank column.

2. In Rank, Rank by will be the field selected ( Total Points ) and the Rank criteria will
be Higher value ranks higher.

3. By default, the rank method for this dialog is standard competition, so just select
OK. This action will give you a new step with the added Rank column.

Single column rank with specific rank method


For the second requirement shared by the teams of using only the values from the
Total Points field where higher values rank higher using dense as the rank method, use
the following steps:

1. With the original table already in Power Query, select the Total Points column.
Then from the Power Query Add column tab, select Rank column.

2. In Rank, Rank by will be the field selected ( Total Points ) and the Rank criteria will
be Higher value ranks higher.

3. Select Advanced at the top of the dialog box. This selection enables the advanced
section. In Rank method, change the value from Standard competition to Dense.

4. After selecting the rank method, select OK. This action will give you a new step
with the added Rank column.

Multiple column rank with specific rank method


For the third requirement shared by the teams of ranking first by the Total Points and
then by Bonus modifier where higher values rank higher using the standard competition
as rank method, use the following steps:
1. With the original table already in Power Query, select the Total Points and then
Bonus modifier columns. Then from the Power Query Add column tab, select Rank
column.

2. The rank dialog appears with its advanced section open, with both fields selected
in the Rank by column. Total Points is in the first row and then Bonus modifier
below it. Both rows use the Rank criteria of Higher value ranks higher.

3. Make sure that Rank method at the bottom is set to Standard competition.

4. After verifying the above, select OK. This action will give you a new step with the
added Rank column.
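
The Rank column command generates a step based on the Table.AddRankColumn function.
The following sketch, assuming the previous step is named Source, shows the multiple-column
case; the options record and RankKind values follow the current M reference and may evolve
while this feature is in preview:

Power Query M

// Rank by Total Points, then by Bonus modifier, with higher values ranking higher,
// using the standard competition rank method
Table.AddRankColumn(Source, "Rank",
    {{"Total Points", Order.Descending}, {"Bonus modifier", Order.Descending}},
    [RankKind = RankKind.Competition])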

Rank methods
A rank method establishes the strategy in which ties are handled by the ranking
algorithm. This option is only available in the advanced section of the Rank dialog.

The following table lists all three available rank methods and provides a description for
each.
Rank method Description

Standard competition: Items that compare equally receive the same ranking number, and then a gap is left in the ranking numbers. For example, 1224.

Dense: Items that compare equally receive the same ranking number, and the next items receive the immediately following ranking number. For example, 1223.

Ordinal: All items receive distinct ordinal numbers, including items that compare equally. For example, 1234.
Cluster values
Article • 12/17/2022

The Cluster values transform automatically creates groups with similar values by using a
fuzzy matching algorithm, and then maps each column value to the best-matched group. This
transform is very useful when you're working with data that has many different
variations of the same value and you need to combine values into consistent groups.

Consider a sample table with an id column that contains a set of IDs and a Person
column containing a set of variously spelled and capitalized versions of the names
Miguel, Mike, William, and Bill.

In this example, the outcome you're looking for is a table with a new column that shows
the right groups of values from the Person column and not all the different variations of
the same words.
7 Note

The Cluster values feature is available only for Power Query Online.

Create a Cluster column


To cluster values, first select the Person column, go to the Add column tab in the
ribbon, and then select the Cluster values option.

In the Cluster values dialog box, confirm the column that you want to use to create the
clusters from, and enter the new name of the column. For this case, name this new
column Cluster.

The result of that operation yields the result shown in the next image.
7 Note

For each cluster of values, Power Query picks the most frequent instance from the
selected column as the "canonical" instance. If multiple instances occur with the
same frequency, Power Query picks the first one.

Using the fuzzy cluster options


The following options are available for clustering values in a new column:

Similarity threshold (optional): This option indicates how similar two values must
be to be grouped together. The minimum setting of 0 causes all values to be
grouped together. The maximum setting of 1 only allows values that match exactly
to be grouped together. The default is 0.8.
Ignore case: When comparing text strings, case is ignored. This option is enabled
by default.
Group by combining text parts: The algorithm tries to combine text parts (such as
combining Micro and soft into Microsoft) to group values.
Show similarity scores: Shows similarity scores between the input values and
computed representative values after fuzzy clustering.
Transformation table (optional): You can select a transformation table that maps
values (such as mapping MSFT to Microsoft) to group them together.

For this example, a new transformation table with the name My transform table is used
to demonstrate how values can be mapped. This transformation table has two columns:

From: The text string to look for in your table.


To: The text string to use to replace the text string in the From column.
) Important

It's important that the transformation table has the same columns and column
names as shown in the previous image (they have to be named "From" and "To"),
otherwise Power Query won't recognize this table as a transformation table, and no
transformation will take place.

Using the previously created query, double-click the Clustered values step, then in the
Cluster values dialog box, expand Fuzzy cluster options. Under Fuzzy cluster options,
enable the Show similarity scores option. For Transformation table (optional), select
the query that has the transform table.
After selecting your transformation table and enabling the Show similarity scores
option, select OK. The result of that operation will give you a table that contains the
same id and Person columns as the original table, but also includes two new columns
on the right called Cluster and Person_Cluster_Similarity. The Cluster column contains
the properly spelled and capitalized versions of the names Miguel for versions of Miguel
and Mike, and William for versions of Bill, Billy, and William. The
Person_Cluster_Similarity column contains the similarity scores for each of the names.
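
In M, the Cluster values experience corresponds to the Table.AddFuzzyClusterColumn
function. The following is a minimal sketch, assuming the previous step is named
Source and the transformation table query is named My transform table; the exact
option fields available in your version might vary.

= Table.AddFuzzyClusterColumn(
    Source,
    "Person",    // column whose values are clustered
    "Cluster",   // name of the new column
    [Threshold = 0.8, TransformationTable = #"My transform table"])
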
Append queries
Article • 12/17/2022

The append operation creates a single table by adding the contents of one or more
tables to another, and aggregates the column headers from the tables to create the
schema for the new table.

7 Note

When tables that don't have the same column headers are appended, all column
headers from all tables are appended to the resulting table. If one of the appended
tables doesn't have a column header from other tables, the resulting table shows
null values in the respective column, as shown in the previous image in columns C
and D.

You can find the Append queries command on the Home tab in the Combine group.
On the drop-down menu, you'll see two options:

Append queries displays the Append dialog box to add additional tables to the
current query.
Append queries as new displays the Append dialog box to create a new query by
appending multiple tables.

The append operation requires at least two tables. The Append dialog box has two
modes:
Two tables: Combine two table queries together. This mode is the default mode.
Three or more tables: Allow an arbitrary number of table queries to be combined.

7 Note

The tables will be appended in the order in which they're selected, starting with the
Primary table for the Two tables mode and from the primary table in the Tables to
append list for the Three or more tables mode.

Append two tables


For the example in this article, we'll use the following two tables with sample data:

Online Sales: Sales made through an online channel.

Store Sales: Sales made through the company's physical locations.

To append these tables, first select the Online Sales table. On the Home tab, select
Append queries, which creates a new step in the Online Sales query. The Online Sales
table will be the primary table. The table to append to the primary table will be Store
Sales.
Power Query performs the append operation based on the names of the column
headers found on both tables, and not based on their relative position in the headers
sections of their respective tables. The final table will have all columns from all tables
appended.

In the event that one table doesn't have columns found in another table, null values will
appear in the corresponding column, as shown in the Referer column of the final query.
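
Both append options generate a Table.Combine step in M. For example, appending the
two sample tables as a new query produces a formula along these lines (the query
names follow this example):

= Table.Combine({#"Online Sales", #"Store Sales"})
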

Append three or more tables


In this example, you want to append not only the Online Sales and Store Sales tables,
but also a new table named Wholesale Sales.

The new approach for this example is to select Append queries as new, and then in the
Append dialog box, select the Three or more tables option button. In the Available
table(s) list, select each table you want to append, and then select Add. After all the
tables you want appear in the Tables to append list, select OK.

After selecting OK, a new query will be created with all your tables appended.
Combine files overview
Article • 02/17/2023

With Power Query, you can combine multiple files that have the same schema into a
single logical table.

This feature is useful when you want to combine all the files you have in the same folder.
For example, if you have a folder that contains monthly files with all the purchase orders
for your company, you can combine these files to consolidate the orders into a single
view.

Files can come from a variety of sources, such as (but not limited to):

Local folders
SharePoint sites
Azure Blob storage
Azure Data Lake Storage (Gen1 and Gen2)

When working with these sources, you'll notice that they share the same table schema,
commonly referred to as the file system view. The following screenshot shows an
example of the file system view.

In the file system view, the Content column contains the binary representation of each
file.

7 Note

You can filter the list of files in the file system view by using any of the available
fields. It's good practice to filter this view to show only the files you need to
combine, for example by filtering fields such as Extension or Folder Path. More
information: Folder
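
For example, keeping only the .csv files in a folder query can be done with a
Table.SelectRows step like the following sketch, assuming the previous step in the
folder query is named Source:

= Table.SelectRows(Source, each Text.Lower([Extension]) = ".csv")
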

Selecting any of the [Binary] values in the Content column automatically creates a series
of navigation steps to that specific file. Power Query will try to interpret the binary by
using one of the available connectors, such as Text/CSV, Excel, JSON, or XML.

Combining files takes place in the following stages:

Table preview
Combine files dialog box
Combined files output

Table preview
When you connect to a data source by using any of the previously mentioned
connectors, a table preview opens. If you're certain that you want to combine all the files
in the folder, select Combine in the lower-right corner of the screen.

Alternatively, you can select Transform data to access the Power Query Editor and
create a subset of the list of files (for example, by using filters on the folder path column
to only include files from a specific subfolder). Then combine files by selecting the
column that contains the binaries in the Content column and then selecting either:

The Combine files command in the Combine group on the Home tab.
The Combine files icon in the column header of the column that contains [Binary]
values.

Combine files dialog box


After you select the Combine or Combine files command, the Combine files dialog box
opens and the following occurs:

1. Power Query analyzes the example file (by default, the first file in the list) and
determines the correct file connector to use to open that file.
2. The dialog box provides the file connector experience exactly as if you were to
connect directly to that example file.

If you want to use a different file for the example file, you can choose it from
the Example file drop-down menu.
Optional: You can select Skip files with errors to exclude from the final
output any files that result in errors.

In the following image, Power Query has detected that the first file has a .csv file name
extension, so it uses the Text/CSV connector to interpret the file.
Combined files output
After the Combine files process is finished, Power Query automatically performs the
following actions:

1. Creates an example query that performs all the required extraction steps for a
single file. It uses the file that was selected as the example file in the Combine files
dialog box.

This example query has the name Transform Sample file in the Queries pane.

2. Creates a function query that parameterizes the file/binary input to the example
query. The example query and the function query are linked, so that changes to
the example query are reflected in the function query.

These queries are listed in the Helper queries group.

3. Applies the function query to the original query with input binaries (for example,
the folder query) so it applies the function query for binary inputs on each row,
and then expands the resulting data extraction as top-level columns.

4. Creates a new group with the prefix Transform file from and the initial query as
the suffix, and organizes all the components used to create these combined files in
that group.
You can easily combine all files within a given folder, as long as they have the same file
type and structure (including the same columns). You can also apply additional
transformation or extraction steps by modifying the automatically generated example
query, without having to worry about modifying or creating additional function query
steps.

7 Note

You can modify the steps inside the example query to change the function applied
to each binary in your query. The example query is linked to the function, so any
changes made to the example query will be reflected in the function query.

If any of the changes affect column names or column data types, be sure to check
the last step of your output query. Adding a Change column type step can
introduce a step-level error that prevents you from visualizing your table. More
information: Dealing with errors

See also
Combine CSV files
Combine CSV files
Article • 02/17/2023

In Power Query, you can combine multiple files from a given data source. This article
describes how the experience works when the files that you want to combine are CSV
files. More information: Combine files overview

 Tip

You can follow along with this example by downloading the sample files used in
this article from this download link . You can place those files in the data source
of your choice, such as a local folder, SharePoint folder, Azure Blob storage, Azure
Data Lake Storage, or other data source that provides the file system view.

For simplicity, the example in this article uses the Folder connector. More information:
Folder

About the sample files used


To combine files, it's imperative that they all have the same structure and the same
extension. All the files used in this example have the same structure and extension (.csv).

There are 12 CSV files, one for each month of the calendar year 2019. The following
image shows the first 15 rows of the file for the month of January.
The number of rows varies from file to file, but all files have a header section in the first
four rows. They have column headers in the fifth row, and the data for the table begins
in the sixth row and continues through all subsequent rows.

The goal is to combine all 12 files into a single table. This combined table contains the
header row at the top of the table, and includes the source name, date, country, units,
and revenue data for the entire year in separate columns after the header row.
Table preview
When connecting to the folder that hosts the files that you want to combine—in this
example, the name of that folder is CSV Files—you're shown the table preview dialog
box, which displays your folder path in the upper-left corner. The data preview shows
the file system view.

For this example, select Combine.

7 Note
In a different situation, you might select Transform data to further filter and
transform your data before combining the files. Selecting Combine is only
recommended when you're certain that the folder contains only the files that you
want to combine.

Combine files dialog box


After you select Combine in the table preview, the Combine files dialog box appears.

7 Note

Power Query automatically detects what connector to use based on the first file
found in the list. To learn more about the CSV connector, go to Text/CSV.

For this example, leave all the default settings (Example file set to First file, and the
default values for File origin, Delimiter, and Data type detection).

Now select Transform data in the lower-right corner to go to the output query.
Output query
After selecting Transform data in the Combine files dialog box, you'll be taken back to
the Power Query Editor in the query that you initially created from the connection to the
local folder. The output query now contains the source file name in the left-most
column, along with the data from each of the source files in the remaining columns.

However, the data isn't in the correct shape. You need to remove the top four rows from
each file before combining them. To make this change in each file before you combine
them, select the Transform Sample file query in the Queries pane on the left side of
your screen.

Modify the Transform Sample file query


In this Transform Sample file query, the values in the Date column, which uses the
year-month-day (YYYY-MM-DD) format, indicate that the data is for the month of April.
April 2019.csv is the first file that's displayed in the table preview.
You now need to apply a new set of transformations to clean the data. Each
transformation will be automatically converted to a function inside the Helper queries
group that will be applied to every file in the folder before combining the data from
each file.

The transformations that need to be added to the Transform Sample file query are:

1. Remove top rows: To perform this operation, select the table icon menu in the
upper-left corner of the table, and then select Remove top rows.
In the Remove top rows dialog box, enter 4, and then select OK.

After selecting OK, your table will no longer have the top four rows.
2. Use first row as headers: Select the table icon again, and then select Use first row
as headers.

The result of that operation will promote the first row of the table to the new
column headers.
After this operation is completed, Power Query by default will try to automatically detect
the data types of the columns and add a new Changed column type step.
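
In M, these two transformations correspond to Table.Skip and Table.PromoteHeaders
steps similar to the following sketch (the step names are illustrative):

#"Removed top rows" = Table.Skip(Source, 4),
#"Promoted headers" = Table.PromoteHeaders(#"Removed top rows", [PromoteAllScalars = true])
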

Revising the output query


When you go back to the CSV Files query, you'll notice that the last step is giving you an
error that reads "The column 'Column1' of the table wasn't found." The reason behind
this error is that the previous state of the query was doing an operation against a
column named Column1. But because of the changes made to the Transform Sample
file query, this column no longer exists. More information: Dealing with errors in Power
Query
You can remove this last step of the query from the Applied steps pane by selecting the
X delete icon on the left side of the name of the step. After deleting this step, your
query will show the correct results.

However, notice that none of the columns derived from the files (Date, Country, Units,
Revenue) have a specific data type assigned to them. Assign the correct data type to
each column as follows:

Date: Date

Country: Text

Units: Whole number

Revenue: Currency

After defining the data types for each column, you'll be ready to load the table.
7 Note

To learn how to define or change column data types, go to Data types.
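
In M, assigning these data types adds a Table.TransformColumnTypes step similar to
the following sketch; replace #"Previous step" with the name of the last step in
your query:

= Table.TransformColumnTypes(#"Previous step",
    {{"Date", type date}, {"Country", type text},
     {"Units", Int64.Type}, {"Revenue", Currency.Type}})
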

Verification
To validate that all files have been combined, you can select the filter icon on the
Source.Name column heading, which will display all the names of the files that have
been combined. If you get the warning "List may be incomplete," select Load more at
the bottom of the menu to display more available values in the column.
After you select Load more, all available file names will be displayed.
Merge queries overview
Article • 08/30/2023

A merge queries operation joins two existing tables together based on matching values
from one or multiple columns. You can choose to use different types of joins, depending
on the output you want.

Merging queries
You can find the Merge queries command on the Home tab, in the Combine group.
From the drop-down menu, you'll see two options:

Merge queries: Displays the Merge dialog box, with the selected query as the left
table of the merge operation.
Merge queries as new: Displays the Merge dialog box without any preselected
tables for the merge operation.

Identify tables for merging


The merge operation requires two tables:

Left table for merge: The first selection, from top to bottom of your screen.
Right table for merge: The second selection, from top to bottom of your screen.
7 Note

The position—left or right—of the tables becomes very important when you select
the correct join kind to use.

Select column pairs


After you've selected both the left and right tables, you can select the columns that
drive the join between the tables. In the example below, there are two tables:

Sales: The CountryID field is a key or an identifier from the Countries table.
Countries: This table contains the CountryID and the name of the country.
The goal is to join these tables by using the CountryID column from both tables, so you
select the CountryID column from each table. After you make the selections, a message
appears with an estimated number of matches at the bottom of the dialog box.

7 Note

Although this example shows the same column header for both tables, this isn't a
requirement for the merge operation. Column headers don't need to match
between tables. However, it's important to note that the columns must be of the
same data type, otherwise the merge operation might not yield correct results.

You can also select multiple columns to perform the join by selecting Ctrl as you select
the columns. When you do so, the order in which the columns were selected is
displayed in small numbers next to the column headings, starting with 1.

For this example, you have the Sales and Countries tables. Each of the tables has
CountryID and StateID columns, which you need to pair for the join between both
columns.

First select the CountryID column in the Sales table, select Ctrl, and then select the
StateID column. (This will show the small numbers in the column headings.) Next,
perform the same selections in the Countries table. The following image shows the
result of selecting those columns.

7 Note
When selecting multiple columns for a join, the order you select the columns in
each table must match. For example, the first column selected in the left table is
matched with the first column selected in the right table, and so on. Otherwise,
you'll observe incorrect join results.

Use the column pair suggestion


After selecting the left and right tables in the Merge dialog, you can select the lightbulb
icon in the top-right corner of the dialog to see whether there are any column-pair-mapping
suggestions for the two tables.

There can be many suggestions in place but, for this scenario, there's only one
suggestion by the feature, where it maps the column CountryID from the Sales table to
the CountryID column from the Countries table. You can select it and the column-pair-
mapping will be automatically applied to your dialog.

7 Note
Only the column-pair-mapping is suggested. Other options in this dialog, such as
the join kind or fuzzy matching configuration, are out of scope for this suggestion.

Expand or aggregate the new merged table column


After selecting OK in the Merge dialog box, the base table of your query will have all the
columns from your left table. Also, a new column will be added with the same name as
your right table. This column holds the values corresponding to the right table on a row-
by-row basis.

From here, you can choose to expand or aggregate the fields from this new table
column, which will be the fields from your right table.

7 Note

Currently, the Power Query Online experience only provides the expand operation
in its interface. The option to aggregate will be added later this year.

Join kinds
A join kind specifies how a merge operation will be performed. The following list
describes the available join kinds in Power Query.

Left outer: All rows from the left table, matching rows from the right table.

Right outer: All rows from the right table, matching rows from the left table.

Full outer: All rows from both tables.

Inner: Only matching rows from both tables.

Left anti: Only rows from the left table.

Right anti: Only rows from the right table.
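
In the M formula language, each join kind maps to a JoinKind value (JoinKind.LeftOuter,
JoinKind.RightOuter, JoinKind.FullOuter, JoinKind.Inner, JoinKind.LeftAnti, and
JoinKind.RightAnti) that you pass to functions such as Table.NestedJoin. A minimal
sketch with placeholder table and column names:

= Table.NestedJoin(LeftTable, {"KeyColumn"}, RightTable, {"KeyColumn"},
    "RightTable", JoinKind.LeftOuter)
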

Fuzzy matching
You use fuzzy merge to apply fuzzy matching algorithms when comparing columns, to
try to find matches across the tables you're merging. You can enable this feature by
selecting the Use fuzzy matching to perform the merge check box in the Merge dialog
box. Expand Fuzzy matching options to view all available configurations.

7 Note
Fuzzy matching is only supported for merge operations over text columns.
Left outer join
Article • 12/17/2022

One of the join kinds available in the Merge dialog box in Power Query is a left outer
join, which keeps all the rows from the left table and brings in any matching rows from
the right table. More information: Merge operations overview

This article uses sample data to show how to do a merge operation with the left outer
join. The sample source tables for this example are:

Sales: This table includes the fields Date, CountryID, and Units. CountryID is a
whole number value that represents the unique identifier from the Countries table.

Countries: This table is a reference table with the fields id and Country. The id field
represents the unique identifier for each record.
In this example, you'll merge both tables, with the Sales table as the left table and the
Countries table as the right one. The join will be made between the following columns.

Field from the Sales table Field from the Countries table

CountryID id

The goal is to create a table like the following, where the name of the country appears
as a new Country column in the Sales table as long as the CountryID exists in the
Countries table. If there are no matches between the left and right tables, a null value is
the result of the merge for that row. In the following image, this is shown to be the case
for CountryID 4, which was brought in from the Sales table.

To do a left outer join

1. Select the Sales query, and then select Merge queries.


2. In the Merge dialog box, under Right table for merge, select Countries.
3. In the Sales table, select the CountryID column.
4. In the Countries table, select the id column.
5. In the Join kind section, select Left outer.
6. Select OK.
From the newly created Countries column, expand the Country field. Don't select the
Use original column name as prefix check box.
After performing this operation, you'll create a table that looks like the following image.
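
The steps above generate M similar to the following sketch, where the merge creates
a Table.NestedJoin step and the expansion creates a Table.ExpandTableColumn step
(the step names are illustrative):

#"Merged queries" = Table.NestedJoin(Sales, {"CountryID"}, Countries, {"id"},
    "Countries", JoinKind.LeftOuter),
#"Expanded Countries" = Table.ExpandTableColumn(#"Merged queries", "Countries", {"Country"})
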
Right outer join
Article • 12/17/2022

One of the join kinds available in the Merge dialog box in Power Query is a right outer
join, which keeps all the rows from the right table and brings in any matching rows from
the left table. More information: Merge operations overview

This article uses sample data to show how to do a merge operation with the right outer
join. The sample source tables for this example are:

Sales: This table includes the fields Date, CountryID, and Units. The CountryID is a
whole number value that represents the unique identifier from the Countries table.

Countries: This table is a reference table with the fields id and Country. The id field
represents the unique identifier for each record.

In this example, you'll merge both tables, with the Sales table as the left table and the
Countries table as the right one. The join will be made between the following columns.
Field from the Sales table Field from the Countries table

CountryID id

The goal is to create a table like the following, where the name of the country appears
as a new Country column in the Sales table. Because of how the right outer join works,
all rows from the right table will be brought in, but only matching rows from the left
table will be kept.

To do a right outer join

1. Select the Sales query, and then select Merge queries.


2. In the Merge dialog box, under Right table for merge, select Countries.
3. In the Sales table, select the CountryID column.
4. In the Countries table, select the id column.
5. In the Join kind section, select Right outer.
6. Select OK.
From the newly created Countries column, expand the Country field. Don't select the
Use original column name as prefix check box.
After performing this operation, you'll create a table that looks like the following image.
Full outer join
Article • 12/17/2022

One of the join kinds available in the Merge dialog box in Power Query is a full outer
join, which brings in all the rows from both the left and right tables. More information:
Merge operations overview

This article uses sample data to show how to do a merge operation with the full outer
join. The sample source tables for this example are:

Sales: This table includes the fields Date, CountryID, and Units. CountryID is a
whole number value that represents the unique identifier from the Countries table.

Countries: This is a reference table with the fields id and Country. The id field
represents the unique identifier for each record.
In this example, you'll merge both tables, with the Sales table as the left table and the
Countries table as the right one. The join will be made between the following columns.

Field from the Sales table Field from the Countries table

CountryID id

The goal is to create a table like the following, where the name of the country appears
as a new Country column in the Sales table. Because of how the full outer join works, all
rows from both the left and right tables will be brought in, regardless of whether they
only appear in one of the tables.

To perform a full outer join

1. Select the Sales query, and then select Merge queries.


2. In the Merge dialog box, under Right table for merge, select Countries.
3. In the Sales table, select the CountryID column.
4. In the Countries table, select the id column.
5. In the Join kind section, select Full outer.
6. Select OK
7 Note

You can merge on more than one column by selecting and holding Ctrl and then
selecting the columns.

 Tip

Take a closer look at the message at the bottom of the dialog box that reads "The
selection matches 4 of 4 rows from the first table, and 3 of 4 rows from the second
table." This message is crucial for understanding the result that you get from this
operation.
In the Countries table, you have the Country Spain with id of 4, but there are no records
for CountryID 4 in the Sales table. That's why only three of four rows from the right
table found a match. All rows from the right table that didn't have matching rows from
the left table will be grouped and shown in a new row in the output table with no values
for the fields from the left table.

From the newly created Countries column after the merge operation, expand the
Country field. Don't select the Use original column name as prefix check box.

After performing this operation, you'll create a table that looks like the following image.
Inner join
Article • 12/17/2022

One of the join kinds available in the Merge dialog box in Power Query is an inner join,
which brings in only matching rows from both the left and right tables. More
information: Merge operations overview

This article uses sample data to show how to do a merge operation with the inner join.
The sample source tables for this example are:

Sales: This table includes the fields Date, CountryID, and Units. CountryID is a
whole number value that represents the unique identifier from the Countries table.

Countries: This is a reference table with the fields id and Country. The id field
represents the unique identifier for each record.
In this example, you'll merge both tables, with the Sales table as the left table and the
Countries table as the right one. The join will be made between the following columns.

Field from the Sales table Field from the Countries table

CountryID id

The goal is to create a table like the following, where the name of the country appears
as a new Country column in the Sales table. Because of how the inner join works, only
matching rows from both the left and right tables will be brought in.

To perform an inner join

1. Select the Sales query, and then select Merge queries.


2. In the Merge dialog box, under Right table for merge, select Countries.
3. In the Sales table, select the CountryID column.
4. In the Countries table, select the id column.
5. In the Join kind section, select Inner.
6. Select OK.
 Tip

Take a closer look at the message at the bottom of the dialog box that reads "The
selection matches 1 of 4 rows from the first table, and 1 of 2 rows from the second
table." This message is crucial to understanding the result that you get from this
operation.

In the Sales table, you have a CountryID of 1 and 2, but neither of these values is
found in the Countries table. That's why the match only found one of four rows in the
left (first) table.

In the Countries table, you have the Country Spain with the id 4, but there are no
records for a CountryID of 4 in the Sales table. That's why only one of two rows from
the right (second) table found a match.

From the newly created Countries column, expand the Country field. Don't select the
Use original column name as prefix check box.

After performing this operation, you'll create a table that looks like the following image.
Left anti join
Article • 12/17/2022

One of the join kinds available in the Merge dialog box in Power Query is a left anti join,
which brings in only rows from the left table that don't have any matching rows from
the right table. More information: Merge operations overview

This article uses sample data to show how to do a merge operation with the left anti
join. The sample source tables for this example are:

Sales: This table includes the fields Date, CountryID, and Units. CountryID is a
whole number value that represents the unique identifier from the Countries table.

Countries: This table is a reference table with the fields id and Country. The id field
represents the unique identifier for each record.
In this example, you'll merge both tables, with the Sales table as the left table and the
Countries table as the right one. The join will be made between the following columns.

Field from the Sales table Field from the Countries table

CountryID id

The goal is to create a table like the following, where only the rows from the left table
that don't match any from the right table are kept.

To do a left anti join

1. Select the Sales query, and then select Merge queries.


2. In the Merge dialog box, under Right table for merge, select Countries.
3. In the Sales table, select the CountryID column.
4. In the Countries table, select the id column.
5. In the Join kind section, select Left anti.
6. Select OK.
 Tip

Take a closer look at the message at the bottom of the dialog box that reads "The
selection excludes 1 of 4 rows from the first table." This message is crucial to
understanding the result that you get from this operation.

In the Sales table, you have a CountryID of 1 and 2, but neither of them is found in the
Countries table. That's why the match only found one of four rows in the left (first) table.

In the Countries table, you have the Country Spain with an id of 4, but there are no
records for CountryID 4 in the Sales table. That's why only one of two rows from the
right (second) table found a match.
From the newly created Countries column, expand the Country field. Don't select the
Use original column name as prefix check box.

After doing this operation, you'll create a table that looks like the following image. The
newly expanded Country field doesn't have any values. That's because the left anti join
doesn't bring any values from the right table—it only keeps rows from the left table.
Right anti join
Article • 12/17/2022

One of the join kinds available in the Merge dialog box in Power Query is a right anti
join, which brings in only rows from the right table that don't have any matching rows
from the left table. More information: Merge operations overview

This article uses sample data to show how to do a merge operation with the right anti
join. The sample source tables for this example are:

Sales: This table includes the fields Date, CountryID, and Units. CountryID is a
whole number value that represents the unique identifier from the Countries table.

Countries: This is a reference table with the fields id and Country. The id field
represents the unique identifier for each record.
In this example, you'll merge both tables, with the Sales table as the left table and the
Countries table as the right one. The join will be made between the following columns.

Field from the Sales table Field from the Countries table

CountryID id

The goal is to create a table like the following, where only the rows from the right table
that don't match any from the left table are kept. As a common use case, you can find
all the rows that are available in the right table but aren't found in the left table.

To do a right anti join

1. Select the Sales query, and then select Merge queries.


2. In the Merge dialog box, under Right table for merge, select Countries.
3. In the Sales table, select the CountryID column.
4. In the Countries table, select the id column.
5. In the Join kind section, select Right anti.
6. Select OK.
 Tip

Take a closer look at the message at the bottom of the dialog box that reads "The
selection excludes 1 of 2 rows from the second table." This message is crucial to
understanding the result that you get from this operation.

In the Countries table, you have the Country Spain with an id of 4, but there are no
records for CountryID 4 in the Sales table. That's why only one of two rows from the
right (second) table found a match. Because of how the right anti join works, you'll never
see any rows from the left (first) table in the output of this operation.

From the newly created Countries column, expand the Country field. Don't select the
Use original column name as prefix check box.
After performing this operation, you'll create a table that looks like the following image.
The newly expanded Country field doesn't have any values. That's because the right anti
join doesn't bring any values from the left table—it only keeps rows from the right table.
Fuzzy merge
Article • 12/17/2022

Fuzzy merge is a smart data preparation feature you can use to apply fuzzy matching
algorithms when comparing columns, to try to find matches across the tables that are
being merged.

You can enable fuzzy matching at the bottom of the Merge dialog box by selecting the
Use fuzzy matching to perform the merge option button. More information: Merge
operations overview

7 Note

Fuzzy matching is only supported on merge operations over text columns. Power
Query uses the Jaccard similarity algorithm to measure the similarity between pairs
of instances.

Sample scenario
A common use case for fuzzy matching is with freeform text fields, such as in a survey.
For this article, the sample table was taken directly from an online survey sent to a
group with only one question: What is your favorite fruit?

The results of that survey are shown in the following image.


The nine records reflect the survey submissions. The problem with the survey
submissions is that some have typos, some are plural, some are singular, some are
uppercase, and some are lowercase.

To help standardize these values, in this example you have a Fruits reference table.

7 Note
For simplicity, this Fruits reference table only includes the name of the fruits that
will be needed for this scenario. Your reference table can have as many rows as you
need.

The goal is to create a table like the following, where you've standardized all these
values so you can do more analysis.

Fuzzy merge operation


To do the fuzzy merge, you start by doing a merge. In this case, you'll use a left outer
join, where the left table is the one from the survey and the right table is the Fruits
reference table. At the bottom of the dialog box, select the Use fuzzy matching to
perform the merge check box.
After you select OK, you can see a new column in your table because of this merge
operation. If you expand it, you'll notice that there's one row that doesn't have any
values in it. That's exactly what the dialog box message in the previous image stated
when it said "The selection matches 8 of 9 rows from the first table."
Fuzzy matching options
You can modify the Fuzzy matching options to tweak how the approximate match
should be done. First, select the Merge queries command, and then in the Merge dialog
box, expand Fuzzy matching options.
The available options are:

Similarity threshold (optional): A value between 0.00 and 1.00 that provides the
ability to match records above a given similarity score. A threshold of 1.00 is the
same as specifying an exact match criteria. For example, Grapes matches with
Graes (missing the letter p) only if the threshold is set to less than 0.90. By default,
this value is set to 0.80.
Ignore case: Allows matching records no matter what the case of the text.
Match by combining text parts: Allows combining text parts to find matches. For
example, Micro soft is matched with Microsoft if this option is enabled.
Show similarity scores: Shows similarity scores between the input and the matches
values after fuzzy matching.
Number of matches (optional): Specifies the maximum number of matching rows
that can be returned for every input row.
Transformation table (optional): Allows matching records based on custom value
mappings. For example, Grapes is matched with Raisins if a transformation table is
provided where the From column contains Grapes and the To column contains
Raisins.

Transformation table
For the example in this article, you can use a transformation table to map the value that
has a missing pair. That value is apls, which needs to be mapped to Apple. Your
transformation table has two columns:

From contains the values to find.


To contains the values that will be used to replace the values found by using the
From column.

For this article, the transformation table will look as follows:

From To

apls Apple

You can go back to the Merge dialog box, and in Fuzzy matching options under
Number of matches, enter 1. Enable the Show similarity scores option, and then, under
Transformation table, select Transform Table from the drop-down menu.
After you select OK, you can go to the merge step. When you expand the column with
table values, you'll notice that besides the Fruit field you'll also see the Similarity score
field. Select both and expand them without adding a prefix.
After expanding these two fields, they'll be added to your table. Note the values you get
for the similarity scores of each value. These scores can help you with further
transformations if needed to determine if you should lower or raise your similarity
threshold.

For this example, the Similarity score serves only as additional information and isn't
needed in the output of this query, so you can remove it. Note how the example started
with nine distinct values, but after the fuzzy merge, there are only four distinct values.
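
In M, a fuzzy merge corresponds to the Table.FuzzyNestedJoin function. The following
is a minimal sketch for this scenario; the query names (Survey, Fruits, Transform
Table) and the column name Fruit are assumptions based on this example, and the
exact option fields might vary by version.

= Table.FuzzyNestedJoin(
    Survey, {"Fruit"},
    Fruits, {"Fruit"},
    "Fruits",
    JoinKind.LeftOuter,
    [Threshold = 0.8, NumberOfMatches = 1, TransformationTable = #"Transform Table"])
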
Cross join
Article • 12/17/2022

A cross join is a type of join that returns the Cartesian product of rows from the tables in
the join. In other words, it combines each row from the first table with each row from
the second table.

This article demonstrates, with a practical example, how to do a cross join in Power
Query.

Sample input and output tables


For this example, the sample source tables are:

Product: A table with all the generic products that you sell.

Colors: A table with all the product variations, as colors, that you can have in your
inventory.

The goal is to perform a cross-join operation with these two tables to create a list of all
unique products that you can have in your inventory, as shown in the following table.
This operation is necessary because the Product table only contains the generic product
name, and doesn't give the level of detail you need to see what product variations (such
as color) there are.
Perform a cross join
To do a cross-join operation in Power Query, first go to the Product table. From the Add
column tab on the ribbon, select Custom column. More information: Add a custom
column

In the Custom column dialog box, enter whatever name you like in the New column
name box, and enter Colors in the Custom column formula box.
) Important

If your query name has spaces in it, such as Product Colors, the text that you need
to enter in the Custom column formula section has to follow the syntax #"Query
name" . For Product Colors, you need to enter #"Product Colors" .

You can check the name of your queries in the Query settings pane on the right
side of your screen or in the Queries pane on the left side.

After you select OK in the Custom column dialog box, a new column is added to the
table. In the new column heading, select Expand to expand the contents of this newly
created column, and then select OK.
After you select OK, you'll reach your goal of creating a table with all possible
combinations of Product and Colors.
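
The two steps above correspond to M similar to the following sketch. The Colors query
is referenced directly in Table.AddColumn; the column name Color inside the Colors
table is an assumption for illustration.

#"Added custom" = Table.AddColumn(Product, "ColorVariants", each Colors),
#"Expanded ColorVariants" = Table.ExpandTableColumn(#"Added custom", "ColorVariants", {"Color"})
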
Split columns by delimiter
Article • 12/17/2022

In Power Query, you can split a column through different methods. In this case, the
column(s) selected can be split by a delimiter.

Where to find Split Columns > By Delimiter


You can find the Split Columns: By Delimiter option in three places:

Home tab—under the Split column dropdown menu inside the Transform group.

Transform tab—under the Split column dropdown menu inside the Text column
group.

Right-click a column—inside the Split column option.


Split columns by delimiter into columns
In this example, the initial table will be the one shown in the image below, with only one
column for Accounts.
This column holds two values:

Account number
Account name

In this example, you want to split this column into two columns. The values are
delimited by a space—the first space from left to right. To do this split, select the
column, and then select the option to split the column by a delimiter. In Split Column
by Delimiter, apply the following configuration:

Select or enter delimiter: Space


Split at: Left-most delimiter
The result of that operation will give you a table with the two columns that you're
expecting.

7 Note

Power Query will split the column into as many columns as needed. The name of
the new columns will contain the same name as the original column. A suffix that
includes a dot and a number that represents the split sections of the original
column will be appended to the name of the new columns.
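
The generated step uses Table.SplitColumn with a splitter function. For the left-most
space delimiter in this example, the formula looks roughly like the following; the new
column names follow the default naming described in the note above.

= Table.SplitColumn(Source, "Accounts",
    Splitter.SplitTextByEachDelimiter({" "}, QuoteStyle.Csv, false),
    {"Accounts.1", "Accounts.2"})
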

Split columns by delimiter into rows


In this example, your initial table will be the one shown in the image below, with the
columns Cost Center and Accounts.

The Accounts column has values in pairs separated by a comma. These pairs are
separated by a semicolon. The goal of this example is to split this column into new rows
by using the semicolon as the delimiter.

To do that split, select the Accounts column. Select the option to split the column by a
delimiter. In Split Column by Delimiter, apply the following configuration:

Select or enter delimiter: Semicolon


Split at: Each occurrence of the delimiter
Split into: Rows
The result of that operation will give you a table with the same number of columns, but
many more rows because the values inside the cells are now in their own cells.

Final Split
Your table still requires one last split column operation. You need to split the Accounts
column by the first comma that it finds. This split will create a column for the account
name and another one for the account number.

To do that split, select the Accounts column and then select Split Column > By
Delimiter. Inside the Split column window, apply the following configuration:

Select or enter delimiter: Comma


Split at: Each occurrence of the delimiter

The result of that operation will give you a table with the three columns that you're
expecting. You then rename the columns as follows:

Accounts.1: Account Name

Accounts.2: Account Number

Your final table looks like the one in the following image.
Split columns by number of characters
Article • 12/17/2022

In Power Query, you can split a column through different methods. In this case, the
column(s) selected can be split by the number of characters.

Where to find Split Columns > By Number of


Characters
You can find the Split Columns > By Number of Characters option in three places:

Home tab—under the Split Column dropdown menu inside the Transform group.

Transform tab—under the Split Column dropdown menu inside the Text Column
group.

Right-click a column—inside the Split Column option.


Split columns by number of characters into
columns
The initial table for this example will be the one below, with only one column for
Column1.

This column holds three values:

Account Name—in the first six characters


Date—in the following eight characters with the format yyyymmdd
Units—the remaining characters

In this example, you want to split this column into three columns containing the values
described in the list above.
To do this split, select the column and then select the option to split the column by the
number of characters. In Split column by Number of Characters, apply the following
configuration:

Number of characters: 6
Split: Once, as far left as possible

The result of that operation will give you a table with two columns. One for the account
name and the other one that contains the combined values for the date and units.

7 Note

Power Query will split the column into only two columns. The name of the new
columns will contain the same name as the original column. A suffix containing a
dot and a number that represents the split section of the column will be appended
to the names of the new columns.

Now continue to do the same operation over the new Column1.2 column, but with the
following configuration:

Number of characters: 8
Split: Once, as far left as possible
The result of that operation will yield a table with three columns. Notice the new names
of the two columns on the far right. Column1.2.1 and Column1.2.2 were automatically
created by the split column operation.

You can now change the name of the columns and also define the data types of each
column as follows:

Column1.1: rename to Account Name, data type Text

Column1.2.1: rename to Date, data type Date

Column1.2.2: rename to Units, data type Whole Number

Your final table will look like the one in the following image.
Split columns by number of characters into
rows
The initial table for this example will be the one below, with the columns Group and
Account.

The Account column can hold multiple values in the same cell. Each value has the same
length in characters, with a total of six characters. In this example, you want to split
these values so you can have each account value in its own row.

To do that, select the Account column and then select the option to split the column by
the number of characters. In Split column by Number of Characters, apply the following
configuration:

Number of characters: 6
Split: Repeatedly
Split into: Rows

The result of that operation will give you a table with the same number of columns, but
many more rows because the fragments inside the original cell values in the Account
column are now split into multiple rows.
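
One way to express this split in M is to turn each cell into a list with
Splitter.SplitTextByRepeatedLengths and then expand that list into rows, as in the
following sketch (the step names are illustrative, and the formula generated by the
user interface might differ slightly):

#"Split into lists" = Table.TransformColumns(Source,
    {{"Account", Splitter.SplitTextByRepeatedLengths(6)}}),
#"Expanded Account" = Table.ExpandListColumn(#"Split into lists", "Account")
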
Split columns by positions
Article • 12/17/2022

In Power Query, you can split a column through different methods. In this case, the
column(s) selected can be split by positions.

Where to find Split Columns > By Positions


You can find the Split Columns > By Positions option in three places:

Home tab—under the Split Column dropdown menu inside the Transform group.

Transform tab—under the Split Column dropdown menu inside the Text Column
group.

Right-click a column—inside the Split Column option.


Split columns by positions into columns
The initial table for this example will be the one shown in the image below, with only
one column for Column1.

This column holds three values:

Account Name—in the first six characters


Date—in the next eight characters with the format yyyymmdd
Units—the rest of the characters

In this example, you want to split this column into the three columns made from the
values in the list above. To do this split, select the column and then select the option to
split the column by positions. In Split Column by Positions, apply the following
configuration:
Positions: 0,6,14
Positions are zero-based and comma-separated, where position zero is the start
of the string.

7 Note

This operation first creates a column from position 0 up to position 6, then a column
from position 6 up to position 14. The remaining characters, from position 14 to the
end of each value, go into a third column.

The result of that operation will give you a table with three columns.

7 Note

Power Query will split the column into as many columns as the number of positions you
provide. The name of the new columns will contain the same name as the original
column. A suffix created by a dot and a number that represents the split section of
the column will be appended to the name of the new columns.

You can now change the name of the columns, and also define the data types of each
column as follows:

Column1.1: rename to Account Name, data type Text

Column1.2: rename to Date, data type Date

Column1.3: rename to Units, data type Whole Number

Your final table will look like the one in the following image.
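
In M, this split uses Splitter.SplitTextByPositions, roughly as follows (the step name
Source and the default output column names are illustrative):

= Table.SplitColumn(Source, "Column1",
    Splitter.SplitTextByPositions({0, 6, 14}),
    {"Column1.1", "Column1.2", "Column1.3"})
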

Split columns by positions into rows


The initial table for this example will be like the one in the image below, with the
columns Group and Account.

The Account column can only hold two values in the same cell. Each value has the same
length in characters, with a total of six characters. In this example, you want to split
these values so you can have each account value in its own row. To do that, select the
Account column and then select the option to split the column by positions. In Split
Column by Positions, apply the following configuration:

Positions: 0, 6
Split into: Rows
7 Note

This operation first splits each value at position 0 and then at position 6. Because
Split into is set to Rows, each resulting fragment is placed in its own row.

The result of that operation will give you a table with the same number of columns, but
many more rows because the values inside the cells are now in their own cells.
Split columns by lowercase to uppercase
Article • 02/17/2023

In Power Query, you can split a column through different methods. If your data contains
CamelCased text or a similar pattern, then the selected column(s) can easily be split by
every instance of the last lowercase letter followed by the next uppercase letter.

Where to find Split Columns > By Lowercase to


Uppercase
You can find the Split Columns: By Lowercase to Uppercase option in three places:

Home tab—under the Split Column dropdown menu inside the Transform group.

Transform tab—under the Split Column dropdown menu inside the Text Column
group.

Right-click a column—inside the Split Column option.


Split columns by lowercase to uppercase into
columns
The initial table in this example will be the one shown in the image below, with one
column for Word and an extra column named Syllables. You'll only focus on the first
column.

This column holds two values in each row:

FirstWord—The first half of the compound word.


SecondWord—The second half of the compound word.
In this example, you want to split this column into the two columns described in the list
above. Select the column and then select the option to split the column by lowercase to
uppercase.

This single column will split into multiple columns, given every instance of the last
lowercase letter to the next uppercase letter. In this case, it only splits into two columns.

Your final table will look like the following image.
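
This split is expressed in M with Splitter.SplitTextByCharacterTransition, which takes
the set of characters before the transition and the set of characters after it. A sketch
for this example (the step name Source is illustrative):

= Table.SplitColumn(Source, "Word",
    Splitter.SplitTextByCharacterTransition({"a".."z"}, {"A".."Z"}),
    {"Word.1", "Word.2"})
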


Split columns by uppercase to lowercase
Article • 02/17/2023

In Power Query, you can split a column through different methods. In this case, the
column(s) selected can be split by every instance of the last uppercase letter to the next
lowercase letter.

Where to find Split Columns > By Uppercase to


Lowercase
You can find the Split Columns > By Uppercase to Lowercase option in three places:

Home tab—under the Split Column dropdown menu inside the Transform group.

Transform tab—under the Split Column dropdown menu inside the Text Column
group.

Right-click a column—inside the Split Column option.


Split columns by uppercase to lowercase into
columns
Your initial table in this example will be the one shown in the image below, with only
one column for Column1.

This column holds two values:

FirstWord—The first half of a product that is in camel case.


SecondWord—The second half of a product that is in camel case.
In this example, you want to split this column into the two columns described in the list
above. Select the column and then select the option to split the column by uppercase to
lowercase.

This single column will split into multiple columns, given every instance of the last
uppercase letter to the next lowercase letter. In this case, it only splits into two columns.

Your final table will look like the following image.


Split columns by digit to non-digit
Article • 02/17/2023

In Power Query, you can split a column through different methods. In this case, the
column(s) selected can be split by every instance of a digit followed by a non-digit.

Where to find Split columns > By Digit to Non-


Digit
You can find the Split Columns: By Digit to Non-Digit option in three places:

Home tab—under the Split Column dropdown menu inside the Transform group.

Transform tab—under the Split Column dropdown menu inside the Text Column
group.

Right-click a column—inside the Split Column option.


Split columns by digit to non-digit into
columns
The initial table in this example will be the one shown in the image below, with a column
for Pets and other extra columns. This example will only focus on the Pets column.

This column holds two values in each row:

Rank—The rank of the animal.


AnimalType—The second part of the word is the type of animal.
In this example, you want to split this column into the two columns described in the list
above. Select the column and then select the option to split the column by digit to non-
digit.

This single column will split into multiple columns, given every instance of a digit
followed by a non-digit. In this case, it only splits into two columns.

Your final table will look like the following image.
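
In M, this transition-based split can be expressed with Splitter.SplitTextByCharacterTransition,
passing the digit characters as the "before" set and a function that matches any non-digit
as the "after" set. A sketch for this example (the step name Source is illustrative):

= Table.SplitColumn(Source, "Pets",
    Splitter.SplitTextByCharacterTransition({"0".."9"}, (c) => not List.Contains({"0".."9"}, c)),
    {"Pets.1", "Pets.2"})
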


Split columns by non-digit to digit
Article • 02/17/2023

In Power Query, you can split a column through different methods. In this case, the
column(s) selected can be split by every instance of a non-digit followed by a digit.

Where to find Split Columns > By Non-Digit to


Digit
You can find the Split Columns > By Non-Digit to Digit option in three places:

Home tab—under the Split Column dropdown menu inside the Transform group.

Transform tab—under the Split Column dropdown menu inside the Text Column
group.

Right-click a column—inside the Split Column option.


Split columns by non-digit to digit into
columns
The initial table in this example will be the one shown in the image below, with a column
for Pets and other extra columns. In this example, you'll only focus on the Pets column.

This column holds two values in each row:

AnimalType—The first part of the value, which is the type of animal.

Number—The number of the animal, in the order it came into the person's life.

In this example, you want to split this column into the two columns described in the list
above. Select the column and then select the option to split the column by non-digit to
digit.

This single column splits into multiple columns, with a split at every instance of a non-digit
followed by a digit. In this case, it only splits into two columns.
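In M, the same split with the transition reversed (non-digit first, then digit) can be sketched as follows; the sample values and output column names are again assumptions.

```powerquery-m
// Minimal sketch: split at every non-digit followed by a digit.
// The sample values and output column names are assumptions.
let
    Source = #table(type table [Pets = text], {{"Fennec1"}, {"Dog2"}}),
    SplitByTransition = Table.SplitColumn(
        Source,
        "Pets",
        Splitter.SplitTextByCharacterTransition((c) => not List.Contains({"0".."9"}, c), {"0".."9"}),
        {"AnimalType", "Number"}
    )
in
    SplitByTransition
```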

Your final table will look like the following image.


What are dataflows?
Article • 07/27/2023

 Tip

Try out Dataflow Gen2 in Data Factory in Microsoft Fabric, an all-in-one analytics
solution for enterprises. Microsoft Fabric covers everything from data movement to
data science, real-time analytics, business intelligence, and reporting. Learn how to
start a new trial for free!

Dataflows are a self-service, cloud-based data preparation technology. Dataflows
enable customers to ingest, transform, and load data into Microsoft Dataverse
environments, Power BI workspaces, or your organization's Azure Data Lake Storage
account. Dataflows are authored by using Power Query, a unified data connectivity and
preparation experience already featured in many Microsoft products, including Excel
and Power BI. Customers can trigger dataflows to run either on demand or automatically
on a schedule; data is always kept up to date.

Dataflows can be created in multiple Microsoft products
Dataflows are featured in multiple Microsoft products and don't require a dataflow-
specific license to be created or run. Dataflows are available in Power Apps, Power BI,
and Dynamics 365 Customer Insights. The ability to create and run dataflows is bundled
with those products' licenses. Dataflow features are mostly common across all products
they're featured in, but some product-specific features might exist in dataflows created
in one product versus another.

How does the dataflow function?


The previous image shows an overall view of how a dataflow is defined. A dataflow gets
data from different data sources (more than 80 data sources are supported already).
Then, based on the transformations configured with the Power Query authoring
experience, the dataflow transforms the data by using the dataflow engine. Finally, the
data is loaded to the output destination, which can be a Microsoft Power Platform
environment, a Power BI workspace, or the organization's Azure Data Lake Storage
account.

Dataflows run in the cloud


Dataflows are cloud-based. When a dataflow is authored and saved, its definition is
stored in the cloud. A dataflow also runs in the cloud. However, if a data source is on-
premises, an on-premises data gateway can be used to extract the data to the cloud.
When a dataflow run is triggered, the data transformation and computation happens in
the cloud, and the destination is always in the cloud.
Dataflows use a powerful transformation engine
Power Query is the data transformation engine used in the dataflow. This engine is
capable enough to support many advanced transformations. It also uses a
straightforward, yet powerful, graphical user interface called Power Query Editor. You
can use dataflows with this editor to develop your data integration solutions faster and
more easily.

Dataflow integration with Microsoft Power Platform and Dynamics 365
Because a dataflow stores the resulting entities in cloud-based storage, other services
can interact with the data produced by dataflows.
For example, Power BI, Power Apps, Power Automate, Power Virtual Agents, and
Dynamics 365 applications can get the data produced by the dataflow by connecting to
Dataverse, a Power Platform dataflow connector, or directly through the lake, depending
on the destination configured at dataflow creation time.

Benefits of dataflows
The following list highlights some of the benefits of using dataflows:

A dataflow decouples the data transformation layer from the modeling and
visualization layer in a Power BI solution.

The data transformation code can reside in a central location, a dataflow, rather
than be spread out among multiple artifacts.

A dataflow creator only needs Power Query skills. In an environment with multiple
creators, the dataflow creator can be part of a team that together builds the entire
BI solution or operational application.

A dataflow is product-agnostic. It's not a component of Power BI only; you can get
its data in other tools and services.

Dataflows take advantage of Power Query, a powerful, graphical, self-service data
transformation experience.

Dataflows run entirely in the cloud. No additional infrastructure is required.


You have multiple options for starting to work with dataflows, using licenses for
Power Apps, Power BI, and Customer Insights.

Although dataflows are capable of advanced transformations, they're designed for
self-service scenarios and require no IT or developer background.

Use-case scenarios for dataflows


You can use dataflows for many purposes. The following scenarios provide a few
examples of common use cases for dataflows.

Data migration from legacy systems


In this scenario, the decision has been made by an organization to use Power Apps for
the new user interface experience rather than the legacy on-premises system. Power
Apps, Power Automate, and AI Builder all use Dataverse as the primary data storage
system. The current data in the existing on-premises system can be migrated into
Dataverse by using a dataflow, and then these products can use that data.

Using dataflows to build a data warehouse


You can use dataflows as a replacement for other extract, transform, load (ETL) tools to
build a data warehouse. In this scenario, the data engineers of a company decide to use
dataflows to build their star schema–designed data warehouse, including fact and
dimension tables in Data Lake Storage. Then Power BI is used to generate reports and
dashboards by getting data from the dataflows.
Using dataflows to build a dimensional model
You can use dataflows as a replacement for other ETL tools to build a dimensional
model. For example, the data engineers of a company decide to use dataflows to build
the star-schema designed dimensional model, including fact and dimension tables in
Azure Data Lake Storage Gen2. Then Power BI is used to generate reports and
dashboards by getting data from the dataflows.

Centralize data preparation and reuse of datasets across multiple Power BI solutions
If multiple Power BI solutions are using the same transformed version of a table, the
process to create the table will be repeated multiple times. This increases the load on
the source system, consumes more resources, and creates duplicate data with multiple
points of failure. Instead, a single dataflow can be created to compute the data for all
solutions. Power BI can then reuse the result of the transformation in all solutions. The
dataflow, if used in such a way, can be part of a robust Power BI implementation
architecture that avoids the Power Query code duplicates and reduces the maintenance
costs of the data integration layer.

Next steps
The following articles provide further study materials for dataflows.

Create and use dataflows in Microsoft Power Platform

Creating and using dataflows in Power BI


Understanding the differences between
dataflow types
Article • 08/04/2023

Dataflows are used to extract, transform, and load data to a storage destination where it
can be leveraged for different scenarios. Because not all storage destinations share the
same characteristics, some dataflow features and behaviors differ depending on the
storage destination the dataflow loads data into. Before you create a dataflow, it's
important to understand how the data is going to be used, and choose the storage
destination according to the requirements of your solution.

Selecting a storage destination of a dataflow determines the dataflow's type. A dataflow
that loads data into Dataverse tables is categorized as a standard dataflow. Dataflows
that load data to analytical tables are categorized as analytical dataflows.

Dataflows created in Power BI are always analytical dataflows. Dataflows created in
Power Apps can either be standard or analytical, depending on your selection when
creating the dataflow.

Standard dataflows
A standard dataflow loads data to Dataverse tables. Standard dataflows can only be
created in Power Apps. One benefit of creating this type of dataflow is that any
application that depends on data in Dataverse can work with the data created by
standard dataflows. Typical applications that leverage Dataverse tables are Power Apps,
Power Automate, AI Builder and Power Virtual Agents.
Standard dataflow versions
We've been working on significant updates to standard dataflows to improve their
performance and reliability. These improvements will eventually be available to all
standard dataflows. But in the interim, we'll differentiate between existing standard
dataflows (version 1) and new standard dataflows (version 2) by adding a version
indicator in Power Apps.
Standard dataflow versions feature comparison
The following table lists the major feature differences between standard dataflows V1
and V2, and provides information about each feature's behavior in each version.

| Feature | Standard V1 | Standard V2 |
| --- | --- | --- |
| Maximum number of dataflows that can be saved with automatic schedule per customer tenant | 50 | Unlimited |
| Maximum number of records ingested per query/table | 500,000 | Unbounded. The maximum number of records that can be ingested per query or table now depends on Dataverse service protection limits at the time of ingestion. |
| Ingestion speed into Dataverse | Baseline performance | Improved performance by a few factors. Actual results may vary and depend on characteristics of the data ingested, and load on the Dataverse service at the time of ingestion. |
| Incremental refresh policy | Not supported | Supported |
| Resiliency | When Dataverse service protection limits are encountered, a record will be retried up to 3 times. | When Dataverse service protection limits are encountered, a record will be retried up to 3 times. |
| Power Automate integration | Not supported | Supported |

Analytical dataflows
An analytical dataflow loads data to storage types optimized for analytics—Azure Data
Lake Storage. Microsoft Power Platform environments and Power BI workspaces provide
customers with a managed analytical storage location that's bundled with those product
licenses. In addition, customers can link their organization’s Azure Data Lake storage
account as a destination for dataflows.

Analytical dataflows are capable of additional analytical features. For example, integration
with Power BI’s AI features or use of computed entities, which will be discussed later.
You can create analytical dataflows in Power BI. By default, they'll load data to Power BI’s
managed storage. But you can also configure Power BI to store the data in the
organization’s Azure Data Lake Storage.

You can also create analytical dataflows in the Power Apps and Dynamics 365 Customer
Insights portals. When you're creating a dataflow in the Power Apps portal, you can
choose between Dataverse managed analytical storage and your organization's Azure
Data Lake Storage account.
AI Integration
Sometimes, depending on the requirement, you might need to apply some AI and
machine learning functions on the data through the dataflow. These functionalities are
available in Power BI dataflows and require a Premium workspace.

The following articles discuss how to use AI functions in a dataflow:

Azure Machine Learning integration in Power BI


Cognitive Services in Power BI
Automated Machine Learning in Power BI

Note that the features listed above are Power BI specific and are not available when
creating a dataflow in the Power Apps or Dynamics 365 Customer Insights portals.

Computed tables
One of the reasons to use a computed table is the ability to process large amounts of
data. The computed table helps in those scenarios. If you have a table in a dataflow,
and another table in the same dataflow uses the first table's output, this action creates a
computed table.

The computed table helps with the performance of the data transformations. Instead of
re-doing the transformations needed in the first table multiple times, the transformation
is done only once in the computed table. Then the result is used multiple times in other
tables.
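
To make the idea concrete, here's a hedged sketch of what a computed table's query can look like. It assumes an upstream table named SalesRaw with Region and Amount columns already exists in the same analytical dataflow; because this query only references that table's output, the upstream transformations run only once.

```powerquery-m
// Hypothetical computed table. "SalesRaw" is an assumed table that already
// exists (and is loaded) in the same analytical dataflow; referencing it here
// means its transformations run once and this table reuses the result.
let
    Source = SalesRaw,
    SalesByRegion = Table.Group(
        Source,
        {"Region"},
        {{"TotalAmount", each List.Sum([Amount]), type nullable number}}
    )
in
    SalesByRegion
```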

To learn more about computed tables, go to Creating computed tables in dataflows.

Computed tables are available only in an analytical dataflow.

Standard vs. analytical dataflows


The following table lists some differences between a standard table and an analytical
table.
| Operation | Standard | Analytical |
| --- | --- | --- |
| How to create | Power Platform dataflows | Power BI dataflows; Power Platform dataflows by selecting the Analytical entities only checkbox when creating the dataflow |
| Storage options | Dataverse | Power BI provided Azure Data Lake storage for Power BI dataflows; Dataverse provided Azure Data Lake storage for Power Platform dataflows; or customer provided Azure Data Lake storage |
| Power Query transformations | Yes | Yes |
| AI functions | No | Yes |
| Computed table | No | Yes |
| Can be used in other applications | Yes, through Dataverse | Power BI dataflows: only in Power BI. Power Platform dataflows or Power BI external dataflows: yes, through Azure Data Lake Storage |
| Mapping to standard table | Yes | Yes |
| Incremental load | Default incremental-load; possible to change by using the Delete rows that no longer exist in the query output checkbox at the load settings | Default full-load; possible to set up incremental refresh by setting up the incremental refresh in the dataflow settings |
| Scheduled refresh | Yes | Yes, with the possibility of notifying the dataflow owners upon failure |

Scenarios to use each dataflow type


Here are some sample scenarios and best practice recommendations for each type of
dataflow.

Cross-platform usage—standard dataflow


If your plan for building dataflows is to use stored data in multiple platforms (not only
Power BI, but also other Microsoft Power Platform services, Dynamics 365, and so on), a
standard dataflow is a great choice. Standard dataflows store the data in Dataverse,
which you can access through many other platforms and services.

Heavy data transformations on large data tables—analytical dataflow
Analytical dataflows are an excellent option for processing large amounts of data.
Analytical dataflows also enhance the computing power behind the transformation.
Having the data stored in Azure Data Lake Storage increases the writing speed to a
destination. Compared to Dataverse (which might have many rules to check at the time
of data storage), Azure Data Lake Storage is faster for read/write transactions on a large
amount of data.

AI features—analytical dataflow
If you're planning to use any AI functionality through the data transformation stage,
you'll find it helpful to use an analytical dataflow because you can use all the supported
AI features with this type of dataflow.
Create and use dataflows in Microsoft
Power Platform
Article • 04/06/2023

Using dataflows with Microsoft Power Platform makes data preparation easier, and lets
you reuse your data preparation work in subsequent reports, apps, and models.

In the world of ever-expanding data, data preparation can be difficult and expensive,
consuming as much as 60 to 80 percent of the time and cost for a typical analytics
project. Such projects can require wrangling fragmented and incomplete data, complex
system integration, data with structural inconsistency, and a high skillset barrier.

To make data preparation easier and to help you get more value out of your data, Power
Query and Power Platform dataflows were created.

With dataflows, Microsoft brings the self-service data preparation capabilities of Power
Query into the Power BI and Power Apps online services, and expands existing
capabilities in the following ways:

Self-service data prep for big data with dataflows: Dataflows can be used to
easily ingest, cleanse, transform, integrate, enrich, and schematize data from a
large and ever-growing array of transactional and observational sources,
encompassing all data preparation logic. Previously, extract, transform, load (ETL)
logic could only be included within datasets in Power BI, copied over and over
between datasets, and bound to dataset management settings.

With dataflows, ETL logic is elevated to a first-class artifact within Microsoft Power
Platform services, and includes dedicated authoring and management experiences.
Business analysts, BI professionals, and data scientists can use dataflows to handle
the most complex data preparation challenges and build on each other's work,
thanks to a revolutionary model-driven calculation engine, which takes care of all
the transformation and dependency logic—cutting time, cost, and expertise to a
fraction of what's traditionally been required for those tasks. You can create
dataflows by using the well-known, self-service data preparation experience of
Power Query. Dataflows are created and easily managed in app workspaces or
environments, in Power BI or Power Apps, respectively, enjoying all the capabilities
these services have to offer, such as permission management and scheduled
refreshes.

Load data to Dataverse or Azure Data Lake Storage: Depending on your use case,
you can store data prepared by Power Platform dataflows in the Dataverse or your
organization's Azure Data Lake Storage account:

Dataverse lets you securely store and manage data that's used by business
applications. Data within Dataverse is stored in a set of tables. A table is a set of
rows (formerly referred to as records) and columns (formerly referred to as
fields/attributes). Each column in the table is designed to store a certain type of
data, for example, name, age, salary, and so on. Dataverse includes a base set of
standard tables that cover typical scenarios, but you can also create custom
tables specific to your organization and populate them with data by using
dataflows. App makers can then use Power Apps and Power Automate to build
rich applications that use this data.

Azure Data Lake Storage lets you collaborate with people in your organization
using Power BI, Azure Data, and AI services, or using custom-built Line of
Business Applications that read data from the lake. Dataflows that load data to
an Azure Data Lake Storage account store data in Common Data Model folders.
Common Data Model folders contain schematized data and metadata in a
standardized format, to facilitate data exchange and to enable full
interoperability across services that produce or consume data stored in an
organization’s Azure Data Lake Storage account as the shared storage layer.
Advanced Analytics and AI with Azure: Power Platform dataflows store data in
Dataverse or Azure Data Lake Storage—which means that data ingested through
dataflows is now available to data engineers and data scientists to leverage the full
power of Azure Data Services, such as Azure Machine Learning, Azure Databricks,
and Azure Synapse Analytics for advanced analytics and AI. This enables business
analysts, data engineers, and data scientists to collaborate on the same data within
their organization.

Support for Common Data Model: Common Data Model is a set of a standardized
data schemas and a metadata system to allow consistency of data and its meaning
across applications and business processes. Dataflows support Common Data
Model by offering easy mapping from any data in any shape into the standard
Common Data Model entities, such as Account and Contact. Dataflows also land
the data, both standard and custom entities, in schematized Common Data Model
form. Business analysts can take advantage of the standard schema and its
semantic consistency, or customize their entities based on their unique needs.
Common Data Model continues to evolve as part of the Open Data Initiative .

Dataflow capabilities in Microsoft Power Platform services
Most dataflow capabilities are available in both Power Apps and Power BI. Dataflows are
available as part of these services' plans. Some dataflow features are either product-
specific or available in different product plans. The following table describes dataflow
features and their availability.

| Dataflow capability | Power Apps | Power BI |
| --- | --- | --- |
| Scheduled refresh | Up to 48 per day | Up to 48 per day |
| Maximum per entity refresh time | Up to 2 hours | Up to 2 hours |
| Dataflow authoring with Power Query Online | Yes | Yes |
| Dataflow management | In Power Apps admin portal | In Power BI admin portal |
| New connectors | Yes | Yes |
| Standardized schema / built-in support for the Common Data Model | Yes | Yes |
| Dataflows Data Connector in Power BI Desktop | For dataflows with Azure Data Lake Storage as the destination | Yes |
| Integration with the organization's Azure Data Lake Storage | Yes | Yes |
| Integration with Dataverse | Yes | No |
| Dataflow linked entities | For dataflows with Azure Data Lake Storage as the destination | Yes |
| Computed Entities (in-storage transformations using M) | For dataflows with Azure Data Lake Storage as the destination | Power BI Premium only |
| Dataflow incremental refresh | For dataflows with Azure Data Lake Storage as the destination; requires Power Apps Plan 2 | Power BI Premium only |
| Running on Power BI Premium capacity / parallel execution of transforms | No | Yes |

Known limitations
Copying dataflows as part of a Power Platform environments copy operation is not
supported.

Next steps
More information about dataflows in Power Apps:

Self-service data prep in Power Apps


Creating and using dataflows in Power Apps
Connect Azure Data Lake Storage Gen2 for dataflow storage
Add data to a table in Dataverse by using Power Query
Visit the Power Apps dataflow community and share what you’re doing, ask
questions, or submit new ideas
Visit the Power Apps dataflow community forum and share what you’re doing, ask
questions, or submit new ideas
More information about dataflows in Power BI:

Self-service data prep in Power BI


Create and use dataflows in Power BI
Dataflows whitepaper
Detailed video of a dataflows walkthrough
Visit the Power BI dataflows community and share what you're doing, ask
questions, or submit new ideas

The following articles go into more detail about common usage scenarios for dataflows.

Using incremental refresh with dataflows


Creating computed entities in dataflows
Connect to data sources for dataflows
Link entities between dataflows

For more information about Common Data Model and the Common Data Model folder
standard, read the following articles:

Common Data Model - overview


Common Data Model folders
Common Data Model folder model file definition
Create and use dataflows in Microsoft
Teams (Preview)
Article • 02/17/2023

Microsoft Dataverse for Teams delivers a built-in, low-code data platform for Microsoft
Teams. It provides relational data storage, rich data types, enterprise-grade governance,
and one-click solution deployment. Dataverse for Teams enables everyone to easily
build and deploy apps.

Previously, the way to get data into Dataverse for Teams was to manually add data
directly into a table. This process can be prone to errors and isn't scalable. But now, with
self-service data prep you can find, clean, shape, and import your data into Dataverse
for Teams.

With your organizational data already sitting in a different location, you can use Power
Query dataflows to directly access your data through the connectors and load the data
into Dataverse for Teams. When your organizational data is updated, you can refresh
your dataflows with just one click and the data in Dataverse for Teams is updated too. You
can also use the Power Query data transformations to easily validate and clean your
data and enforce data quality for your apps.

Dataflows were introduced to help organizations retrieve data from disparate sources
and prepare it for consumption. You can easily create dataflows using the familiar, self-
service Power Query experience to ingest, transform, integrate, and enrich data. When
creating a dataflow, you'll connect to data, transform the data, and load data into
Dataverse for Teams tables. Once the dataflow is created, it begins the process of
importing data into the Dataverse table. Then you can start building apps to leverage
that data.

Create a dataflow from the dataflows page


In this example, you're going to load data from an Excel file that's located on OneDrive
into Dataverse for Teams.

1. Sign in to Teams web version , and then select the link for Power Apps.

2. Select the Build tab, and then select Dataflows (Preview).


3. To create a new dataflow, select the New button.

4. Select the Excel workbook connector.


5. Enter a URL address in File path or URL, or use the Browse OneDrive button to
navigate through your OneDrive folders. Select the file you want, and then select
the Next button. For more information about using the OneDrive connection or
getting data, see SharePoint and OneDrive for Business files import or Getting data
from other sources.

6. In Navigator, select the tables that are present in your Excel file. If your Excel file
has multiple sheets and tables, select only the tables you're interested in. When
you're done, select Transform data.

7. Clean and transform your data using Power Query. You can use the out-of-the box
transformations to delete missing values, delete unnecessary columns, or to filter
your data. With Power Query, you can apply more than 300 different
transformations on your data. To learn more about Power Query transformations,
see Use Power Query to transform data. After you're finished with preparing your
data, select Next.

8. In Map tables, select Load to new table to create a new table in Dataverse for
Teams. You can also choose to load your data into an existing table. In the Map
tables screen, you can also specify a Unique primary name column and an
Alternate key column (optional). In this example, leave these selections with the
default values. To learn more about mapping your data and the different settings,
see Field mapping considerations for standard dataflows.
9. Select Create to finish your dataflow. Once you’ve created your dataflow, data
begins loading into Dataverse for Teams. This process can take some time and you
can use the management page to check the status. When a dataflow completes a
run, its data is available to use.

Managing your dataflows


You can manage any dataflow you created from the Dataflows (Preview) tab. Here, you
can see the status of all dataflows, when your dataflow was last refreshed, and take
action from the action bar.

In the Last Refresh column, you can see when your data was last refreshed. If your
refresh failed, an error indication appears. If you select the error indication, the details of
the error and recommended steps to address it appear.

In the Status column, you can see the current status of the dataflow. Possible states are:
Refresh in progress: the dataflow is extracting, transforming, and loading your
data from the source to the Dataverse Tables. This process can take several
minutes depending on the complexity of transformations and data source's
performance. We recommend that you check the status of the dataflow frequently.

To navigate to the action bar, select the three dots “…” next to your dataflow.

Here you can:

Edit your dataflow if you want to change your transformation logic or mapping.
Rename your dataflow. At creation, an autogenerated name is assigned.
Refresh your dataflow. When you refresh your dataflows, the data will be updated.
Delete your dataflow.
Show refresh history. This gives you the results from the last refresh.

Select Show refresh history to see information about the last refresh of your dataflow.
When the dataflow refresh is successful, you can see how many rows were added or
updated in Dataverse. When your dataflow refresh wasn't successful, you can investigate
why with the help of the error message.

Dataflows in Teams is a lightweight version


Dataflows in Teams are a lightweight version of dataflows in the maker portal and can
only load data into Dataverse for Teams. Dataflows in Teams are optimized for a one-
time import of data, but you can refresh your data manually through the refresh button
in the dataflow management page. If you want full dataflows functionality, you can
upgrade your environment.

Supported data sources in dataflows in Teams are:


Excel (OneDrive)
Text/CSV (OneDrive)
PDF (OneDrive)
SharePoint Online Folder
SharePoint Online list
XML (OneDrive)
JSON (OneDrive)
OData
Web API

7 Note

Dataflows in Teams don't support on-premises data sources, such as on-premises
file locations.

The following table lists the major feature differences between dataflows for Dataverse
in Teams and dataflows for Dataverse.

| Dataflow capability | Dataverse for Teams | Dataverse |
| --- | --- | --- |
| Standard dataflows | Yes | Yes |
| Analytical dataflows | No | Yes |
| Gateway support | No | Yes |
| Manual refresh | Yes | Yes |
| Scheduled refresh | No | Yes |
| Incremental refresh | No | Yes |
| Standard tables | No | Yes |
| Custom tables | Yes | Yes |
| Full PQ functionality | Yes | Yes |
| Supported connectors | Subset of connectors | All connectors |
| Small data volumes | Yes | Yes |
| Larger data volumes | No¹ | Yes |

¹ Although there's no limitation on the amount of data you can load into Dataverse for
Teams, for better performance in loading larger amounts of data, we recommend a
Dataverse environment.
Consume data from dataflows
Article • 08/04/2023

The ways you can consume data from Microsoft dataflows depend on several factors,
like storage and type of dataflow. In this article, you learn how to choose the right
dataflow for your needs.

Type of dataflow
There are multiple types of dataflows available for you to create. You can choose
between a Power BI dataflow, standard dataflow, or an analytical dataflow. To learn more
about the differences and how to select the right type based on your needs, go to
Understanding the differences between dataflow types.

Storage type
A dataflow can write to multiple output destination types. In short, you should be using
the Dataflows connector unless your destination is a Dataverse table. Then you use the
Dataverse/CDS connector.

Azure Data Lake Storage


Azure Data Lake storage is available in Power BI dataflows and Power Apps analytical
dataflows. By default you're using a Microsoft Managed Data Lake. However, you can
also connect a self-hosted data lake to the dataflow environment. The following articles
describe how to connect the data lake to your environment:

Connect Data Lake Gen 2 storage to a Power BI Workspace


Connect Data Lake Gen 2 storage to a Power Apps Environment

When you've connected your data lake, you should still use the Dataflows connector. If
this connector doesn't meet your needs, you could consider using the Azure Data Lake
connector instead.

Dataverse
A standard dataflow writes the output data to a Dataverse table. Dataverse lets you
securely store and manage data that's used by business applications. After you load
data in the Dataverse table, you can consume the data using the Dataverse connector.
Dataflows can get data from other dataflows
If you'd like to reuse data created by one dataflow in another dataflow, you can do so by
using the Dataflow connector in the Power Query editor when you create the new
dataflow.

When you get data from the output of another dataflow, a linked table is created.
Linked tables provide a way to make data created in an upstream dataflow available in a
downstream dataflow, without copying the data to the downstream dataflow. Because
linked tables are just pointers to tables created in other dataflows, they're kept up to
date by the refresh logic of the upstream dataflow. If both dataflows reside in the same
workspace or environment, those dataflows are refreshed together, to keep data in both
dataflows always up to date. More information: Link tables between dataflows
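
If you author the query by hand instead of through the UI, getting data from another dataflow goes through the Power Platform dataflows connector. The following hedged sketch assumes a workspace named "Sales", a dataflow named "Daily Orders", and a table named "Orders"; the navigation keys shown are illustrative and the code generated for you may differ.

```powerquery-m
// Hedged sketch: navigate from the Power Platform dataflows connector to one
// table of an upstream dataflow. The workspace, dataflow, and table names are
// assumptions, and the navigation keys generated for you may differ.
let
    Source = PowerPlatform.Dataflows(null),
    Workspaces = Source{[Id = "Workspaces"]}[Data],
    SalesWorkspace = Workspaces{[workspaceName = "Sales"]}[Data],
    DailyOrders = SalesWorkspace{[dataflowName = "Daily Orders"]}[Data],
    Orders = DailyOrders{[entity = "Orders", version = ""]}[Data]
in
    Orders
```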

Separating data transformation from data consumption
When you use the output of a dataflow in other dataflows or datasets, you can create an
abstraction between the data transformation layer and the rest of the data model. This
abstraction is important because it creates a multi-role architecture, in which the Power
Query customer can focus on building the data transformations, and data modelers can
focus on data modeling.

Frequently asked questions


My dataflow table doesn't show up in the dataflow connector in Power BI

You're probably using a Dataverse table as the destination for your standard dataflow.
Use the Dataverse/CDS connector instead or consider switching to an analytical
dataflow.
There's a difference in the data when I remove duplicates in dataflows—how can I
resolve this?

There could be a difference in data between design-time and refresh-time. We don't
guarantee which instance is being kept during refresh time. For information that will
help you avoid inconsistencies in your data, go to Working with duplicate values.

Next Steps
The following articles provide more details about related articles.

Creating and using dataflows in Power BI


Connect to data created by Power BI dataflows in Power BI Desktop (Beta)
Create and use dataflows in Power Platform
Working with duplicate values in dataflows
Overview of solution-aware dataflows
Article • 03/14/2023

When you include your dataflows in a solution, their definitions become portable,
making it easier to move them from one environment to another, saving time required
to author the dataflow.

A typical use case is for an independent software vendor (ISV) to develop a solution
containing a dataflow that extracts and transforms data from a data source to Dataverse
tables, in a sandbox environment. The ISV would then move that dataflow and the
destination tables to a test environment to test with their test data source to validate
that the solution works well and is ready for production. After testing completes, the ISV
would provide the dataflow and tables to clients who will import them into their
production environment to operate on the client's data. This process is much easier when
you add both the dataflows and tables they load data to into solutions, and then move
the solutions and their contents between environments.

Dataflows added to a solution are known as solution-aware dataflows. You can add
multiple dataflows to a single solution.

7 Note

Only dataflows created in Power Platform environments can be solution-aware.
The data loaded by dataflows to their destination isn't portable as part of
solutions, only the dataflow definitions are. To recreate the data after a
dataflow was deployed as part of a solution, you need to refresh the dataflow.

Add an existing dataflow to a solution


Follow these steps to add a dataflow to a solution.

Prerequisites
You need to have created a solution before you can add a dataflow to it. More
information: Create solutions
You need to be the owner of at least one dataflow in the environment. More
information: Create dataflows
Add the dataflow
1. Sign in to Power Apps .

2. Select Solutions from the navigation bar.

3. Select the solution you'll add your dataflow to, and from the context menu select
Edit.

4. Select Add Existing > Automation > Dataflow.

5. Optional: If your dataflow loads data into a custom Dataverse table, add the
custom table to the solution as well.

In this example, the dataflow you added to the solution loads data into a custom
table called Full Order Details, which you want to also include in the solution with
the dataflow.

Once both the dataflow and the table it loads data to are added to the solution, the
solution contains both artifacts. In this case, the artifacts are
cr0c8_FullOrderDetails and Import Sales Data.

To save your work, be sure to publish all customizations. Now, the solution is ready
for you to export from the source environment and import to the destination
environment.

Exporting and importing solutions containing dataflows
Exporting and importing solutions containing dataflows is identical to doing the same
operations for other artifacts. For the most up-to-date instructions, go to the
documentation on exporting and importing solutions.

Updating a dataflow's connections after solution import
For security reasons, credentials of connections used by dataflows aren't persisted by
solutions. Once a dataflow is deployed as part of a solution, you'll need to edit its
connections before it can be scheduled to run.

1. On the left navigation pane, select the down arrow next to Dataverse and select
Dataflows. Identify the dataflow that was imported, and select Edit from the
context menu.

2. In the Dataflow list, locate and double-click the dataflow that was added as part of
the solution you’ve imported.

3. You'll be asked to enter credentials required for the dataflow.

Once the credentials for the connection have been updated, all queries that use
that connection automatically load.

4. If your dataflow loads data in Dataverse tables, select Next to review the mapping
configuration.

5. The mapping configuration is also saved as part of the solution. Since you also
added the destination table to the solutions, there's no need to recreate the table
in this environment and you can publish the dataflow.

That's it. Your dataflow now refreshes and loads data to the destination table.

Known limitations
Dataflows can't be created from within solutions. To add a dataflow to a solution,
follow the steps outlined in this article.
Dataflows can't be edited directly from within solutions. Instead, the dataflow must
be edited in the dataflows experience.
Dataflows can't use connection references for any connector.
Environment variables can't be used by dataflows.
Dataflows don't support adding required components, such as custom tables they
load data to. Instead, the custom table should be manually added to the solution.
Dataflows can't be deployed by application users (service principals).
Incremental refresh configuration isn't supported when deploying solutions. After
deployment of the dataflow via solution, the incremental refresh configuration
should be reapplied.
Linked tables to other dataflows aren't supported when deploying solutions. After
deployment of the dataflow via solution, edit the dataflow and update the
connection to the linked dataflow.
Using incremental refresh with
dataflows
Article • 08/04/2023

With dataflows, you can bring large amounts of data into Power BI or your
organization's provided storage. In some cases, however, it's not practical to update a
full copy of source data in each refresh. A good alternative is incremental refresh, which
provides the following benefits for dataflows:

Refresh occurs faster: Only data that's changed needs to be refreshed. For
example, refresh only the last five days of a 10-year dataflow.
Refresh is more reliable: For example, it's not necessary to maintain long-running
connections to volatile source systems.
Resource consumption is reduced: Less data to refresh reduces overall
consumption of memory and other resources.

Incremental refresh is available in dataflows created in Power BI and dataflows created in
Power Apps. This article shows screens from Power BI, but these instructions apply to
dataflows created in Power BI or in Power Apps.

7 Note

When the schema for a table in an analytical dataflow changes, a full refresh takes
place to ensure that all the resulting data matches the new schema. As a result, any
data stored incrementally is refreshed and in some cases, if the source system
doesn't retain historic data, is lost.
Using incremental refresh in dataflows created in Power BI requires that the dataflow
reside in a workspace in Premium capacity. Incremental refresh in Power Apps requires
Power Apps per-app or per-user plans, and is only available for dataflows with Azure
Data Lake Storage as the destination.

In either Power BI or Power Apps, using incremental refresh requires that source data
ingested into the dataflow have a DateTime field on which incremental refresh can filter.

Configuring incremental refresh for dataflows


A dataflow can contain many tables. Incremental refresh is set up at the table level,
allowing one dataflow to hold both fully refreshed tables and incrementally refreshed
tables.

To set up an incremental-refreshed table, start by configuring your table as you would
any other table.
After the dataflow is created and saved, select Incremental refresh in the table view,
as shown in the following image.

When you select the icon, the Incremental refresh settings window appears. Turn on
incremental refresh.
The following list explains the settings in the Incremental refresh settings window.

Incremental refresh on/off toggle: Turns the incremental refresh policy on or off
for the table.

Filter field drop-down: Selects the query field on which the table should be filtered
for increments. This field only contains DateTime fields. You can't use incremental
refresh if your table doesn't contain a DateTime field.

Store/refresh rows from the past: The example in the previous image illustrates
these next few settings.

In this example, we define a refresh policy to store five years of data in total and
incrementally refresh 10 days of data. Assuming that the table is refreshed daily,
the following actions are carried out for each refresh operation:

Add a new day of data.


Refresh 10 days, up to the current date.

Remove calendar years that are older than five years before the current date.
For example, if the current date is January 1, 2019, the year 2013 is removed.

The first dataflow refresh might take a while to import all five years, but
subsequent refreshes are likely to be completed much more quickly.

Detect data changes: An incremental refresh of 10 days is much more efficient
than a full refresh of five years, but you might be able to do even better. When you
select the Detect data changes check box, you can select a date/time column to
identify and refresh only the days where the data has changed. This assumes such
a column exists in the source system, typically for auditing purposes. The
maximum value of this column is evaluated for each of the periods in the
incremental range. If that data hasn't changed since the last refresh, there's no
need to refresh the period. In the example, this might further reduce the days
incrementally refreshed from 10 to perhaps 2.

 Tip

The current design requires that the column used to detect data changes be
persisted and cached into memory. You might want to consider one of the
following techniques to reduce cardinality and memory consumption:
Persist only the maximum value of this column at time of refresh, perhaps
by using a Power Query function.
Reduce the precision to a level that's acceptable given your refresh-
frequency requirements, as shown in the hedged sketch after this list of settings.

Only refresh complete periods: Imagine that your refresh is scheduled to run at
4:00 AM every day. If data appears in the source system during those first four
hours of that day, you might not want to account for it. Some business metrics,
such as barrels per day in the oil and gas industry, aren't practical or sensible to
account for based on partial days.

Another example where only refreshing complete periods is appropriate is
refreshing data from a financial system. Imagine a financial system where data for
the previous month is approved on the 12th calendar day of the month. You can
set the incremental range to one month and schedule the refresh to run on the
12th day of the month. With this option selected, the system will refresh January
data (the most recent complete monthly period) on February 12.
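
Returning to the earlier tip about the change-detection column, here's a hedged sketch of the precision-reduction idea. It assumes a hypothetical table with a ModifiedOn audit column; truncating that column to day precision leaves fewer distinct values to cache.

```powerquery-m
// Illustrative sketch only: truncate an assumed ModifiedOn audit column to day
// precision so fewer distinct values need to be kept for change detection.
// The table shape and column names are hypothetical.
let
    Source = #table(
        type table [OrderID = number, ModifiedOn = datetime],
        {{1, #datetime(2024, 1, 5, 13, 42, 10)}, {2, #datetime(2024, 1, 5, 9, 3, 55)}}
    ),
    Truncated = Table.TransformColumns(
        Source,
        {{"ModifiedOn", each DateTime.From(Date.From(_)), type datetime}}
    )
in
    Truncated
```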
7 Note

Dataflow incremental refresh determines dates according to the following logic: if a
refresh is scheduled, incremental refresh for dataflows uses the time zone defined
in the refresh policy. If no schedule for refreshing exists, incremental refresh uses
the time from the computer running the refresh.

After incremental refresh is configured, the dataflow automatically alters your query to
include filtering by date. If the dataflow was created in Power BI, you can also edit the
automatically generated query by using the advanced editor in Power Query to fine-
tune or customize your refresh. Read more about incremental refresh and how it works
in the following sections.
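
As a rough illustration of the kind of filter the service appends, the following sketch uses placeholder RangeStart and RangeEnd values and an assumed OrderDate column; the query that's actually generated for your dataflow may look different.

```powerquery-m
// Illustrative sketch only: the kind of date filter incremental refresh appends.
// RangeStart and RangeEnd stand in for the boundaries the service derives from
// your refresh policy; the generated query can differ in shape and naming.
let
    RangeStart = #datetime(2024, 1, 1, 0, 0, 0),
    RangeEnd = #datetime(2024, 1, 11, 0, 0, 0),
    Source = #table(
        type table [OrderDate = datetime, Amount = number],
        {{#datetime(2024, 1, 5, 0, 0, 0), 100}, {#datetime(2023, 12, 20, 0, 0, 0), 250}}
    ),
    FilteredRows = Table.SelectRows(Source, each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd)
in
    FilteredRows
```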

Incremental refresh and linked tables vs. computed tables
For linked tables, incremental refresh updates the source table. Because linked tables are
simply a pointer to the original table, incremental refresh has no impact on the linked
table. When the source table is refreshed according to its defined refresh policy, any
linked table should assume the data in the source is refreshed.

Computed tables are based on queries running over a data store, which can be another
dataflow. As such, computed tables behave the same way as linked tables.

Because computed tables and linked tables behave similarly, the requirements and
configuration steps are the same for both. One difference is that for computed tables, in
certain configurations, incremental refresh can't run in an optimized fashion because of
the way partitions are built.

Changing between incremental and full refresh


Dataflows support changing the refresh policy between incremental and full refresh.
When a change occurs in either direction (full to incremental or incremental to full), the
change affects the dataflow after the next refresh.

When moving a dataflow from full refresh to incremental, the new refresh logic updates
the dataflow by adhering to the refresh window and increment as defined in the
incremental refresh settings.
When moving a dataflow from incremental to full refresh, all data accumulated in the
incremental refresh is overwritten by the policy defined in the full refresh. You must
approve this action.

Time zone support in incremental refresh


Dataflow incremental refresh is dependent on the time at which it's run. The filtering of
the query is dependent on the day on which it's run.

To accommodate those dependencies and to ensure data consistency, incremental
refresh for dataflows implements the following heuristic for refresh now scenarios:

In the case where a scheduled refresh is defined in the system, incremental refresh
uses the time-zone settings from the scheduled refresh section. This ensures that
whatever time zone the person refreshing the dataflow is in, it will always be
consistent with the system's definition.

If no scheduled refresh is defined, dataflows use the time zone from the computer
of the user who's performing the refresh.

Incremental refresh can also be invoked by using APIs. In this case, the API call can hold
a time-zone setting that's used in the refresh. Using APIs can be helpful for testing and
validation purposes.

Incremental refresh implementation details


Dataflows use partitioning for incremental refresh. Incremental refresh in dataflows
keeps the minimum number of partitions to meet refresh policy requirements. Old
partitions that go out of range are dropped, which maintains a rolling window. Partitions
are opportunistically merged, reducing the total number of partitions required. This
improves compression and, in some cases, can improve query performance.

The examples in this section share the following refresh policy:

Store rows in the last 1 Quarter
Refresh rows in the last 10 Days
Detect data changes = False
Only refresh complete days = True

Merge partitions
In this example, day partitions are automatically merged to the month level after they
go outside the incremental range. Partitions in the incremental range need to be
maintained at daily granularity to allow only those days to be refreshed. The refresh
operation with Run Date 12/11/2016 merges the days in November, because they fall
outside the incremental range.

Drop old partitions


Old partitions that fall outside the total range are removed. The refresh operation with
Run Date 1/2/2017 drops the partition for Q3 of 2016 because it falls outside the total
range.

Recovery from prolonged failure


This example simulates how the system recovers gracefully from prolonged failure. Let's
say refresh doesn't run successfully because data source credentials expired, and the
issue takes 13 days to resolve. The incremental range is only 10 days.
The next successful refresh operation, with Run Date 1/15/2017, needs to backfill the
missing 13 days and refresh them. It also needs to refresh the previous nine days
because they weren't refreshed on the normal schedule. In other words, the incremental
range is increased from 10 to 22 days.

The next refresh operation, with Run Date 1/16/2017, takes the opportunity to merge
the days in December and the months in Q4 of 2016.

Dataflow incremental refresh and datasets


Dataflow incremental refresh and dataset incremental refresh are designed to work in
tandem. It's acceptable and supported to have an incrementally refreshing table in a
dataflow, fully loaded into a dataset, or a fully loaded table in a dataflow incrementally
loaded to a dataset.

Both approaches work according to your specified definitions in the refresh settings.
More information: Incremental refresh in Power BI Premium

See also
This article described incremental refresh for dataflows. Here are some more articles that
might be useful:

Self-service data prep in Power BI


Creating computed tables in dataflows
Connect to data sources for dataflows
Link tables between dataflows
Create and use dataflows in Power BI
Using dataflows with on-premises data sources
Developer resources for Power BI dataflows
For more information about Power Query and scheduled refresh, you can read these
articles:

Query overview in Power BI Desktop


Configuring scheduled refresh

For more information about Common Data Model, you can read its overview article:

Common Data Model - overview


Connect to data sources for dataflows
Article • 02/17/2023

With Microsoft Power BI and Power Platform dataflows, you can connect to many
different data sources to create new dataflows, or add new entities to an existing
dataflow.

This article describes how to create dataflows by using these data sources. For an
overview of how to create and use dataflows, go to Creating a dataflow for Power BI
service and Create and use dataflows in Power Apps.

Create a dataflow from a data source


To create a dataflow from a data source, you'll first have to connect to your data.

Power BI service

To connect to data in Power BI:

1. Open a workspace.
2. Select New.
3. Select Dataflow from the drop-down menu.
4. Under Define new tables, select Add new tables.

Data sources for dataflows


Once you've created the dataflow from the dataflow authoring tool, you'll be presented
with the Choose data source dialog box.
Data sources for dataflows are organized into the following categories, which appear as
tabs in the Choose data source dialog box:

All categories
File
Database
Power Platform
Azure
Online Services
Other

For a list of all of the supported data sources in Power Query, go to Connectors in Power
Query.

Connect to a data source


To connect to a data source, select the data source. This section uses one example to
show how the process works, but each data connection for dataflows is similar in
process. Different connectors might require specific credentials or other information, but
the flow is similar. In this example, SQL Server database is selected from the Database
data connection category.
A connection window for the selected data connection is displayed. If credentials are
required, you're prompted to provide them. The following image shows a server and
database being entered to connect to a SQL Server database.

After the server URL or resource connection information is provided, enter the
credentials to use for access to the data. You may also need to enter the name of an on-
premises data gateway. Then select Next.

Power Query Online initiates and establishes the connection to the data source. It then
presents the available tables from that data source in the Navigator window.
You can select tables and data to load by selecting the check box next to each in the left
pane. To transform the data you've chosen, select Transform data from the bottom of
the Navigator window. A Power Query Online dialog box appears, where you can edit
queries and perform any other transformations you want to the selected data.

Connecting to additional data sources


There are additional data connectors that aren't shown in the Power BI dataflows user
interface, but are supported with a few additional steps.
You can take the following steps to create a connection to a connector that isn't
displayed in the user interface:

1. Open Power BI Desktop, and then select Get data.

2. Open Power Query Editor in Power BI Desktop, right-click the relevant query, and
then select Advanced Editor, as shown in the following image. From there, you can
copy the M script that appears in the Advanced Editor window.

3. Open the Power BI dataflow, and then select Get data for a blank query.

4. Paste the copied query into the blank query for the dataflow.
Your script then connects to the data source you specified.
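
For example, a copied script for the Folder connector (one of the connectors in the list that follows) might look similar to this sketch; the folder path and the file filter are assumptions for illustration.

```powerquery-m
// Hedged example of a script copied from Power BI Desktop's Advanced Editor
// into a blank dataflow query. The folder path and the .csv filter are
// assumptions; an on-premises path like this one also needs a data gateway.
let
    Source = Folder.Files("C:\Data\Sales"),
    CsvFiles = Table.SelectRows(Source, each [Extension] = ".csv")
in
    CsvFiles
```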

The following list shows which connectors you can currently use by copying and pasting
the M query into a blank query:

SAP Business Warehouse


Azure Analysis Services
Adobe Analytics
ODBC
OLE DB
Folder
SharePoint Online folder
SharePoint folder
Hadoop HDFS
Azure HDInsight (HDFS)
Hadoop file HDFS
Informix (beta)

Next steps
This article showed which data sources you can connect to for dataflows. The following
articles go into more detail about common usage scenarios for dataflows:

Self-service data prep in Power BI


Using incremental refresh with dataflows
Creating computed entities in dataflows
Link entities between dataflows

For information about individual Power Query connectors, go to the connector reference
list of Power Query connectors, and select the connector you want to learn more about.
Additional information about dataflows and related information can be found in the
following articles:

Create and use dataflows in Power BI


Using dataflows with on-premises data sources
Developer resources for Power BI dataflows
Dataflows and Azure Data Lake integration (Preview)

For more information about Power Query and scheduled refresh, you can read these
articles:

Query overview in Power BI Desktop


Configuring scheduled refresh

For more information about Common Data Model, you can read its overview article:

Common Data Model - overview


What licenses do you need to use
dataflows?
Article • 08/04/2023

Dataflows can be created in different portals, such as Power BI and Power Apps, and
can be of either analytical or standard type. In addition, some dataflow features are only
available as Premium features. Considering the wide range of products that can use
dataflows, and feature availability in each product or dataflow type, it's important to
know what licensing options you need to use dataflows.

Creating dataflows in Power BI workspaces


If you want to create dataflows in Power BI workspaces, you need to have a paid Power
BI Pro license. A Power BI free license doesn't give you the ability to create dataflows.
Depending on the features you use, Power BI Premium (per user) or embedded capacity
is required.

A Power BI Pro account is available on a user-monthly basis. Multiple options are
available for premium or embedded capacities.

Creating dataflows in Microsoft Power Platform environments
If you want to create dataflows in the Microsoft Power Platform environment, a Power
Apps (per-user or per-app) license is required.

If you want to create analytical dataflows that store data in your organization's Azure
Data Lake Storage Gen2 account, you or your administrator need access to an Azure
subscription and an Azure Data Lake Storage Gen2 account.

Premium features
Some of the dataflow features are limited to premium licenses. If you want to use the
enhanced compute engine to speed up your dataflow queries' performance over
computed tables, or have the DirectQuery connection option to the dataflow, you need
to have Power BI P1 or A3 or higher capacities.
AI capabilities in Power BI, linked table, and computed table are all premium functions
that aren't available with a Power BI Pro account.

List of features
The following table contains a list of features and the license needed for them to be
available.

| Feature | Power BI | Power Apps |
| --- | --- | --- |
| Store data in Dataverse tables (standard dataflow) | N/A | Per app plan; Per user plan |
| Store data in Azure Data Lake Storage (analytical dataflow) | Yes | Yes, using analytical dataflows |
| Store data in customer provided Azure Data Lake Storage (analytical dataflow; bring your own Azure Data Lake Storage) | Yes | Per app plan; Per user plan |
| The enhanced compute engine (running on Power BI Premium capacity / parallel execution of transforms) | Power BI Premium only | N/A |
| DirectQuery connection to dataflow | Power BI Premium only | N/A |
| AI capabilities in Power BI | Power BI Premium only | N/A |
| Linked tables | Power BI Premium only | Yes, using analytical dataflows |
| Computed tables (in-storage transformations using M) | Power BI Premium only | Yes, using analytical dataflows |
| Schedule refresh | Yes | Yes |
| Dataflow authoring with Power Query Online | Yes | Yes |
| Dataflow management | Yes | Yes |
| New connectors | Yes | Yes |
| Standardized schema, built-in support for Common Data Model | Yes | Yes |
| Dataflows data connector in Power BI Desktop | Yes | Yes, using analytical dataflows |
| Dataflow incremental refresh | Power BI Premium only | Yes, using analytical dataflows with Per user plan |

Limitations on each license


The preceding table shows what features each license gives you. The following sections
provide details about some of the limitations of each license.

Power Apps licenses


If you use a Power Apps license to create dataflows, there's no limitation on the number
of dataflows and tables you can create. However, there are limitations on the Dataverse
database capacity you can use and on refresh performance.

The Power Apps per-app plan covers up to a 50-MB database capacity. The Power Apps
per-user plan allows you to have a database of 250-MB capacity.

The following limitations are in place in Power Apps:

A dataflow refresh can run for a maximum of 24 hours.


Each query/partition has a maximum run duration of four hours.
For each refresh, there's a concurrency limit of four queries/partitions that can
refresh simultaneously.
Power Query Online refresh limits apply for each user. More information: Refresh
limits

Power BI Pro
Power BI Pro gives you the ability to create analytical dataflows, but not use any of the
premium features. With a Power BI Pro account, you can't use linked or computed
tables, you can't use AI capabilities in Power BI, and you can't use DirectQuery to
connect to the dataflow. The storage for your dataflows is limited to the space left under
your Power BI Pro account, which is a subset of 10-GB storage for all Power BI content.
Currently, the storage used by dataflows isn't reported in the Power BI portal. You'll be
notified when you've almost reached the limit of the remaining capacity.

Power BI Premium
If you use Power BI Premium (capacity-based licensing), you can use all the AI
capabilities in Power BI, computed tables and linked tables, with the ability to have a
DirectQuery connection to the dataflow. You can also use the enhanced compute
engine. However, the dataflow created under a premium capacity license uses only the
internal Azure Data Lake Storage, and isn't accessible by other platforms except Power
BI itself. You can't create external dataflows just by having a Power BI Premium license;
you need to have an Azure subscription for Azure Data Lake Storage as well.

Limitations per premium capacity

Dataflows that are using a premium capacity to refresh the data are limited to the
maximum number of parallel tasks they can perform at a given time. The maximum
number of parallel tasks depends on the type of premium capacity you're using. This
table represents the maximum number of parallel tasks that can be run at a given time
by all dataflows in a workspace mapped to the capacity.

Parallel tasks
A premium capacity can run multiple evaluations in parallel. For example, suppose you have
a P4 capacity and a dataflow that consists of 84 tasks. When you refresh the dataflow, the
first 64 tasks are allocated to the refresh, and the 20 remaining evaluations are parked in
a queue. As each evaluation finishes, the next evaluation in the queue starts. If you start
another dataflow in the same workspace on the same premium capacity while the first is
still running, its evaluations are parked in the same queue and must wait for the other
dataflows in the workspace before the refresh of your data starts.

You can use the following pointers to estimate the number of tasks in your dataflow
refresh (a short worked example follows the list):

The number of queries executed in the refresh (don't forget the upstream linked
tables).
Each partition in an incremental refresh query counts as an extra task.
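
For example, with purely hypothetical numbers: a dataflow that runs 5 regular queries during refresh, plus one incremental refresh query split into 10 partitions, plus 2 upstream linked tables that also refresh, would be on the order of 5 + 10 + 2 = 17 tasks for that refresh.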

Strategy to lower the number of tasks during refresh

To lower the number of tasks or improve the efficiency of your tasks, you can use the
following strategies:
Lower the number of queries in your dataflow by combining queries where
possible and only "enable load" for queries that are used downstream.
Evaluate if you really need the upstream linked tables to refresh automatically.
Strategically schedule your dataflow refreshes based on the number of tasks.
Make sure your query returns the minimum set of columns and rows to satisfy
your data need. The faster and more efficiently the task executes, the sooner the
next task can start.

Using your organization's Azure Data Lake Storage account for dataflow storage
To create dataflows that store data in your organization's Azure Data Lake Storage
account, in addition to the product licenses above, you must have an Azure subscription.
The amount of storage that can be used isn't limited by the dataflow or the product it
was created in.

Next step
If you want to read more details about the concepts discussed in this article, go to any
of the following links.

Pricing
Power BI pricing
Power Apps pricing
Azure Data Lake Storage Gen 2 pricing

Features
Computed tables
Linked tables
AI capabilities in Power BI dataflows
Standard vs. analytical dataflows
The enhanced compute engine
How to migrate queries from Power
Query in the desktop (Power BI and
Excel) to dataflows
Article • 02/17/2023

If you already have queries in Power Query, either in Power BI Desktop or in Excel, you
might want to migrate the queries into dataflows. The migration process is simple and
straightforward. In this article, you'll learn the steps to do so.

To learn how to create a dataflow in Microsoft Power Platform, go to Create and use
dataflows in Power Platform. To learn how to create a dataflow in Power BI, go to
Creating and using dataflows in Power BI.

Migrating the queries from the desktop


To migrate queries from Power Query in the desktop tools:

1. Open Power Query Editor:

In Power BI Desktop on the Home tab, select Transform data.

In Excel on the Data tab, select Get Data> Launch Power Query Editor.
2. Copy the queries:

If you've organized your queries into folders (called groups in Power Query):

a. In the Queries pane, hold down Ctrl as you select the folders you want to migrate to
the dataflow.

b. Select Ctrl+C.
If you don't have folders:

a. In the Queries pane, hold down Ctrl as you select the queries you want to migrate.

b. Select Ctrl+C.
3. Paste the copied queries into a dataflow:

a. Create a dataflow, if you don't have one already.

b. Open the dataflow in Power Query Editor, and in the Queries pane, select
Ctrl+V to paste the copied folders or queries.
The image below shows an example of successfully copied folders.

4. Connect the on-premises data gateway.

If your data source is an on-premises source, you need to perform an extra step.
Examples of on-premises sources can be Excel files in a shared folder in a local
domain, or a SQL Server database hosted in an on-premises server.

A dataflow, as a cloud-based service, requires the on-premises data gateway to
connect to the on-premises data source. You need to install and configure the
gateway for that source system, and then add the data source for it. After you've
completed these steps, you can select the on-premises data gateway when you
create the entity in the dataflow.

The gateway isn't needed for data sources residing in the cloud, such as an Azure
SQL database.

5. Configure the connection to the data source by selecting Configure connection
and entering credentials or anything else you need to connect to the data source
at this stage.

6. Verify the connection:

If you've done all the steps successfully, you'll see a preview of the data in the
Power Query Editor.

Some Power Query Desktop functions require a gateway in Power Query Online
Some of the functions might require a gateway, even if their source is not located on-
premises. Among these are functions such as Web.BrowserContents and Web.Page. If
this happens, you might get an error message indicating which specific function isn't
supported. The figure below shows an example of one of these scenarios.
If a scenario like this happens, you have two options. You can set up the gateway for
that data source, or you can update the query in the Power Query Editor for the
dataflow by using a set of steps that are supported without the need for the gateway.
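
For example, a query similar to the following minimal sketch uses Web.BrowserContents, so it needs a gateway when evaluated in Power Query Online; the URL and CSS selectors are only illustrative:

    let
        // Web.BrowserContents renders the page in a browser engine,
        // which the cloud Mashup Engine can't do without a gateway
        Source = Web.BrowserContents("https://www.contoso.com/products"),
        // Extract an HTML table from the rendered page
        Products = Html.Table(
            Source,
            {{"Product", "td:nth-child(1)"}, {"Price", "td:nth-child(2)"}},
            [RowSelector = "table tr"]
        )
    in
        Products

One way to avoid the gateway in a case like this is to rewrite the query with steps that are supported in the cloud, for example by calling the site's API through Web.Contents instead of scraping the rendered page.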

Refresh the dataflow entities


After migrating your queries to the dataflow, you must refresh the dataflow to get data
loaded into these entities. You can refresh a dataflow manually or configure an
automatic refresh based on the schedule of your choice.

Get data from Power Query Desktop


You can now get data from dataflow entities in Power BI Desktop by using the Power
Platform dataflow or Dataverse connectors (depending on what type of dataflow you're
using, analytical or standard). More information: Connect to data created by Power
Platform dataflows in Power BI Desktop
Using an on-premises data gateway in
Power Platform dataflows
Article • 02/17/2023

Install an on-premises data gateway to transfer data quickly and securely between a
Power Platform dataflow and a data source that isn't in the cloud, such as an on-
premises SQL Server database or an on-premises SharePoint site. You can view all
gateways for which you have administrative permissions and manage permissions and
connections for those gateways.

Prerequisites

Power BI service
A Power BI service account. Don't have one? Sign up for 60 days free.

Administrative permissions on a gateway. These permissions are provided by default for
gateways you install. Administrators can grant other people permissions for gateways.

Power Apps
A Power Apps account. Don't have one? Sign up for 30 days free.

Administrative permissions on a gateway. These permissions are provided by default for
gateways you install. Administrators can grant other people permissions for gateways.

A license that supports accessing on-premises data using an on-premises gateway.
More information: "Connect to your data" row of the "Explore Power Apps plans"
table in the Power Apps pricing page.

Install a gateway
You can install an on-premises data gateway directly from the online service.

7 Note
It's a good general practice to make sure you're using a supported version of
the on-premises data gateway. We release a new update of the on-premises
data gateway every month. Currently, Microsoft actively supports only the last
six releases of the on-premises data gateway.
Starting April 2022, the minimum required gateway version will be February
2021. Dataflows that refresh using an earlier version of the gateway might
stop refreshing.

Install a gateway from Power BI service


To install a gateway from Power BI service:

1. Select the downloads button in the upper right corner of Power BI service, and
choose Data Gateway.

2. Install the gateway using the instructions provided in Install an on-premises data
gateway.

Install a gateway from Power Apps


To install a gateway from Power Apps:

1. In the left navigation pane of powerapps.com, select Data > Gateways.

2. Select New gateway.
3. In the On-Premises Data Gateway section, select Download.

4. Install the gateway using the instructions provided in Install an on-premises data
gateway.

Use an on-premises data source in a dataflow


1. For instructions on how to create a new dataflow, go to Create a dataflow from a
data source.

2. Select an on-premises data source from the data sources list.

3. Provide the connection details for the enterprise gateway that will be used to
access the on-premises data. You must select the gateway itself, and provide
credentials for the selected gateway. Only gateways for which you're an
administrator appear in the list.

You can change the enterprise gateway used for a given dataflow and change the
gateway assigned to all of your queries using the dataflow authoring tool.

7 Note

The dataflow will try to find or create the required data sources using the new
gateway. If it can't do so, you won't be able to change the gateway until all needed
dataflows are available from the selected gateway.

View and manage gateway permissions

Power BI service gateway permissions


1. Select the setup button in the upper right corner of Power BI service, choose
Manage gateways, and then select the gateway you want.
2. To add a user to the gateway, select the Administrators table and enter the email
address of the user you would like to add as an administrator. Creating or
modifying data sources in dataflows requires Admin permissions to the gateway.
Admins have full control of the gateway, including adding users, setting
permissions, creating connections to all available data sources, and deleting the
gateway.

The following conditions apply when adding a user to the gateway:

1. If we detect that an existing data source is available for the selected gateway, the
Username and Password fields will be pre-populated.

a. If you select Next at this point, you're considered to be using that existing data
source, and so you only need to have permissions to that data source.

b. If you edit any of the credential fields and select Next, then you're considered to
be editing that existing data source, at which point you need to be an admin of
the gateway.

2. If we don't detect that an existing data source is available for the selected gateway,
the Username and Password fields will be blank, and if you edit the credential
fields and select Next, then you're considered to be creating a new data source on
the gateway, at which point you need to be an admin of the gateway.
If you only have data source user permission on the gateway, then 1.b and 2 can't be
achieved and the dataflow can't be created.

Power Apps gateway permissions


1. In the left navigation pane of powerapps.com , select Gateways and then select
the gateway you want.

2. To add a user to a gateway, select Users, specify a user or group, and then specify
a permission level. Creating new data sources with a gateway in dataflows requires
Admin permission on the gateway. Admins have full control of the gateway,
including adding users, setting permissions, creating connections to all available
data sources, and deleting the gateway.

View and manage gateway connections

Power BI service gateway connections


1. Select the setup button in the upper right corner of Power BI service, choose
Manage gateways, and then select the gateway you want.

2. Perform the action that you want:

To view details and edit the settings, select Gateway Cluster Settings.
To add users as administrators of the gateway, select Administrators.
To add a data source to the gateway, select Add Data Source, enter a data
source name and choose the data source type under Data Source Settings,
and then enter the email address of the person who will use the data source.
To delete a gateway, select the ellipsis to the right of the gateway name and
then select Remove.

Power Apps gateway connections


1. In the left navigation bar of powerapps.com, select Gateways, and then choose the
gateway you want.

2. Perform the action that you want:

To view details, edit the settings, or delete a gateway, select Connections, and
then select a connection.

To share a connection, select Share and then add or remove users.


7 Note

You can only share some types of connections, such as a SQL Server
connection. For more information, see Share canvas-app resources in
Power Apps.

For more information about how to manage a connection, see Manage


canvas-app connections in Power Apps.

Limitations
There are a few known limitations when using enterprise gateways and dataflows.

Dataflow refresh might fail if an out-of-date data gateway is used. Starting April
2022, the minimum required data gateway version is February 2021.

Each dataflow can use only one gateway. As such, all queries should be configured
using the same gateway.

Changing the gateway impacts the entire dataflow.

If several gateways are needed, the best practice is to build several dataflows (one
for each gateway). Then use the compute or table reference capabilities to unify
the data.

Dataflows are only supported using enterprise gateways. Personal gateways won't
be available for selection in the drop-down lists and settings screens.

Creating new data sources with a gateway in dataflows is only supported for
people with Admin permissions.

Users with Can Use or Can Use + Share permissions can use existing connections
when creating dataflows.

The following connectors are supported:


DB2
File System
Apache Impala
Informix
MySQL
Oracle Database
PostgreSQL
SAP ERP
SharePoint
SQL Server
Teradata
Desktop flows
HTTP with Azure AD

Troubleshooting
When you attempt to use an on-premises data source to publish a dataflow, you might
come across the following MashupException error:

AzureDataLakeStorage failed to get the response: 'The underlying connection was closed:
An unexpected error occurred on a send.'

This error usually occurs because you're attempting to connect to an Azure Data Lake
Storage endpoint through a proxy, but you haven't properly configured the proxy
settings for the on-premises data gateway. To learn more about how to configure these
proxy settings, go to Configure proxy settings for the on-premises data gateway.

For more information about troubleshooting issues with gateways, or configuring the
gateway service for your network, go to the On-premises data gateway documentation.

If you're experiencing issues with the gateway version you're using, try updating to the
latest version as your issue might have been resolved in the latest version. For more
information about updating your gateway, go to Update an on-premises data gateway.

Next steps
Create and use dataflows in Power Apps

Add data to a table in Microsoft Dataverse by using Power Query

Connect Azure Data Lake Storage Gen2 for dataflow storage


Data exfiltration considerations and
best practices for dataflows
Article • 08/07/2023

Fabric dataflows and Power Platform dataflows are Microsoft 365 services that enable
users to easily connect to, extract, move, and transform data across hundreds of
supported data sources. Dataflows build upon an underlying service called Power Query
Online, which hosts the data movement engine (Mashup Engine) as a cloud service. By
default, connectivity originates from this cloud location and has unrestricted access to
the internet. Therefore, when using dataflows to access and move sensitive data,
organizations should consider strategies to deter insiders from accidental or malicious
data exfiltration. This article outlines known risk factors and best practices for
safeguards.

Considerations

Arbitrary code execution


Dataflow ETL jobs are defined through programs written in a language called Power
Query M. In its default configuration, the Mashup Engine executes these programs in
the cloud, outside the tenant's network boundary, and with unrestricted internet access.
Users can author programs that connect to multiple data sources, and upon consent,
allow data to flow between these sources.

A trusted user who has access to sensitive data can author a program to push the data
to an untrusted data store. Since the Mashup Engine runs entirely in the cloud, it doesn't
go through corporate firewalls and proxy servers. So, it isn't subject to any data loss
prevention (DLP) policies that might be enforced by these networks. Since the point of
access is on the public internet, data can travel to any destination that the user has
access to—either through authentication or anonymous access. Here are some
examples of ways in which these programs can exfiltrate sensitive data:

Anonymous web requests: By using Web.Contents, users can make web requests
with sensitive data in the body of the request (see the sketch after this list).
Cross data source filtering and joins: Sensitive data can be used as filtering or join
conditions against another untrusted data source. Specifically, data can travel to
the untrusted data source in the form of query strings or parameters.
Output destinations: By using Fabric dataflows, users can specify output
destinations for their queries, thereby transferring data to a list of supported data
sinks, which includes Azure SQL databases and data warehouses, Fabric
Lakehouses, Warehouses, and KQL databases.
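
As a purely illustrative sketch of the first pattern in the preceding list, the following M query reads a value from a sensitive source and sends it in the body of an outbound web request; the server, database, table, column, and endpoint names are all hypothetical:

    let
        // A query over a sensitive source; server, database, and table names are illustrative
        Database = Sql.Database("contoso.database.windows.net", "SalesDb"),
        Customers = Database{[Schema = "dbo", Item = "Customers"]}[Data],
        FirstCustomer = Table.First(Customers),
        // A sensitive value posted in the body of a request to an arbitrary internet endpoint
        Response = Web.Contents(
            "https://example.org/collect",
            [Content = Json.FromValue([email = FirstCustomer[Email]])]
        )
    in
        Response

Patterns like this are the kind of traffic that the network-level controls and gateway-based policies described later in this article are intended to address.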

Cross-tenant data movement


The Mashup Engine requires data sources to be explicitly defined before making
connections. The data sources are bound to programs with a kind and path key (for
example, SQL;contoso.database.windows.net ). This binding provides an opportunity for
organizations to govern which data sources are permitted, by filtering based on the data
source paths. However, there are some data sources where the path is common across
all connections and data is segmented only by the tenant of the signed in user's OAuth
credentials. This scenario creates a risk factor for data exfiltration, where a user signs
into a higher security tenant and a lower security tenant and moves data between them.
Typically, this exfiltration can be done in two ways:

Outbound: A trusted user in a higher security tenant defines a dataflow in that
environment and creates an output destination to a permitted data sink, but
authenticates with the data sink using an account from a lower security tenant.
Inbound: A user in a lower security tenant defines a dataflow that reads data from
a sensitive data source in the higher security tenant. This definition can be
achieved by authenticating against the sensitive data source using a trusted
account in the higher security tenant.

Recommended best practices


DLP policies can operate at various OSI layers. In general, the more sensitive the data,
the lower the layer where the policies must be applied. Lower layer protocols are
typically more expensive to implement, harder to scale, and more difficult to operate.
For example, organizations with lower governance requirements might only need to
apply application layer policies. However, some organizations and entities processing
highly sensitive data might require extreme measures such as physical isolation. We
recommend that organizations handling sensitive data employ a combination of
application and network-level policies to protect against insider threats.

Network isolation
We recommend that all data stores containing sensitive data be network isolated to
permit access only from selected networks. This isolation restriction must be defined
and operated at the network layer or lower. For example, layer 3 firewalls, Network
Security Groups (NSGs), and Azure Private Links are good examples of mechanisms that
can be used. However, location-based conditional access policies in Azure Active
Directory (Azure AD) operate at the application layer and are considered insufficient for
this purpose.

These network isolation policies must obstruct line of sight from dataflows' cloud
execution engine to sensitive data stores (since the cloud engine runs on the public
internet). Dataflows' connectivity to these data stores is then forced to originate from
within one of the permitted networks by binding connections to an on-premises data
gateway or VNet data gateway. An important execution characteristic of dataflows is
that cloud-based evaluation and gateway-based evaluation are never blended. If a
dataflow needs to access a network isolated data store (and is therefore bound to a
gateway), all data access is then required to flow through the gateway. Additionally,
since gateways physically reside in networks controlled by the user tenant, they comply
with network level restrictions such as firewalls and DLP protection solutions. These
restrictions make gateway environments as secure and safe as any corporate managed
devices and mitigate risks associated with arbitrary code execution in a cloud
environment.

It's worth noting that network isolation must be applied to all data stores that might
contain sensitive data. Consider an example where a user creates a dataflow to read
data from OneDrive for Business into Power BI. Then the user later creates a linked
dataflow to transform the data in Power BI into downstream entities. In this scenario, it's
not sufficient to just isolate OneDrive for Business to trusted networks. Since sensitive
data might also reside within Power BI, it's important to isolate such data by enabling
private links and disabling public Internet access for Power BI. Learn more about secure
access to Power BI using private endpoints.
Force gateway execution
The goal for isolating sensitive data store to selected networks is to force the origin of
access back to trusted networks, so that existing policies governing managed devices
can be used to govern data movement from dataflows. In certain cases, a full network
isolation solution might take time to develop, test, and deploy. As an alternative, you
can file a dataflows support ticket to apply a tenant-wide policy that turns off the
Mashup Engine. This policy affects all query evaluations that use the Power Query
Online Mashup Engine. Impacted capabilities include:

Fabric dataflows
Power Platform dataflows
Azure Data Factory wrangling dataflows
Dataflows in Dynamics 365 (Customer Insights, Intelligent Order Management, and
so on)
Power BI Datamart
Power BI Quick Import from SharePoint

After application of the policy, all cloud-based execution fails with the following error:
"Cloud evaluation request denied based on tenant policies. Please use a data gateway and
try again." This error effectively forces all query evaluations in the tenant to occur
on gateways, without first rolling out a full network isolation solution. Note that the
policy is applied to the entire tenant and not a subset of workloads. This policy means
existing workloads fail immediately and require manual intervention to convert to run
on gateways. Organizations applying this policy should also ensure that they have
enough capacity in their gateway clusters to accommodate all their workloads.

Tenant isolation
For most software-as-a-service (SaaS) layer data stores, such as Fabric Lakehouse and
Power Platform Dataverse, there's usually a multi-tenant endpoint that one
communicates with to gain access to the data. These endpoints are common across all
users of the service, so they can be difficult to isolate and protect solely using network
(Layer 3) isolation techniques. The recommended approach for this kind of data store is
to use Layer 7 policies, typically provided by Azure Active Directory:

Allow only Azure AD authentication. Remove anonymous and username/password
authentication schemes from the data store.
Use location policies to allow sign-in to the secured tenant only from managed
devices. Learn more.
Disallow unknown tenant sign-ins from managed devices by using Azure AD
tenant restrictions. Use tenant restrictions to manage access to SaaS apps. Learn
more.

This approach restricts access to the tenant’s sensitive data stores to a set of managed
devices where signing into another tenant isn't permitted, effectively isolating data
movement across the tenant.

Roadmap
The following list contains some of the features that are currently planned to help
organizations better manage data exfiltration risks in Fabric:

Data source connection allowlisting: Allows Fabric tenant admins to control the
kinds of connectors that can be used within the tenant, and the endpoints the
connectors can connect to.
Connection usage auditing: Support for auditing logs that track connection
creation, updating, deletion, and usage.
Creating computed tables in dataflows
Article • 08/04/2023

You can perform in-storage computations when using dataflows with a Power BI
Premium subscription. This lets you do calculations on your existing dataflows, and
return results that enable you to focus on report creation and analytics.

To perform in-storage computations, you first must create the dataflow and bring data
into that Power BI dataflow storage. After you have a dataflow that contains data, you
can create computed tables, which are tables that do in-storage computations.

There are two ways you can connect dataflow data to Power BI:

Using self-service authoring of a dataflow


Using an external dataflow

The following sections describe how to create computed tables on your dataflow data.

How to create computed tables


After you have a dataflow with a list of tables, you can perform calculations on those
tables.

In the dataflow authoring tool in the Power BI service, select Edit tables. Then right-click
the table you want to use as the basis for your computed table and on which you want
to perform calculations. On the shortcut menu, select Reference.
For the table to be eligible as a computed table, Enable load must be selected, as shown
in the following image. Right-click the table to display this shortcut menu.

By selecting Enable load, you create a new table whose source is the referenced table.
The icon changes to the computed icon, as shown in the following image.
Any transformation you do on this newly created table will be run on the data that
already resides in Power BI dataflow storage. That means that the query won't run
against the external data source from which the data was imported (for example, the
SQL database from which the data was pulled).

Example use cases


What kind of transformations can be done with computed tables? Any transformation
that you usually specify by using the transformation user interface in Power BI, or the M
editor, are all supported when performing in-storage computation.

Consider the following example. You have an Account table that contains the raw data
for all the customers from your Dynamics 365 subscription. You also have ServiceCalls
raw data from the service center, with data from the support calls that were performed
from the different accounts on each day of the year.

Imagine you want to enrich the Account table with data from ServiceCalls.

First you would need to aggregate the data from the ServiceCalls to calculate the
number of support calls that were done for each account in the last year.

Next, you merge the Account table with the ServiceCallsAggregated table to calculate
the enriched Account table.

Then you can see the results, shown as EnrichedAccount in the following image.

And that's it—the transformation is done on the data in the dataflow that resides in your
Power BI Premium subscription, not on the source data.
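
As a rough sketch of the two steps described above, the computed tables could be expressed as two queries similar to the following; the query, table, and column names (Account, ServiceCalls, AccountId, CallDate) are only illustrative, and CallDate is assumed to be a date column:

    // Query 1 - ServiceCallsAggregated: support calls per account in the last year
    let
        Source = ServiceCalls,
        LastYear = Table.SelectRows(
            Source,
            each [CallDate] >= Date.AddYears(Date.From(DateTime.LocalNow()), -1)
        ),
        Grouped = Table.Group(
            LastYear,
            {"AccountId"},
            {{"SupportCallCount", each Table.RowCount(_), Int64.Type}}
        )
    in
        Grouped

    // Query 2 - EnrichedAccount: Account merged with the aggregated service calls
    let
        Source = Account,
        Merged = Table.NestedJoin(
            Source, {"AccountId"},
            ServiceCallsAggregated, {"AccountId"},
            "ServiceCallsAggregated", JoinKind.LeftOuter
        ),
        Expanded = Table.ExpandTableColumn(Merged, "ServiceCallsAggregated", {"SupportCallCount"})
    in
        Expanded

Because both queries reference tables whose data already resides in dataflow storage, these transformations run in-storage and don't go back to the original sources.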

Considerations and limitations


It's important to note that if you remove the workspace from Power BI Premium
capacity, the associated dataflow will no longer be refreshed.

When working with dataflows specifically created in an organization's Azure Data Lake
Storage account, linked tables and computed tables only work properly when the tables
reside in the same storage account. More information: Connect Azure Data Lake Storage
Gen2 for dataflow storage

Linked tables are only available for dataflows created in Power BI and Power Apps. As a
best practice, when doing computations on data joined from on-premises and cloud sources,
create a new table to perform those computations. This provides a better experience
than using an existing table for computations, such as a table that also queries data
from both sources and does in-storage transformations.

See also
Computed table scenarios and use cases

This article described computed tables and dataflows. Here are some more articles that
might be useful:

Self-service data prep in Power BI


Using incremental refresh with dataflows
Connect to data sources for dataflows
Link tables between dataflows

The following links provide additional information about dataflows in Power BI and
other resources:

Create and use dataflows in Power BI


Using dataflows with on-premises data sources
Developer resources for Power BI dataflows
Configure workspace dataflow settings (Preview)
Add a CDM folder to Power BI as a dataflow (Preview)
Connect Azure Data Lake Storage Gen2 for dataflow storage (Preview)

For more information about Power Query and scheduled refresh, you can read these
articles:

Query overview in Power BI Desktop


Configuring scheduled refresh

For more information about Common Data Model, you can read its overview article:

Common Data Model


Link tables between dataflows
Article • 08/04/2023

With dataflows in Microsoft Power Platform, you can have a single organizational data
storage source where business analysts can prep and manage their data once, and then
reuse it between different analytics apps in the organization.

When you link tables between dataflows, you can reuse tables that have already been
ingested, cleansed, and transformed by dataflows that are owned by others, without the
need to maintain that data. The linked tables simply point to the tables in other
dataflows, and don't copy or duplicate the data.

Linked tables are read-only, so if you want to create transformations for a linked table,
you must create a new computed table with a reference to the linked table.

Linked table availability


Linked table availability depends on whether you're using dataflows in Power BI or
Power Apps. The following sections describe the details for each.

Linked tables in Power BI


To be refreshed, linked tables require a Power BI Premium subscription. Linked tables are
available in any dataflow on a workspace that's hosted on Power BI Premium capacity.
There are no limitations on the source dataflow.

Linked tables only work properly in new Power BI workspaces, and, likewise, all linked
dataflows must be located in new workspaces. More information: Create the new
workspaces in Power BI

7 Note

Tables differ based on whether they're standard tables or computed tables.


Standard tables (often simply referred to as tables) query an external data source,
such as a SQL database. Computed tables require Premium capacity on Power BI
and run their transformations on data that's already in Power BI storage.

If your dataflow isn't located in a Premium capacity workspace, you can still
reference a single query—or combine two or more queries—as long as the
transformations aren't defined as in-storage transformations. Such references are
considered standard tables. To do this, turn off the Enable load option for the
referenced queries to prevent the data from being materialized and ingested into
storage. From there, you can reference those Enable load = false queries, and set
Enable load to On only for the resulting queries that you want to materialize.

Linked tables in Power Apps


For more information about the availability of linked tables in Power Apps, go to What
licenses do you need to use dataflows.

How to link tables between dataflows


There are a few ways to link tables between dataflows. To link tables in Power BI, you
must sign in with your Power BI credentials.

You can select Get data from the dataflow authoring tool, which displays a dialog box
for selecting the categories and each data source. Then select the Power Platform
Dataflows connector.

A connection window for the selected data connection is displayed. If credentials are
required, you're prompted to provide them.

In Power BI, you can select Add linked tables from the dataflow authoring tool.

You can also select Add linked tables from the Add tables menu in the Power BI service.

A Navigator window opens, and you can choose a set of tables you can connect to. The
window displays tables for which you have permissions across all workspaces and
environments in your organization.

After you select your linked tables, they appear in the list of tables for your dataflow in
the authoring tool, with a special icon identifying them as linked tables.

You can also view the source dataflow from the dataflow settings of your linked table.

Refresh logic of linked tables


The refresh logic of linked tables differs slightly based on whether you're using Power BI
or Power Apps, as described in the following sections.

Refresh logic in Power BI


The default refresh logic of linked tables depends on whether the source dataflow is in
the same Power BI workspace as the destination dataflow. The following sections
describe the behavior of each.
Links between workspaces: Refresh for links from tables in different workspaces
behaves like a link to an external data source. When the dataflow is refreshed, it
takes the latest data for the table from the source dataflow. If the source dataflow
is refreshed, it doesn't automatically affect the data in the destination dataflow.

Links in the same workspace: When data refresh occurs for a source dataflow, that
event automatically triggers a refresh process for dependent tables in all
destination dataflows in the same workspace, including any calculated tables
based on them. All other tables in the destination dataflow are refreshed according
to the dataflow schedule. Tables that depend on more than one source refresh
their data whenever any of their sources are refreshed successfully.

7 Note

The entire refresh process is committed at once. Because of this, if the data
refresh for the destination dataflow fails, the data refresh for the source
dataflow fails as well.

Refresh logic in Power Apps


The refresh logic of linked tables in Power Apps behaves like an external data source.
When the dataflow is refreshed, it takes the latest data for the table from the source
dataflow. If the source dataflow is refreshed, it doesn't automatically affect the data in
the destination dataflow.

Permissions when viewing reports from dataflows
When creating a Power BI report that includes data based on a dataflow, you can see
any linked tables only when you have access to the source dataflow.

Limitations and considerations


There are a few limitations to keep in mind when working with linked tables:

A table can be referenced by another dataflow. That referenced table can also be
referenced by other dataflows, and so on, up to five times.
Cyclical dependencies of linked tables aren't allowed.
The dataflow must be in a new Power BI workspace or a Power Apps environment.
A linked table can't be joined with a regular table that gets its data from an on-
premises data source.
When using M parameters to address linked tables, if the source dataflow is
refreshed, it doesn't automatically affect the data in the destination dataflow.
Attempting to connect two dataflow tables between two workspaces of different
storage types—Bring Your Own Storage Account (BYOSA) and Internal—isn't
supported.

Next steps
The following articles might be useful as you create or work with dataflows:

Self-service data prep in Power BI


Using incremental refresh with dataflows
Creating computed tables in dataflows
Connect to data sources for dataflows

The following articles provide more information about dataflows and Power BI:

Create and use dataflows in Power BI


Using computed tables on Power BI Premium
Using dataflows with on-premises data sources
Developer resources for Power BI dataflows

For more information about Power Query and scheduled refresh, you can read these
articles:

Query overview in Power BI Desktop


Configuring scheduled refresh

For more information about Common Data Model, you can read its overview article:

Common Data Model - overview


Connect Azure Data Lake Storage Gen2
for dataflow storage
Article • 05/12/2023

You can configure dataflows to store their data in your organization's Azure Data Lake
Storage Gen2 account. This article describes the general steps necessary to do so, and
provides guidance and best practices along the way.

) Important

Dataflow with Analytical tables feature utilizes the Azure Synapse Link for Dataverse
service, which may offer varying levels of compliance, privacy, security, and data
location commitments. For more information about Azure Synapse Link for
Dataverse, go to What is Azure Synapse Link for Dataverse?.

There are some advantages to configuring dataflows to store their definitions and
datafiles in your data lake, such as:

Azure Data Lake Storage Gen2 provides an enormously scalable storage facility for
data.
Dataflow data and definition files can be leveraged by your IT department's
developers to leverage Azure data and artificial intelligence (AI) services as
demonstrated in the GitHub samples from Azure data services.
It enables developers in your organization to integrate dataflow data into internal
applications and line-of-business solutions, using developer resources for
dataflows and Azure.

Requirements
To use Azure Data Lake Storage Gen2 for dataflows, you need the following:

A Power Apps environment. Any Power Apps plan will allow you to create
dataflows with Azure Data Lake Storage Gen2 as a destination. You'll need to be
authorized in the environment as a maker.
An Azure subscription. You need an Azure subscription to use Azure Data Lake
Storage Gen2.
A resource group. Use a resource group you already have, or create a new one.
An Azure storage account. The storage account must have the Data Lake Storage
Gen2 feature enabled.
 Tip

If you don't have an Azure subscription, create a free trial account before you
begin.

Prepare your Azure Data Lake Storage Gen2 for Power Platform dataflows
Before you configure your environment with an Azure Data Lake Storage Gen2 account,
you must create and configure a storage account. Here are the requirements for Power
Platform dataflows:

1. The storage account must be created in the same Azure Active Directory tenant as
your Power Apps tenant.
2. We recommend that the storage account is created in the same region as the
Power Apps environment you plan to use it in. To determine where your Power
Apps environment is, contact your environment admin.
3. The storage account must have the Hierarchical Name Space feature enabled.
4. You must be granted an Owner role on the storage account.

The following sections walk through the steps necessary to configure your Azure Data
Lake Storage Gen2 account.

Create the storage account


Follow the steps in Create an Azure Data Lake Storage Gen2 storage account.

1. Make sure you select the same region as your environment and set your storage as
StorageV2 (general purpose v2).
2. Make sure you enable the hierarchical namespace feature.
3. We recommend that you set the replication setting to Read-access geo-redundant
storage (RA-GRS).

Connect your Azure Data Lake Storage Gen2 to Power Apps
Once you've set up your Azure Data Lake Storage Gen2 account in the Azure portal,
you're ready to connect it to a specific dataflow or a Power Apps environment.
Connecting the lake to an environment allows other makers and admins in the
environment to create dataflows that store their data in your organization's lake as well.

To connect your Azure Data Lake Storage Gen2 account with the dataflow, follow these
steps:

1. Sign in to Power Apps , and verify which environment you're in. The environment
switcher is located on the right side of the header.

2. On the left navigation pane, select the down arrow next to Data.

3. In the list that appears, select Dataflows and then on the command bar select New
dataflow.

4. Select the analytical tables you want. These tables indicate what data you want to
store in your organization's Azure Data Lake Store Gen2 account.
Select the storage account to use for dataflow
storage
If a storage account hasn't yet been associated with the environment, a Link to data
lake dialog box appears. You'll need to sign in and find the data lake you created in the
previous steps. In this example, no data lake is associated with the environment and so a
prompt occurs to add one.

1. Select storage account.

The Select Storage Account screen appears.

2. Select the Subscription ID of the storage account.


3. Select the Resource group name in which the storage account was created.

4. Enter the Storage account name.

5. Select Save.

Once these steps are successfully completed, your Azure Data Lake Storage Gen2
account is connected to Power Platform Dataflows and you can continue to create a
dataflow.

Considerations and limitations


There are a few considerations and limitations to keep in mind when working with your
dataflow storage:

Linking an Azure Data Lake Store Gen2 account for dataflow storage isn't
supported in the default environment.
Once a dataflow storage location is configured for a dataflow, it can't be changed.
Once a storage account is linked, changing the environment's name isn't
supported, and dataflows linked to that storage account will fail. Changing the
environment's name back will re-enable those dataflows.
By default, any member of the environment can access dataflow data using the
Power Platform Dataflows Connector. However, only the owners of a dataflow can
access its files directly in Azure Data Lake Storage Gen2. To authorize more people
to access the dataflows data directly in the lake, you must authorize them to the
dataflow's CDM Folder in the data lake or the data lake itself.
When a dataflow is deleted, its CDM Folder in the lake will also be deleted.
Attempting to connect two dataflow entities between two workspaces of different
storage types—Bring Your Own Storage Account (BYOSA) and Internal—isn't
supported.

) Important

You shouldn't change files created by dataflows in your organization's lake or add
files to a dataflow's CDM Folder. Changing files might damage dataflows or alter
their behavior and is not supported. Power Platform Dataflows only grants read
access to files it creates in the lake. If you authorize other people or services to the
filesystem used by Power Platform Dataflows, only grant them read access to files
or folders in that filesystem.
Privacy notice
By enabling the creation of dataflows with Analytical tables in your organization, via the
Azure Synapse Link for Dataverse service, details about the Azure Data Lake storage
account, such as the name of the storage account, will be sent to and stored in the
Azure Synapse Link for Dataverse service, which is currently located outside the
PowerApps compliance boundary and may employ lesser or different privacy and
security measures than those typically in PowerApps. Note that you may remove the
data lake association at any time to discontinue use of this functionality and your Azure
Data Lake storage account details will be removed from the Azure Synapse Link for
Dataverse service. Further information about Azure Synapse Link for Dataverse is
available in this article.

Frequently asked questions


What if I had previously created dataflows in my organization's Azure Data Lake Storage
Gen2 and would like to change their storage location?

You can't change the storage location of a dataflow after it was created.

When can I change the dataflow storage location of an environment?

Changing the environment's dataflow storage location isn't currently supported.

Next steps
This article provided guidance about how to connect an Azure Data Lake Storage Gen2
account for dataflow storage.

For more information about dataflows, the Common Data Model, and Azure Data Lake
Storage Gen2, go to these articles:

Self-service data prep with dataflows


Creating and using dataflows in Power Apps
Add data to a table in Microsoft Dataverse

For more information about Azure storage, go to this article:

Azure Storage security guide

For more information about the Common Data Model, go to these articles:

Common Data Model - overview


Common Data Model folders
CDM model file definition

You can ask questions in the Power Apps Community .


What is the storage structure for
analytical dataflows?
Article • 08/04/2023

Analytical dataflows store both data and metadata in Azure Data Lake Storage.
Dataflows leverage a standard structure to store and describe data created in the lake,
which is called Common Data Model folders. In this article, you'll learn more about the
storage standard that dataflows use behind the scenes.

Storage needs a structure for an analytical dataflow
If the dataflow is standard, then the data is stored in Dataverse. Dataverse is like a
database system; it has the concept of tables, views, and so on. Dataverse is a structured
data storage option used by standard dataflows.

However, when the dataflow is analytical, the data is stored in Azure Data Lake Storage.
A dataflow’s data and metadata is stored in a Common Data Model folder. Since a
storage account might have multiple dataflows stored in it, a hierarchy of folders and
subfolders has been introduced to help organize the data. Depending on the product
the dataflow was created in, the folders and subfolders may represent workspaces (or
environments), and then the dataflow’s Common Data Model folder. Inside the
Common Data Model folder, both schema and data of the dataflow tables are stored.
This structure follows the standards defined for Common Data Model.
What is the Common Data Model storage
structure?
Common Data Model is a metadata structure defined to bring conformity and
consistency for using data across multiple platforms. Common Data Model isn't data
storage, it's the way that data is stored and defined.

Common Data Model folders define how a table's schema and its data should be stored.
In Azure Data Lake Storage, data is organized in folders. Folders can represent a
workspace or environment. Under those folders, subfolders for each dataflow are
created.

What's in a dataflow folder?


Each dataflow folder contains a subfolder for each table and a metadata file named
model.json .

The metadata file: model.json


The model.json file is the metadata definition of the dataflow. This is the one file that
contains all the dataflow metadata. It includes a list of tables, the columns, and their
data types in each table, the relationship between tables, and so on. You can export this
file from a dataflow easily, even if you don't have access to the Common Data Model
folder structure.

You can use this JSON file to migrate (or import) your dataflow into another workspace
or environment.
To learn exactly what the model.json metadata file includes, go to The metadata file
(model.json) for Common Data Model.

Data files
In addition to the metadata file, the dataflow folder includes other subfolders. A
dataflow stores the data for each table in a subfolder with the table's name. Data for a
table might be split into multiple data partitions, stored in CSV format.
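
As a simplified illustration of this layout (the workspace, dataflow, and table names are hypothetical, and the exact partition file names vary), the hierarchy in the lake looks roughly like this:

    <storage filesystem>
        <workspace or environment folder>
            <dataflow folder>  (the dataflow's Common Data Model folder)
                model.json  (metadata: tables, columns, data types, relationships)
                Orders  (one subfolder per table)
                    <data partition files in CSV format>
                Customers
                    <data partition files in CSV format>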

How to see or access Common Data Model


folders
If you're using dataflows that use storage provided by the product they were created in,
you won't have access to those folders directly. In such cases, getting data from the
dataflows requires using the Microsoft Power Platform dataflow connector available in
the Get data experience in the Power BI service, Power Apps, and Dynamics 365
Customer Insights products, or in Power BI Desktop.
To learn how dataflows and the internal Data Lake Storage integration work, go to
Dataflows and Azure Data Lake integration (Preview).

If your organization enabled dataflows to take advantage of its Data Lake Storage
account and was selected as a load target for dataflows, you can still get data from the
dataflow by using the Power Platform dataflow connector as mentioned above. But you
can also access the dataflow's Common Data Model folder directly through the lake,
even outside of Power Platform tools and services. Access to the lake is possible through
the Azure portal, Microsoft Azure Storage Explorer, or any other service or experience
that supports Azure Data Lake Storage. More information: Connect Azure Data Lake
Storage Gen2 for dataflow storage
Next steps
Use the Common Data Model to optimize Azure Data Lake Storage Gen2

The metadata file (model.json) for the Common Data Model

Add a CDM folder to Power BI as a dataflow (Preview)

Connect Azure Data Lake Storage Gen2 for dataflow storage

Dataflows and Azure Data Lake Integration (Preview)

Configure workspace dataflow settings (Preview)


Dataflow storage options
Article • 09/01/2022

Standard dataflows always load data into Dataverse tables in an environment. Analytical
dataflows always load data into Azure Data Lake Storage accounts. For both dataflow
types, there's no need to provision or manage the storage. Dataflow storage, by default,
is provided and managed by products the dataflow is created in.

Analytical dataflows allow an additional storage option: your organizations' Azure Data
Lake Storage account. This option enables access to the data created by a dataflow
directly through Azure Data Lake Storage interfaces. Providing your own storage
account for analytical dataflows enables other Azure or line-of-business applications to
leverage the data by connecting to the lake directly.

Dataflows that use built-in storage


By default, analytical dataflows will use the built-in Data Lake Storage; for example,
when you create a dataflow in Power BI or Power Apps. Access to the output of this type
of dataflow is only possible through the Microsoft Power Platform dataflows connector
in Power BI Desktop, or from other dataflows.

Dataflows that use customer-provided storage


Before creating a dataflow that uses your organization's Data Lake Storage account, you
must link the environment or workspace the dataflow was created in to your Data Lake
Storage account. Depending on which product dataflow you're using (Power BI or Power
Platform dataflows), the settings for connecting to an external Data Lake Storage
subscription is linked in different places.

Linking Power BI to your organization's Azure Data Lake Storage
To configure Power BI dataflows to store data in your organization's Data Lake Storage,
you need to follow the steps described in Connect Azure Data Lake Storage Gen2 for
dataflow storage in the Power BI admin portal.
Linking a Power Platform environment to your
organization's Azure Data Lake Storage
To configure dataflows created in Power Apps to store data in your organization's Azure
Data Lake Storage, follow the steps in Connect Azure Data Lake Storage Gen2 for
dataflow storage in Power Apps.

Known limitations
After a dataflow is created, its storage location can't be changed.
Linked and computed entities features are only available when both dataflows are
in the same storage account.

The enhanced compute engine


In Power BI, in addition to the standard dataflow engine, an enhanced compute engine
is available for the dataflows created in Power BI Premium workspaces. You can
configure this setting in the Power BI admin portal, under the Premium capacity settings.
The enhanced compute engine is available in Premium P1 or A3 capacities and above.
The enhanced compute engine reduces the refresh time required for long-running
extract, transform, load (ETL) steps over computed entities, such as joins, distinct, filters,
and group by. It also provides the ability to perform DirectQuery over entities from the
Power BI dataset. More information: The enhanced compute engine

Next steps
The articles below provide further information that can be helpful.

Connect Azure Data Lake Storage Gen2 for dataflow storage (Power BI dataflows)
Connect Azure Data Lake Storage Gen2 for dataflow storage (Power Platform dataflows)
Creating computed entities in dataflows
The enhanced compute engine
Understanding the differences between standard and analytical dataflows
Computed table scenarios and use cases
Article • 08/04/2023

There are benefits to using computed tables in a dataflow. This article describes use
cases for computed tables and describes how they work behind the scenes.

What is a computed table?


A table represents the data output of a query created in a dataflow, after the dataflow
has been refreshed. It represents data from a source and, optionally, the transformations
that were applied to it. Sometimes, you might want to create new tables that are a
function of a previously ingested table.

Although it's possible to repeat the queries that created a table and apply new
transformations to them, this approach has drawbacks: data is ingested twice, and the
load on the data source is doubled.

Computed tables solve both problems. Computed tables are similar to other tables in
that they get data from a source and you can apply further transformations to create
them. But their data originates from the storage dataflow used, and not the original data
source. That is, they were previously created by a dataflow and then reused.

Computed tables can be created by referencing a table in the same dataflow or by
referencing a table created in a different dataflow.

Why use a computed table?


Performing all transformation steps in one table can be slow. There can be many
reasons for this slowdown—the data source might be slow, or the transformations that
you're doing might need to be replicated in two or more queries. It might be
advantageous to first ingest the data from the source and then reuse it in one or more
tables. In such cases, you might choose to create two tables: one that gets data from the
data source, and another—a computed table—that applies more transformations to
data already written into the data lake used by a dataflow. This change can increase
performance and reusability of data, saving time and resources.

For example, if two tables share even a part of their transformation logic, without a
computed table, the transformation has to be done twice.
However, if a computed table is used, then the common (shared) part of the
transformation is processed once and stored in Azure Data Lake Storage. The remaining
transformations are then processed from the output of the common transformation.
Overall, this processing is much faster.

A computed table provides a single place for the transformation logic (its source code) and
speeds up the transformation because it only needs to be done once instead of multiple
times. The load on the data source is also reduced.

Example scenario for using a computed table


If you're building an aggregated table in Power BI to speed up the data model, you can
build the aggregated table by referencing the original table and applying more
transformations to it. By using this approach, you don't need to replicate your
transformation from the source (the part that is from the original table).

For example, the following figure shows an Orders table.

Using a reference from this table, you can build a computed table.

The computed table can have further transformations. For example, you can use Group
By to aggregate the data at the customer level.
This means that the Orders Aggregated table is getting data from the Orders table, and
not from the data source again. Because some of the transformations that need to be
done have already been done in the Orders table, performance is better and data
transformation is faster.
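
As a rough M sketch of this pattern (the query, column, and aggregation names here are illustrative assumptions, not taken from the figures), the Orders Aggregated query simply references the Orders query and groups it:

```powerquery-m
// Orders Aggregated: a computed table that references the Orders query
// (column names such as CustomerKey and SalesAmount are assumptions for illustration)
let
    Source = Orders,    // reference to the Orders table in the same dataflow
    Grouped = Table.Group(
        Source,
        {"CustomerKey"},
        {
            {"TotalSales", each List.Sum([SalesAmount]), type number},
            {"OrderCount", each Table.RowCount(_), Int64.Type}
        }
    )
in
    Grouped
```

Because Orders is already persisted by the dataflow, the grouping runs against the stored output rather than hitting the data source again.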

Computed table in other dataflows


You can also create a computed table in other dataflows. It can be created by getting
data from a dataflow with the Microsoft Power Platform dataflow connector.
The concept of the computed table is to have a table persisted in storage, and other
tables sourced from it, so that you can reduce the read time from the data source and
share some of the common transformations. This reduction can be achieved by getting
data from other dataflows through the dataflow connector or referencing another query
in the same dataflow.

Computed table: With transformations, or without?
Now that you know computed tables are great for improving performance of the data
transformation, a good question to ask is whether transformations should always be
deferred to the computed table or whether they should be applied to the source table.
That is, should data always be ingested into one table and then transformed in a
computed table? What are the pros and cons?

Load data without transformation for Text/CSV files


When a data source doesn't support query folding (such as Text/CSV files), there's little
benefit in applying transformations when getting data from the source, especially if data
volumes are large. The source table should just load data from the Text/CSV file without
applying any transformations. Then, computed tables can get data from the source table
and perform the transformation on top of the ingested data.

You might ask, what's the value of creating a source table that only ingests data? Such a
table can still be useful, because if the data from the source is used in more than one
table, it reduces the load on the data source. In addition, data can now be reused by
other people and dataflows. Computed tables are especially useful in scenarios where
the data volume is large, or when a data source is accessed through an on-premises
data gateway, because they reduce the traffic from the gateway and the load on data
sources behind them.
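
A minimal sketch of such a staging query, assuming a hypothetical Sales.csv file; it only loads the file and promotes headers, leaving every other transformation to downstream computed tables:

```powerquery-m
// Staging query: load the Text/CSV file as-is, with no filtering or reshaping
let
    Source = Csv.Document(
        File.Contents("C:\Data\Sales.csv"),   // hypothetical file path
        [Delimiter = ",", Encoding = 65001, QuoteStyle = QuoteStyle.None]
    ),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true])
in
    PromotedHeaders
```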

Doing some of the common transformations for a SQL table
If your data source supports query folding, it's good to perform some of the
transformations in the source table because the query is folded to the data source, and
only the transformed data is fetched from it. These changes improve overall
performance. The set of transformations that is common in downstream computed
tables should be applied in the source table, so they can be folded to the source. Other
transformations that only apply to downstream tables should be done in computed
tables.
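
As a hedged sketch, assuming a hypothetical Azure SQL database and an Orders table, the source query might keep only the shared filter and column selection so that both steps can fold to the server:

```powerquery-m
// Source query against SQL Server; these simple steps typically fold to the data source
let
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),   // hypothetical server and database
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // common transformations shared by the downstream computed tables
    RecentOrders = Table.SelectRows(Orders, each [OrderDate] >= #date(2023, 1, 1)),
    KeptColumns = Table.SelectColumns(RecentOrders, {"OrderID", "CustomerKey", "OrderDate", "SalesAmount"})
in
    KeptColumns
```
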
Best practices for designing and
developing complex dataflows
Article • 08/04/2023

If the dataflow you're developing is getting bigger and more complex, here are some
things you can do to improve on your original design.

Break it into multiple dataflows


Don't do everything in one dataflow. Not only does a single, complex dataflow make the
data transformation process longer, it also makes it harder to understand and reuse the
dataflow. Breaking your dataflow into multiple dataflows can be done by separating
tables in different dataflows, or even one table into multiple dataflows. You can use the
concept of a computed table or linked table to build part of the transformation in one
dataflow, and reuse it in other dataflows.

Split data transformation dataflows from staging/extraction dataflows
Having some dataflows just for extracting data (that is, staging dataflows) and others
just for transforming data is helpful not only for creating a multilayered architecture, but
also for reducing the complexity of dataflows. Some steps just extract data from
the data source, such as get data, navigation, and data type changes. By separating the
staging dataflows and transformation dataflows, you make your dataflows simpler to
develop.
Use custom functions
Custom functions are helpful in scenarios where a certain number of steps have to be
done for a number of queries from different sources. Custom functions can be
developed through the graphical interface in Power Query Editor or by using an M
script. Functions can be reused in a dataflow in as many tables as needed.

Having a custom function helps by having only a single version of the source code, so
you don't have to duplicate the code. As a result, maintaining the Power Query
transformation logic and the whole dataflow is much easier. For more information, go to
the following blog post: Custom Functions Made Easy in Power BI Desktop .
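
A minimal sketch of such a function (the function name, parameters, and cleanup steps are illustrative assumptions); once defined as its own query, it can be invoked from as many tables as needed:

```powerquery-m
// fnCleanTextColumn: a reusable custom function that trims, cleans, and capitalizes a text column
(inputTable as table, columnName as text) as table =>
let
    Trimmed = Table.TransformColumns(inputTable, {{columnName, Text.Trim, type text}}),
    Cleaned = Table.TransformColumns(Trimmed, {{columnName, Text.Clean, type text}}),
    Proper  = Table.TransformColumns(Cleaned, {{columnName, Text.Proper, type text}})
in
    Proper

// Example invocation from another query:
//   CleanedCustomers = fnCleanTextColumn(Customers, "CustomerName")
```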

7 Note

Sometimes you might receive a notification that tells you a premium capacity is
required to refresh a dataflow with a custom function. You can ignore this message
and reopen the dataflow editor. This usually solves your problem unless your
function refers to a "load enabled" query.

Place queries into folders


Using folders for queries helps to group related queries together. When developing the
dataflow, spend a little more time to arrange queries in folders that make sense. Using
this approach, you can find queries more easily in the future and maintaining the code is
much easier.

Use computed tables


Computed tables not only make your dataflow more understandable, they also provide
better performance. When you use a computed table, the other tables referenced from
it are getting data from an "already-processed-and-stored" table. The transformation is
much simpler and faster.

Take advantage of the enhanced compute engine
For dataflows developed in the Power BI admin portal, ensure that you make use of the
enhanced compute engine by performing joins and filter transformations first in a
computed table before doing other types of transformations.

Break many steps into multiple queries


It's hard to keep track of a large number of steps in one table. Instead, you should break
a large number of steps into multiple tables. You can use Enable Load for other queries
and disable them if they're intermediate queries, and only load the final table through
the dataflow. When you have multiple queries with smaller steps in each, it's easier to
use the dependency diagram and track each query for further investigation, rather than
digging into hundreds of steps in one query.

Add properties for queries and steps


Documentation is the key to having easy-to-maintain code. In Power Query, you can
add properties to the tables and also to steps. The text that you add in the properties
shows up as a tooltip when you hover over that query or step. This documentation helps
you maintain your model in the future. With a glance at a table or step, you can
understand what's happening there, rather than rethinking and remembering what
you've done in that step.

Ensure that capacity is in the same region


Dataflows don't currently support multiple countries or regions. The Premium capacity
must be in the same region as your Power BI tenant.

Separate on-premises sources from cloud sources
We recommend that you create a separate dataflow for each type of source, such as on-
premises, cloud, SQL Server, Spark, and Dynamics 365. Separating dataflows by source
type facilitates quick troubleshooting and avoids internal limits when you refresh your
dataflows.

Separate dataflows based on the scheduled refresh required for tables
If you have a sales transaction table that gets updated in the source system every hour
and you have a product-mapping table that gets updated every week, break these two
tables into two dataflows with different data refresh schedules.

Avoid scheduling refresh for linked tables in the same workspace
If you're regularly being locked out of your dataflows that contain linked tables, it might
be caused by a corresponding, dependent dataflow in the same workspace that's locked
during dataflow refresh. Such locking provides transactional accuracy and ensures that
both dataflows are successfully refreshed, but it can block you from editing.

If you set up a separate schedule for the linked dataflow, dataflows can be refreshed
unnecessarily and block you from editing the dataflow. There are two recommendations
to avoid this problem:

Don't set a refresh schedule for a linked dataflow in the same workspace as the
source dataflow.
If you want to configure a refresh schedule separately and want to avoid the
locking behavior, move the dataflow to a separate workspace.
Best practices for reusing dataflows
across environments and workspaces
Article • 08/04/2023

This article discusses a collection of best practices for reusing dataflows effectively and
efficiently. Read this article to avoid design pitfalls and potential performance issues as
you develop dataflows for reuse.

Separate data transformation dataflows from staging/extraction dataflows
If a dataflow performs all the actions, it's hard to reuse its tables in other dataflows or
for other purposes. The best dataflows to reuse are those dataflows that do only a few
actions. Creating dataflows that specialize in one specific task is one of the best ways to
reuse them. If you have a set of dataflows that you use as staging dataflows, their only
action is to extract data as-is from the source system. These dataflows can be reused in
multiple other dataflows.

If you have data transformation dataflows, you can split them into dataflows that do
common transformations. Each dataflow can do just a few actions. These few actions per
dataflow ensure that the output of that dataflow is reusable by other dataflows.

Use multiple workspaces


Each workspace (or environment) is available only for members of that workspace. If you
build all your dataflows in one workspace, you're minimizing the reuse of your
dataflows. You can have some generic workspaces for dataflows that are processing
company-wide tables. You can also have some workspaces for dataflows to process
tables across multiple departments. And you can also have some workspaces for
dataflows to be used only in specific departments.

Set the correct access levels on workspaces


To give access to dataflows in other workspaces to use the output of a dataflow in a
workspace, you just need to give them View access in the workspace. To learn more
about other roles in a Power BI workspace, go to Roles in the new workspaces.
Endorsement on the dataflow in Power BI
There can be many dataflows created in a tenant organization, and it can be hard for the
users to know which dataflow is most reliable. Authors of a dataflow, or those users who
have edit access to it, can endorse the dataflow at three levels: no endorsement,
promoted, or certified.

These levels of endorsement help users find reliable dataflows more easily and faster. The
dataflow with a higher endorsement level appears first. The Power BI administrator can
delegate the ability to endorse dataflows to the certified level to other people. More
information: Endorsement - Promoting and certifying Power BI content
Separate tables in multiple dataflows
You can have multiple tables in one dataflow. One of the reasons you might split tables
in multiple dataflows is what you learned earlier in this article about separating the data
ingestion and data transformation dataflows. Another good reason to have tables in
multiple dataflows is when you want a different refresh schedule than other tables.

In the example shown in the following image, the sales table needs to be refreshed
every four hours. The date table needs to be refreshed only once a day to keep the
current date record updated. And a product-mapping table just needs to be refreshed
once a week. If you have all of these tables in one dataflow, you have only one refresh
option for them all. However, if you split these tables into multiple dataflows, you can
schedule the refresh of each dataflow separately.
Good table candidates for dataflow tables
When you develop solutions using Power Query in the desktop tools, you might ask
yourself which of these tables are good candidates to be moved to a dataflow. The best
tables to be moved to the dataflow are those tables that need to be used in more than
one solution, or more than one environment or service. For example, the Date table
shown in the following image needs to be used in two separate Power BI files. Instead of
duplicating that table in each file, you can build the table in a dataflow as a table, and
reuse it in those Power BI files.
Best practices for creating a dimensional
model using dataflows
Article • 08/04/2023

Designing a dimensional model is one of the most common tasks you can do with a
dataflow. This article highlights some of the best practices for creating a dimensional
model using a dataflow.

Staging dataflows
One of the key points in any data integration system is to reduce the number of reads
from the source operational system. In the traditional data integration architecture, this
reduction is done by creating a new database called a staging database. The purpose of
the staging database is to load data as-is from the data source into the staging
database on a regular schedule.

The rest of the data integration will then use the staging database as the source for
further transformation and conversion to the dimensional model structure.

We recommend that you follow the same approach using dataflows. Create a set of
dataflows that are responsible for just loading data as-is from the source system (and
only for the tables you need). The result is then stored in the storage structure of the
dataflow (either Azure Data Lake Storage or Dataverse). This change ensures that the
read operation from the source system is minimal.

Next, you can create other dataflows that source their data from staging dataflows. The
benefits of this approach include:

Reducing the number of read operations from the source system, and reducing the
load on the source system as a result.
Reducing the load on data gateways if an on-premises data source is used.
Having an intermediate copy of the data for reconciliation purposes, in case the
source system data changes.
Making the transformation dataflows source-independent.
Transformation dataflows
When you've separated your transformation dataflows from the staging dataflows, the
transformation will be independent from the source. This separation helps if you're
migrating the source system to a new system. All you need to do in that case is to
change the staging dataflows. The transformation dataflows are likely to work without
any problem, because they're sourced only from the staging dataflows.

This separation also helps in case the source system connection is slow. The
transformation dataflow won't need to wait for a long time to get records coming
through a slow connection from the source system. The staging dataflow has already
done that part, and the data will be ready for the transformation layer.
Layered architecture
A layered architecture is an architecture in which you perform actions in separate layers.
The staging and transformation dataflows can be two layers of a multi-layered dataflow
architecture. Performing actions in layers keeps the maintenance required to a minimum.
When you want to change something, you just need to change it in the layer in which
it's located. The other layers should all continue to work fine.

The following image shows a multi-layered architecture for dataflows in which their
tables are then used in Power BI datasets.
Use a computed table as much as possible
When you use the result of a dataflow in another dataflow, you're using the concept of
the computed table, which means getting data from an "already-processed-and-stored"
table. The same thing can happen inside a dataflow. When you reference a table from
another table, you can use the computed table. This is helpful when you have a set of
transformations that need to be done in multiple tables, which are called common
transformations.

In the previous image, the computed table gets the data directly from the source.
However, in the architecture of staging and transformation dataflows, it's likely that the
computed tables are sourced from the staging dataflows.
Build a star schema
The best dimensional model is a star schema model that has dimensions and fact tables
designed to minimize the time it takes to query the data from the model, and also to
make the model easy to understand for the data visualizer.

It isn't ideal to bring data into a BI system in the same layout as the operational system.
The data tables should be remodeled. Some of the tables should take the form of a
dimension table, which keeps the descriptive information. Some of the tables should
take the form of a fact table, to keep the aggregatable data. The best layout for fact
tables and dimension tables to form is a star schema. More information: Understand star
schema and the importance for Power BI
Use a unique key value for dimensions
When building dimension tables, make sure you have a key for each one. This key
ensures that there are no many-to-many (or in other words, "weak") relationships
among dimensions. You can create the key by applying some transformation to make
sure a column or a combination of columns is returning unique rows in the dimension.
Then that combination of columns can be marked as a key in the table in the dataflow.
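
One way to sketch this in M, with hypothetical query and column names: keep only the columns the dimension needs, remove duplicates so each combination returns a unique row, and optionally build a single key column that can then be marked as the key of the table in the dataflow:

```powerquery-m
// Derive a Region dimension with a unique key from a staging query (names are assumptions)
let
    Source = StagedCustomers,                                  // reference to a staging query
    Regions = Table.SelectColumns(Source, {"RegionCode", "RegionName"}),
    Distinct = Table.Distinct(Regions, {"RegionCode", "RegionName"}),
    // combine the columns into one key column, assuming both columns are text
    WithKey = Table.AddColumn(Distinct, "RegionKey",
        each Text.Combine({[RegionCode], [RegionName]}, "-"), type text)
in
    WithKey
```
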
Do an incremental refresh for large fact tables
Fact tables are always the largest tables in the dimensional model. We recommend that
you reduce the number of rows transferred for these tables. If you have a very large fact
table, ensure that you use incremental refresh for that table. An incremental refresh can
be done in the Power BI dataset, and also the dataflow tables.

You can use incremental refresh to refresh only part of the data, the part that has
changed. There are multiple options to choose which part of the data to be refreshed
and which part to be persisted. More information: Using incremental refresh with Power
BI dataflows

Referencing to create dimensions and fact tables
In the source system, you often have a table that you use for generating both fact and
dimension tables in the data warehouse. These tables are good candidates for
computed tables and also intermediate dataflows. The common part of the process—
such as data cleaning, and removing extra rows and columns—can be done once. By
using a reference from the output of those actions, you can produce the dimension and
fact tables. This approach will use the computed table for the common transformations.
Improve performance and reusability by
separating data ingestion from data
transformation dataflows
Article • 02/17/2023

One of the best practices for dataflow implementations is separating the responsibilities
of dataflows into two layers: data ingestion and data transformation. This pattern is
specifically helpful when dealing with multiple queries of slower data sources in one
dataflow, or multiple dataflows querying the same data sources. Instead of getting data
from a slow data source again and again for each query, the data ingestion process can
be done once, and the transformation can be done on top of that process. This article
explains the process.

On-premises data source


In many scenarios, the on-premises data source is a slow data source, especially
considering that the gateway exists as the middle layer between the dataflow and the
data source.
Using analytical dataflows for data ingestion minimizes the get data process from the
source and focuses on loading data to Azure Data Lake Storage. Once in storage, other
dataflows can be created that leverage the ingestion dataflow's output. The dataflow
engine can read the data and do the transformations directly from the data lake,
without contacting the original data source or gateway.
Slow data source
The same process is valid when a data source is slow. Some of the software as a service
(SaaS) data sources perform slowly because of the limitations of their API calls.

Separation of the data ingestion and data transformation dataflows
The separation of the two layers—data ingestion and transformation—is helpful in the
scenarios where the data source is slow. It helps to minimize the interaction with the
data source.

This separation isn't only useful because of the performance improvement, it's also
helpful for the scenarios where an old legacy data source system has been migrated to a
new system. In those cases, only the data ingestion dataflows need to be changed. The
data transformation dataflows remain intact for this type of change.
Reuse in other tools and services
Separation of data ingestion dataflows from data transformation dataflows is helpful in
many scenarios. Another use case scenario for this pattern is when you want to use this
data in other tools and services. For this purpose, it's better to use analytical dataflows
and use your own Data Lake Storage as the storage engine. More information: Analytical
dataflows

Optimize the data ingestion dataflow


Consider optimizing the data ingestion dataflow whenever possible. As an example, if all
the data from the source isn't needed, and the data source supports query folding, then
filtering data and getting only a required subset is a good approach. To learn more
about query folding, go to Power Query query folding.
Create the data ingestion dataflows as
analytical dataflows
Consider creating your data ingestion dataflows as analytical dataflows. This especially
helps other services and applications to use this data. This also makes it easier for the
data transformation dataflows to get data from the analytical ingestion dataflow. To
learn more, go to Analytical dataflows.
Using the output of Microsoft Power
Platform dataflows from other Azure
data workloads
Article • 02/17/2023

Depending on the storage for the output of the Microsoft Power Platform dataflows,
you can use that output in other Azure services.

The benefits of working with the output of Power Platform dataflows
Using Power Platform dataflows, you can reshape, clean, and prepare data for further
analysis and consumption. There are many other Azure data services that work with data
as an input and provide actions.

Azure Machine Learning can consume the output of dataflows and use it for
machine learning scenarios (for example, predictive analysis).
Azure Data Factory can get the output of dataflows on a much larger scale,
combined with the data from big data sources, for advanced data integration
solutions.
Azure Databricks can consume the output of dataflows for applied data science
algorithms and further AI with the big data scale in the Apache Spark back end.
Other Azure data services can use the output of Power Platform dataflows to do
further actions on that data.

Dataflows with external Azure Data Lake Storage
If you've connected an external Azure Data Lake Storage account to the Power Platform
dataflows, you can connect to it using any Azure services that have Azure Data Lake
Storage as a source, such as Azure Machine Learning, Azure Data Factory, Azure
Databricks, and Azure Analysis Services.

In any of these services, use Azure Data Lake Storage as the source. You'll be able to
enter the details of your storage and connect to the data in it. The data is stored in CSV
format, and is readable through any of these tools and services. The following
screenshot shows how Azure Data Lake Storage is a source option for Azure Data
Factory.

Dataflows with Dataverse


If you're using standard dataflows that store the data in Dataverse, you can still connect
to Dataverse from many Azure services. The following image shows that in Azure Data
Factory, the output of a dataflow from Dataverse can be used as a source.

Dataflows with internal Azure Data Lake Storage
When you use the internal Data Lake storage that's provided by Power Platform
dataflows, that storage is exclusively limited to the Power Platform tools and isn't
accessible from other Azure data workloads.
Mapping fields with relationships in
standard dataflows
Article • 06/20/2023

In the standard dataflow, you can easily map fields from the dataflow query into
Dataverse tables. However, if the Dataverse table has lookup or relationship fields,
additional consideration is required to make sure this process works.

What is the relationship and why do you need that?
If you're coming from a database development background, you're familiar with the
concept of a relationship between tables. However, many users of Microsoft Power
Platform services aren't coming from that background. You might wonder what the
relationship is, or why you should create a relationship between tables.

The tables and their relationship are fundamental concepts of designing a database. To
learn everything about relationships is beyond the scope of this article. However, we'll
discuss it in a general way here.

Let's say you want to store information about customers and their details, including
region, in Dataverse. You can keep everything in one table. Your table can be called
Customers, and it can contain fields, such as CustomerID, Name, Birthdate, and Region.
Now imagine that you have another table that also has the store's information. This
table can have fields, such as Store ID, Name, and Region. As you can see, the region is
repeated in both tables. There's no single place where you can get all regions; some of
the region's data is in the Customers table, and some of it is in the Stores table. If you
ever build an application or a report from this information, you always have to combine
the two regions' information into one.
What's done in the database design practice is to create a table for Region in scenarios
like the one described above. This Region table would have a Region ID, Name, and
other information about the region. The other two tables (Customers and Stores) will
have links to this table using a field (which can be Region ID if we have the ID in both
tables, or Name if it's unique enough to determine a region). This means having a
relationship from the Stores and Customers table to the Region table.

In Dataverse, there are a number of ways to create a relationship. One way is to create a
table, and then create a field in one table that's a relationship (or lookup) to another
table, as described in the next section.

What are lookup or relationship fields?


In Dataverse, you can have a field defined as a lookup field, which points to another
table.
In the preceding image, the Region field is a lookup field to another table named Region
Lookup. To learn more about different types of relationships, go to Create a relationship
between tables.

When the field mapping doesn't show the relationship fields
If you've created a lookup field in one table that points to another table, that field might
not appear in the mapping of the dataflow. That’s because the target entity involved in
the relationship requires a Key field to be defined. This best practice would then make
sure that the field is mappable in the table mappings of the dataflow.
Setting the Key field in the table
To set the key field in the table, go to the Keys tab, and add a key to a field that has
unique values.

After setting the key field, you can see the field in the mapping of the dataflow.

Known limitations
Mapping to polymorphic lookup fields is currently not supported.
Mapping to a multi-level lookup field, a lookup that points to another table's
lookup field, is currently not supported.
Lookup fields for Standard Tables, unless they contain alternate key fields as
described in this document, won't show up in the Map Tables dialog.
Dataflows don't guarantee correct loading order when loading data to tables
configured as hierarchical data structures.
The order of query execution, or loading order to Dataverse tables isn't
guaranteed. We recommend that you separate child and parent tables into two
dataflows, and first refresh the dataflow containing child artifacts.
Field mapping considerations for
standard dataflows
Article • 06/20/2023

When loading data into Dataverse tables, you'll need to map the source query's columns
in the dataflow's editing experience to the destination Dataverse table columns. Beyond
mapping of data, there are other considerations and best practices to take into account.
In this article, we cover the different dataflow settings that control the behavior of
dataflow refresh and as a result, the data in the destination table.

Controlling whether dataflows create or upsert records each refresh
Each time you refresh a dataflow, it will fetch records from the source and load them
into Dataverse. If you run the dataflow more than once—depending on how you
configure the dataflow—you can:

Create new records each dataflow refresh, even if such records already exist in the
destination table.
Create new records if they don't already exist in the table, or update existing
records if they already exist in the table. This behavior is called upsert.

Using a key column will indicate to the dataflow to upsert records into the destination
table, while not selecting a key will always create new records in the destination table.

A key column is a column that's unique and deterministic of a data row in the table. For
example, in an Orders table, if the Order ID is a key column, you shouldn't have two
rows with the same Order ID. Also, one Order ID—let's say an order with the ID 345—
should only represent one row in the table. To choose the key column for the table in
Dataverse from the dataflow, you need to set the key field in the Map Tables experience.

Choosing a primary name and key field when creating a new table
The following image shows how you can choose the key column to be populated from
the source when you create a new table in the dataflow.
The primary name field that you see in the field mapping is for a label field; this field
doesn't need to be unique. The field that's used in the table for checking duplication will
be the field that you set in the Alternate Key field.

Having a primary key in the table ensures that even if you have duplicate data rows with
the same value in the field that's mapped to the primary key, the duplicate entries won't
be loaded into the table, and the table will always have a high quality of the data.
Having a table with a high quality of data is essential in building reporting solutions
based on the table.

The primary name field


The primary name field is a display field used in Dataverse. This field is used in default
views to show the content of the table in other applications. This field isn't the primary
key field, and shouldn't be considered as that. This field can have duplicates, because it's
a display field. The best practice, however, is to use a concatenated field to map to the
primary name field, so the name is fully explanatory.

The alternate key field is what is used as the primary key.

Choosing a key field when loading to an existing table


When mapping a dataflow query to an existing Dataverse table, you can choose if and
which key should be used when loading data into the destination table.

The following image shows how you can choose the key column to be used when
upserting records to an existing Dataverse table:


Setting a table’s Unique ID column and using it as a key
field for upserting records into existing Dataverse tables
All Microsoft Dataverse table rows have unique identifiers defined as GUIDs. These
GUIDs are the primary key for each table. By default, a table's primary key can't be set by
dataflows, and is auto-generated by Dataverse when a record is created. There are
advanced use cases where leveraging the primary key of a table is desirable, for
example, integrating data with external sources while keeping the same primary key
values in both the external table and Dataverse table.

7 Note

This capability is only available when loading data to existing tables.


The unique identifier field only accepts a string containing GUID values; any
other data type or value will cause record creation to fail.

To take advantage of a table’s unique identifier field, select Load to existing table in the
Map Tables page while authoring a dataflow. In the example shown in the image below,
we would like to load data into the CustomerTransactions table, and use the
TransactionID column from the data source as the unique identifier of the table.

You'll notice that in the Select key dropdown, you can select the table's unique
identifier, which is always named "tablename + id". Since the table name is
"CustomerTransactions", the unique identifier field will be named
"CustomerTransactionId".

Once selected, the column mapping section is updated to include the unique identifier
as a destination column. You can then map the source column representing the unique
identifier for each record.

What are good candidates for the key field


The key field is a unique value representing a unique row in the table. It's important to
have this field, because it helps you avoid having duplicate records in the table. This
field can come from three sources:

The primary key in the source system (such as OrderID in the example above).

A concatenated field created through Power Query transformations in the dataflow
(see the sketch after this list).

A combination of fields to be selected in the Alternate Key option. A combination of
fields used as a key field is also called a composite key.
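
A brief sketch of the second option, with hypothetical query and column names; the concatenated column can then be set as the alternate key in the field mapping:

```powerquery-m
// Build a concatenated key column from two source columns (names are assumptions)
let
    Source = OrdersStaging,
    WithKey = Table.AddColumn(
        Source,
        "OrderLineKey",
        each Text.From([OrderID]) & "-" & Text.From([LineNumber]),
        type text
    )
in
    WithKey
```
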
Remove rows that no longer exist
If you want to have the data in your table always synchronized with the data from the
source system, choose the Delete rows that no longer exist in the query output option.
However, be aware that this option slows down the dataflow because there's a need for
a row comparison based on the primary key (the alternate key in the field mapping of the
dataflow) for this action to occur.

Having this option checked means that if there's a data row in the table that doesn't
exist in the next dataflow refresh's query output, that row will be removed from the
table.

7 Note
Standard V2 dataflows rely on the createdon and modifiedon fields in order to
remove rows that don't exist in the dataflow's output from the destination table. If
those columns don't exist in the destination table, records aren't deleted.

Known limitations
Mapping to polymorphic lookup fields is currently not supported.
Mapping to a multi-level lookup field, a lookup that points to another tables'
lookup field, is currently not supported.
Mapping to Status and Status Reason fields is currently not supported.
Mapping data into multi-line text that includes line break characters isn't
supported, and the line breaks will be removed. Instead, you could use the line
break tag <br> to load and preserve multi-line text (see the sketch after this list).
Mapping to Choice fields configured with the multiple select option enabled is
only supported under certain conditions. The dataflow only loads data to Choice
fields with the multiple select option enabled, and a comma separated list of
values (integers) of the labels are used. For example, if the labels are "Choice1,
Choice2, Choice3" with corresponding integer values of "1, 2, 3", then the column
values should be "1,3" to select the first and last choices.
Standard V2 dataflows rely on the createdon and modifiedon fields in order to
remove rows that don't exist in the dataflow's output from the destination table. If
those columns don't exist in the destination table, records aren't deleted.
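
As a rough sketch of the multi-line text workaround mentioned above (query and column names are assumptions), you can replace line-feed characters with the <br> tag before the load:

```powerquery-m
// Replace line breaks with <br> so multi-line text survives the load
// (carriage returns, #(cr), may also need to be handled depending on the source)
let
    Source = NotesStaging,
    Converted = Table.TransformColumns(
        Source,
        {{"Description", each Text.Replace(_, "#(lf)", "<br>"), type text}}
    )
in
    Converted
```
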
Security roles and permission levels
required to create standard dataflows
Article • 10/03/2023

Dataflows are created within an environment, and standard dataflows load data to new,
existing, or standard Dataverse tables that also reside in the environment. Depending on
the scenario, a dataflow creator might need different or multiple roles to create and
refresh a dataflow successfully. This article walks you through the roles and permission
levels related to standard dataflows, and provides links to articles to learn how to
manage them.

Roles required to create standard dataflows


Some roles are needed to create dataflows, and others are required to control which
tables a dataflow creator can create or update. The following table describes each role,
along with the level of permission associated with that role.

Security role: Environment Maker
Privileges: Create dataflows
Description: Required to create any dataflow. Standard dataflows require additional roles depending on Dataverse table permissions.

Security role: Basic User
Privileges: Write to non-custom tables
Description: Has all the rights to work with non-custom tables.

Security role: System Customizer
Privileges: Create custom tables
Description: Custom tables this user creates are visible to this user only.

Steps to assign roles


The steps in the following procedure are sourced from Configure user security to
resources in an environment.

Verify that the user you want to assign a security role to is present in the environment. If
not, add the user to the environment. You can assign a security role as part of the
process of adding the user. More information: Add users to an environment

In general, a security role can only be assigned to users who are in the Enabled state.
But if you need to assign a security role to users in the Disabled state, you can do so by
enabling allowRoleAssignmentOnDisabledUsers in OrgDBOrgSettings.
To add a security role to a user who is already present in an environment:

1. Sign in to the Power Platform admin center .

2. Select Environments > [select an environment] > Settings > Users + permissions
> Users.

3. Select Manage users in Dynamics 365.

4. Select the user from the list of users in the environment, and then select Manage
roles.

5. Assign one or more security roles to the user.


6. Select OK.

Row-level security isn't supported


The current version of standard dataflows doesn't support row-level security.

If you haven't heard of row-level security before, here's a quick introduction. If you have
users with different levels of access to the same table, you can filter the data at the row
level. For example, in the Orders table, you might have a SalesTerritory column. Also, you
might want to filter the data in a way that users from California could only see records
from the Orders table that belong to California. This kind of filtering is possible through row-
level security.

Sharing or coauthoring dataflows


Currently, it's not possible for two different users to collaborate on the same dataflow.
Only the creator (owner) of the dataflow can edit it. In the event a dataflow needs to be
transferred to a different user, an environment administrator can change the owner of
the dataflow from one user to another. For security reasons, transferring ownership
removes the connection associated with the dataflow and requires the new owner to
update the credentials used by the dataflow.
Sync your Excel data source with
Dataverse using a dataflow
Article • 02/17/2023

One of the common scenarios that happens when you integrate data into Dataverse is
keeping it synchronized with the source. Using the standard dataflow, you can load data
into Dataverse. This article explains how you can keep the data synchronized with the
source system.

The importance of the key column


If you're using a relational data base system as a source, normally you have key columns
in the tables, and the data is in a proper format to be loaded into Dataverse. However,
the data from Excel files isn't always that clean. You often have an Excel file with
sheets of data without having any key column. In Field mapping considerations for
standard dataflows, you can see that if there's a key column in the source, it can be
easily used as the alternate key in the field mapping of the dataflow.

Having a key column is important for the table in Dataverse. The key column is the row
identifier; this column contains unique values in each row. Having a key column helps in
avoiding duplicate rows, and it also helps in synchronizing the data with the source
system. If a row is removed from the source system, having a key column is helpful to
find it and remove it from Dataverse as well.

Creating a key column


If you don't have a key column in your data source (Excel, text file, or any other sources),
then you can generate one using the following method:

1. Clean your data.

The first step to create the key column is to remove all unnecessary rows, clean the
data, remove empty rows, and remove any possible duplicates.
2. Add an index column.

After the data is cleaned, the next step is to assign a key column to it. You can use
Add Index Column from the Add Column tab for this purpose.
When you add the index column, you have some options to customize it, for example,
the starting number or the increment between values. The default start value is zero,
and it increments by one each time.
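
The M that these two steps produce looks roughly like the following sketch, assuming a hypothetical Customers.xlsx workbook; the data is cleaned first, and the index column is then added (here starting at 1 rather than the default of zero):

```powerquery-m
// Clean the data, then add an index column to act as the key (file and sheet names are assumptions)
let
    Source = Excel.Workbook(File.Contents("C:\Data\Customers.xlsx"), null, true),
    Sheet = Source{[Item = "Customers", Kind = "Sheet"]}[Data],
    Promoted = Table.PromoteHeaders(Sheet, [PromoteAllScalars = true]),
    // remove blank rows and duplicates before assigning the key
    NoBlanks = Table.SelectRows(Promoted,
        each not List.IsEmpty(List.RemoveMatchingItems(Record.FieldValues(_), {"", null}))),
    NoDuplicates = Table.Distinct(NoBlanks),
    Indexed = Table.AddIndexColumn(NoDuplicates, "Index", 1, 1, Int64.Type)
in
    Indexed
```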

Use the key column as the alternate key


Now that you have the key column(s), you can assign them to the Alternate Key in the
dataflow's field mapping.

The setting is simple: you just need to set the alternate key. However, if you have
multiple files or tables, there's one other step to consider.

If you have multiple files


If you have just one Excel file (or sheet or table), then the steps in the previous
procedure are enough to set the alternate key. However, if you have multiple files (or
sheets or tables) with the same structure (but with different data), then you to append
them together.

If you're getting data from multiple Excel files, then the Combine Files option of Power
Query will automatically append all the data together, and your output will look like the
following image.
As shown in the preceding image, besides the append result, Power Query also brings in
the Source.Name column, which contains the file name. The Index value in each file
might be unique, but it's not unique across multiple files. However, the combination of
the Index column and the Source.Name column is a unique combination. Choose a
composite alternate key for this scenario.
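
If you prefer a single key column instead of selecting both columns as a composite alternate key, a hedged sketch of that extra step (the combined query name is an assumption) is:

```powerquery-m
// Combine Source.Name and Index into one key column after Combine Files
let
    Source = CombinedFiles,   // the appended output produced by the Combine Files step
    WithKey = Table.AddColumn(
        Source,
        "RowKey",
        each [Source.Name] & "-" & Text.From([Index]),
        type text
    )
in
    WithKey
```
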
Delete rows that no longer exist in the query
output
The last step is to select the Delete rows that no longer exist in the query output. This
option compares the data in the Dataverse table with the data coming from the source
based on the alternate key (which might be a composite key), and removes the rows that
no longer exist. As a result, your data in Dataverse will always be synchronized with your
data source.
Add data to a table in Microsoft
Dataverse by using Power Query
Article • 02/17/2023

In this procedure, you'll create a table in Dataverse and fill that table with data from an
OData feed by using Power Query. You can use the same techniques to integrate data
from these online and on-premises sources, among others:

SQL Server
Salesforce
IBM DB2
Access
Excel
Web APIs
OData feeds
Text files

You can also filter, transform, and combine data before you load it into a new or existing
table.

If you don't have a license for Power Apps, you can sign up for free.

Prerequisites
Before you start to follow this article:

Switch to an environment in which you can create tables.


You must have a Power Apps per user plan or Power Apps per app plan.

Specify the source data


1. Sign in to Power Apps .

2. In the navigation pane, select Dataverse to expand it, and then select Tables.
3. In the command menu, select Data > Get data.

4. In the list of data sources, select OData.

5. Under Connection settings, type or paste this URL, and then select Next:

https://services.odata.org/V4/Northwind/Northwind.svc/

6. In the list of tables, select the Customers check box, and then select Next. (A rough M sketch of the query these steps build is shown after this procedure.)
7. (optional) Modify the schema to suit your needs by choosing which columns to
include, transforming the table in one or more ways, adding an index or
conditional column, or making other changes.

8. In the lower-right corner, select Next.
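
For reference, the query that these steps build is roughly equivalent to the following M sketch (the navigation step may differ slightly from what the wizard generates):

```powerquery-m
// Connect to the Northwind OData feed and select the Customers table
let
    Source = OData.Feed("https://services.odata.org/V4/Northwind/Northwind.svc/"),
    Customers = Source{[Name = "Customers", Signature = "table"]}[Data]
in
    Customers
```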

Specify the target table


1. Under Load settings, select Load to new table.
You can give the new table a different name or display name, but leave the default
values to follow this tutorial exactly.

2. In the Unique primary name column list, select ContactName, and then select
Next.

You can specify a different primary-name column, map a different column in the
source table to each column in the table that you're creating, or both. You can also
specify whether Text columns in your query output should be created as either
Multiline Text or Single-Line Text in Dataverse. To follow this tutorial exactly,
leave the default column mapping.

3. Select Refresh manually for Power Query - Refresh Settings, and then select
Publish.

4. Under Dataverse (near the left edge), select Tables to show the list of tables in
your database.

The Customers table that you created from an OData feed appears as a custom
table.

2 Warning

Existing data might be altered or deleted when loading data to a Dataverse table
while the Delete rows that no longer exist in the query output option is enabled or
a primary key column is defined.

If you select Load to existing table, you can specify a table into which you add data
from the Customers table. You could, for example, add the data to the Account table
that ships with Dataverse. Under Column mapping, you can further specify that
data in the ContactName column from the Customers table should be added to the
Name column in the Account table.
If an error message about permissions appears, contact your administrator.
How Microsoft Power Platform
dataflows and Azure Data Factory
wrangling dataflows relate to each other
Article • 02/17/2023

Microsoft Power Platform dataflows and Azure Data Factory dataflows are often
considered to be doing the same thing: extracting data from source systems,
transforming the data, and loading the transformed data into a destination. However,
there are differences in these two types of dataflows, and you can have a solution
implemented that works with a combination of these technologies. This article describes
this relationship in more detail.

Power Platform dataflows


Power Platform dataflows are data transformation services empowered by the Power
Query engine and hosted in the cloud. These dataflows get data from different data
sources and, after applying transformations, store it either in Dataverse or in Azure Data
Lake Storage.

Data Factory wrangling dataflows


Data Factory is a cloud-based extract, transform, load (ETL) service that supports many
different sources and destinations. There are two types of dataflows under this
technology: mapping dataflows and wrangling dataflows. Wrangling dataflows are
empowered by the Power Query engine for data transformation.

What do they have in common?


Power Platform dataflows and Data Factory wrangling dataflows are both useful for
getting data from one or more sources, applying transformations on the data by using
Power Query, and loading the transformed data into destinations. In addition:

Both are empowered by using Power Query data transformation.


Both are cloud-based technologies.

What's the difference?


The main point is knowing their differences, because then you can think about scenarios
where you'd want to use one or the other.

Feature: Destinations
Power Platform dataflows: Dataverse or Azure Data Lake Storage
Data Factory wrangling dataflows: Many destinations (go to the list here)

Feature: Power Query transformation
Power Platform dataflows: All Power Query functions are supported
Data Factory wrangling dataflows: A limited set of functions are supported (go to the list here)

Feature: Sources
Power Platform dataflows: Many sources are supported
Data Factory wrangling dataflows: Only a few sources (go to the list here)

Feature: Scalability
Power Platform dataflows: Depends on the Premium capacity and the use of the enhanced compute engine
Data Factory wrangling dataflows: Highly scalable

Which user persona is suited to which type of dataflow?
If you're a citizen application developer or citizen data analyst with small-scale to
medium-scale data to be integrated and transformed, you'll find the Power Platform
dataflows more convenient. The large number of transformations available, the ability to
work with them without having developer knowledge, and the fact that dataflows can be
authored, monitored, and edited in Power BI or Power Platform—all are reasons that
make Power Platform dataflows a great data integration solution for this type of
developer.

If you're a data developer who's dealing with big data and huge datasets, with a large
number of rows to be ingested every time, you'll find the Data Factory wrangling
dataflows a better tool for the job. Wrangling dataflows translate the M code generated by the
Power Query Online mashup editor into Spark code for cloud-scale execution. Working
with the Azure portal to author, monitor, and edit wrangling dataflows requires a higher
developer learning curve than the experience in Power Platform dataflows. Wrangling
dataflows are best suited for this type of audience.
Power Automate templates for the
dataflows connector
Article • 07/30/2022

The dataflows Power Automate connector can:

Trigger a flow when a dataflow refresh completes.


Take action to start a dataflow refresh.

This section discusses some use cases, with tutorials to help you quickly get started with
this connector:

Send notifications:

When a dataflow refresh status changes, send an email notification.


When a dataflow succeeds or fails, send an email notification.
When a dataflow refresh status changes, send a Teams notification.

Open support tickets:

When a dataflow refresh fails, send a message to Azure Service Bus queue to open
a support ticket.

Refresh dataflows and datasets sequentially:

When you select a button, start a dataflow refresh.


When an analytical dataflow refresh succeeds, start a standard dataflow refresh.
When a dataflow refresh succeeds, start a Power BI dataset refresh.
When a file in SharePoint is updated, start a dataflow refresh.

Save dataflow refresh metadata and build a dataflows monitoring dashboard:

When a dataflow refresh completes, save metadata to a Dataverse table


When a dataflow refresh completes, save metadata to Excel online
When a dataflow refresh completes, save metadata to a Power BI streaming
dataset
Send a notification when a dataflow
refresh completes
Article • 02/17/2023

When your dataflow refresh completes, you or others who manage or depend on the
dataflow might want to receive a notification to alert you of the dataflow refresh status.
This way, you know your data is up to date and you can start getting new insights.
Another common scenario addressed by this tutorial is notification after a dataflow fails.
A notification allows you to start investigating the problem and alert people that
depend on the data being successfully refreshed.

To set up a Power Automate notification that will be sent when a dataflow fails:

1. Navigate to Power Automate .

2. Select Create > Automated cloud flow.

3. Enter a flow name, and then search for the "When a dataflow refresh completes"
connector. Select this connector from the list, and then select Create.

4. Customize the connector. Enter the following information on your dataflow:

Group Type: Select Environment when connecting to Power Apps and


Workspace when connecting to Power BI.
Group: Select the Power Apps environment or the Power BI workspace your
dataflow is in.
Dataflow: Select your dataflow by name.

5. Select New step to add an action to your flow.

6. Search for the Condition connector, and then select it.

7. Customize the Condition connector. Enter the following information:


a. In the first cell, add Refresh Status from the dataflow connector.
b. Leave the second cell as is equal to.
c. In the third cell, enter False.

8. In the If Yes section, select Add an action.

9. Search for the "Send an email notification (V3)" connector, and then select it.

10. Enter the email address and subject information.

11. Inside the body of the email, select the field next to Body and use Dynamic
content to add dataflow information to the content of your email.


Open a ticket when a dataflow refresh
fails
Article • 02/17/2023

When your dataflow refresh fails or has been taking longer than expected, you
might want your support team to investigate. With this tutorial, you can automatically
open a support ticket, create a message in a queue or Service Bus, or add an item to
Azure DevOps to notify your support team.

In this tutorial, we make use of Azure Service Bus. For instructions on how to set up an
Azure Service Bus and create a queue, go to Use Azure portal to create a Service Bus
namespace and a queue.

To automatically create a queue in Azure Service Bus:

1. Navigate to Power Automate .

2. Select Create > Automated cloud flow.

3. Enter a flow name, and then search for the "When a dataflow refresh completes"
connector. Select this connector from the list, and then select Create.

4. Customize the connector. Enter the following information on your dataflow:

Group Type: Select Environment when connecting to Power Apps and


Workspace when connecting to Power BI.
Group: Select the Power Apps environment or the Power BI workspace your
dataflow is in.
Dataflow: Select your dataflow by name.

5. Select New step to add an action to your flow.

6. Search for the Condition connector, and then select it.

7. Customize the Condition connector. Enter the following information:


a. In the first cell, add Refresh Status from the dataflow connector.
b. Leave the second cell as is equal to.
c. In the third cell, enter False.

8. In the If Yes section, select Add an action.

9. Search for the "Send message" connector from Service Bus, and then select it.

10. Enter a Connection name for this message. In Connection string, enter the
connection string that was generated when you created the Service Bus
namespace. Then select Create.

11. Add dataflow information to the content of your message by selecting the field
next to Content, and then select the dynamic content you want to use from
Dynamic content.


Trigger dataflows and Power BI datasets
sequentially
Article • 02/17/2023

There are two common scenarios for how you can use this connector to trigger multiple
dataflows and Power BI datasets sequentially.

Trigger the refresh of a standard dataflow after the successful completion of an analytical dataflow refresh.

If a single dataflow does every action, then it's hard to reuse its entities in other
dataflows or for other purposes. The best dataflows to reuse are dataflows doing
only a few actions, specializing in one specific task. If you have a set of dataflows
as staging dataflows, and their only action is to extract data "as is" from the source
system, these dataflows can be reused in multiple other dataflows. More
information: Best practices for reusing dataflows across environments and
workspaces

Trigger the refresh of a Power BI dataset when a dataflow refresh completes successfully.

If you want to ensure that your dashboard is up to date after a dataflow refreshes
your data, you can use the connector to trigger the refresh of a Power BI dataset
after your dataflow refreshes successfully.

This tutorial covers the first scenario.

To trigger dataflows sequentially:

1. Navigate to Power Automate .

2. Select Create > Automated cloud flow.

3. Enter a flow name, and then search for the "When a dataflow refresh completes"
connector. Select this connector from the list, and then select Create.

4. Customize the connector. Enter the following information on your dataflow:

Group Type: Select Environment when connecting to Power Apps and Workspace when connecting to Power BI.
Group: Select the Power Apps environment or the Power BI workspace your dataflow is in.
Dataflow: Select your dataflow by name.
5. Select New step to add an action to your flow.

6. Search for the Condition connector, and then select it.

7. Customize the Condition connector. Enter the following information:


a. In the first cell, add Refresh Status from the dataflow connector.
b. Leave the second cell as is equal to.
c. In the third cell, enter Success.

8. In the If Yes section, select Add an action.

9. Search for the "Refresh a dataflow" connector, and then select it.

10. Customize the connector:

Group Type: Select Environment when connecting to Power Apps and Workspace when connecting to Power BI.
Group: Select the Power Apps environment or the Power BI workspace your dataflow is in.
Dataflow: Select your dataflow by name.
Load data in a Dataverse table and build
a dataflows monitoring report with
Power BI
Article • 02/17/2023

This tutorial demonstrates how to load data in a Dataverse table to create a dataflows
monitoring report in Power BI.

You can use this dashboard to monitor your dataflows' refresh duration and failure
count. With this dashboard, you can track any issues with your dataflows performance
and share the data with others.

First, you'll create a new Dataverse table that stores all the metadata from the dataflow
run. For every refresh of a dataflow, a record is added to this table. You can also store
metadata for multiple dataflow runs in the same table. After the table is created, you'll
connect the Power BI file to the Dataverse table.
Prerequisites
Power BI Desktop .

A Dataverse environment with permissions to create new custom tables.

A Premium Power Automate License.

A Power BI dataflow or Power Platform dataflow.

Download the .pbit file


First, download the Dataverse .pbit file .

Create a new table in Dataverse


1. Navigate to the Power Apps portal .

2. On the left navigation pane expand Data, select Tables, and then select New table.

3. In the New table pane:


a. Enter Dataflows Monitoring in Display name.
b. Under Primary Name Column, enter Dataflow name in Display name.
c. Select Create.

4. Select Add column to repeat adding columns for the following values:

Display name: "Dataflow Id", Data type: Text, Required: Required.


Display name: "Refresh Status", Data type: Text, Required: Required.
Display name: "Refresh Type", Data type: Text, Required: Required.
Display name: "Start Time", Data type: Date and Time, Required: Required.
Display name: "End Time", Data type: Date and Time, Required: Required.

Create a dataflow
If you don't already have one, create a dataflow. You can create a dataflow in either
Power BI dataflows or Power Apps dataflows.

Create a Power Automate flow


1. Navigate to Power Automate .

2. Select Create > Automated cloud flow.

3. Enter a flow name, and then search for the "When a dataflow refresh completes"
connector. Select this connector from the list, and then select Create.

4. Customize the connector. Enter the following information on your dataflow:

Group Type: Select Environment when connecting to Power Apps and Workspace when connecting to Power BI.
Group: Select the Power Apps environment or the Power BI workspace your dataflow is in.
Dataflow: Select your dataflow by name.
5. Select New step to add an action to your flow.

6. Search for the "Add a new row" connector from Dataverse, and then select it.

7. In Add a new row, select Choose a table and then choose Dataflows Monitoring
from the list.

8. For every required field, you need to add a dynamic value. This value is the output
of the metadata of the dataflow that's run.

a. Select the field next to Dataflow Name and then select Dataflow Name from
the dynamic content.

b. Repeat this process for all required fields.

9. Save the flow.

Create a Power BI Report


1. Open the .pbit file.

2. Connect to your Dataverse entity Dataflows Monitoring.

In this dashboard, for every dataflow in your specified time interval, you can monitor:
The dataflow duration
The dataflow count
The dataflow failure count

The unique ID for every dataflow is generated by a merge between the dataflow name
and the dataflow start time.
Load data in Excel Online and build a
dataflows monitoring report with Power
BI
Article • 02/17/2023

This tutorial demonstrates how to use an Excel file and the dataflows connector in Power
Automate to create a dataflows monitoring report in Power BI.

First, you'll download the Excel file and save it in OneDrive for Business or SharePoint.
Next, you'll create a Power Automate connector that loads metadata from your dataflow
to the Excel file in OneDrive for Business or SharePoint. Lastly, you'll connect a Power BI
file to the Excel file to visualize the metadata and start monitoring the dataflows.

You can use this dashboard to monitor your dataflows' refresh duration and failure
count. With this dashboard, you can track any issues with your dataflows performance
and share the data with others.
Prerequisites
Microsoft Excel

Power BI Desktop .

A Premium Power Automate License

OneDrive for Business .

A Power BI dataflow or Power Platform dataflow.

Download the .pbit file


First, download the .pbit file .

Download the Excel file and save to OneDrive


Next, download the .xlsx file and save the file to a location on OneDrive for Business
or SharePoint.

Create a dataflow
If you don't already have one, create a dataflow. You can create a dataflow in either
Power BI dataflows or Power Apps dataflows.

Create a flow in Power Automate


1. Navigate to Power Automate .

2. Select Create > Automated cloud flow.

3. Enter a flow name, and then search for the "When a dataflow refresh completes"
connector. Select this connector from the list, and then select Create.

4. Customize the connector. Enter the following information on your dataflow:

Group Type: Select Environment when connecting to Power Apps and Workspace when connecting to Power BI.
Group: Select the Power Apps environment or the Power BI workspace your dataflow is in.
Dataflow: Select your dataflow by name.
5. Select New step to add an action to your flow.

6. Search for the "Add a row into a table" connector from Excel Online (Business), and
then select it.

7. Customize the connector. Enter the Location of the Excel file and the specific Table
the data loads to.

Location: Select the location of the Excel file on OneDrive for Business or
SharePoint.
Document Library: Select the library of the Excel file.
File: Select the file path to the Excel file.
Table: Select "Dataflow_monitoring".

8. Add dynamic values to the required fields.

For every required field, you need to add a dynamic value. This value is the output
of the metadata of the dataflow run.

a. Select the field next to dataflowname_name.

b. Select Dataflow Name from the Dynamic content context box.


c. Repeat this process for all required fields.

9. Save the flow.

Create a Power BI Report


1. Open the .pbit file.

2. Connect to your Excel file.

In this dashboard, for every dataflow in your specified time interval, you can monitor:

The dataflow duration
The dataflow count
The dataflow failure count

The unique ID for every dataflow is generated by a merge between the dataflow name
and the dataflow start time.
Load data in a Power BI streaming
dataset and build a dataflows
monitoring report with Power BI
Article • 02/17/2023

This tutorial demonstrates how to load data in a Power BI streaming dataset to create a
dataflows monitoring report in Power BI.

First, you'll create a new streaming dataset in Power BI. This dataset collects all the
metadata from the dataflow run, and for every refresh of a dataflow, a record is added
to this dataset. You can also load the metadata from multiple dataflows into the same dataset. Lastly, you can
build a Power BI report on the data to visualize the metadata and start monitoring the
dataflows.

You can use this dashboard to monitor your dataflows' refresh duration and failure
count. With this dashboard, you can track any issues with your dataflows performance
and share the data with others.
Prerequisites
A Power BI Pro License.

A Premium Power Automate License

A Power BI dataflow or Power Platform dataflow.

Create a new streaming dataset in Power BI


1. Navigate to Power BI .

2. Open a workspace.

3. From the workspace, select New > Streaming dataset.


4. From New streaming dataset, select the API tile, and then select Next.
5. In the new pane, turn Historic data analysis on.

6. Enter the following values, and then select Create.

Dataset Name: "Dataflow Monitoring".


Value: "Dataflow Name", Data type: Text.
Value: "Dataflow ID", Data type: Text.
Value: "Refresh Status", Data type: Text.
Value: "Refresh Type", Data type: Text.
Value: "Start Time", Data type: Date and Time.
Value: "End Time", Data type: Date and Time.
Create a dataflow
If you do not already have one, create a dataflow. You can create a dataflow in either
Power BI dataflows or Power Apps dataflows.

Create a flow in Power Automate


1. Navigate to Power Automate .

2. Select Create > Automated cloud flow.

3. Enter a flow name, and then search for the "When a dataflow refresh completes"
connector. Select this connector from the list, and then select Create.

4. Customize the connector. Enter the following information on your dataflow:


Group Type: Select Environment when connecting to Power Apps and Workspace when connecting to Power BI.
Group: Select the Power Apps environment or the Power BI workspace your dataflow is in.
Dataflow: Select your dataflow by name.

5. Select New step to add an action to your flow.

6. Search for the connector "Add rows to a dataset" from Power BI, and then select it.

7. Customize the connector. Enter the following information:

Workspace ID: Select the Power BI workspace that contains your streaming
dataset.
Dataset: Select the streaming dataset Dataflow Monitoring that you
previously created in Create a new streaming dataset in Power BI.
Table: Select RealTimeData.

8. Add dynamic values to the required fields.

For every required field, you need to add a dynamic value. This value is the output
of the metadata of the dataflow run.

a. Select the field next to Dataflow Name and then select the lightning button.

b. Select Dataflow Name from the Dynamic content context box.


c. Repeat this process for all required fields.

9. Save the flow.

Create a Power BI Report


1. Navigate to Power BI .

2. Navigate to the streaming dataset (in this example, in the DocTestWorkspace
workspace, from the Dataflow Monitoring dataset, select Create Report).

3. Create your own report on top of this data.


Automatically retry a dataflow on failure
Article • 02/17/2023

In the scenario where you want to automatically retry a dataflow when the refresh fails,
the Power Automate Connector is probably the way to go. In this tutorial, we'll guide
you step by step in setting up your Power Automate flow.

To automatically retry a dataflow on failure:

1. Navigate to Power Automate .

2. Select Create > Automated cloud flow.

3. Enter a flow name, and then search for the When a dataflow refresh completes
connector. Select this connector from the list, and then select Create.

4. Customize the connector. Enter the following information on your dataflow:


a. Group Type: Select Environment if you're connecting to Power Apps and
Workspace if you're connecting to Power BI.
b. Group: Select the Power Apps environment or the Power BI workspace your
dataflow is in.
c. Dataflow: Select your dataflow by name.

5. Select New step to add an action to your flow.

6. Search for the Condition connector, and then select it.

7. Customize the Condition connector. Enter the following information:


a. In the first cell, add Refresh Status from the dataflow connector.
b. Leave the second cell as is equal to.
c. In the third cell, enter Failed.

8. In the If Yes section, select Add an action.

9. Search for the Refresh a dataflow connector, and then select it.

10. Customize the connector:


a. Group Type: Select Environment if you're connecting to Power Apps and
Workspace if you're connecting to Power BI.
b. Group: Select the Power Apps environment or the Power BI workspace your
dataflow is in.
c. Dataflow: Select the same dataflow as chosen in step 4.
Monitor your dataflow refreshes with
Power BI
Article • 02/17/2023

When working with any kind of dataflow other than Power BI dataflows, you can
monitor dataflow refreshes using Power BI. This article includes step by step
instructions on how to set up your own dashboard to share with everyone on your team.
This dashboard provides insights into the success rate of refreshes, duration, and much
more.

Set up your dashboard


To set up your monitoring dashboard for dataflow refresh history:

1. Navigate to Power Apps .

2. Select the environment you want to monitor.

3. Open Settings > Session Details.

4. Copy the Instance url, which should look something like contoso.crm.dynamics.com.

5. Download the Power BI Template .

6. Open the template file with Power BI Desktop and provide your instance URL.
7. Select Load.

8. If this is the first time you've used this dashboard, you might need to enter your
credentials to sign in.

9. Inside the dashboard, you'll find two tabs with information about errors, duration,
and the count of rows that were inserted, upserted, or failed:

Dataflow monitoring
Table monitoring

10. From this point on, you can change the dashboard however you like and publish it
to a workspace of your choice.

Where to find the Dataverse tables


The new tables are available in your environment right away. To find the tables in your
environment, do the following steps:

1. Navigate to Power Apps .

2. Open the Dataverse tables overview.


3. Navigate to the managed tables section.
4. Scroll down to the dataflow refresh history tables.

These tables will store history for at least the last 50 refreshes. Refresh history records
older than 90 days may be removed by the system. To use these tables, we suggest that
you use Power BI to get data through the Dataverse connector. We also suggest that
you extract this data into a self-managed table if you would like to do analysis over a
longer period of time.

Power BI dataflows monitoring


When working with Power BI dataflows, you might want to check out the following
pages for setting up your monitoring solution:

Use Dataverse to build a dataflows monitoring report


Use a Power BI dataset to build a dataflows monitoring report
Use Excel to build a dataflows monitoring report

Known issues
In some cases when you try to connect to the Dataverse tables manually through Power
BI, the tables might appear to be empty. To solve this issue, just refresh the preview and
you should be good to go.
Troubleshoot dataflow issues: Creating
dataflows
Article • 03/14/2023

This article explains some of the most common errors and issues you might get when
you want to create a dataflow, and how to fix them.

I can't create a dataflow in My workspace


This problem happens when you try to create a dataflow in My workspace in Power BI.

Reason:

Creating dataflows in My workspace isn't supported.

Resolution:

Create your dataflows in organizational workspaces. To learn how to create an
organizational workspace, go to Create the new workspaces in Power BI.
I can't create a dataflow in an organizational
workspace where I have read-only rights
If you're a member of an organization workspace and you still can't create a dataflow, it
might be because of your access rights in that workspace.

Reason:

You don't have edit rights in the workspace.

Resolution:

Ask the workspace administrators or members to give you an Admin, Member, or Contributor role.

I can't create a dataflow in a workspace where I have edit rights
This problem happens when you're in an organizational workspace that you've created,
or someone else has created and you have Admin, Member, or Contributor access. You
want to create a dataflow in this scenario, but you can't.

Reason:

The access to dataflow creation was disabled by the Power BI administrator.

Resolution:

Ask the Power BI tenant administrator to enable access for you by following these steps:

1. On the Settings menu in the Power BI service, select Admin portal.

2. On the left pane, select Tenant settings, and in the Dataflow settings section, turn
on the toggle for Enabled. Then select Apply.
I only see limited options when I create a
dataflow
When creating a dataflow, sometimes you don't see all the options that are available.
For example, you might see only the options shown in the following image.
However, more options are actually available, as shown in the following image.
Reason:

You're creating the dataflow in an old version of the Power BI workspace, called V1.

Resolution:

Upgrade your Power BI workspace to the new version (v2). More information: Upgrade
classic workspaces to the new workspaces in Power BI
Dataflow name exists already, but I deleted the
old one
This problem happens when you try to create a dataflow with a name that already exists
or use the name of a recently deleted dataflow.

Reason:

It can take up to 48 hours for the backend systems to delete all the files and references
to the deleted dataflow.

Resolution:

You can either wait 48 hours before publishing the dataflow or create it now under a
different name and rename it later on.
Troubleshooting dataflow issues: Get
data from a dataflow
Article • 02/17/2023

You might have created a dataflow but then had difficulty getting data from it (either by
using Power Query in Power BI Desktop or from other dataflows). This article explains
some of the most common problems with getting data from a dataflow.

Error: This table is empty


Let's assume that you're getting data from a dataflow (either in Power BI Desktop or in
another dataflow), and you have access to that dataflow. Sometimes, however, when
you get data from a dataflow in the above situation, you get a message in the Navigator
window saying "This table is empty."

Reason:

The data wasn't loaded into the table.

Resolution:

In the desktop tools, such as Power Query in Excel and Power Query in Power BI
Desktop, the loading of data into tables happens automatically (unless you disable it).
This behavior is a bit different in Power Query in dataflows. In dataflow entities, the data
won't be loaded unless you refresh the data.

You have to set up a scheduled refresh for a dataflow, or—if you want to just have a
single refresh—use the manual refresh option.
After a dataflow is refreshed, the data in entities will be visible in the Navigator window
of other tools and services.

More information: Refreshing a dataflow in Power BI and Set the refresh frequency in
Power Apps

My Microsoft Power Platform dataflow isn't listed
Sometimes, you have a Microsoft Power Platform dataflow you created and also
refreshed, but you still can't access it through the Get data command. This might be
because the account that's trying to access the dataflow doesn't have access. However,
if the account does have access to the dataflow, another reason might be the type of
dataflow you're accessing.

You might receive the error message "We reached the end of the buffer" or
"DataFormat.Error: We reached the end of the buffer".
Reason:

Only analytical dataflows can be used in a Get data operation from a dataflow.

Resolution:

If you've created a dataflow that stores data in Dataverse—that is, a standard dataflow—
you can't see it by using the Get data operation from a dataflow. However, you can use
Get data from Dataverse to access it. Or you can create an analytical dataflow instead,
and then access it by using Get data from a dataflow.
I can't make a DirectQuery connection to the
dataflow
If you intend to use the dataflow as a DirectQuery source, you might need to enable it
first.

Reason:

The enhanced compute engine settings are disabled.

Resolution:

Enable the enhanced compute engine, and then you'll have the option to connect to the
dataflow by using DirectQuery.
Troubleshooting dataflow issues:
Connection to the data source
Article • 08/11/2023

When you create a dataflow, sometimes you get an error connecting to the data source.
This error can be caused by the gateway, credentials, or other reasons. This article
explains the most common connection errors and problems, and their resolution.

Error: An on-premises data gateway is required to connect
This problem can happen when you move a query from Power Query in desktop tools to
Power Query in the dataflow, and you get the error "An on-premises data gateway is
required to connect."

Reason:

When your table in the dataflow gets data from an on-premises data source, a gateway
is needed for the connection, but the gateway hasn't been selected.

Resolution:

Select Select gateway. If the gateway hasn't been set up yet, go to Install an on-
premises data gateway.
Error: Please specify how to connect
This problem happens when you're connected to a data source, but haven't set up the
credentials or connection details yet. It can happen when you migrate queries into a
dataflow.

Reason:

The connection details aren't set up correctly.

Resolution:

Select Configure connection. Set up the connection details and credentials.

Expression.Error: The module named 'xyz' has been disabled in this context
Sometimes, when you migrate your queries from Power Query in desktop tools to the
dataflow, you get an error saying that a module is disabled in this context. One example
of this situation is when your query uses functions such as Web.Page or
Web.BrowserContents .

Reason:

Disabled modules are related to functions that require an on-premises data gateway
connection to work. Even if the function is getting data from a webpage, because of
some security compliance requirements, it needs to go through a gateway connection.

Resolution:
First, install and set up an on-premises gateway. Then add a web data source for the
web URL you're connecting to.

After adding the web data source, you can select the gateway in the dataflow from
Options > Project options.
You might be asked to set up credentials. When you've set up the gateway and your
credentials successfully, the modules will no longer be disabled.

Deleted or old data sources still show up


Sometimes when you delete a data source from your dataflow, it still shows up on your
credentials overview or lineage overview. This doesn't impact the refresh or authoring of
your dataflow.
Reason:

A dataflow maintains its association with deleted dataflow data sources and doesn't
delete them automatically. This requires a trim initiated by the user.

Resolution:

In order to trim the data sources, you'll need to take the following steps:

1. Open your dataflow.

2. Select Options.

3. Select Project options.

4. Change the gateway to another gateway. It doesn't matter which one, as long as
it's a different gateway.
5. After you apply the change by selecting OK, repeat steps 1 through 4 to select the
original gateway again.

These steps essentially delete all the data source bindings for the dataflow. After
finishing these steps, you might be asked to set up credentials. When you've set up the
gateway and your credentials successfully, you effectively "trimmed" the data source
bindings for the dataflow to just the ones that the dataflow is actually using.

7 Note

For dataflows with cloud data sources without an on-premises data gateway, when
the data source name changes, an old data source connection still exists. To remove
this connection, in the Power BI service, select the settings icon, go to Manage
Connections and Gateways > Connections, select the ellipsis menu by the
connection name, then select Remove to remove the old/unused data source
connection. Then go to the Home page, select the workspace, select the dataflow
to open it, and then in the Table name page, select Edit tables. Then in the Power
Query editor, select Save & Close for the dataflows to be updated to the current
data source connection and to remove the older connection.
Power Query template (preview)
Article • 06/20/2023

A Power Query template is a file that encompasses the Power Query script and its
associated metadata for a single Power Query project.

It's meant to be used as a simple means to package your entire Power Query project in a
single file. You can then share the file or import it to other Power Query integrations.

7 Note

A Power Query project can be defined as a single instance of the Power Query
editor. This instance could contain any number of parameters, queries, and specific
settings, such as the locale that should be used to interpret data types.

) Important

Concepts that are outside of the Power Query script or its metadata are outside of
the scope of what to find in a Power Query template. Some examples are dataflow
related concepts such as scheduled refresh definition, refresh history, dataflow IDs,
connections referenced, and other concepts that aren't stored in the Power Query
script or the metadata of a Power Query project.

Export a template

7 Note

The following guide applies to all Power Query Online hosts.

For Power Query in Excel for Windows, follow the guide in exporting queries in
Excel to a Power Query template.

You can find the Export template button inside of the Home tab of the Power Query
ribbon, under the Share group.

When you select this button, the Export template dialog appears. In this dialog, you can
enter the name of the template and the description that is used for the template.

This operation downloads a file with the .pqt extension, which is your Power Query
template file.
Import a template

7 Note

This functionality is only available inside of the Dataflows Gen2 experience for
Microsoft Fabric.

The option to import from a template is only available in Power Query projects with no
queries or parameters set. In the home screen of the Power Query editor, there's a link
that reads Import from Power Query template.

Selecting the Import from Power Query template button triggers an experience to help
you upload your Power Query template and immediately load the project with all its
queries, parameters, and settings.

) Important

Connections and/or credential definitions aren't stored inside of a Power Query
template file. These items need to be set after importing the template.
Keyboard shortcuts in Power Query
Article • 03/21/2023

Keyboard shortcuts provide a quick way to navigate and allow users to work more
efficiently. For users with mobility or vision disabilities, keyboard shortcuts can be easier
than using the touchscreen, and are an essential alternative to using the mouse. The
table in this article lists all the shortcuts available in Power Query Online.

When using the Query Editor in Power Query Online, you can navigate to the Keyboard
shortcuts button in the Help tab to view the list of keyboard shortcuts.

7 Note

These shortcuts have been tested with Microsoft Edge on Windows and MacOS.
While we try to provide support for all browsers, other browsers can make or
implement their own shortcuts. Because we can't control how other browsers
behave, we can't guarantee that all the shortcuts in this list will work on all
browsers.

Query Editor
| Action | Windows keyboard shortcut | MacOS keyboard shortcut |
| --- | --- | --- |
| Get Data | Ctrl+Alt+D | Control+Option+D |
| Enter Data | Ctrl+Alt+T | Command+Option+T |
| Add custom column | Ctrl+Alt+C | Command+Option+C |
| Choose column | Ctrl+K | Command+K |
| Go to column | Ctrl+G | Command+G |
| Add column from examples | Ctrl+E | Command+E |
| Blank query | Ctrl+M | Fn+Command+M |
| Advanced editor | Ctrl+Shift+M | Command+Shift+M |
| Refresh | Alt+F5 | Option+F5 |
| Search | Alt+Q | Option+Q |


Data Preview
| Action | Windows keyboard shortcut | MacOS keyboard shortcut |
| --- | --- | --- |
| Copy cells/rows/columns | Ctrl+C | Command+C |
| Select all cells | Ctrl+A | Command+A |
| Select column | Ctrl+Space | Control+Space |

When the focus is on the column header

| Action | Windows keyboard shortcut | MacOS keyboard shortcut |
| --- | --- | --- |
| Filter menu | Alt+Down arrow key | Option+Down arrow key |
| Change column type menu | Ctrl+Down arrow key | Command+Down arrow key |
| Move focus to column header on the left | Ctrl+Left arrow key | Command+Left arrow key |
| Move focus to column header on the right | Ctrl+Right arrow key | Command+Right arrow key |
| Select first column | Home | Fn+Left arrow key |
| Select last column | End | Fn+Right arrow key |

When the focus is on the cell

| Action | Windows keyboard shortcut | MacOS keyboard shortcut |
| --- | --- | --- |
| Select first cell of the row | Home | Fn+Left arrow key |
| Select last cell of the row | End | Fn+Right arrow key |
| Select first cell of the column | Alt+Home | Fn+Option+Left arrow key |
| Select last cell of the column | Alt+End | Fn+Option+Right arrow key |
| Select first cell of the first row | Ctrl+Home | Fn+Command+Left arrow key |
| Select last cell of the last row | Ctrl+End | Fn+Command+Right arrow key |
| Select the cell one page up | Page up | Fn+Up arrow key |
| Select the cell one page down | Page down | Fn+Down arrow key |

Editable grid (Enter data/Blank table)

| Action | Windows keyboard shortcut | MacOS keyboard shortcut |
| --- | --- | --- |
| Copy cells/rows/columns | Ctrl+C | Command+C |
| Paste cells/rows/columns | Ctrl+V | Command+V |
| Save entered data | Ctrl+Enter | Command+Return |
| Show suggestions (when available) | Ctrl+Space | Ctrl+Space |

Multi-line text editor (Blank query/Advanced editor)

| Action | Windows keyboard shortcut | MacOS keyboard shortcut |
| --- | --- | --- |
| Toggle tab behavior | Ctrl+M | Control+Shift+M |

Diagram View

| Action | Windows keyboard shortcut | MacOS keyboard shortcut |
| --- | --- | --- |
| Expand selected query | Ctrl+Right arrow key | Command+Right arrow key |
| Collapse selected query | Ctrl+Left arrow key | Command+Left arrow key |
| Move focus from query level to step level | Alt+Down arrow key | Option+Down arrow key |
| Move focus from step level to query level | Esc | Esc |
| Expand all queries | Ctrl+Shift+Right arrow key | Command+Shift+Right arrow key |
| Collapse all queries | Ctrl+Shift+Left arrow key | Command+Shift+Left arrow key |
| Insert new step using + button (after selected step) | Ctrl+Alt+N | Command+Options+N |
| Highlight related queries | Ctrl+Alt+R | Command+Options+R |
| Select all queries | Ctrl+A | Command+A |
| Copy queries | Ctrl+C | Command+C |
| Paste queries | Ctrl+V | Command+V |

Queries pane

| Action | Windows keyboard shortcut | MacOS keyboard shortcut |
| --- | --- | --- |
| Select all queries | Ctrl+A | Command+A |
| Copy queries | Ctrl+C | Command+C |
| Paste queries | Ctrl+V | Command+V |
| Select multiple consecutive queries | Ctrl+Up arrow key and Ctrl+Down arrow key | Command+Up arrow key and Command+Down arrow key |
Best practices when working with Power
Query
Article • 04/10/2023

This article contains some tips and tricks to make the most out of your data wrangling
experience in Power Query.

Choose the right connector


Power Query offers a vast number of data connectors. These connectors range from
data sources such as TXT, CSV, and Excel files, to databases such as Microsoft SQL
Server, and popular SaaS services such as Microsoft Dynamics 365 and Salesforce. If you
don't see your data source listed in the Get Data window, you can always use the ODBC
or OLEDB connector to connect to your data source.

Using the best connector for the task will provide you with the best experience and
performance. For example, using the SQL Server connector instead of the ODBC
connector when connecting to a SQL Server database not only provides you with a
much better Get Data experience, but the SQL Server connector also offers you features
that can improve your experience and performance, such as query folding. To read more
about query folding, go to Power Query query folding.
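As a rough sketch of this choice (the server, database, and DSN names below are placeholders), the same database could be reached through either function, but the dedicated connector provides the richer Get Data experience and folding behavior described above:

Power Query M

let
    // Preferred: the dedicated SQL Server connector, which supports query folding
    SqlSource = Sql.Database("myserver.database.windows.net", "SalesDb"),
    // Generic fallback for sources that don't have a dedicated connector
    OdbcSource = Odbc.DataSource("dsn=MyDsn", [HierarchicalNavigation = true])
in
    SqlSource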

Each data connector follows a standard experience as explained in Getting data. This
standardized experience has a stage called Data Preview. In this stage, you're provided
with a user-friendly window to select the data that you want to get from your data
source, if the connector allows it, and a simple data preview of that data. You can even
select multiple datasets from your data source through the Navigator window, as shown
in the following image.
7 Note

To see the full list of available connectors in Power Query, go to Connectors in


Power Query.

Filter early
It's always recommended to filter your data in the early stages of your query or as early
as possible. Some connectors will take advantage of your filters through query folding,
as described in Power Query query folding. It's also a best practice to filter out any data
that isn't relevant for your case. This will let you better focus on your task at hand by
only showing data that’s relevant in the data preview section.

You can use the auto filter menu that displays a distinct list of the values found in your
column to select the values that you want to keep or filter out. You can also use the
search bar to help you find the values in your column.
You can also take advantage of the type-specific filters such as In the previous for a
date, datetime, or even date timezone column.
These type-specific filters can help you create a dynamic filter that will always retrieve
data that's in the previous x number of seconds, minutes, hours, days, weeks, months,
quarters, or years as showcased in the following image.

7 Note

To learn more about filtering your data based on values from a column, go to Filter
by values.
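A minimal sketch of filtering early (the server, database, table, and column names are hypothetical): the row filter is applied immediately after the source step, which gives the connector the best chance to fold it back to the database.

Power Query M

let
    Source = Sql.Database("MyServer", "SalesDb"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // Filter as early as possible; for a SQL source this step can fold to a WHERE clause
    RecentOrders = Table.SelectRows(Orders, each [OrderDate] >= #date(2023, 1, 1))
in
    RecentOrders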
Do expensive operations last
Certain operations require reading the full data source in order to return any results, and
will thus be slow to preview in the Power Query Editor. For example, if you perform a
sort, it's possible that the first few sorted rows are at the end of the source data. So in
order to return any results, the sort operation must first read all the rows.

Other operations (such as filters) do not need to read all the data before returning any
results. Instead, they operate over the data in what's called a "streaming" fashion. The
data "streams" by, and results are returned along the way. In the Power Query Editor,
such operations only need to read enough of the source data to populate the preview.

When possible, perform such streaming operations first, and do any more expensive
operations last. This will help minimize the amount of time you spend waiting for the
preview to render each time you add a new step to your query.

Temporarily work against a subset of your data


If adding new steps to your query in the Power Query Editor is slow, consider first doing
a "Keep First Rows" operation and limiting the number of rows you're working against.
Then, once you've added all the steps you need, remove the "Keep First Rows" step.
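A sketch of both ideas together (file path and column names are hypothetical): the Table.FirstN step keeps the working set small while authoring, and the expensive sort is placed last. Remove the temporary step once the rest of the query is in place.

Power Query M

let
    Source = Csv.Document(File.Contents("C:\data\sales.csv"), [Delimiter = ",", Encoding = 65001]),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // Temporary step while authoring: work against the first 1000 rows only
    KeptFirstRows = Table.FirstN(PromotedHeaders, 1000),
    // Expensive operation placed last
    SortedRows = Table.Sort(KeptFirstRows, {{"Amount", Order.Descending}})
in
    SortedRows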

Use the correct data types


Some features in Power Query are contextual to the data type of the column selected.
For example, when selecting a date column, the available options under the Date and
time column group in the Add Column menu will be available. But if the column doesn't
have a data type set, then these options will be greyed out.
A similar situation occurs for the type-specific filters, since they're specific to certain data
types. If your column doesn't have the correct data type defined, these type-specific
filters won't be available.

It's crucial that you always work with the correct data types for your columns. When
working with structured data sources such as databases, the data type information will
be brought from the table schema found in the database. But for unstructured data
sources such as TXT and CSV files, it's important that you set the correct data types for
the columns coming from that data source. By default, Power Query offers an automatic
data type detection for unstructured data sources. You can read more about this feature
and how it can help you in Data types.

7 Note

To learn more about the importance of data types and how to work with them, see
Data types.
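For an unstructured source such as a CSV file, explicitly setting the column types is what unlocks the type-specific filters and column options described above. A small sketch (column names are hypothetical):

Power Query M

let
    Source = Csv.Document(File.Contents("C:\data\sales.csv"), [Delimiter = ",", Encoding = 65001]),
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // Explicitly set the data types so date and number filters become available
    ChangedTypes = Table.TransformColumnTypes(
        PromotedHeaders,
        {{"OrderDate", type date}, {"Amount", type number}, {"Customer", type text}})
in
    ChangedTypes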

Explore your data


Before you start preparing your data and adding new transformation steps, we
recommend that you enable the Power Query data profiling tools to easily discover
information about your data.

These data profiling tools help you better understand your data. The tools provide you
with small visualizations that show you information on a per column basis, such as:

Column quality—Provides a small bar chart and three indicators with the
representation of how many values in the column fall under the categories of valid,
error, or empty values.
Column distribution—Provides a set of visuals underneath the names of the
columns that showcase the frequency and distribution of the values in each of the
columns.
Column profile—Provides a more thorough view of your column and the statistics
associated to it.

You can also interact with these features, which will help you prepare your data.
7 Note

To learn more about the data profiling tools, go to Data profiling tools.

Document your work


We recommend that you document your queries by renaming or adding a description
to your steps, queries, or groups as you see fit.

While Power Query automatically creates a step name for you in the applied steps pane,
you can also rename your steps or add a description to any of them.
7 Note

To learn more about all the available features and components found inside the
applied steps pane, go to Using the Applied steps list.

Take a modular approach


It's entirely possible to create a single query that contains all the transformations and
calculations that you may need. But if the query contains a large number of steps, then
it might be a good idea to split the query into multiple queries, where one query
references the next. The goal of this approach is to simplify and decouple
transformation phases into smaller pieces so they're easier to understand.

For example, say you have a query with the nine steps shown in the following image.
You could split this query into two at the Merge with Prices table step. That way it's
easier to understand the steps that were applied to the sales query before the merge. To
do this operation, you right-click the Merge with Prices table step and select the Extract
Previous option.
You'll then be prompted with a dialog to give your new query a name. This will
effectively split your query into two queries. One query will have all the steps before
the merge. The other query will have an initial step that will reference your new query
and the rest of the steps that you had in your original query from the Merge with Prices
table step downward.
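Conceptually, the result of the split looks something like the following sketch. The query, table, and column names are hypothetical, Prices is assumed to be another query in the same project, and the shared syntax is used only to show the two queries side by side: the second query starts by referencing the first and continues from the merge onward.

Power Query M

// First query: all the steps that were applied before the merge
shared SalesBase = let
    Source = Sql.Database("MyServer", "SalesDb"),
    Sales = Source{[Schema = "dbo", Item = "Sales"]}[Data],
    SelectedColumns = Table.SelectColumns(Sales, {"ProductKey", "Units", "OrderDate"})
in
    SelectedColumns;

// Second query: references the first query, then merges with the Prices query
shared SalesWithPrices = let
    Source = SalesBase,
    MergedWithPrices = Table.NestedJoin(Source, {"ProductKey"}, Prices, {"ProductKey"}, "PriceDetails", JoinKind.LeftOuter)
in
    MergedWithPrices;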

You could also leverage the use of query referencing as you see fit. But it's a good idea
to keep your queries at a level that doesn't seem daunting at first glance with so many
steps.

7 Note

To learn more about query referencing, go to Understanding the queries pane.

Create groups
A great way to keep your work organized is by leveraging the use of groups in the
queries pane.
The sole purpose of groups is to help you keep your work organized by serving as
folders for your queries. You can create groups within groups should you ever need to.
Moving queries across groups is as easy as drag and drop.

Try to give your groups a meaningful name that makes sense to you and your case.

7 Note

To learn more about all the available features and components found inside the
queries pane, go to Understanding the queries pane.

Future-proofing queries
Making sure that you create a query that won't have any issues during a future refresh is
a top priority. There are several features in Power Query to make your query resilient to
changes and able to refresh even when some components of your data source changes.

It's a best practice to define the scope of your query as to what it should do and what it
should account for in terms of structure, layout, column names, data types, and any
other component that you consider relevant to the scope.

Some examples of transformations that can help you make your query resilient to
changes are:
If your query has a dynamic number of rows with data, but a fixed number of rows
that serve as the footer that should be removed, you can use the Remove bottom
rows feature.

7 Note

To learn more about filtering your data by row position, go to Filter a table by
row position.

If your query has a dynamic number of columns, but you only need to select
specific columns from your dataset, you can use the Choose columns feature.

7 Note

To learn more about choosing or removing columns, go to Choose or remove


columns.

If your query has a dynamic number of columns and you need to unpivot only a
subset of your columns, you can use the unpivot only selected columns feature.

7 Note

To learn more about the options to unpivot your columns, go to Unpivot


columns.

If your query has a step that changes the data type of a column, but some cells
yield errors as the values don't conform to the desired data type, you could
remove the rows that yielded error values.

7 Note

To learn more about working with and dealing with errors, go to Dealing with errors.

Use parameters
Creating queries that are dynamic and flexible is a best practice. Parameters in Power
Query help you make your queries more dynamic and flexible. A parameter serves as a
way to easily store and manage a value that can be reused in many different ways. But
it's more commonly used in two scenarios:
Step argument—You can use a parameter as the argument of multiple
transformations driven from the user interface.

Custom Function argument—You can create a new function from a query, and
reference parameters as the arguments of your custom function.

The main benefits of creating and using parameters are:

Centralized view of all your parameters through the Manage Parameters window.
Reusability of the parameter in multiple steps or queries.

Makes the creation of custom functions straightforward and easy.

You can even use parameters in some of the arguments of the data connectors. For
example, you could create a parameter for your server name when connecting to your
SQL Server database. Then you could use that parameter inside the SQL Server database
dialog.

If you change your server location, all you need to do is update the parameter for your
server name and your queries will be updated.
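A simplified sketch of this idea (in the editor you would create the parameter through Manage Parameters; the server and database names are hypothetical, and the shared syntax is only used to show the parameter and the query together):

Power Query M

// ServerName parameter: the single place to update when the server moves
shared ServerName = "myserver.database.windows.net";

shared Sales = let
    // The parameter is used instead of a hard-coded server name
    Source = Sql.Database(ServerName, "SalesDb"),
    SalesTable = Source{[Schema = "dbo", Item = "Sales"]}[Data]
in
    SalesTable;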

7 Note

To learn more about creating and using parameters, go to Using parameters.

Create reusable functions


If you find yourself in a situation where you need to apply the same set of
transformations to different queries or values, creating a Power Query custom function
that can be reused as many times as you need could be beneficial. A Power Query
custom function is a mapping from a set of input values to a single output value, and is
created from native M functions and operators.

For example, say you have multiple queries or values that require the same set of
transformations. You could create a custom function that later could be invoked against
the queries or values of your choice. This custom function would save you time and help
you in managing your set of transformations in a central location, which you can modify
at any moment.

Power Query custom functions can be created from existing queries and parameters. For
example, imagine a query that has several codes as a text string and you want to create
a function that will decode those values.
You start by having a parameter that has a value that serves as an example.

From that parameter, you create a new query where you apply the transformations that
you need. For this case, you want to split the code PTY-CM1090-LAX into multiple
components:
Origin = PTY
Destination = LAX
Airline = CM
FlightID = 1090

You can then transform that query into a function by doing a right-click on the query
and selecting Create Function. Finally, you can invoke your custom function into any of
your queries or values, as shown in the following image.

After a few more transformations, you can see that you've reached your desired output
and leveraged the logic for such a transformation from a custom function.
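A sketch of what such a custom function could look like for the code format described above (the exact parsing logic is illustrative):

Power Query M

// Decodes a code such as "PTY-CM1090-LAX" into its components
(code as text) as record =>
    let
        Parts = Text.Split(code, "-")   // e.g. {"PTY", "CM1090", "LAX"}
    in
        [
            Origin = Parts{0},
            Destination = Parts{2},
            Airline = Text.Start(Parts{1}, 2),
            FlightID = Text.Middle(Parts{1}, 2)
        ]

Invoking the function against the example value returns a record with Origin PTY, Destination LAX, Airline CM, and FlightID 1090.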

7 Note
To learn more about how to create and use custom functions in Power Query, go to
the article Custom Functions.
Power Query feedback
Article • 02/17/2023

This article describes how to get support or submit feedback for Power Query.

For Power Query connectors, go to Feedback and support for Power Query connectors.

For Power Query documentation, you can submit feedback through the Submit and
view feedback for this page link at the bottom of each article.

Support and troubleshooting


If you're finding an issue with Power Query, use the dedicated support channels for the
product you're using Power Query in. For example, for Power BI, visit the Power BI
support page .

You can also use any of the following community resources:

Community forums for the product you're using Power Query in. For example, for
Power BI, this forum would be the Power BI Community
Power Query website resources

For information about the built-in Power Query help support links, go to Getting Power
Query help.

Submitting feedback
To submit feedback about Power Query, provide the feedback to the "ideas" forum for
the product you're using Power Query in. For example, for Power BI, visit the Power BI
ideas forum . If you have one, you can also provide feedback directly to your Microsoft
account contact.
How fuzzy matching works in Power
Query
Article • 12/17/2022

Power Query features such as fuzzy merge, cluster values, and fuzzy grouping use the
same mechanisms to work as fuzzy matching.

This article goes over many scenarios that demonstrate how to take advantage of the
options that fuzzy matching has, with the goal of making 'fuzzy' clear.

Adjust the similarity threshold


The best scenario for applying the fuzzy match algorithm is when all text strings in a
column contain only the strings that need to be compared and no extra components.
For example, comparing Apples against 4ppl3s yields higher similarity scores than
comparing Apples to My favorite fruit, by far, is Apples. I simply love them! .

Because the word Apples in the second string is only a small part of the whole text
string, that comparison yields a lower similarity score.

For example, the following dataset consists of responses from a survey that had only
one question—"What is your favorite fruit?"

Fruit

Blueberries

Blue berries are simply the best

Strawberries

Strawberries = <3

Apples

'sples

4ppl3s

Bananas

fav fruit is bananas

Banas

My favorite fruit, by far, is Apples. I simply love them!

The survey provided one single textbox to input the value and had no validation.

Now you're tasked with clustering the values. To do that task, load the previous table of
fruits into Power Query, select the column, and then select the Cluster values option in
the Add column tab in the ribbon.

The Cluster values dialog box appears, where you can specify the name of the new
column. Name this new column Cluster and select OK.
By default, Power Query uses a similarity threshold of 0.8 (or 80%) and the result of the
previous operation yields the following table with a new Cluster column.

While the clustering has been done, it's not giving you the expected results for all the
rows. Row number two (2) still has the value Blue berries are simply the best , but it
should be clustered to Blueberries , and something similar happens to the text strings
Strawberries = <3 , fav fruit is bananas , and My favorite fruit, by far, is Apples.

I simply love them! .

To determine what's causing this clustering result, double-click Clustered values in the
Applied steps panel to bring back the Cluster values dialog box. Inside this dialog box,
expand Fuzzy cluster options. Enable the Show similarity scores option, and then select
OK.
Enabling the Show similarity scores option creates a new column in your table. This
column shows you the exact similarity score between the defined cluster and the
original value.

Upon closer inspection, Power Query couldn't find any other values in the similarity
threshold for the text strings Blue berries are simply the best , Strawberries = <3 , fav
fruit is bananas , and My favorite fruit, by far, is Apples. I simply love them! .

Go back to the Cluster values dialog box one more time by double-clicking Clustered
values in the Applied steps panel. Change the Similarity threshold from 0.8 to 0.6, and
then select OK.
This change gets you closer to the result that you're looking for, except for the text
string My favorite fruit, by far, is Apples. I simply love them! . When you changed
the Similarity threshold value from 0.8 to 0.6, Power Query was now able to use the
values with a similarity score that starts from 0.6 all the way up to 1.

7 Note

Power Query always uses the value closest to the threshold to define the clusters.
The threshold defines the lower limit of the similarity score that's acceptable to
assign the value to a cluster.
You can try again by changing the Similarity score from 0.6 to a lower number until you
get the results that you're looking for. In this case, change the Similarity score to 0.5.
This change yields the exact result that you're expecting with the text string My favorite
fruit, by far, is Apples. I simply love them! now assigned to the cluster Apples .

7 Note

Currently, only the Cluster values feature in Power Query Online provides a new
column with the similarity score.
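For reference, the Cluster values gesture walked through above roughly corresponds to the Table.AddFuzzyClusterColumn function in M. A minimal sketch with the lowered threshold and a similarity score column (the sample table is abbreviated, and the option values shown are assumptions based on this walkthrough):

Power Query M

let
    Source = #table(type table [Fruit = text],
        {{"Blueberries"}, {"Blue berries are simply the best"}, {"Apples"}, {"4ppl3s"}}),
    Clustered = Table.AddFuzzyClusterColumn(
        Source, "Fruit", "Cluster",
        [Threshold = 0.5, SimilarityColumnName = "Fruit_Cluster_Similarity"])
in
    Clustered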

Special considerations for transformation table


The transformation table helps you map values from your column to new values before
performing the fuzzy matching algorithm.

Some examples of how the transformation table can be used:

Transformation table in fuzzy merge queries
Transformation table in group by
Transformation table in cluster values

) Important

When the transformation table is used, the maximum similarity score for the values
from the transformation table is 0.95. This deliberate 0.05 penalty is in place to make
clear that the original value from that column isn't equal to the value it was compared
to, because a transformation occurred.

For scenarios where you first want to map your values and then perform the fuzzy
matching without the 0.05 penalty, we recommend that you replace the values
from your column and then perform the fuzzy matching.
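As a sketch of how a transformation table is shaped, it's a two-column table with From and To columns; it can be built inline and passed through the fuzzy options (the values and option record below are illustrative):

Power Query M

let
    Source = #table(type table [Fruit = text], {{"Banas"}, {"4ppl3s"}, {"Blueberries"}}),
    // Map known misspellings before the fuzzy comparison takes place
    TransformationTable = #table(type table [From = text, To = text],
        {{"Banas", "Bananas"}, {"4ppl3s", "Apples"}}),
    Clustered = Table.AddFuzzyClusterColumn(
        Source, "Fruit", "Cluster",
        [Threshold = 0.5, TransformationTable = TransformationTable])
in
    Clustered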
Behind the scenes of the Data Privacy
Firewall
Article • 02/17/2023

7 Note

Privacy levels are currently unavailable in Power Platform dataflows. The product
team is working towards re-enabling this functionality in the coming weeks.

If you've used Power Query for any length of time, you've likely experienced it. There
you are, querying away, when you suddenly get an error that no amount of online
searching, query tweaking, or keyboard bashing can remedy. An error like:

Formula.Firewall: Query 'Query1' (step 'Source') references other queries or steps,

so it may not directly access a data source. Please rebuild this data combination.

Or maybe:

Formula.Firewall: Query 'Query1' (step 'Source') is accessing data sources that

have privacy levels which cannot be used together. Please rebuild this data
combination.

These Formula.Firewall errors are the result of Power Query's Data Privacy Firewall (also
known as the Firewall), which at times may seem like it exists solely to frustrate data
analysts the world over. Believe it or not, however, the Firewall serves an important
purpose. In this article, we'll delve under the hood to better understand how it works.
Armed with greater understanding, you'll hopefully be able to better diagnose and fix
Firewall errors in the future.

What is it?
The purpose of the Data Privacy Firewall is simple: it exists to prevent Power Query from
unintentionally leaking data between sources.

Why is this needed? I mean, you could certainly author some M that would pass a SQL
value to an OData feed. But this would be intentional data leakage. The mashup author
would (or at least should) know they were doing this. Why then the need for protection
against unintentional data leakage?

The answer? Folding.


Folding?
Folding is a term that refers to converting expressions in M (such as filters, renames,
joins, and so on) into operations against a raw data source (such as SQL, OData, and so
on). A huge part of Power Query's power comes from the fact that PQ can convert the
operations a user performs via its user interface into complex SQL or other backend
data source languages, without the user having to know said languages. Users get the
performance benefit of native data source operations, with the ease of use of a UI where
all data sources can be transformed using a common set of commands.

As part of folding, PQ sometimes may determine that the most efficient way to execute
a given mashup is to take data from one source and pass it to another. For example, if
you're joining a small CSV file to a huge SQL table, you probably don't want PQ to read
the CSV file, read the entire SQL table, and then join them together on your local
computer. You probably want PQ to inline the CSV data into a SQL statement and ask
the SQL database to perform the join.

This is how unintentional data leakage can happen.

Imagine if you were joining SQL data that included employee Social Security Numbers
with the results of an external OData feed, and you suddenly discovered that the Social
Security Numbers from SQL were being sent to the OData service. Bad news, right?

This is the kind of scenario the Firewall is intended to prevent.

How does it work?


The Firewall exists to prevent data from one source from being unintentionally sent to
another source. Simple enough.

So how does it accomplish this mission?

It does this by dividing your M queries into something called partitions, and then
enforcing the following rule:

A partition may either access compatible data sources, or reference other
partitions, but not both.

Simple…yet confusing. What's a partition? What makes two data sources "compatible"?
And why should the Firewall care if a partition wants to access a data source and
reference a partition?

Let's break this down and look at the above rule one piece at a time.
What's a partition?
At its most basic level, a partition is just a collection of one or more query steps. The
most granular partition possible (at least in the current implementation) is a single step.
The largest partitions can sometimes encompass multiple queries. (More on this later.)

If you're not familiar with steps, you can view them on the right of the Power Query
Editor window after selecting a query, in the Applied Steps pane. Steps keep track of
everything you've done to transform your data into its final shape.

Partitions that reference other partitions


When a query is evaluated with the Firewall on, the Firewall divides the query and all its
dependencies into partitions (that is, groups of steps). Anytime one partition references
something in another partition, the Firewall replaces the reference with a call to a special
function called Value.Firewall . In other words, the Firewall doesn't allow partitions to
access each other randomly. All references are modified to go through the Firewall.
Think of the Firewall as a gatekeeper. A partition that references another partition must
get the Firewall's permission to do so, and the Firewall controls whether or not the
referenced data will be allowed into the partition.

This all may seem pretty abstract, so let's look at an example.

Assume you have a query called Employees, which pulls some data from a SQL
database. Assume you also have another query (EmployeesReference), which simply
references Employees.

Power Query M

shared Employees = let
    Source = Sql.Database(…),
    EmployeesTable = …
in
    EmployeesTable;

shared EmployeesReference = let
    Source = Employees
in
    Source;

These queries will end up divided into two partitions: one for the Employees query, and
one for the EmployeesReference query (which will reference the Employees partition).
When evaluated with the Firewall on, these queries will be rewritten like so:

Power Query M
shared Employees = let
    Source = Sql.Database(…),
    EmployeesTable = …
in
    EmployeesTable;

shared EmployeesReference = let
    Source = Value.Firewall("Section1/Employees")
in
    Source;

Notice that the simple reference to the Employees query has been replaced by a call to
Value.Firewall , which is provided the full name of the Employees query.

When EmployeesReference is evaluated, the call to


Value.Firewall("Section1/Employees") is intercepted by the Firewall, which now has a
chance to control whether (and how) the requested data flows into the
EmployeesReference partition. It can do any number of things: deny the request, buffer
the requested data (which prevents any further folding to its original data source from
occurring), and so on.

This is how the Firewall maintains control over the data flowing between partitions.

Partitions that directly access data sources


Let's say you define a query Query1 with one step (note that this single-step query
corresponds to one Firewall partition), and that this single step accesses two data
sources: a SQL database table and a CSV file. How does the Firewall deal with this, since
there's no partition reference, and thus no call to Value.Firewall for it to intercept?
Let's review the rule stated earlier:

A partition may either access compatible data sources, or reference other partitions, but not both.

In order for your single-partition-but-two-data-sources query to be allowed to run, its
two data sources must be "compatible". In other words, it needs to be okay for data to
be shared between them. In terms of the Power Query UI, this means the privacy levels
of the SQL and CSV data sources need to both be Public, or both be Organizational. If
they're both marked Private, or one is marked Public and one is marked Organizational,
or they're marked using some other combination of privacy levels, then it's not safe for
them to both be evaluated in the same partition. Doing so would mean unsafe data
leakage could occur (due to folding), and the Firewall would have no way to prevent it.
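
As a hedged illustration, a single-step query that touches two data sources might look like the following in M. The server name, database, file path, and join columns are all invented; what matters is that one step reads both the SQL table and the CSV file, so the two sources' privacy levels must be compatible.

Power Query M

let
    // One step that reads a SQL table and a local CSV file and joins them.
    // Both sources must have compatible privacy levels for this to be allowed.
    Source = Table.NestedJoin(
        Sql.Database("myServer", "myDatabase"){[Schema="dbo", Item="Employees"]}[Data],
        {"DepartmentID"},
        Table.PromoteHeaders(Csv.Document(File.Contents("C:\data\departments.csv"))),
        {"DepartmentID"},
        "Departments",
        JoinKind.LeftOuter)
in
    Source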

What happens if you try to access incompatible data sources in the same partition?

Formula.Firewall: Query 'Query1' (step 'Source') is accessing data sources that
have privacy levels which cannot be used together. Please rebuild this data
combination.

Hopefully you now better understand one of the error messages listed at the beginning
of this article.

Note that this compatibility requirement only applies within a given partition. If a
partition is referencing other partitions, the data sources from the referenced partitions
don't have to be compatible with one another. This is because the Firewall can buffer
the data, which will prevent any further folding against the original data source. The
data will be loaded into memory and treated as if it came from nowhere.

Why not do both?


Let's say you define a query with one step (which will again correspond to one partition)
that accesses two other queries (that is, two other partitions). What if you wanted, in the
same step, to also directly access a SQL database? Why can't a partition reference other
partitions and directly access compatible data sources?

As you saw earlier, when one partition references another partition, the Firewall acts as
the gatekeeper for all the data flowing into the partition. To do so, it must be able to
control what data is allowed in. If there are data sources being accessed within the
partition, and data flowing in from other partitions, it loses its ability to be the
gatekeeper, since the data flowing in could be leaked to one of the internally accessed
data sources without it knowing about it. Thus the Firewall prevents a partition that
accesses other partitions from being allowed to directly access any data sources.

So what happens if a partition tries to reference other partitions and also directly access
data sources?

Formula.Firewall: Query 'Query1' (step 'Source') references other queries or steps,
so it may not directly access a data source. Please rebuild this data combination.

Now you hopefully better understand the other error message listed at the beginning of
this article.

Partitions in-depth
As you can probably guess from the above information, how queries are partitioned
ends up being incredibly important. If you have some steps that are referencing other
queries, and other steps that access data sources, you now hopefully recognize that
drawing the partition boundaries in certain places will cause Firewall errors, while
drawing them in other places will allow your query to run just fine.

So how exactly do queries get partitioned?

This section is probably the most important for understanding why you're seeing
Firewall errors, and understanding how to resolve them (where possible).

Here's a high-level summary of the partitioning logic.

Initial Partitioning
Creates a partition for each step in each query
Static Phase
This phase doesn't depend on evaluation results. Instead, it relies on how the
queries are structured.
Parameter Trimming
Trims parameter-esque partitions, that is, any one that:
Doesn't reference any other partitions
Doesn't contain any function invocations
Isn't cyclic (that is, it doesn't refer to itself)
Note that "removing" a partition effectively includes it in whatever other
partitions reference it.
Trimming parameter partitions allows parameter references used within
data source function calls (for example, Web.Contents(myUrl) ) to work,
instead of throwing "partition can't reference data sources and other
steps" errors. (A sketch of this pattern appears after this list.)
Grouping (Static)
Partitions are merged, while maintaining separation between:
Partitions in different queries
Partitions that reference other partitions vs. those that don't
Dynamic Phase
This phase depends on evaluation results, including information about data
sources accessed by various partitions.
Trimming
Trims any partition that meets all the following requirements:
Doesn't access any data sources
Doesn't reference any partitions that access data sources
Isn't cyclic
Grouping (Dynamic)
Now that unnecessary partitions have been trimmed, try to create Source
partitions that are as large as possible.
Merges a partition with its input partitions if each of those inputs:
Is part of the same query
Doesn't reference any other partitions
Is only referenced by the current partition
Isn't the result (that is, final step) of a query
Isn't cyclic
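
Here's the hedged sketch referenced in the parameter-trimming bullet above. The parameter name and URL are invented; the point is that the myUrl parameter partition gets trimmed (folded into the referencing partition), so WebData's single step can both reference the parameter and call a data source function without triggering a Firewall error.

Power Query M

shared myUrl = "https://example.com/api/data" meta [IsParameterQuery=true, Type="Text", IsParameterQueryRequired=true];

shared WebData = let
    // The parameter reference inside Web.Contents works because the
    // myUrl partition is trimmed during the static phase.
    Source = Json.Document(Web.Contents(myUrl))
in
    Source;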

What does all this mean?


Let's walk through an example to illustrate how the complex logic laid out above works.

Here's a sample scenario. It's a fairly straightforward merge of a text file (Contacts) with
a SQL database (Employees), where the SQL server is a parameter (DbServer).

The three queries


Here's the M code for the three queries used in this example.

Power Query M

shared DbServer = "montegoref6" meta [IsParameterQuery=true, Type="Text",
    IsParameterQueryRequired=true];

Power Query M

shared Contacts = let
    Source = Csv.Document(File.Contents("C:\contacts.txt"),
        [Delimiter=" ", Columns=15, Encoding=1252, QuoteStyle=QuoteStyle.None]),
    #"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars=true]),
    #"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers",
        {{"ContactID", Int64.Type}, {"NameStyle", type logical}, {"Title", type text},
        {"FirstName", type text}, {"MiddleName", type text}, {"LastName", type text},
        {"Suffix", type text}, {"EmailAddress", type text}, {"EmailPromotion", Int64.Type},
        {"Phone", type text}, {"PasswordHash", type text}, {"PasswordSalt", type text},
        {"AdditionalContactInfo", type text}, {"rowguid", type text},
        {"ModifiedDate", type datetime}})
in
    #"Changed Type";

Power Query M
shared Employees = let
    Source = Sql.Databases(DbServer),
    AdventureWorks = Source{[Name="AdventureWorks"]}[Data],
    HumanResources_Employee =
        AdventureWorks{[Schema="HumanResources",Item="Employee"]}[Data],
    #"Removed Columns" = Table.RemoveColumns(HumanResources_Employee,
        {"HumanResources.Employee(EmployeeID)", "HumanResources.Employee(ManagerID)",
        "HumanResources.EmployeeAddress", "HumanResources.EmployeeDepartmentHistory",
        "HumanResources.EmployeePayHistory", "HumanResources.JobCandidate",
        "Person.Contact", "Purchasing.PurchaseOrderHeader", "Sales.SalesPerson"}),
    #"Merged Queries" = Table.NestedJoin(#"Removed Columns",
        {"ContactID"}, Contacts, {"ContactID"}, "Contacts", JoinKind.LeftOuter),
    #"Expanded Contacts" = Table.ExpandTableColumn(#"Merged Queries",
        "Contacts", {"EmailAddress"}, {"EmailAddress"})
in
    #"Expanded Contacts";

Here's a higher-level view, showing the dependencies.


Let's partition
Let's zoom in a bit and include steps in the picture, and start walking through the
partitioning logic. Here's a diagram of the three queries, showing the initial firewall
partitions in green. Notice that each step starts in its own partition.
Next, we trim parameter partitions. Thus, DbServer gets implicitly included in the Source
partition.
Now we perform the static grouping. This maintains separation between partitions in
separate queries (note for instance that the last two steps of Employees don't get
grouped with the steps of Contacts), and between partitions that reference other
partitions (such as the last two steps of Employees) and those that don't (such as the
first three steps of Employees).

Now we enter the dynamic phase. In this phase, the above static partitions are
evaluated. Partitions that don't access any data sources are trimmed. Partitions are then
grouped to create source partitions that are as large as possible. However, in this
sample scenario, all the remaining partitions access data sources, and there isn't any
further grouping that can be done. The partitions in our sample thus won't change
during this phase.

Let's pretend
For the sake of illustration, though, let's look at what would happen if the Contacts
query, instead of coming from a text file, were hard-coded in M (perhaps via the Enter
Data dialog).
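
If you're curious what that would look like, here's a hedged sketch of a hard-coded Contacts query of the kind Enter Data produces. The rows are invented, and the real Enter Data output embeds its rows in a compressed form, but the key point is the same: no external data source is accessed.

Power Query M

shared Contacts = let
    // A purely in-memory table: no data source access, so this partition gets trimmed.
    Source = #table(
        type table [ContactID = number, EmailAddress = text],
        {
            {1, "someone@example.com"},
            {2, "someoneelse@example.com"}
        })
in
    Source;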

In this case, the Contacts query wouldn't access any data sources. Thus, it would get
trimmed during the first part of the dynamic phase.
With the Contacts partition removed, the last two steps of Employees would no longer
reference any partitions except the one containing the first three steps of Employees.
Thus, the two partitions would be grouped.

The resulting partition would look like this.


Example: Passing data from one data source to
another
Okay, enough abstract explanation. Let's look at a common scenario where you're likely
to encounter a Firewall error and the steps to resolve it.

Imagine you want to look up a company name from the Northwind OData service, and
then use the company name to perform a Bing search.

First, you create a Company query to retrieve the company name.

Power Query M

let
Source =
OData.Feed("https://services.odata.org/V4/Northwind/Northwind.svc/", null,
[Implementation="2.0"]),
Customers_table = Source{[Name="Customers",Signature="table"]}[Data],
CHOPS = Customers_table{[CustomerID="CHOPS"]}[CompanyName]
in
CHOPS

Next, you create a Search query that references Company and passes it to Bing.
Power Query M

let
Source = Text.FromBinary(Web.Contents("https://www.bing.com/search?q=" &
Company))
in
Source

At this point you run into trouble. Evaluating Search produces a Firewall error.

Formula.Firewall: Query 'Search' (step 'Source') references other queries or steps,
so it may not directly access a data source. Please rebuild this data combination.

This is because the Source step of Search is referencing a data source (bing.com) and
also referencing another query/partition (Company). It's violating the rule mentioned
above ("a partition may either access compatible data sources, or reference other
partitions, but not both").

What to do? One option is to disable the Firewall altogether (via the Privacy option
labeled Ignore the Privacy Levels and potentially improve performance). But what if
you want to leave the Firewall enabled?

To resolve the error without disabling the Firewall, you can combine Company and
Search into a single query, like this:

Power Query M

let
Source =
OData.Feed("https://services.odata.org/V4/Northwind/Northwind.svc/", null,
[Implementation="2.0"]),
Customers_table = Source{[Name="Customers",Signature="table"]}[Data],
CHOPS = Customers_table{[CustomerID="CHOPS"]}[CompanyName],
Search = Text.FromBinary(Web.Contents("https://www.bing.com/search?q=" &
CHOPS))
in
Search

Everything is now happening inside a single partition. Assuming that the privacy levels
for the two data sources are compatible, the Firewall should now be happy, and you'll
no longer get an error.

That's a wrap
While there's much more that could be said on this topic, this introductory article is
already long enough. Hopefully it's given you a better understanding of the Firewall, and
will help you to understand and fix Firewall errors when you encounter them in the
future.
Query diagnostics
Article • 02/17/2023

With Query Diagnostics, you can achieve a better understanding of what Power Query is
doing at authoring and at refresh time in Power BI Desktop. While we'll be expanding on
this feature in the future, including adding the ability to use it during full refreshes, at
this time you can use it to understand what sort of queries you're emitting, what
slowdowns you might run into during authoring refresh, and what kind of background
events are happening.

To use Query Diagnostics, go to the Tools tab in the Power Query editor ribbon.

By default, Query Diagnostics might require administrative rights to run (depending on
IT policy). If you find yourself unable to run Query Diagnostics, open the Power BI
options page, and in the Diagnostics tab, select Enable in Query Editor (does not
require running as admin). This selection prevents you from tracing diagnostics during a
full refresh into Power BI (as opposed to in the Power Query editor), but still lets you
use diagnostics when previewing, authoring, and so on.

Whenever you start diagnostics, Power Query begins tracing any evaluations that you
cause. The evaluation that most users think of is when you press refresh, or when you
retrieve data for the first time, but there are many actions that can cause evaluations,
depending on the connector. For example, with the SQL connector, when you retrieve a
list of values to filter, that would kick off an evaluation as well—but it doesn't associate
with a user query, and that's represented in the diagnostics. Other system-generated
queries might include the navigator or the get data experience.

When you press Diagnose Step, Power Query runs a special evaluation of just the step
you're looking at. It then shows you the diagnostics for that step, without showing you
the diagnostics for other steps in the query. This can make it much easier to get a
narrow view into a problem.

It's important, if you're recording all traces from Start Diagnostics, that you press
Stop diagnostics. Stopping the diagnostics allows the engine to collect the recorded
traces and parse them into the proper output. Without this step, you'll lose your traces.
Types of diagnostics
We currently provide three types of diagnostics, one of which has two levels of detail.

The first of these is the primary diagnostics, which have a detailed view and a
summarized view. The summarized view aims to give you an immediate insight into where
time is being spent in your query. The detailed view is much deeper, line by line, and
is, in general, only needed for serious diagnosing by power users.

For this view, some capabilities, like the Data Source Query column, are currently
available only on certain connectors. We'll be working to extend the breadth of this
coverage in the future.

Data privacy partitions provide you with a better understanding of the logical partitions
used for data privacy.

Note

Power Query might perform evaluations that you may not have directly triggered.
Some of these evaluations are performed in order to retrieve metadata so we can
best optimize our queries or to provide a better user experience (such as retrieving
the list of distinct values within a column that are displayed in the Filter Rows
experience). Others might be related to how a connector handles parallel
evaluations. At the same time, if you see in your query diagnostics repeated queries
that you don't believe make sense, feel free to reach out through normal support
channels—your feedback is how we improve our product.

Summarized vs. detailed view


Query diagnostics provides two views: summarized and detailed. The summarized view
"collapses" multiple related operations into a single operation. In this process, details
collected by each operation are combined, and the exclusive durations are summed. No
information is lost as part of this process.

The summarized view provides an overview of what occurred during an evaluation for
easy high-level review. If further breakdown is wanted for a specific operation, the user
can look at the group ID and view the corresponding operations that were grouped in
the detail view.

Explaining multiple evaluations


When refreshing in the Power Query editor, there's a lot done behind the scenes to
attempt to give you a fluent user experience. As an example, when you Refresh Preview,
the evaluator will execute the final step of each given Query. But then in the background
it sequentially runs n-1 steps, n-2 steps, and so on, so that if you step back through
your steps, it's already available.

To provide higher performance, currently some caching happens so that it doesn't have
to rerun every part of the final query plan as it goes back through the steps. While this
caching is useful for normal authoring, it means that you won't always get correct step
comparison information because of later evaluations pulling on cached data.

Diagnostics schema

Id
When analyzing the results of a recording, it's important to filter the recording session
by Id, so that columns such as Exclusive Duration % make sense.

Id is a composite identifier. It's formed of two numbers—one before the dot, and one
after. The first number is the same for all evaluations that resulted from a single user
action. In other words, if you press refresh twice, there will be two different numbers
leading the dot, one for each user activity taken. This numbering is sequential for a
given diagnostics recording.

The second number represents an evaluation by the engine. This number is sequential
for the lifetime of the process where the evaluation is queued. If you run multiple
diagnostics recording sessions, you'll see this number continue to grow across the
different sessions.

To summarize, if you start recording, press evaluation once, and stop recording, you'll
have some number of Ids in your diagnostics. But since you only took one action, they'll
all be 1.1, 1.2, 1.3, and so on.

The combination of the activityId and the evaluationId, separated by the dot, provides a
unique identifier for an evaluation of a single recording session.

Query
The name of the Query in the left-hand pane of the Power Query editor.

Step
The name of the Step in the right-hand pane of the Power Query editor. Things like filter
dropdowns generally associate with the step you're filtering on, even if you're not
refreshing the step.

Category
The category of the operation.

Data source kind


This tells you what sort of data source you're accessing, such as SQL or Oracle.

Operation
The actual operation being performed. This operation can include evaluator work,
opening connections, sending queries to the data source, and many more.

Start time
The time that the operation started.

End time
The time that the operation ended.

Exclusive duration (%)


The Exclusive Duration column of an event is the amount of time the event was active.
This contrasts with the "duration" value that results from subtracting the values in an
event's Start Time column and End Time column. This "duration" value represents the
total time that elapsed between when an event began and when it ended, which may
include times the event was in a suspended or inactive state and another event was
consuming resources.

Exclusive duration % adds up to approximately 100% within a given evaluation, as


represented by the Id column. For example, if you filter on rows with Id 1.x, the Exclusive
Duration percentages would sum to approximately 100%. This isn't the case if you sum
the Exclusive Duration % values of all rows in a given diagnostic table.

Exclusive duration
The absolute time, rather than %, of exclusive duration. The total duration (that is,
exclusive duration + time when the event was inactive) of an evaluation can be
calculated in one of two ways:

Find the operation called "Evaluation". The difference between End Time–Start
Time results in the total duration of an event.

Subtract the minimum start time of all operations in an event from the maximum
end time. Note that in cases when the information collected for an event doesn't
account for the total duration, an operation called "Trace Gaps" is generated to
account for this time gap.
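
As a rough sketch of the second approach, assuming the detailed diagnostics output has been loaded into a query named Diagnostics with Id, Start Time, and End Time columns, the total duration per evaluation could be computed like this:

Power Query M

let
    Source = Diagnostics,
    // Group all operations by evaluation Id and take max(End Time) - min(Start Time).
    TotalDurationPerId = Table.Group(
        Source,
        {"Id"},
        {{"Total Duration", each List.Max([End Time]) - List.Min([Start Time]), type duration}})
in
    TotalDurationPerId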

Resource
The resource you're accessing for data. The exact format of this resource depends on
the data source.

Data source query


Power Query does something called folding, which is the act of running as many parts of
the query against the back-end data source as possible. In Direct Query mode (over
Power Query), where enabled, only transforms that fold will run. In import mode,
transforms that can't fold will instead be run locally.

The Data Source Query column allows you to see the query or HTTP request/response
sent against the back-end data source. As you author your Query in the editor, many
Data Source Queries will be emitted. Some of these are the actual final Data Source
Query to render the preview, but others may be for data profiling, filter dropdowns,
information on joins, retrieving metadata for schemas, and any number of other small
queries.

In general, you shouldn't be concerned by the number of Data Source Queries emitted
unless there are specific reasons to be concerned. Instead, you should focus on making
sure the proper content is being retrieved. This column might also help determine if the
Power Query evaluation was fully folded.

Additional info
There's a lot of information retrieved by our connectors. Much of it is ragged and
doesn't fit well into a standard column hierarchy. This information is put in a record in
the additional info column. Information logged from custom connectors also appears
here.
Row count
The number of rows returned by a Data Source Query. Not enabled on all connectors.

Content length
Content length returned by HTTP Requests, as commonly defined. This isn't enabled in
all connectors, and it won't be accurate for connectors that retrieve requests in chunks.

Is user query
A Boolean value that indicates if it's a query authored by the user and present in the
left-hand pane, or if it was generated by some other user action. Other user actions can
include things such as filter selection or using the navigator in the get data experience.

Path
Path represents the relative route of the operation when viewed as part of an interval
tree for all operations within a single evaluation. At the top (root) of the tree, there's a
single operation called Evaluation with path "0". The start time of this evaluation
corresponds to the start of this evaluation as a whole. The end time of this evaluation
shows when the whole evaluation finished. This top-level operation has an exclusive
duration of 0, as its only purpose is to serve as the root of the tree.

Further operations branch from the root. For example, an operation might have "0/1/5"
as a path. This path would be understood as:

0: tree root
1: current operation's parent
5: index of current operation

Operation "0/1/5" might have a child node, in which case, the path has the form
"0/1/5/8", with 8 representing the index of the child.

Group ID
Combining two (or more) operations won't occur if it leads to detail loss. The grouping
is designed to approximate "commands" executed during the evaluation. In the detailed
view, multiple operations share a Group Id, corresponding to the groups that are
aggregated in the Summary view.
As with most columns, the group ID is only relevant within a specific evaluation, as
filtered by the Id column.

Data privacy partitions schema

Id
Same as the ID for the other query diagnostics results. The integer part represents a
single activity ID, while the fractional part represents a single evaluation.

Partition key
Corresponds to the Query/Step that's used as a firewall partition.

Firewall group
Categorization that explains why this partition has to be evaluated separately, including
details on the privacy level of the partition.

Accessed resources
List of resource paths for all the resources accessed by this partition; in general, a
resource path uniquely identifies a data source.

Partition inputs
List of partition keys upon which the current partition depends (this list could be used to
build a graph).

Expression
The expression that gets evaluated on top of the partition's query/step. In several cases,
it coincides with the query/step.

Start time
Time when evaluation started for this partition.

End time
Time when evaluation ended for this partition.

Duration
A value derived from End Time minus Start Time.

Exclusive duration
If partitions are assumed to execute in a single thread, exclusive duration is the "real"
duration that can be attributed to this partition.

Exclusive duration %
Exclusive duration as a percentage.

Diagnostics
This column only appears when the query diagnostics "Aggregated" or "Detailed" is also
captured, allowing the user to correspond between the two diagnostics outputs.

Performance counters schema


When you run performance counters, every half second Power Query takes a snapshot
of resource utilization. This snapshot isn't useful for very fast queries, but can be helpful
for queries that use up a lot more resources.

% processor time
Percent of time spent by processors on the query. This percentage may reach above
100% because of multiple processors.

Total processor time


Total duration of processor time spent on the query.

IO data bytes per second


Throughput speed of data received from the data source, expressed in bytes per second.
Commit (bytes)
Amount of virtual memory reserved by the evaluation.

Working set (bytes)


Amount of memory reserved by the evaluation.

Additional reading
How to record diagnostics in various use cases

More about reading and visualizing your recorded traces

How to understand what query operations are folding using Query Diagnostics
Recording query diagnostics in Power BI
Article • 02/17/2023

When authoring in Power Query, the basic workflow is that you connect to a data
source, apply some transformations, potentially refresh your data in the Power Query
editor, and then load it to the Power BI model. Once it's in the Power BI model, you may
refresh it from time to time in Power BI Desktop (if you're using Desktop to view
analytics), aside from any refreshes you do in the service.

While you may get a similar result at the end of an authoring workflow, refreshing in the
editor, or refreshing in Power BI proper, very different evaluations are run by the
software for the different user experiences provided. It's important to know what to
expect when doing query diagnostics in these different workflows so you aren't
surprised by the very different diagnostic data.

To start Query Diagnostics, go to the Tools tab in the Power Query editor ribbon. You're
presented here with a few different options.

There are two primary options here, 'Diagnose Step' and 'Start Diagnostics' (paired with
'Stop Diagnostics'). The former will give you information on a query up to a selected
step, and is most useful for understanding what operations are being performed locally
or remotely in a query. The latter gives you more insight into a variety of other cases,
discussed below.

Connector specifics
It's important to mention that there is no way to cover all the different permutations of
what you'll see in Query Diagnostics. There are lots of things that can change exactly
what you see in results:

Connector
Transforms applied
System that you're running on
Network configuration
Advanced configuration choices
ODBC configuration

For the broadest coverage, this documentation focuses on Query Diagnostics of the
Northwind Customers table, both on SQL and OData. The OData notes use the public
endpoint found at the OData.org website , while you'll need to provide a SQL server
for yourself. Many data sources will differ significantly from these, and will have
connector specific documentation added over time.

Start / stop diagnostics


'Start Diagnostics' and 'Stop Diagnostics' are more broadly applicable than 'Diagnose
Step', but will also give you a lot more information that you'll need to sort through. For
example, starting diagnostics, refreshing a preview, and then stopping will give you
equivalent information to running Diagnose Step on every step (due to how Power
Query works in the editor to refresh each step independently).

To start recording, select Start Diagnostics, perform whatever evaluations you want
(authoring, preview refresh, full refresh), and then select Stop Diagnostics.

Authoring
The authoring workflow's primary difference is that it will generally generate more
individual evaluations than seen in other workflows. As discussed in the primary Query
Diagnostics article, these are a result of populating various user interfaces such as the
navigator or filter dropdowns.

We're going to walk through an example. We're using the OData connector in this
sample, but when reviewing the output we'll also look at the SQL version of the same
database. For both data sources, we're going to connect to the data source via 'New
Source', 'Recent Sources', or 'Get Data'. For the SQL connection you'll need to put in
credentials for your server, but for the public OData endpoint you can put in the
endpoint linked above.
Once you connect and choose authentication, select the Customers table from the
OData service.

This will present you with the Customers table in the Power Query interface. Let's say
that we want to know how many Sales Representatives there are in different countries.
First, right-click on Sales Representative under the Contact Title column, mouse over
Text Filters, and select Equals.

Now, select Group By from the ribbon and do a grouping by Country, with your
aggregate being a Count.
This should present you with the same data you see below.
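
For reference, the UI actions above, the filter on Contact Title and the Group By on Country, end up generating M roughly like the following (step names are the defaults Power Query tends to pick):

Power Query M

let
    Source = OData.Feed("https://services.odata.org/V4/Northwind/Northwind.svc/", null, [Implementation="2.0"]),
    Customers_table = Source{[Name="Customers",Signature="table"]}[Data],
    // The Text Filters > Equals action.
    #"Filtered Rows" = Table.SelectRows(Customers_table, each [ContactTitle] = "Sales Representative"),
    // The Group By action: count of rows per Country.
    #"Grouped Rows" = Table.Group(#"Filtered Rows", {"Country"}, {{"Count", each Table.RowCount(_), Int64.Type}})
in
    #"Grouped Rows"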

Finally, navigate back to the Tools tab of the Ribbon and select Stop Diagnostics. This
will stop the tracing and build your diagnostics file for you, and the summary and
detailed tables will appear on the left-hand side.

If you trace an entire authoring session, you will generally expect to see something like a
source query evaluation, then evaluations related to the relevant navigator, then at least
one query emitted for each step you apply (with potentially more depending on the
exact UX actions taken). In some connectors, parallel evaluations will happen for
performance reasons that will yield very similar sets of data.

Refresh preview
When you have finished transforming your data, you have a sequence of steps in a
query. When you press 'Refresh Preview' or 'Refresh All' in the Power Query editor, you
won't see just one step in your query diagnostics. The reason for this is that refreshing in
the Power Query Editor explicitly refreshes the query ending with the last step applied,
and then steps back through the applied steps and refreshes for the query up to that
point, back to the source.

This means that if you have five steps in your query, including Source and Navigator,
you will expect to see five different evaluations in your diagnostics. The first one,
chronologically, will often (but not always) take the longest. This is due to two different
reasons:

It may potentially cache input data that the queries run after it (representing earlier
steps in the User Query) can access faster locally.
It may have transforms applied to it that significantly truncate how much data has
to be returned.

Note that 'Refresh All' refreshes all queries, so you'll need to filter to the ones you
care about, as you might expect.

Full refresh
Query Diagnostics can be used to diagnose the so-called 'final query' that is emitted
during the Refresh in Power BI, rather than just the Power Query editor experience. To
do this, you first need to load the data to the model once. If you plan to do this, be
aware that selecting Close and Apply closes the editor window (which interrupts tracing),
so you either need to do it on the second refresh, or select the dropdown icon under
Close and Apply and select Apply instead.

Either way, make sure to select Start Diagnostics on the Diagnostics section of the
Tools tab in the editor. Once you've done this refresh your model, or even just the table
you care about.
Once it's done loading the data to model, select Stop Diagnostics.

You can expect to see some combination of metadata and data queries. Metadata calls
grab the information it can about the data source. Data retrieval is about accessing the
data source, emitting the final built up Data Source Query with folded down operations,
and then performing whatever evaluations are missing on top, locally.

It's important to note that just because you see a resource (database, web endpoint,
etc.) or a data source query in your diagnostics, it doesn't mean that it's necessarily
performing network activity. Power Query may retrieve this information from its cache.
In future updates, we will indicate whether or not information is being retrieved from
the cache for easier diagnosis.

Diagnose step
'Diagnose Step' is more useful for getting an insight into what evaluations are
happening up to a single step, which can help you identify, up to that step, what
performance is like as well as what parts of your query are being performed locally or
remotely.

If you used 'Diagnose Step' on the query we built above, you'll find that it only returns
10 or so rows, and if we look at the last row with a Data Source Query we can get a
pretty good idea of what our final emitted query to the data source will be. In this case,
we can see that Sales Representative was filtered remotely, but the grouping (by process
of elimination) happened locally.
If you start and stop diagnostics and refresh the same query instead, you get about 40
rows, because, as mentioned above, Power Query is getting information on every step, not
just the final step. This makes it harder when you're just trying to get insight into one
particular part of your query.

Additional reading
An introduction to the feature

More about reading and visualizing your recorded traces

How to understand what query operations are folding using Query Diagnostics
Visualizing and Interpreting Query
Diagnostics in Power BI
Article • 02/17/2023

Introduction
Once you've recorded the diagnostics you want to use, the next step is being able to
understand what they say.

It's helpful to have a good understanding of what exactly each column in the query
diagnostics schema means, which we're not going to repeat in this short tutorial. There's
a full write up of that here.

In general, when building visualizations, it's better to use the full detailed table,
because regardless of how many rows it has, what you're probably looking at is some kind
of depiction of how the time spent in different resources adds up, or what the emitted
native query was.

As mentioned in our article on recording the diagnostics, I'm working with the OData
and SQL traces for the same table (or nearly so)—the Customers table from Northwind.
In particular, I'm going to focus on a common ask from our customers, and one of the
easier sets of traces to interpret: a full refresh of the data model.

Building the visualizations


When you're going through traces, there are many ways you can evaluate them. In this
article, we're going to focus on a two-visualization split: one to show the details you
care about, and the other to easily look at time contributions of various factors. For the
first visualization, a table is used. You can pick any fields you like, but the ones
recommended for an easy, high level look at what's going on are:

Id
Start Time
Query
Step
Data Source Query
Exclusive Duration (%)
Row Count
Category
Is User Query
Path

For the second visualization, one choice is to use a Stacked Column Chart. In the 'Axis'
parameter, you might want to use 'Id' or 'Step'. If we're looking at the Refresh, because it
doesn't have anything to do with steps in the Editor itself, we probably just want to look
at 'Id'. For the 'Legend' parameter, you should set 'Category' or 'Operation' (depending
on the granularity you want). For the 'Value', set 'Exclusive Duration' (and make sure it's
not the %, so that you get the raw duration value). Finally, for the Tooltip, set 'Earliest
Start Time'.

Once your visualization is built, make sure you sort by 'Earliest Start Time' ascending so
you can see the order things happen in.

While your exact needs might differ, this combination of charts is a good place to start
for looking at numerous diagnostics files and for a number of purposes.

Interpreting the visualizations


As mentioned above, there's many questions you can try to answer with query
diagnostics, but the two that we see the most often are asking how time is spent, and
asking what the query sent to the source is.

Asking how the time is spent is easy, and will be similar for most connectors. A warning
with query diagnostics, as mentioned elsewhere, is that you'll see drastically different
capabilities depending on the connector. For example, many ODBC based connectors
won't have an accurate recording of what query is sent to the actual back-end system,
as Power Query only sees what it sends to the ODBC driver.
If we want to see how the time is spent, we can just look at the visualizations we built
above.

Now, because the time values for the sample queries we're using here are so small, if we
want to work with how Power BI reports time, it's better to convert the Exclusive
Duration column to 'Seconds' in the Power Query editor. Once we do this conversion, we
can look at our chart and get a decent idea of where time is spent.
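
One hedged way to do that conversion in M, assuming the Exclusive Duration column is of type duration and the previous step is named #"Changed Type" (a made-up step name), is a formula-bar step like this:

Power Query M

= Table.TransformColumns(#"Changed Type", {{"Exclusive Duration", Duration.TotalSeconds, type number}})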

For my OData results, I see in the image that the vast majority of the time was spent
retrieving the data from source—if I select the 'Data Source' item on the legend, it
shows me all of the different operations related to sending a query to the Data Source.

If we perform all the same operations and build similar visualizations, but with the SQL
traces instead of the ODATA ones, we can see how the two data sources compare!

If we select the Data Source table, like with the ODATA diagnostics we can see the first
evaluation (2.3 in this image) emits metadata queries, with the second evaluation
actually retrieving the data we care about. Because we're retrieving small amounts of
data in this case, the data pulled back takes a small amount of time (less than a tenth of
a second for the entire second evaluation to happen, with less than a twentieth of a
second for data retrieval itself), but that won't be true in all cases.

As above, we can select the 'Data Source' category on the legend to see the emitted
queries.

Digging into the data


Looking at paths

When you're looking at this, if it seems like time spent is strange—for example, on the
OData query you might see that there's a Data Source Query with the following value:

Request:
https://services.odata.org/V4/Northwind/Northwind.svc/Customers?
$filter=ContactTitle%20eq%20%27Sales%20Representative%27&$select=CustomerID%
2CCountry HTTP/1.1
Content-Type:
application/json;odata.metadata=minimal;q=1.0,application/json;odata=minimal
metadata;q=0.9,application/atomsvc+xml;q=0.8,application/atom+xml;q=0.8,appl
ication/xml;q=0.7,text/plain;q=0.7

<Content placeholder>

Response:
Content-Type:
application/json;odata.metadata=minimal;q=1.0,application/json;odata=minimal
metadata;q=0.9,application/atomsvc+xml;q=0.8,application/atom+xml;q=0.8,appl
ication/xml;q=0.7,text/plain;q=0.7
Content-Length: 435

<Content placeholder>

This Data Source Query is associated with an operation that only takes up, say, 1% of the
Exclusive Duration. Meanwhile, there's a similar one:

Request:
GET https://services.odata.org/V4/Northwind/Northwind.svc/Customers?
$filter=ContactTitle eq 'Sales Representative'&$select=CustomerID%2CCountry
HTTP/1.1

Response:
https://services.odata.org/V4/Northwind/Northwind.svc/Customers?
$filter=ContactTitle eq 'Sales Representative'&$select=CustomerID%2CCountry
HTTP/1.1 200 OK

This Data Source Query is associated with an operation that takes up nearly 75% of the
Exclusive Duration. If you turn on the Path, you discover the latter is actually a child of
the former. This means that the first query basically added a small amount of time on its
own, with the actual data retrieval being tracked by the 'inner' query.

These are extreme values, but they're within the bounds of what might be seen.
Understanding folding with Query
Diagnostics
Article • 03/31/2023

One of the most common reasons to use Query Diagnostics is to have a better
understanding of what operations were 'pushed down' by Power Query to be performed
by the back-end data source, which is also known as 'folding'. If we want to see what
folded, we can look at the 'most specific' query, or queries, that get sent to the
back-end data source. We can look at this for both ODATA and SQL.

The operation that was described in the article on Recording Diagnostics does
essentially four things:

Connects to the data source
Grabs the customer table
Filters the Contact Title column to 'Sales Representative'
Groups by 'Country'

Since the ODATA connector doesn't currently support folding COUNT() to the endpoint,
and since this endpoint is somewhat limited in its operations as well, we don't expect
that final step to fold. On the other hand, filtering is relatively trivial. This is exactly what
we see if we look at the most specific query emitted above:

Request:
GET https://services.odata.org/V4/Northwind/Northwind.svc/Customers?
$filter=ContactTitle eq 'Sales Representative'&$select=CustomerID%2CCountry
HTTP/1.1

Response:
https://services.odata.org/V4/Northwind/Northwind.svc/Customers?
$filter=ContactTitle eq 'Sales Representative'&$select=CustomerID%2CCountry
HTTP/1.1 200 OK

We can see we're filtering the table for ContactTitle equaling 'Sales Representative', and
we're only returning two columns: Customer ID and Country. Country, of course, is
needed for the grouping operation, which since it isn't being performed by the ODATA
endpoint must be performed locally. We can conclude what folds and doesn't fold here.

Similarly, if we look at the specific and final query emitted in the SQL diagnostics, we see
something slightly different:
select count(1) as [Count]
from
(
    select [_].[Country]
    from [dbo].[Customers] as [_]
    where [_].[ContactTitle] = 'Sales Representative' and [_].[ContactTitle] is not null
) as [rows]
group by [Country]

Here, we can see that Power Query creates a subselection where ContactTitle is filtered
to 'Sales Representative', then groups by Country on this subselection. All of our
operations folded.

Using Query Diagnostics, we can examine what kinds of operations folded. In the future,
we hope to make this capability easier to use.
Why does my query run multiple times?
Article • 08/31/2022

When refreshing in Power Query, there's a lot done behind the scenes to attempt to
give you a smooth user experience, and to execute your queries efficiently and securely.
However, in some cases you might notice that multiple data source requests are being
triggered by Power Query when data is refreshed. Sometimes these requests are normal,
but other times they can be prevented.

When multiple requests occur


The following sections describe a few instances when Power Query can send multiple
requests to a data source.

Connector design
Connectors can make multiple calls to a data source for various reasons, including
metadata, caching of results, pagination, and so on. This behavior is normal and is
designed to work that way.

Multiple queries referencing a single data source


Multiple requests to the same data source can occur if multiple queries pull from that
data source. These requests can happen even in a case where only one query references
the data source. If that query is referenced by one or more other queries, then each
query—along with all the queries it depends on—is evaluated independently.

In a desktop environment, a single refresh of all the tables in the data model is run using
a single shared cache. Caching can reduce the likelihood of multiple requests to the
same data source, since one query can benefit from the same request having already
been run and cached for a different query. Even here, though, you can get multiple
requests either because the data source isn't cached (for example, local CSV files), the
request to the data source is different than a request that was already cached because
of downstream operations (which can alter folding), the cache is too small (which is
relatively unlikely), or because the queries are running at roughly the same time.

In a cloud environment, each query is refreshed using its own separate cache, so a query
can’t benefit from the same request having already been cached for a different query.
Folding
Sometimes Power Query’s folding layer may generate multiple requests to a data
source, based on the operations being performed downstream. In such cases, you might
avoid multiple requests by using Table.Buffer . More information: Buffer your table

Loading to the Power BI Desktop model


In Power BI Desktop, Analysis Services (AS) refreshes data by using two evaluations: one
to fetch the schema—which AS does by asking for zero rows—and one to fetch the
data. If computing the zero-row schema requires fetching the data, then duplicate data
source requests can occur.

Data privacy analysis


Data privacy does its own evaluations of each query to determine whether the queries
are safe to run together. This evaluation can sometimes cause multiple requests to a
data source. A telltale sign that a given request is coming from data privacy analysis is
that it will have a “TOP 1000” condition (although not all data sources support such a
condition). In general, disabling data privacy—assuming that's acceptable—would
eliminate the "TOP 1000" or other data privacy-related requests during refresh. More
information: Disable the data privacy firewall

Background data downloads (also known as “background


analysis”)
Similar to the evaluations performed for data privacy, the Power Query editor by default
will download a preview of the first 1000 rows of each query step. Downloading these
rows helps ensure the data preview is ready to display as soon as a step is selected, but
it can also cause duplicate data source requests. More information: Disable background
analysis

Miscellaneous Power Query editor background tasks


Various Power Query editor background tasks can also trigger extra data source
requests (for example, query folding analysis, column profiling, the automatic refresh of
the 1000-row preview that Power Query triggers after loading results to Excel, and so
on).
Isolating multiple queries
You can isolate instances of multiple queries by turning off specific parts of the query
process to isolate where the duplicate requests are coming from. For example, if you
start:

In the Power Query editor


With the firewall disabled
With background analysis disabled
With column profiling and any other background tasks disabled
[Optional] Doing a Table.Buffer

In this example, you’ll have only a single M evaluation that happens when you refresh
the Power Query editor preview. If the duplicate requests occur at this point, then
they’re somehow inherent in the way the query is authored. If not, and if you enable the
settings above one-by-one, you can then observe at what point the duplicate requests
start occurring.

The following sections describe these steps in more detail.

Set up Power Query editor


You don't need to reconnect or recreate your query, just open the query you want to
test in the Power Query editor. You can duplicate the query in the editor if you don't
want to mess with the existing query.

Disable the data privacy firewall


The next step is to disable the data privacy firewall. This step assumes you aren't
concerned about data leakage between sources, so disabling the data privacy firewall
can be done using the Always ignore Privacy Level settings described in Set Fast
Combine option in Excel or using the Ignore the Privacy levels and potentially
improve performance setting described in Power BI Desktop privacy levels in Power BI
Desktop.

Be sure to undo this step before resuming normal testing.

Disable background analysis


The next step is to disable background analysis. Background analysis is controlled by the
Allow data preview to download in the background setting described in Disable Power
Query background refresh for Power BI. You can also disable this option in Excel.
Buffer your table
Optionally, you can also use Table.Buffer to force all the data to be read, which
imitates what happens during a load. To use Table.Buffer in the Power Query editor:

1. In the Power Query editor formula bar, select the fx button to add a new step.
2. In the formula bar, surround the name of the previous step with
Table.Buffer(<previous step name goes here>). For example, if the previous step was
named Source , the formula bar will display = Source . Edit the step in the formula
bar to say = Table.Buffer(Source) .
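
Taken together, a buffered query might end up looking something like this minimal sketch (the CSV path and step names are made up):

Power Query M

let
    Source = Csv.Document(File.Contents("C:\data\sales.csv"), [Delimiter=","]),
    #"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars=true]),
    // Added via the fx button: forces the full table to be read, imitating a load.
    Buffered = Table.Buffer(#"Promoted Headers")
in
    Buffered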

More information: Table.Buffer

Run the test


To run the test, do a refresh in the Power Query editor.
Using parameters
Article • 12/17/2022

A parameter serves as a way to easily store and manage a value that can be reused.

Parameters give you the flexibility to dynamically change the output of your queries
depending on their value, and can be used for:

Changing the argument values for particular transforms and data source functions.
Inputs in custom functions.

You can easily manage your parameters inside the Manage Parameters window. To get
to the Manage Parameters window, select the Manage Parameters option inside
Manage Parameters in the Home tab.
Creating a parameter
Power Query provides two easy ways to create parameters:
From an existing query: Right-click a query whose value is a simple non-structured
constant, such as a date, text, or number, and then select Convert to Parameter.

You can also convert a parameter to a query by right-clicking the parameter and
then selecting Convert To Query.

Using the Manage Parameters window: Select the New Parameter option from
the dropdown menu of Manage Parameters in the Home tab. Or launch the
Manage Parameters window and select New on the top to create a parameter. Fill
in this form, and then select OK to create a new parameter.
After creating the parameter, you can always go back to the Manage Parameters
window to modify any of your parameters at any moment.

Parameter properties
A parameter stores a value that can be used for transformations in Power Query. Apart
from the name of the parameter and the value that it stores, it also has other properties
that provide metadata to it. The properties of a parameter are:

Name: Provide a name for this parameter that lets you easily recognize and
differentiate it from other parameters you might create.

Description: The description is displayed next to the parameter name when


parameter information is displayed, helping users who are specifying the
parameter value to understand its purpose and its semantics.

Required: The checkbox indicates whether a value for the parameter must be provided by
subsequent users.
Type: Specifies the data type of the parameter. We recommend that you always
set up the data type of your parameter. To learn more about the importance of
data types, go to Data types.

Suggested Values: Provides the user with suggestions to select a value for the
Current Value from the available options:

Any value: The current value can be any manually entered value.

List of values: Provides you with a simple table-like experience so you can
define a list of suggested values that you can later select from for the Current
Value. When this option is selected, a new option called Default Value will be
made available. From here, you can select what should be the default value for
this parameter, which is the default value shown to the user when referencing
the parameter. This value isn't the same as the Current Value, which is the value
that's stored inside the parameter and can be passed as an argument in
transformations. Using the List of values provides a drop-down menu that's
displayed in the Default Value and Current Value fields, where you can pick one
of the values from the suggested list of values.

Note
You can still manually type any value that you want to pass to the
parameter. The list of suggested values only serves as simple suggestions.

Query: Uses a list query (a query whose output is a list) to provide the list of
suggested values that you can later select for the Current Value.

Current Value: The value that's stored in this parameter.

Where to use parameters


A parameter can be used in many different ways, but it's more commonly used in two
scenarios:

Step argument: You can use a parameter as the argument of multiple


transformations driven from the user interface (UI).
Custom Function argument: You can create a new function from a query and
reference parameters as the arguments of your custom function.
In the next sections, you'll see an example for these two scenarios.

Step argument
To enable this feature, first go to the View tab in the Power Query editor and select the
Always allow option in the Parameters group.

For example, the following Orders table contains the OrderID, Units, and Margin fields.

In this example, create a new parameter with the name Minimum Margin with a
Decimal Number type and a Current Value of 0.2.
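
Behind the scenes, a parameter is just an M query whose value carries parameter metadata. In the Advanced Editor, Minimum Margin should look roughly like the following (the exact metadata record can vary by product and version):

Power Query M

0.2 meta [IsParameterQuery = true, Type = "Number", IsParameterQueryRequired = true]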
Go to the Orders query, and in the Margin field select the Greater Than filter option.
In the Filter Rows window, there's a button with a data type for the field selected. Select
the Parameter option from the dropdown menu for this button. From the field selection
right next to the data type button, select the parameter that you want to pass to this
argument. In this case, it's the Minimum Margin parameter.

After you select OK, your table is filtered using the Current Value for your parameter.
If you modify the Current Value of your Minimum Margin parameter to be 0.3, your
orders query gets updated immediately and shows you only the rows where the Margin
is above 30%.
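
The filter step that this gesture generates is an ordinary Table.SelectRows call that references the parameter by name. Assuming the previous step is named #"Changed Type" (a placeholder), it would look roughly like this in the formula bar:

Power Query M

= Table.SelectRows(#"Changed Type", each [Margin] > #"Minimum Margin")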

 Tip

Many transformations in Power Query let you select your parameter from a
dropdown. We recommend that you always look for it and take advantage of what
parameters can offer you.

Custom function argument


With Power Query, you can create a custom function from an existing query with a
simple click. Following the previous example, right-click the Orders query and select
Create Function. This action launches a new Create Function window. In this window,
name your new function and it will tell you the parameters being referenced in your
query. These parameters are used as the parameters for the custom function.
You can name this new function however you want. For demonstration purposes, the
name of this new function is MyFunction. After you select OK, a new group is created in
the Queries pane using the name of your new function. In this group, you'll find the
parameters being used for the function, the query that was used to create the function,
and the function itself.

To test this new function, enter a value, such as 0.4, in the field underneath the
Minimum Margin label. Then select the Invoke button. This creates a new query with
the name Invoked Function, effectively passing the value 0.4 to be used as the
argument for the function and giving you only the rows where the margin is above 40%.
To learn more about how to create custom functions, go to Creating a custom function.
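Conceptually, the function created by Create Function behaves like the following hand-written sketch. The use of the Orders query as the source and the step names are assumptions for illustration only.

let
    MyFunction = (minimumMargin as number) =>
        let
            Source = Orders,
            FilteredRows = Table.SelectRows(Source, each [Margin] > minimumMargin)
        in
            FilteredRows,
    // Equivalent to entering 0.4 in the Minimum Margin field and selecting Invoke
    InvokedFunction = MyFunction(0.4)
in
    InvokedFunction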

Multi-value or list parameters


A new type of parameter, available only in Power Query Online, is the multi-value or list parameter. This section describes how to create a new list parameter and how to use it in your queries.

Following the previous example, change the current value for Minimum Margin from
0.3 to 0.1. The new goal is to create a list parameter that can hold the order numbers of
the orders that you're interested in analyzing. To create the new parameter, go to
Manage Parameters dialog and select New to create a new parameter. Fill in this new
parameter with the following information:

Name: Interesting Orders


Description: A set of order numbers that are interesting for a specific analysis
Required: True
Type: List

After defining these fields, a new grid pops up where you can enter the values that you
want to store for your parameter. In this case, those values are 125, 777, and 999.
7 Note

While this example uses numbers, you can also store other data types in your list,
such as text, dates, datetime, and more. More information: Data types in Power
Query

 Tip

If you want to have more control over what values are used in your list parameter,
you can always create a list with constant values and convert your list query to a
parameter as showcased previously in this article.

With the new Interesting Orders list parameter in place, head back to the Orders
query. Select the auto-filter menu of the OrderID field. Select Number filters > In.
After selecting this option, a new Filter rows dialog box appears. From here, you can
select the list parameter from a drop-down menu.

7 Note

List parameters can work with either the In or Not in options. In lets you filter only
by the values from your list. Not in does exactly the opposite, and tries to filter your
column to get all values that are not equal to the values stored in your parameter.

After selecting OK, you'll be taken back to your query. There, your query has been filtered using the list parameter that you've created, with the result that only the rows where the OrderID was equal to either 125, 777, or 999 were kept.
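The filter step generated by the In option is roughly equivalent to the following M, which uses List.Contains to test each OrderID against the list parameter. The previous step name (#"Changed Type") is only an assumed placeholder.

= Table.SelectRows(#"Changed Type", each List.Contains(#"Interesting Orders", [OrderID]))
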
Error handling
Article • 12/17/2022

Similar to how Excel and the DAX language have an IFERROR function, Power Query has
its own syntax to test and catch errors.

As mentioned in the article on dealing with errors in Power Query, errors can appear
either at the step or cell level. This article will focus on how you can catch and manage
errors based on your own specific logic.

7 Note

To demonstrate this concept, this article will use an Excel Workbook as its data
source. The concepts showcased here apply to all values in Power Query and not
only the ones coming from an Excel Workbook.

The sample data source for this demonstration is an Excel Workbook with the following
table.

This table from an Excel Workbook has Excel errors such as #NULL!, #REF!, and #DIV/0!
in the Standard Rate column. When you import this table into the Power Query editor,
the following image shows how it will look.
Notice how the errors from the Excel workbook are shown with the [Error] value in
each of the cells.

In this article, you'll learn how to replace an error with another value. In addition, you'll
also learn how to catch an error and use it for your own specific logic.

In this case, the goal is to create a new Final Rate column that will use the values from the Standard Rate column. If there are any errors, then it will use the value from the corresponding Special Rate column.

Provide an alternative value when finding errors
In this case, the goal is to create a new Final Rate column in the sample data source that
will use the values from the Standard Rate column. If there are any errors, then it will
use the value from the corresponding Special Rate column.

To create a new custom column, go to the Add column menu and select Custom
column. In the Custom column window, enter the formula try [Standard Rate]
otherwise [Special Rate] . Name this new column Final Rate.
The formula above will try to evaluate the Standard Rate column and will output its
value if no errors are found. If errors are found in the Standard Rate column, then the
output will be the value defined after the otherwise statement, which in this case is the
Special Rate column.

After adding the correct data types to all columns in the table, the following image
shows how the final table looks.

7 Note

As an alternative approach, you can also enter the formula try [Standard Rate]
catch ()=> [Special Rate], which is equivalent to the previous formula, but using
the catch keyword with a function that requires no parameters.

The catch keyword was introduced to Power Query in May of 2022.
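For reference, the custom column step produced by either formula is roughly equivalent to one of the following M expressions, where Source stands for whatever the previous step in your query is named (an assumption for illustration):

= Table.AddColumn(Source, "Final Rate", each try [Standard Rate] otherwise [Special Rate])
= Table.AddColumn(Source, "Final Rate", each try [Standard Rate] catch () => [Special Rate])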

Provide your own conditional error logic


Using the same sample data source as the previous section, the new goal is to create a
new column for the Final Rate. If the value from the Standard Rate exists, then that
value will be used. Otherwise the value from the Special Rate column will be used,
except for the rows with any #REF! error.

7 Note

The sole purpose of excluding the #REF! error is for demonstration purposes. With
the concepts showcased in this article, you can target any fields of your choice from
the error record.

When you select any of the whitespace next to the error value, you get the details pane
at the bottom of the screen. The details pane contains both the error reason,
DataFormat.Error , and the error message, Invalid cell value '#REF!' :

You can only select one cell at a time, so you can effectively only see the error
components of one error value at a time. This is where you'll create a new custom
column and use the try expression.

Use try with custom logic


To create a new custom column, go to the Add column menu and select Custom
column. In the Custom column window, enter the formula try [Standard Rate] . Name
this new column All Errors.
The try expression converts values and errors into a record value that indicates whether
the try expression handled an error or not, as well as the proper value or the error
record.
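A quick way to see the shape of this record is to evaluate a try expression on its own, for example in a blank query. The following sketch uses Number.FromText only as an illustrative expression; the exact error fields depend on the value being evaluated.

= try Number.FromText("5")    // evaluates to a record like [HasError = false, Value = 5]
= try Number.FromText("abc")  // evaluates to a record like [HasError = true, Error = [Reason = ..., Message = ..., Detail = ...]]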

You can expand this newly created column with record values and look at the available
fields to be expanded by selecting the icon next to the column header.
This operation will expose three new fields:

All Errors.HasError—displays whether the value from the Standard Rate column
had an error or not.
All Errors.Value—if the value from the Standard Rate column had no error, this
column will display the value from the Standard Rate column. For values with
errors this field won't be available, and during the expand operation this column
will have null values.
All Errors.Error—if the value from the Standard Rate column had an error, this
column will display the error record for the value from the Standard Rate column.
For values with no errors, this field won't be available, and during the expand
operation this column will have null values.
For further investigation, you can expand the All Errors.Error column to get the three
components of the error record:

Error reason
Error message
Error detail

After doing the expand operation, the All Errors.Error.Message field displays the specific
error message that tells you exactly what Excel error each cell has. The error message is
derived from the Error Message field of the error record.

Now with each error message in a new column, you can create a new conditional
column with the name Final Rate and the following clauses:

If the value in the All Errors.Error.Message column equals null, then the output will be the value from the Standard Rate column.
Else, if the value in the All Errors.Error.Message column doesn't equal Invalid cell value '#REF!'., then the output will be the value from the Special Rate column.
Else, null.
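The conditional column created from these clauses translates to M roughly as follows. The previous step name (#"Expanded All Errors.Error") is only an assumed placeholder.

= Table.AddColumn(#"Expanded All Errors.Error", "Final Rate", each
    if [All Errors.Error.Message] = null then [Standard Rate]
    else if [All Errors.Error.Message] <> "Invalid cell value '#REF!'." then [Special Rate]
    else null)
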
After keeping only the Account, Standard Rate, Special Rate, and Final Rate columns,
and adding the correct data type for each column, the following image demonstrates
what the final table looks like.

Use try and catch with custom logic


Alternatively, you can also create a new custom column using the try and catch
keywords.

try [Standard Rate] catch (r)=> if r[Message] <> "Invalid cell value '#REF!'." then
[Special Rate] else null
More resources
Understanding and working with errors in Power Query
Add a Custom column in Power Query
Add a Conditional column in Power Query
Import data from a database using
native database query
Article • 09/01/2022

Power Query gives you the flexibility to import data from a wide variety of databases that
it supports. It can run native database queries, which can save you the time it takes to
build queries using the Power Query interface. This feature is especially useful for using
complex queries that already exist—and that you might not want to or know how to
rebuild using the Power Query interface.

7 Note

One intent of native database queries is to be non-side effecting. However, Power Query does not guarantee that the query will not affect the database. If you run a
native database query written by another user, you will be prompted to ensure that
you're aware of the queries that will be evaluated with your credentials. For more
information, see Native database query security.

Power Query enables you to specify your native database query in a text box under
Advanced options when connecting to a database. In the example below, you'll import
data from a SQL Server database using a native database query entered in the SQL
statement text box. The procedure is similar in all other databases with native database
query that Power Query supports.

1. Connect to a SQL Server database using Power Query. Select the SQL Server
database option in the connector selection.

2. In the SQL Server database popup window:

a. Specify the Server and Database where you want to import data from using
native database query.

b. Under Advanced options, select the SQL statement field and paste or enter
your native database query, then select OK.
3. If this is the first time you're connecting to this server, you'll see a prompt to select
the authentication mode to connect to the database. Select an appropriate
authentication mode, and continue.

7 Note

If you don't have access to the data source (both Server and Database), you'll
see a prompt to request access to the server and database (if access-request
information is specified in Power BI for the data source).

4. If the connection is established, the result data is returned in the Power Query
Editor.

Shape the data as you prefer, then select Apply & Close to save the changes and
import the data.
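For reference, the step that this procedure generates is roughly equivalent to the following M, which passes the native query through the Query option of Sql.Database. The server, database, and SQL text shown here are only placeholder examples.

let
    Source = Sql.Database(
        "contoso-sql.database.windows.net",
        "AdventureWorks",
        [Query = "SELECT TOP (100) * FROM Sales.SalesOrderHeader"])
in
    Source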

Connectors that support native database queries
The following Power Query connectors support native database queries.

Connector Type of native database query

Amazon Redshift SQL statement


Azure Analysis Services database MDX or DAX query

Azure Database for PostgreSQL SQL statement

Azure Cosmos DB SQL statement

Azure Synapse Analytics SQL statement

Azure SQL database SQL statement

DataWorld.Dataset dwSQL

Dataverse SQL statement

Essbase MDX statement

FHIR FHIR Search

Google BigQuery SQL statement

IBM Db2 database SQL statement

IBM Informix database (Beta) SQL statement

MySQL database SQL statement

ODBC SQL statement

OLE DB SQL statement

Oracle database SQL statement

PostgreSQL SQL statement

SAP HANA database SQL statement

Snowflake SQL statement

SQL Server Analysis Services database MDX or DAX query

SQL Server database SQL statement

TIBCO(R) Data Virtualization (Beta) SQL statement

Vena (Beta) Model Query (MQL)

Limitations and issues


Before using native database query, you should be aware of the limitations and issues that you might encounter.

Query folding
Query folding while using a native database query is limited to only a certain number of
Power Query connectors. For more information, go to Query folding on native queries.

Native database query security


Sometimes, when you attempt to use a query created by another user or through the Advanced Editor or formula bar—essentially any other path outside of the connector dialogs where the native query input box is shown—you may get a message that asks you to approve the native database query.

If you see this message, select Edit Permission. This selection will open the Native
Database Query dialog box. You'll be given an opportunity to either run the native
database query, or cancel the query.
By default, if you run a native database query outside of the connector dialogs, you'll be
prompted each time you run a different query text to ensure that the query text that will
be executed is approved by you.

7 Note

Native database queries that you insert in your get data operation won't ask you
whether you want to run the query or not. They'll just run.

You can turn off the native database query security messages if the native database
query is run in either Power BI Desktop or Excel. To turn off the security messages:

1. If you're using Power BI Desktop, under the File tab, select Options and settings >
Options.

If you're using Excel, under the Data tab, select Get Data > Query Options.

2. Under Global settings, select Security.

3. Clear Require user approval for new native database queries.

4. Select OK.

You can also revoke the approval of any native database queries that you've previously
approved for a given data source in either Power BI Desktop or Excel. To revoke the
approval:

1. If you're using Power BI Desktop, under the File tab, select Options and settings >
Data source settings.

If you're using Excel, under the Data tab, select Get Data > Data Source Settings.

2. In the Data source settings dialog box, select Global permissions. Then select the
data source containing the native database queries whose approval you want to
revoke.

3. Select Edit permissions.

4. In the Edit permissions dialog box, under Native Database Queries, select Revoke
Approvals.
Create Microsoft Power Platform dataflows from queries in Microsoft Excel (Preview)
Article • 02/17/2023

[This topic is pre-release documentation and is subject to change.]

You can create Microsoft Power Platform dataflows from queries in Microsoft Excel
workbooks to take advantage of cloud-powered dataflows refreshing and processing
the data at regular intervals instead of performing these operations manually in Excel.

This article walks you through how to export queries from Excel into a Power Query
template that can then be imported into Power Platform dataflow to create a dataflow.

7 Note

The preview feature for creating Power Query templates from queries is only available to Office Insiders. For more information on the Office Insider program, go to Office Insider .

Overview
Working with large datasets or long-running queries can be cumbersome every time
you have to manually trigger a data refresh in Excel because it takes resources from your
computer to do this, and you have to wait until the computation is done to get the
latest data. Moving these data operations into a Power Platform dataflow is an effective
way to free up your computer's resources and to have the latest data easily available for
you to consume in Excel.

It only takes two quick steps to do this:

1. Exporting queries in Excel to a Power Query template


2. Creating a Power Platform dataflow from the Power Query template

Exporting queries in Excel to a Power Query template
The first step is to create a Power Query template with your queries in Excel.
1. Start the Power Query editor from Data tab > Get Data > Launch Power Query
Editor.

2. Once Power Query loads, select File > Export Template.


3. The template requires basic information such as a name and a description before it
can be saved locally on your computer.

Creating a Power Platform dataflow from the Power Query template
1. Sign in to Power Apps .

2. In the left navigation pane, select Data > Dataflows.

3. From the toolbar, select New dataflow > Import template.

4. Select the Power Query template you created earlier. The dataflow name will
prepopulate with the template name provided. Once you're done with the
dataflow creation screen, select Next to view your queries from Excel in the query
editor.

5. From this point, go through the normal dataflow creation and configuration process so you can further transform your data, set refresh schedules on the dataflow, and perform any other dataflow operations. For more information on how to configure and create Power Platform dataflows, go to Create and use dataflows.

See also
Create and use dataflows in Power Apps
Optimize Power Query when expanding
table columns
Article • 02/17/2023

The simplicity and ease of use that allows Power BI users to quickly gather data and
generate interesting and powerful reports to make intelligent business decisions also
allows users to easily generate poorly performing queries. This often occurs when there
are two tables that are related in the way a foreign key relates SQL tables or SharePoint
lists. (For the record, this issue isn't specific to SQL or SharePoint, and occurs in many
backend data extraction scenarios, especially where schema is fluid and customizable.)
There's also nothing inherently wrong with storing data in separate tables that share a
common key—in fact this is a fundamental tenet of database design and normalization.
But it does imply a better way to expand the relationship.

Consider the following example of a SharePoint customer list.

And the following location list it refers to.


When first connecting to the list, the location shows up as a record.

This top-level data is gathered through a single HTTP call to the SharePoint API
(ignoring the metadata call), which you can see in any web debugger.

When you expand the record, you see the fields joined from the secondary table.
When expanding related rows from one table to another, the default behavior of Power
BI is to generate a call to Table.ExpandTableColumn . You can see this in the generated
formula field. Unfortunately, this method generates an individual call to the second table
for every row in the first table.

This increases the number of HTTP calls by one for each row in the primary list. This may
not seem like a lot in the above example of five or six rows, but in production systems
where SharePoint lists reach hundreds of thousands of rows, this can cause a significant
experience degradation.

When queries reach this bottleneck, the best mitigation is to avoid the call-per-row
behavior by using a classic table join. This ensures that there will be only one call to
retrieve the second table, and the rest of the expansion can occur in memory using the
common key between the two tables. The performance difference can be massive in
some cases.

First, start with the original table, noting the column you want to expand, and ensuring
you have the ID of the item so that you can match it. Typically the foreign key is named
similar to the display name of the column with Id appended. In this example, it's
LocationId.

Second, load the secondary table, making sure to include the Id, which is the foreign
key. Right-click on the Queries panel to create a new query.
Finally, join the two tables using the respective column names that match. You can
typically find this field by first expanding the column, then looking for the matching
columns in the preview.

In this example, you can see that LocationId in the primary list matches Id in the
secondary list. The UI renames this to Location.Id to make the column name unique.
Now let's use this information to merge the tables.

By right-clicking on the query panel and selecting New Query > Combine > Merge
Queries as New, you see a friendly UI to help you combine these two queries.

Select each table from the drop-down to see a preview of the query.
Once you've selected both tables, select the column that joins the tables logically (in this
example, it's LocationId from the primary table and Id from the secondary table). The
dialog will instruct you how many of the rows match using that foreign key. You'll likely
want to use the default join kind (left outer) for this kind of data.
Select OK and you'll see a new query, which is the result of the join. Expanding the
record now doesn't imply additional calls to the backend.

Refreshing this data will result in only two calls to SharePoint—one for the primary list,
and one for the secondary list. The join will be performed in memory, significantly
reducing the number of calls to SharePoint.

This approach can be used for any two tables in Power Query that have a matching foreign key.
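As a sketch, the merge-based approach produces M similar to the following, assuming a primary Customers query with a LocationId column and a secondary Locations query with an Id column. The expanded field names (City, Country) are placeholders for whatever columns you need.

let
    Merged = Table.NestedJoin(Customers, {"LocationId"}, Locations, {"Id"}, "Location", JoinKind.LeftOuter),
    // The expansion now happens in memory, against the single result of the Locations query
    Expanded = Table.ExpandTableColumn(Merged, "Location", {"City", "Country"}, {"Location.City", "Location.Country"})
in
    Expanded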

7 Note
SharePoint user lists and taxonomy are also accessible as tables, and can be joined
in exactly the way described above, provided the user has adequate privileges to
access these lists.
Enabling Microsoft Edge (Chromium)
for OAuth authentication in Power BI
Desktop
Article • 08/31/2022

If you're using OAuth authentication to connect to your data, the OAuth dialog in Power
Query uses the Microsoft Internet Explorer 11 embedded control browser. However,
certain web services, such as QuickBooks Online, Salesforce Reports, and Salesforce
Objects no longer support Internet Explorer 11.

October 2021 Power BI release

7 Note

If you are using an earlier release of Power BI, go to December 2020 Power BI
Release.

As of October of 2021, Power BI Desktop now uses Microsoft Edge WebView2, by default, for OAuth authentication for all connectors. However, you can change the default behavior using environment variables.

To disable the use of WebView2 for specific connectors, set PQ_ExtendEdgeChromiumOAuthDenyList with the name(s) of the connector(s) you want to disable. Multiple connectors are separated by semicolons.

setx PQ_ExtendEdgeChromiumOAuthDenyList MyExtension1;MyExtension2

To disable the use of WebView2, set PQ_DisableEdgeChromiumOAuth to true.

setx PQ_DisableEdgeChromiumOAuth true

December 2020 Power BI release


As of December of 2020, Power BI Desktop uses Microsoft Edge WebView2 for OAuth
authentication with certain connectors. These connectors are:

GitHub
QuickBooks Online
Salesforce Reports
Salesforce Objects
Smartsheet
Twilio
Zendesk

On your Power BI Desktop machine, you can get WebView2 control either by installing
the new Edge (Chromium) browser (at least beta) from
https://www.microsoftedgeinsider.com/download , or by installing the WebView2
redist package.

All other connectors will use Internet Explorer 11 by default unless the settings are
overridden using environment variables.

To enable WebView2 for all connectors, set PQ_EdgeChromiumOAuthAllowListAll to true:

setx PQ_EdgeChromiumOAuthAllowListAll true

To enable WebView2 for specific connectors, set PQ_ExtendEdgeChromiumOAuthAllowList with the name(s) of the connector(s) you want to enable. Multiple connectors are separated by semicolons.

setx PQ_ExtendEdgeChromiumOAuthAllowList MyExtension1;MyExtension2

To disable the use of WebView2, set PQ_DisableEdgeChromiumOAuth to true.

setx PQ_DisableEdgeChromiumOAuth true


Connectors in Power Query
Article • 10/19/2023

The following table contains a list of all the connectors currently available for Power Query.
For those connectors that have a reference page in this document, a link is provided under
the connector icon and name.

A checkmark indicates the connector is currently supported in the listed service; an X indicates that the connector is not currently supported in the listed service.

The connectors are listed in alphabetical order in separate tables for each letter in the alphabet.

7 Note

The Excel column in the following table indicates all connectors that are available on at
least one version of Excel. However, not all Excel versions support all of these indicated
Power Query connectors. For a complete list of the Power Query connectors supported
by all versions of Excel, go to Power Query data sources in Excel versions .

7 Note

There's a known issue resulting in an Expression.Error: The import [connector name] matches no exports error where connectors requiring a gateway aren't working in dataflows and datamarts in Premium workspaces. There's ongoing work towards a fix and the documentation will be updated when a fix is released.

A
Access Database, by Microsoft
Active Directory, by Microsoft
Acterys (Beta), by Acterys
Actian (Beta), by Actian
Adobe Analytics, by Microsoft
Amazon Athena, by Amazon
Amazon OpenSearch Service (Beta), by Amazon
Amazon Redshift, by Microsoft
Anaplan, by Anaplan
appFigures (Beta), by Microsoft
Asana (Beta), by Asana
Assemble Views, by Autodesk
AtScale cubes, by Microsoft
Autodesk Construction Cloud, by Autodesk
Automation Anywhere, by Automation Anywhere
Automy Data Analytics (Beta), by ACEROYALTY
Azure Analysis Services database, by Microsoft
Azure Blob Storage, by Microsoft
Azure Cosmos DB, by Microsoft
Azure Cosmos DB v2 (Beta), by Microsoft
Azure Cost Management, by Microsoft
Azure Database for PostgreSQL, by Microsoft
Azure Databricks, by Databricks
Azure Data Explorer (Kusto), by Microsoft
Azure Data Lake Storage Gen1, by Microsoft
Azure Data Lake Storage Gen2, by Microsoft
Azure DevOps, by Microsoft
Azure DevOps Server, by Microsoft
Azure HDInsight (HDFS), by Microsoft
Azure HDInsight on AKS Trino (Beta), by Microsoft
Azure HDInsight Spark, by Microsoft
Azure Synapse Analytics (SQL DW), by Microsoft
Azure Synapse Analytics workspace (Beta), by Microsoft
Azure SQL database, by Microsoft
Azure Table Storage, by Microsoft
Azure Time Series Insights (Beta), by Microsoft

B
BI Connector, by Guidanz
BitSight Security Ratings, by BitSight
Bloomberg Data and Analytics, by Bloomberg
BQE Core, by BQE

C
CData Connect Cloud, by CData
Celonis EMS (Beta), by Celonis
Cherwell (Beta), by Cherwell
CloudBluePSA (Beta), by CloudBlue PSA
Cognite Data Fusion, by Cognite
Common Data Service (legacy), by Microsoft

D
Data.World - Get Dataset (Beta), by Microsoft
Data Virtuality LDW, by Data Virtuality
Databricks, by Databricks
Dataflows, by Microsoft
Dataverse, by Microsoft
Delta Sharing, by Databricks
Denodo, by Denodo
Digital Construction Works Insights, by Digital Construction Works
Dremio Cloud, by Dremio
Dremio Software, by Dremio
Dynamics 365 Business Central, by Microsoft
Dynamics 365 Business Central (on-premises), by Microsoft
Dynamics 365 Customer Insights (Beta), by Microsoft
Dynamics 365 (Dataverse), by Microsoft
Dynamics 365 Online (legacy), by Microsoft
Dynamics NAV, by Microsoft

E
eWay-CRM, by eWay-CRM
Eduframe Reporting (Beta), by Drieam
Emigo Data Source, by Sagra
Entersoft Business Suite (Beta), by Entersoft
EQuIS, by EarthSoft
Essbase, by Microsoft
Exasol, by Exasol
Excel, by Microsoft

1 Available in dataflows for Microsoft Teams.

F
FactSet Analytics, by FactSet
FactSet RMS (Beta), by FactSet
FHIR, by Microsoft
Folder, by Microsoft
Funnel, by Funnel

G
Github (Beta), by Microsoft
Google Analytics, by Microsoft
Google BigQuery, by Microsoft
Google BigQuery (Azure AD) (Beta), by Microsoft
Google Sheets, by Microsoft

H
Hadoop File (HDFS), by Microsoft
HDInsight Interactive Query, by Microsoft
Hexagon PPM Smart API, by Hexagon PPM
Hive LLAP, by Microsoft

I
IBM DB2 database, by Microsoft
IBM Informix database (Beta), by Microsoft
IBM Netezza, by Microsoft
Impala, by Microsoft
Indexima, by Indexima
Industrial App Store, by Intelligent Plant
Information Grid (Beta), by Luminis
InterSystems IRIS (Beta), by Intersystems
Intune Data Warehouse (Beta), by Microsoft

J
Jamf Pro (Beta), by Jamf
Jethro (Beta), by JethroData
JSON, by Microsoft

1 Available in dataflows for Microsoft Teams.

K
Kognitwin, by Kongsberg
KQL Database, by Microsoft
Kyligence, by Kyligence

L
Linkar PICK Style/MultiValue Databases (Beta), by Kosday Solutions
LinkedIn Sales Navigator (Beta), by Microsoft

M
Marketo (Beta), by Microsoft
MarkLogic, by MarkLogic
MariaDB, by MariaDB
Microsoft Azure Consumption Insights (Beta) (Deprecated), by Microsoft
Microsoft Exchange, by Microsoft
Microsoft Exchange Online, by Microsoft
Microsoft Graph Security (Deprecated), by Microsoft
Microsoft Teams Personal Analytics (Beta), by Microsoft
MicroStrategy for Power BI, by MicroStrategy
Mixpanel (Beta), by Microsoft
MongoDB Atlas SQL interface (Beta), by MongoDB
MySQL database, by Microsoft

O
OData Feed, by Microsoft
ODBC, by Microsoft
OLE DB, by Microsoft
OpenSearch Project (Beta), by OpenSearch
Oracle database, by Microsoft

1 Available in dataflows for Microsoft Teams.

P
Parquet, by Microsoft
Palantir Foundry, by Palantir
Paxata, by Paxata
PDF, by Microsoft
Planview Enterprise Architecture, by Planview
Planview IdeaPlace, by Planview
Planview ORK (Beta)
Planview Portfolios, by Planview
Planview Projectplace, by Planview
PostgreSQL database, by Microsoft
Power BI datasets, by Microsoft
Product Insights (Beta), by Microsoft
Profisee, by Profisee
Python Script, by Microsoft

1 Available in dataflows for Microsoft Teams.
2 The PDF connector isn't supported in Power BI Premium.
3 The Parquet connector isn't supported in the 32-bit version of Power BI Desktop.

Q
QubolePresto (Beta), by Qubole
Quickbase, by Quick Base
Quickbooks Online (Beta), by Microsoft

R
R Script, by Microsoft
Roamler (Beta), by Roamler

S
Salesforce Objects, by Microsoft
Salesforce Reports, by Microsoft
SAP Business Warehouse Application Server, by Microsoft
SAP Business Warehouse Message Server, by Microsoft
SAP HANA database, by Microsoft
SIS-CC SDMX (Beta), by SIS-CC
SharePoint folder, by Microsoft
SharePoint list, by Microsoft
SharePoint Online list, by Microsoft
Shortcuts Business Insights (Beta), by Shortcuts
SingleStore (Beta), by SingleStore
SiteImprove, by SiteImprove
Smartsheet, by Microsoft
Snowflake, by Microsoft
Socialbakers Metrics (Beta), by Emplifi
SoftOneBI (Beta), by SoftOne
SolarWinds Service Desk (Beta), by SolarWinds
Solver, by BI360
Spark, by Microsoft
SparkPost (Beta), by Microsoft
SQL Server Analysis Services database, by Microsoft
SQL Server database, by Microsoft
Starburst Enterprise, by Starburst Data
SumTotal, by SumTotal
SurveyMonkey, by SurveyMonkey
SweetIQ (Beta), by Microsoft
Sybase Database, by Microsoft

1 Available in dataflows for Microsoft Teams.

T
TeamDesk (Beta), by ForeSoft
Tenforce (Smart)List, by Tenforce
Teradata database, by Microsoft
Text/CSV, by Microsoft
TIBCO(R) Data Virtualization, by TIBCO
Twilio (Deprecated) (Beta), by Microsoft

1 Available in dataflows for Microsoft Teams.

U
Usercube (Beta), by Usercube

V
Vena, by Vena
Vertica, by Microsoft
Vessel Insight, by Kongsberg
Viva Insights, by Microsoft

W
Web, by Microsoft
Webtrends Analytics (Beta), by Microsoft
Witivio (Beta), by Witivio
Workforce Dimensions (Beta) (Deprecated), by Kronos

1 Available in dataflows for Microsoft Teams.

X
XML, by Microsoft

1 Available in dataflows for Microsoft Teams.

Z
Zendesk (Beta), by Microsoft
Zoho Creator, by Zoho
Zucchetti HR Infinity (Beta), by Zucchetti

Next steps
Power BI data sources (datasets)
Connect to data sources for Power BI dataflows
Available data sources (Dynamics 365 Customer Insights)
Data sources supported in Azure Analysis Services
Access database
Article • 07/18/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Anonymous


Windows
Basic
Organizational Account

Function Reference Documentation Access.Database

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
If you're connecting to an Access database from Power Query Online, the system that
contains the on-premises data gateway must have the 64-bit version of the Access
Database Engine 2016 OLEDB provider installed.

If you're loading an Access database to Power BI Desktop, the versions of the Access
Database Engine 2016 OLEDB provider and Power BI Desktop on that machine must
match (that is, either 32-bit or 64-bit). For more information, go to Import Access
database to Power BI Desktop.
Capabilities Supported
Import

Connect to an Access database from Power


Query Desktop
To make the connection from Power Query desktop:

1. Select the Access database option in the connector selection.

2. Browse for and select the Access database you want to load. Then select Open.

If the Access database is online, use the Web connector to connect to the
database.

3. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in Power Query
Editor.
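Behind the Navigator, the query is based on the Access.Database function listed in the summary above. A minimal sketch, assuming a hypothetical local file path, looks like this:

let
    // Access.Database takes the binary contents of the .accdb or .mdb file
    Source = Access.Database(File.Contents("C:\Samples\Northwind.accdb"))
in
    Source

Source returns a navigation table from which you can select the individual tables and queries shown in Navigator.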

Connect to an Access database from Power


Query Online
To make the connection from Power Query Online:

1. Select the Access database option in the connector selection.

2. In the Access database dialog that appears, provide the path to the Access
database.

3. Enter the file path or URL address to the Access database.

4. Select the name of your on-premises data gateway.


7 Note

You must select an on-premises data gateway for this connector, whether the
Access database is on your local network or on a web site.

5. Select the type of credentials for the connection to the Access database in
Authentication kind.

6. Enter your credentials.

7. Select Next to continue.

8. In Navigator, select the data you require, and then select Transform data to
continue transforming the data in Power Query Editor.

Troubleshooting

Connect to local file from Power Query Online


When you attempt to connect to a local Access database using Power Query Online, you
must select an on-premises data gateway, even if your Access database is online.

On-premises data gateway error


A 64-bit version of the Access Database Engine 2016 OLEDB provider must be installed
on your on-premises data gateway machine to be able to load Access database files. If
you already have a 64-bit version of Microsoft Office installed on the same machine as
the gateway, the Access Database Engine 2016 OLEDB provider is already installed. If
not, you can download the driver from the following location:
Access Database Engine 2016 OLEDB provider

7 Note

Microsoft Office has stopped supporting the Access Database Engine 2010 OLEDB
provider as part of end-of-life for Office 2010. However, some legacy use cases,
such as using 32-bit Office and 64-bit PBI Desktop, might require the continued use
of the older 2010 version. In these cases, you can still download the 2010 version
from the following location:

Access Database Engine 2010 OLEDB provider

Import Access database to Power BI Desktop


In some cases, you might get a The 'Microsoft.ACE.OLEDB.12.0' provider is not
registered error when attempting to import an Access database file to Power BI

Desktop. This error can be caused by using mismatched bit versions of Power BI
Desktop and the Access Database Engine 2016 OLEDB provider. For more information
about how you can fix this mismatch, go to Troubleshoot importing Access and Excel .xls
files in Power BI Desktop.
Active Directory
Article • 07/18/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Analysis Services

Authentication Types Supported Windows

Function Reference Documentation ActiveDirectory.Domains

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities Supported
Import

Prerequisites
To connect to Active Directory in Power BI (Dataflows) you'll need an on-premises
data gateway.

Connect to Active Directory from Power Query


Desktop
1. Select the Active Directory option under Other in the Get Data selection, and then
select Connect. More information: Where to get data
2. In the Active Directory dialog box, provide the domain for your Active Directory.

3. You can choose to use your current Windows credentials or enter alternate credentials. Then select Connect.

 Tip

You may need to add the domain suffix to your username. For example:
domain\username.
4. In Navigator, review and/or select data from your database. Then select OK.

7 Note

The navigator doesn't load data for the configuration partition. More
information: Limitations and issues
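The query that this procedure creates is based on the ActiveDirectory.Domains function listed in the summary above. A minimal sketch, using a placeholder domain name, looks like this:

let
    // Returns the domains available in the specified forest
    Source = ActiveDirectory.Domains("contoso.com")
in
    Source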

Connect to Active Directory from Power Query


Online
1. Select the Active Directory option under Other in the Choose data source
selection, and then select Connect. More information: Where to get data

2. In Connect to data source, enter your domain, select connection credentials if you've connected before or select Create new connection, and select your on-premises data gateway.

3. Only Windows authentication is available. Add your username in the format domain\alias, and add your password. Select Next.
4. In Navigator, review and/or select data from your database. Then select OK.

7 Note

The navigator doesn't load data for the configuration partition. More
information: Limitations and issues

Limitations and issues

Some objects in the Navigator return empty tables


The Power Query Active Directory connector supports search in the Schema and Domain
partitions, but not in the Configuration partition. This limitation is the current design. So
if there are any objects that reside in the Configuration partition, Power Query doesn't
load data for those objects into the navigator.
Adobe Analytics
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Power BI (datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Customer Insights (Dataflows)

Authentication Types Supported Organizational account

Function Reference Documentation AdobeAnalytics.Cubes

Prerequisites
Before you can sign in to Adobe Analytics, you must have an Adobe Analytics account
(username/password).

Capabilities Supported
Import

Connect to Adobe Analytics data


To connect to Adobe Analytics data:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Online Services
from the categories on the left, select Adobe Analytics, and then select Connect.
2. If this is the first time you're getting data through the Adobe Analytics connector, a
third-party notice will be displayed. Select Don't warn me again with this
connector if you don't want this message to be displayed again, and then select
Continue.

3. To sign in to your Adobe Analytics account, select Sign in.


4. In the Adobe Analytics window that appears, provide your credentials to sign in to
your Adobe Analytics account. You can either supply a username (which is usually
an email address), or select Continue with Google or Continue with Facebook.

If you entered an email address, select Continue.

5. Enter your Adobe Analytics password and select Continue.


6. Once you've successfully signed in, select Connect.

Once the connection is established, you can preview and select multiple dimensions and
measures within the Navigator dialog box to create a single tabular output.
You can also provide any optional input parameters required for the selected items. For
more information about these parameters, see Optional input parameters.

You can Load the selected table, which brings the entire table into Power BI Desktop, or
you can select Transform Data to edit the query, which opens Power Query Editor. You
can then filter and refine the set of data you want to use, and then load that refined set
of data into Power BI Desktop.

Optional input parameters


When you've selected the Adobe Analytics data you want to load or transform in the
Power Query Navigator dialog box, you can also limit the amount of data by selecting a
set of optional input parameters.
These input parameters are:

Date Range—filter with a reporting range between a start date and an end date
that you set.

Segment—filter the data based on all segments contained in the data, or only
those segments you select. To change the list of segments, select the ellipsis to the
right of the Segment list box, then choose the segments you want. By default, all
segments are included in the data.
Top—filter the data based on the top items for the dimension. You can enter a
value in the Top text box, or select the ellipsis next to the text box to select some
default values. By default, all items are selected.

Dimension—filter the data based on the selected dimension. By default, all dimensions are selected. Custom Adobe dimension filters are not currently
supported in the Power Query user interface, but can be defined by hand as M
parameters in the query. For more information, see Using Query Parameters in
Power BI Desktop.

Limitations and issues


You should be aware of the following limitations and issues associated with accessing
Adobe Analytics data.

Adobe Analytics has a built-in limit of 50 K rows returned per API call.

If the number of API calls exceeds four per second, a warning will be issued. If the
number exceeds five per second, an error message will be returned. For more
information about these limits and the associated messages, see Web Services
Error Codes .

The API request timeout through adobe.io is currently 60 seconds.

The default rate limit for an Adobe Analytics Company is 120 requests per minute
per user (the limit is enforced as 12 requests every 6 seconds).

Import from Adobe Analytics will stop and display an error message whenever the
Adobe Analytics connector hits any of the API limits listed above.

When accessing your data using the Adobe Analytics connector, follow the guidelines
provided under the Best Practices heading.

For additional guidelines on accessing Adobe Analytics data, see Recommended usage
guidelines .

Next steps
You may also find the following Adobe Analytics information useful:

Adobe Analytics 1.4 APIs


Adobe Analytics Reporting API
Metrics
Elements
Segments
GetReportSuites
Adobe Analytics support
Amazon Athena
Article • 07/13/2023

7 Note

The following connector article is provided by Amazon, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the Amazon website and use the support
channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported DSN configuration


Organizational account

Prerequisites
An Amazon Web Services (AWS) account
Permissions to use Athena
Customers must install the Amazon Athena ODBC driver before using the
connector

Capabilities supported
Import
DirectQuery (Power BI Datasets)

Connect to Amazon Athena


To connect to Athena data:
1. Launch Power BI Desktop.

2. In the Home tab, select Get Data.

3. In the search box, enter Athena.

4. Select Amazon Athena, and then select Connect.

5. On the Amazon Athena connection page, enter the following information:

For DSN, enter the name of the ODBC DSN that you want to use. For
instructions on configuring your DSN, go to the ODBC driver
documentation .
For Data Connectivity mode, choose a mode that's appropriate for your use
case, following these general guidelines:
For smaller datasets, choose Import. When using import mode, Power BI
works with Athena to import the contents of the entire dataset for use in
your visualizations.
For larger datasets, choose DirectQuery. In DirectQuery mode, no data is
downloaded to your workstation. While you create or interact with a
visualization, Microsoft Power BI works with Athena to dynamically query
the underlying data source so that you're always viewing current data.
More information: Use DirectQuery in Power BI Desktop
6. Select OK.

7. At the prompt to configure data source authentication, select either Use Data
Source Configuration or AAD Authentication. Enter any required sign-in
information. Then select Connect.

Your data catalog, databases, and tables appear in the Navigator dialog box.
8. In the Display Options pane, select the check box for the dataset that you want to
use.

9. If you want to transform the dataset before you import it, go to the bottom of the
dialog box and select Transform Data. This selection opens the Power Query Editor
so that you can filter and refine the set of data you want to use.

10. Otherwise, select Load. After the load is complete, you can create visualizations like
the one in the following image. If you selected DirectQuery, Power BI issues a
query to Athena for the visualization that you requested.
Amazon OpenSearch Service (Beta)
Article • 07/18/2023

7 Note

The following connector article is provided by Amazon, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the OpenSearch website and use the support
channels there.

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Basic

Prerequisites
Microsoft Power BI Desktop
OpenSearch
OpenSearch SQL ODBC driver

Capabilities supported
Import
DirectQuery (Power BI Datasets)

Connect to OpenSearch Project


1. Open Power BI Desktop.
2. Select Home > Get Data > More > Other, select OpenSearch Project, and then
select Connect.

3. You'll get a warning for using a third-party service. Select Continue.

4. Enter host and port values and select your preferred SSL option. Then select OK.

5. Select an authentication option. Enter credentials if necessary, and then select


Connect.

6. Select the required table. Data preview will then be loaded.

7. Select Load.

8. Select the required columns for creating a graph.

Troubleshooting
If you get an error indicating the driver wasn't installed, install the OpenSearch SQL
ODBC Driver .

If you get a connection error:

1. Check if the host and port values are correct.


2. Check if the authentication credentials are correct.
3. Check if the server is running.
Amazon Redshift
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights

Authentication Types Supported Amazon Redshift


Basic
Microsoft account
Organizational account

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
An Amazon Web Services (AWS) account

Capabilities supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Provider name
Batch size
SQL statement
Connect to Amazon Redshift data from Power
Query Desktop
To connect to Amazon Redshift data:

1. Select the Amazon Redshift option in the Get Data selection.

2. In Server, enter the server name where your data is located. As part of the Server
field, you can also specify a port in the following format: ServerURL:Port. In
Database, enter the name of the Amazon Redshift database you want to access. In
this example, contoso.redshift.amazonaws.com:5439 is the server name and port
number, dev is the database name, and Data Connectivity mode is set to Import.

You can also choose some optional advanced options for your connection. More
information: Connect using advanced options

After you have finished filling in and selecting all the options you need, select OK.

3. If this is the first time you're connecting to this database, enter your credentials in
the User name and Password boxes of the Amazon Redshift authentication type.
Then select Connect.
More information: Authentication with a data source

4. Once you successfully connect, a Navigator window appears and displays the data
available on the server. Choose one or more of the elements you want to import.

5. Once you've selected the elements you want, then either select Load to load the
data or Transform Data to continue transforming the data in Power Query Editor.

6. Select either the Import or DirectQuery data connectivity mode, and then select
OK.
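The connection that these steps create is based on the AmazonRedshift.Database function. A minimal sketch using the server and database names from this example looks like the following; advanced options are omitted here.

let
    // Server name includes the optional port, as described in step 2
    Source = AmazonRedshift.Database("contoso.redshift.amazonaws.com:5439", "dev")
in
    Source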

Connect to Amazon Redshift data from Power


Query Online
To connect to Amazon Redshift data:

1. Select the Amazon Redshift option in the Power Query - Choose data source
page.

2. In Server, enter the server name where your data is located. As part of the Server
field, you can also specify a port in the following format: ServerURL:Port. In
Database, enter the name of the Amazon Redshift database you want to access. In
this example, contoso.redshift.amazonaws.com:5439 is the server name and port
number, and dev is the database name.

You can also choose some optional advanced options for your connection. More
information: Connect using advanced options

3. If needed, select the on-premises data gateway in Data gateway.

4. Select the type of authentication you want to use in Authentication kind, and then
enter your credentials.

5. Select or clear Use Encrypted Connection depending on whether you want to use
an encrypted connection or not.

6. Select Next to continue.

7. In Navigator, select the data you require, and then select Transform data. This
selection opens the Power Query Editor so that you can filter and refine the set of
data you want to use.

Connect using advanced options


Power Query provides a set of advanced options that you can add to your query if
needed.

The following table describes all of the advanced options you can set in Power Query.

Provider Name: Provides an Amazon Resource Name (ARN), which uniquely identifies AWS resources.

Batch size: Specifies the maximum number of rows to retrieve at a time from the server when fetching data. A small number translates into more calls to the server when retrieving a large dataset. A large number of rows may improve performance, but could cause high memory usage. The default value is 100 rows.

SQL Statement: For information, go to Import data from a database using native database query. This option is only available in Power BI Desktop.

Enable Azure AD Single Sign-On (SSO) for Amazon Redshift
We support Azure AD SSO through both Power BI Service (cloud) and also through the
on-premises data gateway. For more information about enabling Azure AD SSO for all
connectors, go to Overview of single sign-on (SSO) for on-premises data gateways in
Power BI.

Azure AD Single Sign-On (SSO) through Power BI service


To configure a new connection in Power BI service:

1. In Power BI service, select Admin portal from the settings list.

2. Enable the Redshift SSO option.


Azure AD Single Sign-On (SSO) for Amazon Redshift with
an on-premises data gateway
Before you can enable Azure AD SSO for Amazon Redshift, you must first enable Azure
AD SSO for all data sources that support Azure AD SSO with an on-premises data
gateway:

1. In Power BI service, select Admin portal from the settings list.

2. Under Tenant settings, enable Azure AD Single-Sign On (SSO) for Gateway.


Once you've enabled Azure AD SSO for all data sources, then enable Azure AD SSO for
Amazon Redshift:

1. Enable the Redshift SSO option.

2. Select Manage gateways from the settings list.


3. Select a gateway, and then select Choose Data Source.

4. Under the Data Source Settings tab, enter a value in Provider Name. The Provider
Name parameter is required when using Azure AD and needs to be specified in
Advanced settings.

Also select Use SSO via Azure AD for DirectQuery queries.


Anaplan
Article • 07/13/2023

7 Note

The following connector article is provided by Anaplan, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the Anaplan website and use the support
channels there.

Summary
Release State: General Availability

Products: Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2)

Authentication Types Supported: Basic, Organizational account

Prerequisites
There are system requirements to verify before you install Microsoft Power BI
Desktop.

Capabilities supported
The Anaplan Connector enables access to your Anaplan model exports. It also:

Runs using Anaplan public data integration APIs.


Loads export actions saved in your Anaplan models, directly into Power BI.
Displays Anaplan models (those models not archived) in your default tenant.
Creates datasets in Power BI Desktop and exports or refreshes them to the Power
BI service (cloud).
Connect to Anaplan from Power BI Desktop

Get data
To access your saved export from Power BI desktop:

1. Select Get data from the ribbon menu, which displays the Common data sources
menu.

2. Select More from the menu.

3. In Get Data, enter Anaplan into the search field, which displays the Anaplan
Connector.

4. Select Anaplan Connector.

5. Select Connect.

Configure the connector


Configure the connector once you've downloaded it. This step allows you to connect
using Anaplan APIs.

The connector uses either basic authentication (user ID, password) or the Anaplan-
configured IDP for logging into Anaplan. To use the second method, you must be
designated as an Anaplan Single-Sign On (SSO) user in your Anaplan model. You can set
your own configuration.

The following image shows the Anaplan Connection Configuration dialog.


1. Enter the API and authentication (auth) URLs.

a. If you choose Basic auth, enter the following URLs, and then select OK.

Anaplan API URL: https://api.anaplan.com

Anaplan Auth URL: https://auth.anaplan.com

b. If you prefer the Anaplan configured IDP for logging into Anaplan, enter the
following URLs, and then select OK.

Anaplan API URL: https://api.anaplan.com

Anaplan Auth URL: https://us1a.app.anaplan.com

c. Select OK.

2. From the next Anaplan dialog, choose either Basic or Organizational account
(which triggers Anaplan-configured IDP).

Authenticate
You've chosen either basic authentication or Anaplan-configured IDP.

1. If you chose Basic authentication, from the Anaplan dialog:


a. Enter your User name and Password.

b. Select Connect.

c. Proceed to the next step, Get data.

2. If you chose Organizational account (Anaplan-configured IDP), from the Anaplan dialog:

a. Select Sign in.

b. From Welcome to Anaplan, select Log in with Single Sign-on (SSO).


c. On your company's SSO dialog, enter your user name and password.

d. Select Sign In.

7 Note

Your company's Anaplan-configured IDP service generates this dialog. A common ID management service is Okta.

e. Select Connect on the next Anaplan dialog.


Run a Power BI export action
The Anaplan connector for Power BI uses exports to download data from your Anaplan
model. Make sure you have an export action set. The navigator step is where you load
and run these exports.

When you run an export action:

Only exports that output .csv and .txt files are supported.
If you don't see the export action in the Power BI connector, check your model role
and the export actions in your model.

To run an export action, use the Navigator dialog to locate your export.

1. Search your Anaplan models to find and set your export.

Select the white chevron, which expands to a drop-down list.


You can also locate your model name using the search field.

2. Check the box next to ƒx Run Export Action to select your export.

When you select ƒx Run Export Action, this selection doesn't trigger the
export run. Instead this selection downloads the last version of the exported
Anaplan data for preview.
A preview displays in the right panel. If the Anaplan export is set to Admins
only, model users might see a blank preview, but the export will run as
normal.
You'll see the preview the next time you set an integration with the same
export.

3. Select Load, which starts the export. The Load dialog displays.

Your data then loads.

Manage your data


Once your data loads into Power BI Desktop, you can perform data visualizations.
How to make the most of data visualization helps you to select Columns, apply Filters,
perform Visualizations, and Publish.

Publish a report to Power BI service


The publish step sends the report from Power BI Desktop to Power BI service.

More information: Create reports in Power BI. You need a report to begin.

To publish a report to Power BI service, select Publish from the Power BI Desktop report
dialog.


The report is now in Power BI service. Sign in to Power BI service to see the report.

Schedule a refresh (Power BI service)


Once you've created a dataset in Power BI service, you can schedule a data refresh.

First, create a report in the Power BI Desktop. More information: Create reports in Power
BI.

To refresh your dataset:

1. Select Add Data Source (top left of dialog).

2. Select from the Data Source Type and Data Source Information dropdowns.

3. Select Apply.

This image displays an example dataset entitled New data source.

From the Anaplan dialog:

1. Select OAuth2 from the Authentication method dropdown menu.

2. Select Sign in.


Once you've created a scheduled refresh in Power BI service:

If your scheduled refresh frequency is more than 15 days, you must reenter your
sign-in credentials before the end of the 15th day. If you don't, you need to
authenticate anew.
We recommend a refresh frequency that's less than every 15 days.
Apart from data refreshes, you need to reenter your sign-in credentials every 90
days.

Limits, considerations, and troubleshooting


You may get error messages. Here are some common scenarios and steps for resolution.

If you do get an error message, select the refresh icon. This refresh resolves the error in
most cases.

Credential error in the Navigator


We couldn't authenticate with the credentials provided, please try again.

To resolve such an error:

1. Select Refresh (top right).


2. Select File.

3. Select Options and settings.

4. Select Data source settings.

5. Select the permission Anaplan Connector from the list.

6. Select Clear Permissions.

7. Select Delete.

These steps remove expired Anaplan API tokens. You must reauthenticate to proceed.

Credential error in the Power Query editor


You might experience a credential error in the Power Query editor when you reopen a
Power BI report or when you edit a prior data set.

To resolve this error, select either Close & Apply or Refresh Preview.

Your data then refreshes.

Integration list, display error


Solution: If you don't see your complete list of integrations after you switch your
Workspace, you can:

Select the refresh button next to Integrations.


Refresh your browser.

You can also:

1. Select Recent sources.

2. Select More.

3. Right-click on the connection.

4. Select Remove from list.


5. Connect to the export again. Your data then refreshes.

Power Query expression error


Expression.Error: The key didn't match any rows in the table.

Select Refresh Preview.


The first 1,000 rows of data then displays.

More information: Power Query M expression functions

Large file download error


If you go beyond the 5 gigabytes (GB) size limit, you'll get the following error.

Anaplan connector supports file downloads up to 5 GB in size.

Return to your Anaplan model, decrease the size of your file, and try again.
Assemble Views
Article • 07/13/2023

7 Note

The following connector article is provided by Autodesk, the owner of this connector and a member of the Microsoft Power Query Connector Certification Program. If you have questions regarding the content of this article or have changes you would like to see made to this article, visit the Autodesk website and use the support channels there.

Summary
Release State: General Availability

Products: Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2)

Authentication Types Supported: Autodesk Account

Function Reference Documentation: —

Prerequisites
To use the Assemble Views connector, you must have an Autodesk account with a
username and password, and be a member of at least one project in Assemble.

You'll also need at least one view associated with the Assemble project.

Capabilities supported
Import

Connect to Assemble Views from Power Query Desktop
To connect to Assemble data:

1. Select Assemble Views from the Get Data experience under the Online Services
category, and then select Connect.

2. In Assemble Views, enter your site's URL to sign in. For example, use
https://example.tryassemble.com .

a. (Optional) Select a date from which you want to load the data. Leaving this entry
blank results in the latest data being pulled each time you refresh.

b. Once you've entered the URL, select OK to continue.

3. Select Sign in to sign in to your Autodesk account.

Once you've successfully signed in, select Connect.


4. In the Navigator dialog box, search for and expand the project that you want to
load. For each project, there's a folder for Models and Views.

Models (New!) - fetches select properties from any or all versions of a model.

Views - fetches all visible columns in a previously saved view.

Loading data from Models


1. Expand the Models folder. Select each of the models you want to include. The
settings outlined in the following steps must be completed for each selected
model.

2. Select the versions you wish to load:

Active version only – Loads only the active version of the model.
All versions – Loads all versions of the model.
All except active version – Loads all previous versions of the model without
loading the active version (intended for advanced workflows when previous
version data only needs to be loaded once and not included in a refresh).
Specific versions – Loads all specific versions of the model that are selected
(specific versions will be selected in the Version Name and Number drop
down).

3. Select the properties you want to fetch by using the search filter or scrolling. By
default, Model ID, Model Name, Version ID, Version Name, and Version Number
will always be included in the result.

7 Note

When using the search bar, be sure to clear the selection and select all
properties before selecting OK, or previous selections will be overwritten.

4. If using "Specific versions", select the versions you want to load in the Version
Name and Number (optional) dropdown, then select Apply. Selections in this
dropdown will be ignored if any of the other "Load model data" settings are
selected.

5. Once the data preview has been displayed, you can either select Transform Data to
go to the Power Query editor, or Load to go straight to the dashboard.

6. If you want to load multiple models at once, be sure to select Apply after setting
up each model per the aforementioned steps.
Loading data from Views
1. Expand the Views folder. Select the view you want to include. Additionally select
[Your Project] View Thumbnails if you want to include images in your report.
Select Transform Data to continue to Power Query.

2. In Power Query, you'll see a single column named Rows. On the header of the
column, select the button with two arrows pointing in opposite directions to
expand your rows.

a. Uncheck Use original column name as prefix and select OK for each view data
query you've selected.
b. Select Close & Apply to load the datasets.

3. (Optional) If you have chosen to load images, you'll need to update the Data
category for the image field.

a. Expand the [Your Project] View Thumbnails table, and then select the Image
field. This selection opens the Column tools tab.

b. Open the Data category drop-down and select Image URL. You can now drag
and drop the Image field into your report visuals.
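
For reference, the expand operation described in step 2 of "Loading data from Views" produces an M step along the lines of the following sketch. The nested column names here are hypothetical stand-ins, and depending on the view the generated step may use Table.ExpandTableColumn or Table.ExpandRecordColumn; check the Advanced Editor to see the exact step the UI created.

Power Query M

let
    // Stand-in for a view query returned by the connector: a single "Rows" column
    // that holds nested tables (the inner column names are hypothetical)
    Source = #table(
        {"Rows"},
        {{#table({"Element", "Quantity"}, {{"Wall", 4}, {"Door", 2}})}}
    ),
    // Supplying the new column names without a prefix mirrors clearing
    // "Use original column name as prefix" in step 2a
    #"Expanded Rows" = Table.ExpandTableColumn(Source, "Rows", {"Element", "Quantity"}, {"Element", "Quantity"})
in
    #"Expanded Rows"
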
Known issues and limitations
Views with greater than 100,000 rows may not load depending on the number of
fields included in the view. To avoid this limitation, we suggest breaking large
views into multiple smaller views and appending the queries in your report, or
creating relationships in your data model.

The view images feature currently only supports thumbnail sized images because
of a row size limitation in Power BI.

When creating a query using Models data, a maximum of 200 properties can be
selected.
Autodesk Construction Cloud
Article • 07/13/2023

7 Note

The following connector article is provided by Autodesk, the owner of this connector and a member of the Microsoft Power Query Connector Certification Program. If you have questions regarding the content of this article or have changes you would like to see made to this article, visit the Autodesk website and use the support channels there.

Summary
Release State: General Availability

Products: Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2)

Authentication Types Supported: Autodesk account

Function Reference Documentation: —

Prerequisites
To use the Autodesk Construction Cloud connector, you must have an Autodesk account
with a username and password and have access to the Executive Overview in a BIM360
or an ACC Account. You also need to run a Data Connector extraction manually or have
the extractions scheduled to run in order to use this connector. The Connector pulls
from the last run extract.

Capabilities Supported
Import
Supports US and EU Autodesk Construction Cloud servers

Connect to Autodesk Construction Cloud data


To connect to Autodesk Construction Cloud data:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Online Services
from the categories on the left, select Autodesk Construction Cloud, and then
select Connect.

2. If this is the first time you're getting data through the Autodesk Construction
Cloud connector, a preview connector notice will be displayed. Select Don't warn
me again with this connector if you don't want this message to be displayed
again, and then select Continue.

3. Select the Autodesk server region (US or EU), and then select OK.

4. To sign in to your Autodesk account, select Sign in.

5. In the Autodesk window that appears, provide your credentials to sign in to your
Autodesk account.
6. Once you've successfully signed in, select Connect.

7. In Navigator, expand the Account you want to pull data from, which will display an
Account Extract folder and a Project Extracts folder. Account Extract will contain
the data extract of the most recent account level extract if you have proper access
and have run an account level data extract. Project Extracts will contain a listing of
each project in the account you have access to, which you can then expand to see
the relevant tables in that specific project's extracts if you have run a data
extraction.
8. Once you navigate to the desired Account or Project extract, select the desired
tables, and then either select Load to load the data or Transform Data to continue
transforming the data in the Power Query editor.
Connect using Autodesk provided Power BI
Templates
Download the latest Power BI Templates here:
https://construction.autodesk.com/templates/power-bi/ .
Only templates with "...(Connector).." in the file name are set up to work with this
connector.

To use the Power BI Templates:

1. Open a template file.

2. Provide your ACC Account name and select the server region.
7 Note

The Account Name is the name of the ACC account you want to connect to,
not your user account name. You can find the Account name on the Account
Admin portal just to the right of the Account Admin drop down or under
Settings. The Account name also appears on the Insight (Account) page just
to the right of the Insight dropdown in the upper left hand corner.

3. If prompted, follow steps 4 through 6 in the previous procedure to sign-in and


connect.

Limitations and issues


You should be aware of the following limitations and issues associated with accessing
Autodesk data.
Autodesk returns a 404 error status if you don't have any request in the last 30
days.
This connector pulls data from the last run Data Connector extract and not the live
data from the account.
Automy Data Analytics (Beta)
Article • 07/13/2023

7 Note

The following connector article is provided by ACEROYALTY, the owner of this connector and a member of the Microsoft Power Query Connector Certification Program. If you have questions regarding the content of this article or have changes you would like to see made to this article, visit the ACEROYALTY website and use the support channels there.

Summary
Release State: Beta

Products: Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2)

Authentication Types Supported: Automy Report Token

Prerequisites
Before you can sign in to Automy Data Analytics, you must have an Automy Report
Token.

Capabilities Supported
Import

Connect to Automy Data Analytics data


To connect to Automy Data Analytics data:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Online Services
from the categories on the left, select Automy Data Analytics, and then select
Connect.
2. If this is the first time you're connecting to the Automy Data Analytics connector, a
third-party notice will be displayed. Select Don't warn me again with this
connector if you don't want this message to be displayed again, and then select
Continue.

3. Sign in to the connector with API Key to verify your access to Automy.
Once you've succeeded, select Connect.

4. In the Automy Data Analytics window that appears, select the correct parameters
to prepare the connection. Select the type of report and data type, complete
the token information, and then select OK.

7 Note

You can generate an authentication token for reports using the configuration
option in Automy.

5. In the Navigator dialog box, select the Automy tables you want. You can then
either load or transform the data.

If you’re selecting functions, be sure to select Transform Data so that you can add
parameters to the functions you’ve selected. More information: Using parameters
Limitations and issues
Users should be aware of the following limitations and issues associated with accessing
Automy Data Analytics data.

Automy Data Analytics has a built-in limit of 100,000 rows returned per
connection.

The default rate limit for an Automy Data Analytics Company is 120 requests per
minute per user.

Import from Automy Data Analytics will stop and display an error message whenever the
Automy Data Analytics connector reaches any of the limits listed above.

For more guidelines on accessing Automy Data Analytics, contact [email protected].

Credential error in the Navigator


If a credential error occurs in the Navigator, clear your recent data source settings.

1. In Power BI Desktop, select File > Data source settings.


2. Select the data source, and then select Clear permissions. Then establish the
connection again.
Azure Analysis Services database
Article • 07/13/2023

Summary
Release State: General Availability

Products: Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2), Power Apps (Dataflows), Dynamics 365 Customer Insights

Authentication Types Supported: Basic, Microsoft account, Organizational account, Windows

M Function Reference: AnalysisServices.Database, AnalysisServices.Databases

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
An Azure subscription. Go to Get Azure free trial .

An Azure Analysis Services server. Follow the instructions at Quickstart: Create a server - Portal for creating the server using the Azure portal. Other articles in this section also include information on creating a server using PowerShell, Bicep, or an Azure Resource Manager (ARM) template. This connector article assumes that you've already created an Azure Analysis Services server and have an existing database on the server.

Capabilities Supported
Import
Connect live (Power BI Desktop)
Advanced options
MDX or DAX query

Connect to Azure Analysis Services database from Power Query Desktop
To make the connection, take the following steps:

1. Select the Azure Analysis Services database option in the connector selection.
More information: Where to get data

2. In the SQL Server Analysis Services database dialog that appears, provide the
name of the server and database (optional).

7 Note

Only Power BI Desktop will display the Import and Connect live options. If
you're connecting using Power BI Desktop, selecting Connect live uses a live
connection to load the connected data directly to Power BI Desktop. In this
case, you can't use Power Query to transform your data before loading the
data to Power BI Desktop. For the purposes of this article, the Import option
is selected. For more information about using a live connection in Power BI
Desktop, go to Connect to Analysis Services tabular data in Power BI
Desktop.

3. Select OK.
4. If you're connecting to this database for the first time, select the authentication
type and input your credentials. Then select Connect.

5. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in the Power
Query editor.

Connect to Azure Analysis Services database from Power Query Online
To make the connection, take the following steps:

1. Select the Azure Analysis Services database option in the connector selection.
More information: Where to get data
2. In the Connect to data source page, provide the name of the server and database
(optional).

3. If needed, select an on-premises data gateway.

4. If you're connecting to this database for the first time, select the authentication
kind and input your credentials.

5. Select Next to continue.

6. In Navigator, select the data you require, and then select Transform data.

Connect using advanced options


Power Query provides an advanced option that you can add to your query if needed.

MDX or DAX statement: Optionally provides a specific MDX or DAX statement to the Azure Analysis Services database server to execute.

Once you've entered a value in the advanced option, select OK in Power Query Desktop
or Next in Power Query Online to connect to your Azure Analysis Services database.
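
As a rough illustration, the M generated for this connector is based on the AnalysisServices.Database function listed in the summary table above. The server name, database name, and DAX query below are placeholders, and the exact option record the UI produces may differ, so treat this as a sketch rather than the definitive query.

Power Query M

let
    // Placeholder server and database names; the Query option carries the optional DAX (or MDX) statement
    Source = AnalysisServices.Database(
        "asazure://westus.asazure.windows.net/contosoas",
        "AdventureWorks",
        [Query = "EVALUATE TOPN(100, 'Sales')"]
    )
in
    Source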

Troubleshooting

Error indicates “invalid credentials”


If you've enabled the firewall in Azure Analysis Services and you're getting an “invalid
credentials” error, you might need to allowlist the IP addresses of the Power Query
Online clusters, even if you've allowed access from the Power BI service in your Azure
Analysis Services server firewall.
To allowlist the Power Query Online clusters, download the Azure IP ranges and service
tags from the Microsoft Download Center . You'll need to allowlist the IP addresses
from the PowerQueryOnline service tag. For more information on how to configure the
server firewall, go to Quickstart: Configure server firewall - Portal.
Azure Blob Storage
Article • 07/13/2023

Summary
Release State: General Availability

Products: Excel, Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2), Power Apps (Dataflows), Dynamics 365 Customer Insights, Analysis Services

Authentication Types Supported: Anonymous, Account key, Organizational account, Shared access signature (SAS)

Function Reference Documentation: AzureStorage.BlobContents, AzureStorage.Blobs

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
An Azure subscription. Go to Get Azure free trial .

An Azure Blob Storage account. Follow the instructions at Create a storage account. This article assumes that you've already created an Azure Blob Storage account, and uploaded data files to it.

7 Note
If you are connecting to an Azure Blob Storage account from Power BI, the Azure
Blob storage account must be in the same region as your Power BI account.

Capabilities supported
Import

Connect to Azure Blob Storage from Power Query Desktop
To connect to Azure Blob Storage:

1. From Get Data, select the Azure category, select Azure Blob Storage, and then
select Connect. More information: Where to get data
2. In Azure Blob Storage, enter the account name or URL of your Azure Blob Storage
account. Then select OK.

3. If this is the first time you're connecting to this account, select either the
Anonymous or Account key authentication method to sign into the Azure Blob
Storage account. More information: Copy your account key from Azure Blob
Storage
For more information about using and managing authentication, go to
Authentication with a data source.

7 Note

If you are signing in from Excel, you can also select the shared access
signature (SAS) authentication method.

4. Select Connect.

5. The Navigator dialog box shows the files that you uploaded to your Azure Blob
Storage account. Select the containers you want to use, and then select either
Transform Data to transform the data in Power Query or Load to load the data.
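
The same connection can also be expressed directly in M with the AzureStorage.Blobs function listed in the summary table above. The account and container names below are placeholders; this is only a sketch of what the generated query typically looks like.

Power Query M

let
    // Placeholder storage account name or URL; returns the containers you have access to
    Source = AzureStorage.Blobs("contosostorageaccount"),
    // Navigate into one container; each row exposes the blob content as a binary Content value
    FirstContainer = Source{[Name = "mycontainer"]}[Data]
in
    FirstContainer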

Connect to Azure Blob Storage from Power Query Online
To connect to Azure Blob Storage:

1. From Choose data source, select the Azure category, and then select Azure Blobs.

2. In Connection settings, enter the account name or URL of your Azure Blob Storage
account.

3. Optionally, enter the name of the on-premises data gateway you require.

4. Select the Authentication kind used to access your blob storage. If you've set blob
storage for anonymous access, choose Anonymous. If you set blob storage to
require an account key, choose Account key. More information: Copy your account
key from Azure Blob Storage
For more information about using and managing authentication, go to
Authentication with a data source.

5. Select Next.

6. The Navigator screen shows the files that you uploaded to your Azure Blob Storage
account. Select the containers you want to use, and then select Transform data.

Copy your account key from Azure Blob Storage
To retrieve your Azure Blob Storage account key to use while authenticating your
account in Power Query:

1. Sign in to the Azure portal .

2. Locate your Azure Blob Storage account.

3. In the storage account menu pane, under Security + networking, select Access
keys.

4. In the key1 section, locate the Key value. Select Show next to the key value.

5. Select the Copy to clipboard icon to copy the Key value.


Azure Cosmos DB v2 (Beta)
Article • 07/13/2023

Summary
Release State: Beta

Products: Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2)

Authentication Types Supported: Feed Key

Prerequisites
An Azure Cosmos DB account.

Capabilities supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Number of Retries
Enable "AVERAGE" function Passdown
Enable "SORT" Passdown for multiple columns

Connect to Azure Cosmos DB


To connect to Azure Cosmos DB data:

1. Launch Power BI Desktop.

2. In the Home tab, select Get Data.

3. In the search box, enter Cosmos DB v2.

4. Select Azure Cosmos DB v2 (Beta), and then select Connect.


5. On the Azure Cosmos DB v2 connection page, for Cosmos Endpoint, enter the URI
of the Azure Cosmos DB account that you want to use. For Data Connectivity
mode, choose a mode that's appropriate for your use case, following these general
guidelines:

For smaller datasets, choose Import. When using import mode, Power BI
works with Cosmos DB to import the contents of the entire dataset for use in
your visualizations.

DirectQuery mode enables query pushdown to the Cosmos DB container for
execution and improves the performance of the connector. For partitioned
Cosmos DB containers, a SQL query with an aggregate function is passed
down to Cosmos DB if the query also contains a filter (WHERE clause) on the
Partition Key. For example, if the partition key is defined to be "Product", then
a SQL query that can be passed down and be executed on Cosmos DB server
can be:

SELECT SUM(ColumnName) FROM TableName where Product = 'SampleValue'

7 Note

Use Azure Synapse Link for Azure Cosmos DB if you would like to execute
cross-partitioned aggregate functions against the Cosmos DB container.

More information:

Dataset modes in the Power BI service

Use DirectQuery in Power BI Desktop


6. At the prompt to configure data source authentication, enter the Account Key.
Then select Connect. Your data catalog, databases, and tables appear in the
Navigator dialog box.

7. In the Display Options pane, select the check box for the dataset that you want to
use.

8. The optimal way to specify the Partition Key filter (so that the aggregate
functions can be pushed down to Cosmos DB) is to use dynamic M parameters. To
use dynamic M parameters, you create a dataset with unique Partition Key
values, create a parameter, add it as a filter on the main dataset, bind it to the unique
Partition Key dataset, and use it as a slicer for the main dataset. Use the following
steps to enable dynamic M parameters for Partition Key filtering.

a. Create a dataset with unique partition key values:


In Navigator, select Transform Data instead of Load to bring up the Power Query
editor. Right-click on the queries dataset, and then select Duplicate to create a
new dataset.

Rename the new Partition Key dataset, then right-click on the Cosmos DB partition
key column. In this example, Product is the Cosmos DB partition key column.
Select Remove Other Columns, and then select Remove Duplicates.

b. Create a parameter for dynamic filtering:

In the Power Query editor, select Manage Parameters > New Parameter. Rename
the new parameter to reflect the filter parameter and input a valid value as Current
Value.

c. Apply parameterized filter on main dataset:

Select the dropdown icon of the Partition Key column, then select Text Filters >
Equals. Change the filter type from Text to Parameter. Then choose the parameter
that was created in step b. Select Close & Apply on top left corner of the Power
Query editor.

d. Create Partition Key values slicer with parameter binding:

In Power BI, select the Model tab. Then select the Partition Key field. From the
Properties pane, select Advanced > Bind to parameter. Choose the parameter
that was created in step b.

Select the Report tab and add a slicer with the unique Partition Key.
e. Add visualizations and apply Partition Key filter from the slicer:

Since the chosen partition key value on the slicer is bound to the parameter (as
done in step d) and the parameterized filter is applied on the main dataset (as
done in step c), the chosen partition key value will be applied as a filter on the
main dataset and the query with the partition key filter will be passed down to
Cosmos DB in all visualizations.
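
As a rough sketch of the parameterized filter from step c, the M below uses an inline sample table as a stand-in for the dataset returned by the Cosmos DB v2 connector. The Product column, sample rows, and parameter name are hypothetical, and in a real report the parameter would be created through Manage Parameters rather than declared inline.

Power Query M

let
    // Stand-in for the main dataset returned by the Cosmos DB v2 connector
    Source = #table({"Product", "Sales"}, {{"Widget", 10}, {"Gadget", 20}}),
    // Stand-in for the parameter created in step b (normally defined via Manage Parameters)
    PartitionKeyParameter = "Widget",
    // Step c: the parameterized filter on the partition key column
    #"Filtered Rows" = Table.SelectRows(Source, each [Product] = PartitionKeyParameter)
in
    #"Filtered Rows"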

Advanced options
Power Query provides a set of advanced options that you can add to your query if
needed.

The following table lists all of the advanced options you can set in Power Query.

Number of Retries: How many times to retry if there are HTTP return codes of 408 - Request Timeout, 412 - Precondition Failed, or 429 - Too Many Requests. The default number of retries is 5.

Enable AVERAGE function Passdown: Specifies whether the connector allows pass-down of the AVG aggregate function to Cosmos DB. The default value is 1, and the connector attempts to pass the AVG aggregate function down to Cosmos DB by default. If the argument contains string, boolean, or null values for the AVG aggregate function, an undefined result set is returned by the Cosmos DB server. When set to a value of 0, the AVG aggregate function isn't passed down to the Cosmos DB server, and the connector handles performing the AVG aggregation operation itself.

Enable SORT Passdown for multiple columns: Specifies whether the connector allows multiple columns to be passed down to Cosmos DB when specified in the ORDER BY clause of the SQL query. The default value is 0, and if more than one column is specified in the ORDER BY clause, the connector doesn't pass down the columns by default and instead handles performing the ordering itself. When set to a value of 1, the connector attempts to pass down multiple columns to Cosmos DB when specified in the ORDER BY clause of the SQL query. To allow multiple columns to be passed down to Cosmos DB, make sure to have composite indexes set on the columns in the respective collections. For partitioned collections, a SQL query with ORDER BY will be passed down to Cosmos DB only if the query contains a filter on the partitioned key. Also, if there are more than eight columns specified in the ORDER BY clause, the connector doesn't pass down the ORDER BY clause and instead handles the ordering execution itself.

Known issues and limitations


For partitioned Cosmos DB containers, a SQL query with an aggregate function is
passed down to Cosmos DB if the query also contains a filter (WHERE clause) on
the Partition Key. If the aggregate query doesn't contain a filter on the Partition
Key, the aggregation is performed by the connector.

The connector doesn't pass down an aggregate function if it's called upon after
TOP or LIMIT is applied. Cosmos DB processes the TOP operation at the end when
processing a query. For example, in the following query, TOP is applied in the
subquery, while the aggregate function is applied on top of that result set:

SELECT COUNT(1) FROM (SELECT TOP 4 * FROM EMP) E

If DISTINCT is provided in an aggregate function, the connector doesn't pass the
aggregate function down to Cosmos DB. When present in an aggregate function,
DISTINCT isn't supported by the Cosmos DB SQL API.

For the SUM aggregate function, Cosmos DB returns undefined as the result set if
any of the arguments in SUM are string, boolean, or null. However, if there are null
values, the connector passes the query to Cosmos DB in such a way that it asks the
data source to replace a null value with zero as part of the SUM calculation.

For the AVG aggregate function, Cosmos DB returns undefined as the result set if any
of the arguments in AVG are string, boolean, or null. The connector exposes a
connection property to disable passing down the AVG aggregate function to
Cosmos DB in case this default Cosmos DB behavior needs to be overridden. When
AVG passdown is disabled, it isn't passed down to Cosmos DB, and the connector
handles performing the AVG aggregation operation itself. For more information,
go to "Enable AVERAGE function Passdown" in Advanced options.

Azure Cosmos DB Containers with large partition key are not currently supported
in the Connector.
Aggregation passdown is disabled for the following syntax due to server
limitations:

When the query isn't filtering on a partition key or when the partition key filter
uses the OR operator with another predicate at the top level in the WHERE
clause.

When the query has one or more partition keys appear in an IS NOT NULL
clause in the WHERE clause.

Filter passdown is disabled for the following syntax due to server limitations:

When the query contains one or more aggregate columns referenced in the
WHERE clause.
Azure Cost Management
Article • 07/13/2023

Summary
Release State: General Availability

Products: Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2)

Authentication Types Supported: Organizational Account

Function Reference Documentation: —

7 Note

This connector replaces the previously available Azure Consumption Insights and
Azure Cost Management (Beta) connectors. Any reports created with the previous
connector must be recreated using this connector.

Prerequisites
An Azure subscription. Go to Get Azure free trial .

A direct Microsoft Customer Agreement or an Enterprise Agreement (EA) .

Capabilities supported
Import
Advanced options
Start Date
End Date

Connect to Azure Cost Management from Power Query Desktop
To connect to the Azure Cost Management:

1. In the Home ribbon, select Get Data.

2. Select Azure from the list of data categories.

3. Select Azure Cost Management.

4. In the dialog that appears, for the Choose Scope drop down, use Manually Input
Scope for Microsoft Customer Agreements, or use Enrollment Number for
Enterprise Agreements (EA).

Connect to a Microsoft Customer Agreement account
This section describes the steps necessary to connect to a Microsoft Customer
Agreement account.

Connect to a billing account


To connect to a billing account, you need to retrieve your Billing account ID from the
Azure portal:

1. In the Azure portal , navigate to Cost Management + Billing.

2. Select your Billing profile.

3. Under Settings in the menu, select Properties in the sidebar.

4. Under Billing profile, copy the ID.


5. For Choose Scope, select Manually Input Scope and input the connection string as
shown in the example below, replacing {billingAccountId} with the data copied
from the previous steps.

/providers/Microsoft.Billing/billingAccounts/{billingAccountId}

Alternatively, for Choose Scope, select Enrollment Number and input the Billing
Account ID string as copied from the previous steps.

6. Enter the number of months and select OK.

Alternatively, if you want to download less than a month's worth of data you can
set Number of months to zero, then specify a date range using Start Date and End
Date values that equate to less than 31 days.

7. When prompted, sign in with your Azure user account and password. You must
have access to the Billing account scope to successfully access the billing data.

Connect to a billing profile


To connect to a billing profile, you must retrieve your Billing profile ID and Billing
account ID from the Azure portal:

1. In the Azure portal , navigate to Cost Management + Billing.

2. Select your Billing profile.

3. Under Settings in the menu, select Properties in the sidebar.

4. Under Billing profile, copy the ID.


5. Under Billing account, copy the ID.

6. For Choose Scope, select Manually Input Scope and input the connection string as
shown in the example below, replacing {billingAccountId} and {billingProfileId} with
the data copied from the previous steps.

/providers/Microsoft.Billing/billingAccounts/{billingAccountId}/billingProfiles/{billingProfileId}

7. Enter the number of months and select OK.

8. When prompted, sign in with your Azure user account and password. You must
have access to the Billing profile to successfully access the billing profile data.

Connect to an Enterprise Agreement account


To connect with an Enterprise Agreement (EA) account, you can get your enrollment ID
from the Azure portal:

1. In the Azure portal , navigate to Cost Management + Billing.

2. Select your billing account.

3. On the Overview menu, copy the Billing account ID.

4. For Choose Scope, select Enrollment Number and paste the billing account ID
from the previous step.

5. Enter the number of months and then select OK.

6. When prompted, sign in with your Azure user account and password. You must use
an Enterprise Administrator account for Enterprise Agreements.
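
If you'd rather build the query in M, a minimal sketch for an Enterprise Agreement scope is shown below. It reuses the AzureCostManagement.Tables function and the riusagedetails table key that appear in the troubleshooting example later in this article; the enrollment number is a placeholder, and the third argument is the number of months to load.

Power Query M

let
    // Placeholder enrollment number; 5 is the number of months of data to load
    Source = AzureCostManagement.Tables("Enrollment Number", "123456", 5, []),
    // "riusagedetails" is one of the table keys exposed by the connector
    RIUsageDetails = Source{[Key="riusagedetails"]}[Data]
in
    RIUsageDetails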

Data available through the connector


Once you successfully authenticate, a Navigator window appears with the following
available data tables:

Balance summary: Summary of the balance for the current billing month for Enterprise Agreements (EA).

Billing events: Event log of new invoices, credit purchases, etc. Microsoft Customer Agreement only.

Budgets: Budget details to view actual costs or usage against existing budget targets.

Charges: A month-level summary of Azure usage, Marketplace charges, and charges billed separately. Microsoft Customer Agreement only.

Credit lots: Azure credit lot purchase details for the provided billing profile. Microsoft Customer Agreement only.

Pricesheets: Applicable meter rates for the provided billing profile or EA enrollment.

RI charges: Charges associated to your Reserved Instances over the last 24 months. This table is in the process of being deprecated, please use RI transactions.

RI recommendations (shared): Reserved Instance purchase recommendations based on all your subscription usage trends for the last 30 days.

RI recommendations (single): Reserved Instance purchase recommendations based on your single subscription usage trends for the last 30 days.

RI transactions: List of transactions for reserved instances on billing account scope.

RI usage details: Consumption details for your existing Reserved Instances over the last month.

RI usage summary: Daily Azure reservation usage percentage.

Usage details: A breakdown of consumed quantities and estimated charges for the given billing profile on EA enrollment.

Usage details amortized: A breakdown of consumed quantities and estimated amortized charges for the given billing profile on EA enrollment.

You can select a table to see a preview dialog. You can select one or more tables by
selecting the boxes beside their name. Then either select Load to load the data or
Transform Data to continue transforming the data in Power Query Editor.
When you select Load, the data is loaded into Power BI Desktop.

When the data you selected is loaded, the data tables and fields are shown in the Fields
pane.

Considerations and limitations


The following considerations and limitations apply to the Azure Cost Management
connector:

The Azure Cost Management connector doesn’t support pay-as-you-go Microsoft
Customer Agreements or indirect Microsoft Customer Agreements. Microsoft
Partner Agreements are also not supported. If you have an unsupported
agreement, you can use Exports to save the cost data to a share and then connect
to it using Power BI. More information: Tutorial - Create and manage exported
data from Azure Cost Management

The Azure Cost Management connector uses OAuth 2.0 for authentication with
Azure and identifies users who are going to use the connector. Tokens generated
in this process are valid for a specific period. Power BI preserves the token for the
next login. OAuth 2.0 is a standard for the process that goes on behind the scenes
to ensure the secure handling of these permissions. To connect, you must use an
Enterprise Administrator account for Enterprise Agreements, or have appropriate
permissions at the billing account or billing profile levels for Microsoft Customer
Agreements.

Data row requests exceeding one million rows aren't supported by Power BI.
Instead, you can try using the export feature described in create and manage
exported data in Azure Cost Management.

The Azure Cost Management data connector doesn't work with Office 365 GCC
customer accounts.

Data refresh: The cost and usage data is typically updated and available in the
Azure portal and supporting APIs within 8 to 24 hours, so we suggest you
constrain Power BI scheduled refreshes to once or twice a day.

Data source reuse: If you have multiple reports that are pulling the same data, and
don't need additional report-specific data transformations, you should reuse the
same data source, which would reduce the amount of time required to pull the
Usage Details data.

For more information on reusing data sources, go to:


Introduction to datasets across workspaces
Create reports based on datasets from different workspaces

You might receive a 400 bad request from the RI usage details when you try to refresh
the data if you've chosen a date parameter greater than three months. To mitigate the
error, take the following steps:

1. In Power BI Desktop, select Home > Transform data.

2. In Power Query Editor, select the RI usage details dataset and select Advanced
Editor.

3. Update the Power Query code as shown in the following paragraph(s), which will
split the calls into three-month chunks. Make sure you note and retain your
enrollment number, or billing account/billing profile ID.

For EA use the following code update:

Power Query M

let
enrollmentNumber = "<<Enrollment Number>>",
optionalParameters1 = [startBillingDataWindow = "-9",
endBillingDataWindow = "-6"],
source1 = AzureCostManagement.Tables("Enrollment Number",
enrollmentNumber, 5, optionalParameters1),
riusagedetails1 = source1{[Key="riusagedetails"]}[Data],
optionalParameters2 = [startBillingDataWindow = "-6",
endBillingDataWindow = "-3"],
source2 = AzureCostManagement.Tables("Enrollment Number",
enrollmentNumber, 5, optionalParameters2),
riusagedetails2 = source2{[Key="riusagedetails"]}[Data],
riusagedetails = Table.Combine({riusagedetails1, riusagedetails2})
in
riusagedetails

For Microsoft Customer Agreements use the following update:

Power Query M

let
billingProfileId = "<<Billing Profile Id>>",
optionalParameters1 = [startBillingDataWindow = "-9",
endBillingDataWindow = "-6"],
source1 = AzureCostManagement.Tables("Billing Profile Id",
billingProfileId, 5, optionalParameters1),
riusagedetails1 = source1{[Key="riusagedetails"]}[Data],
optionalParameters2 = [startBillingDataWindow = "-6",
endBillingDataWindow = "-3"],
source2 = AzureCostManagement.Tables("Billing Profile Id",
billingProfileId, 5, optionalParameters2),
riusagedetails2 = source2{[Key="riusagedetails"]}[Data],
riusagedetails = Table.Combine({riusagedetails1, riusagedetails2})
in
riusagedetails

4. Once you've updated the code with the appropriate update from the previous
step, select Done and then select Close & Apply.
Azure Databricks
Article • 07/13/2023

7 Note

The following connector article is provided by Databricks, the owner of this connector and a member of the Microsoft Power Query Connector Certification Program. If you have questions regarding the content of this article or have changes you would like to see made to this article, visit the Databricks website and use the support channels there.

Summary
Release State: General Availability

Products: Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2)

Authentication Types Supported: Azure Active Directory, Organizational Account, Personal Access Token

Capabilities supported
Import
DirectQuery (Power BI Datasets)

Connect to Databricks (AWS, Azure or GCP) from Power Query Desktop
To connect to Databricks from Power Query Desktop, take the following steps:

1. In the Get Data experience, search for Databricks to shortlist the Databricks
connector, Azure Databricks. Use the Azure Databricks connector for all
Databricks SQL Warehouse data unless you've been instructed otherwise by your
Databricks rep.
2. Provide the Server hostname and HTTP Path for your Databricks SQL Warehouse.
Refer to Configure the Databricks ODBC and JDBC drivers for instructions to look
up your "Server hostname" and "HTTP Path". Enter this information accordingly.
You can optionally supply a default catalog and/or database under Advanced
options. Select OK to continue.

3. Provide your credentials to authenticate with your Databricks SQL Warehouse.


There are three options for credentials:

Username / Password (useable for AWS or GCP). This option isn't available if
your organization/account uses 2FA/MFA.

Personal Access Token (useable for AWS, Azure or GCP). Refer to Personal
access tokens for instructions on generating a Personal Access Token (PAT).

Azure Active Directory (useable only for Azure). Sign in to your organizational
account using the browser popup.
7 Note

Once you enter your credentials for a particular Databricks SQL Warehouse,
Power BI Desktop caches and reuses those same credentials in subsequent
connection attempts. You can modify those credentials by going to File >
Options and settings > Data source settings. More information: Change the
authentication method

4. Once you successfully connect, the Navigator shows the data available to you on
the cluster. You can choose either Transform Data to transform the data using
Power Query or Load to load the data in Power Query Desktop.
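
For reference, a connection like the one above is typically expressed in M with the Databricks.Catalogs function. The function name, the host, and the HTTP path values below are assumptions for illustration, and the generated query may also include an options record, so check the query the connector produces in the Advanced Editor for the authoritative form.

Power Query M

let
    // Placeholder Server hostname and HTTP Path copied from the SQL Warehouse connection details
    Source = Databricks.Catalogs(
        "adb-1234567890123456.7.azuredatabricks.net",
        "/sql/1.0/warehouses/abcdef1234567890"
    )
in
    Source
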
Connect to Databricks data from Power Query
Online
To connect to Databricks from Power Query Online, take the following steps:

1. In the Get Data experience, select the Database category. (Refer to Creating a
dataflow for instructions.) Shortlist the available Databricks connectors with the
search box. Use the Azure Databricks connector for all Databricks SQL Warehouse
data unless you've been instructed otherwise by your Databricks rep.

2. Enter the Server hostname and HTTP Path for your Databricks SQL Warehouse.
Refer to Configure the Databricks ODBC and JDBC drivers for instructions to look
up your "Server hostname" and "HTTP Path". You can optionally supply a default
catalog and/or database under Advanced options.
3. Provide your credentials to authenticate with your Databricks SQL Warehouse.
There are three options for credentials:

Username / Password (useable for AWS or GCP). This option isn't available if
your organization/account uses 2FA/MFA.
Account Key (useable for AWS, Azure or GCP). Refer to Personal access tokens
for instructions on generating a Personal Access Token (PAT).
Azure Active Directory (useable only for Azure). Sign in to your organizational
account using the browser popup.

4. Once you successfully connect, the Navigator appears and displays the data
available on the server. Select your data in the navigator. Then select Next to
transform the data in Power Query.
Limitations
The Azure Databricks connector supports web proxy. However, automatic proxy
settings defined in .pac files aren't supported.
In the Azure Databricks connector, the Databricks.Query data source isn't
supported in combination with Power BI dataset's DirectQuery mode.
Azure Data Explorer (Kusto)
Article • 07/13/2023

Summary
Release State: General Availability

Products: Excel, Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2), Power Apps (Dataflows), Dynamics 365 Customer Insights

Authentication Types Supported: Organizational Account

Function Reference Documentation: —

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
An Azure subscription. Go to Get Azure free trial .

An organizational email account that is a member of Azure Active Directory. This
article uses this account to connect to the Azure Data Explorer help cluster
samples.

Capabilities supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Limit query result record number
Limit query result data size in Bytes
Disable result-set truncation
Additional set statements

Connect to Azure Data Explorer from Power Query Desktop
To connect to Azure Data Explorer from Power Query Desktop:

1. In Get Data, select Azure > Azure Data Explorer (Kusto), and then select Connect.
More information: Where to get data

2. In Azure Data Explorer (Kusto), provide the name of your Azure Data Explorer
cluster. For this example, use https://help.kusto.windows.net to access the
sample help cluster. For other clusters, the URL is in the form
https://<ClusterName>.<Region>.kusto.windows.net.

You can also select a database that's hosted on the cluster you're connecting to,
and one of the tables in the database, or a query like StormEvents | take 1000 .

3. If you want to use any advanced options, select the option and enter the data to use
with that option. More information: Connect using advanced options

7 Note

You might need to scroll down to display all of the advanced options and the
data connectivity selection.

4. Select either the Import or DirectQuery data connectivity mode (Power BI Desktop
only). More information: When to use Import or Direct Query mode

5. Select OK to continue.

6. If you don't already have a connection to the cluster, select Sign in. Sign in with an
organizational account, then select Connect.
7. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in the Power
Query editor. In this example, StormEvents was selected in the Samples database.

Connect to Azure Data Explorer from Power Query Online
To connect to Azure Data Explorer from Power Query Online:

1. Select the Azure Data Explorer (Kusto) option from Choose data source. More
information: Where to get data

2. In Connect to data source, provide the name of your Azure Data Explorer cluster.
For this example, use https://help.kusto.windows.net to access the sample help
cluster. For other clusters, the URL is in the form https://<ClusterName>.
<Region>.kusto.windows.net.
You can also select a database that's hosted on the cluster you're connecting to,
and one of the tables in the database, or a query like StormEvents | take 1000 .

3. If you want to use any advanced options, select the option and enter the data to use
with that option. More information: Connect using advanced options

4. If needed, select the on-premises data gateway in Data gateway.

5. If you don't already have a connection to the cluster, select Sign in. Sign in with an
organizational account.

6. After you've successfully signed in, select Next.

7. In the Choose data page, select the database information you want, then either
select Transform Data or Next to continue transforming the data in the Power
Query editor. In this example, StormEvents was selected in the Samples database.

Connect using advanced options


Both Power Query Desktop and Power Query Online provide a set of advanced options
that you can add to your query if needed.

The following table lists all of the advanced options you can set in Power Query Desktop
and Power Query Online.

Limit query result record number: The maximum number of records to return in the result.

Limit query result data size in Bytes: The maximum data size in bytes to return in the result.

Disable result-set truncation: Enable or disable result truncation by using the notruncation request option.

Additional Set Statements: Sets query options for the duration of the query. Query options control how a query executes and returns results. Multiple Set statements can be separated by semicolons.

For information about additional advanced options not available in the Power Query UI,
go to Configuring Azure Data Explorer connector options in an M Query.

When to use Import or DirectQuery mode


In Import mode, data is moved to Power BI. In DirectQuery mode, data is queried
directly from your cluster.

Use Import mode when:

Your data set is small.
You don't need near real-time data.
Your data is already aggregated or you perform aggregation in Kusto.

Use DirectQuery mode when:

Your data set is very large.
You need near real-time data.

To learn more about using DirectQuery, go to About using DirectQuery in Power BI.

Tips for using the Azure Data Explorer connector to query data
The following sections include tips and tricks for using Kusto query language with Power
Query.

Complex queries in Power BI


Complex queries are more easily expressed in Kusto than in Power Query. They should
be implemented as Kusto functions, and invoked in Power BI. This method is required
when using DirectQuery with let statements in your Kusto query. Because Power BI
joins two queries, and let statements can't be used with the join operator, syntax
errors might occur. Therefore, save each portion of the join as a Kusto function and
allow Power BI to join these two functions together.
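
For example, rather than embedding let statements and a join directly in the connection query, you might wrap that logic in a Kusto function defined on the cluster and call it from the source step. The following is a minimal sketch; JoinedStormView is a hypothetical function name used only for illustration.

Power Query M

let
    // JoinedStormView() is a hypothetical Kusto function that encapsulates the
    // let statements and join logic on the cluster side
    Source = AzureDataExplorer.Contents("help", "Samples", "JoinedStormView()", [])
in
    Source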

How to simulate a relative date-time operator


Power Query doesn't contain a relative date-time operator such as ago(). To simulate
ago(), use a combination of the DateTime.FixedLocalNow and #duration Power Query M
functions.

Instead of this query using the ago() operator:

Kusto
StormEvents | where StartTime > (now()-5d)
StormEvents | where StartTime > ago(5d)

Use the following equivalent query:

Power Query M

let
Source = AzureDataExplorer.Contents("help", "Samples", "StormEvents",
[]),
#"Filtered Rows" = Table.SelectRows(Source, each [StartTime] >
(DateTime.FixedLocalNow()-#duration(5,0,0,0)))
in
#"Filtered Rows"

Configuring Azure Data Explorer connector options in an M Query
You can configure the options of the Azure Data Explorer connector from the Power
Query advanced editor in the M query language. Using these options, you can control
the generated query that's being sent to your Azure Data Explorer cluster.

Power Query M

let
Source = AzureDataExplorer.Contents("help", "Samples", "StormEvents",
[<options>])
in
Source

You can use any of the following options in your M query:

MaxRows, for example [MaxRows=300000]: Adds the truncationmaxrecords set statement to your query. Overrides the default maximum number of records a query can return to the caller (truncation).

MaxSize, for example [MaxSize=4194304]: Adds the truncationmaxsize set statement to your query. Overrides the default maximum data size a query is allowed to return to the caller (truncation).

NoTruncate, for example [NoTruncate=true]: Adds the notruncation set statement to your query. Enables suppressing truncation of the query results returned to the caller.

AdditionalSetStatements, for example [AdditionalSetStatements="set query_datascope=hotcache"]: Adds the provided set statements to your query. These statements are used to set query options for the duration of the query. Query options control how a query executes and returns results.

CaseInsensitive, for example [CaseInsensitive=true]: Makes the connector generate queries that are case insensitive. Queries use the =~ operator instead of the == operator when comparing values.

ForceUseContains, for example [ForceUseContains=true]: Makes the connector generate queries that use contains instead of the default has when working with text fields. While has is much more performant, it doesn't handle substrings. For more information about the difference between the two operators, go to string operators.

Timeout, for example [Timeout=#duration(0,10,0,0)]: Configures both the client and server timeout of the query to the provided duration.

ClientRequestIdPrefix, for example [ClientRequestIdPrefix="MyReport"]: Configures a ClientRequestId prefix for all queries sent by the connector. This allows the queries to be identifiable in the cluster as coming from a specific report and/or data source.
7 Note

You can combine multiple options together to reach the required behavior:
[NoTruncate=true, CaseInsensitive=true]

Reaching Kusto query limits


Kusto queries return, by default, up to 500,000 rows or 64 MB, as described in query
limits. You can override these defaults by using Advanced options in the Azure Data
Explorer (Kusto) connection window:

These options issue set statements with your query to change the default query limits:

Limit query result record number generates a set truncationmaxrecords
Limit query result data size in Bytes generates a set truncationmaxsize
Disable result-set truncation generates a set notruncation
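
You can apply the same overrides directly in M by using the connector options described earlier. The following is a minimal sketch; the specific values are illustrative only.

Power Query M

let
    // MaxRows and MaxSize raise the default 500,000-row / 64-MB truncation limits
    Source = AzureDataExplorer.Contents("help", "Samples", "StormEvents",
        [MaxRows = 1000000, MaxSize = 268435456])
in
    Source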

Case sensitivity
By default, the connector generates queries that use the case sensitive == operator
when comparing string values. If the data is case insensitive, this isn't the desired
behavior. To change the generated query, use the CaseInsensitive connector option:

Power Query M

let
Source = AzureDataExplorer.Contents("help", "Samples", "StormEvents",
[CaseInsensitive=true]),
#"Filtered Rows" = Table.SelectRows(Source, each [State] == "aLaBama")
in
#"Filtered Rows"

Using query parameters


You can use query parameters to modify your query dynamically.

Use a query parameter in the query steps

You can use a query parameter in any query step that supports it. For example, filter the
results based on the value of a parameter. In this example, select the drop-down menu
on the right side of the State column in the Power Query editor, select Text Filters >
Equals, then select ALABAMA under Keep rows where 'State'.

Provide parameters to an Azure Data Explorer function


Kusto functions are a great way to maintain complex Kusto Query Language (KQL)
queries. We recommend using functions instead of embedding KQL in Power Query. The
main advantage of using a function is that the logic is maintained in one place, in an
environment that's easy to create and test.

Functions can also receive parameters, which adds a lot of flexibility for the Power BI user.
Power BI has many ways to slice the data, but all filters and slicers are applied after the
original KQL runs, and in many cases you'll want to filter at an early stage of the query.
Using functions and dynamic parameters is a very effective way to customize the final
query.
Creating a function

You can create the following function in any Azure Data Explorer cluster that you have
access to, including a free cluster. The function returns the SalesTable table from the
help cluster, filtered for sales transactions greater than or smaller than a cutoff number
provided by the report user, depending on the operator that's supplied.

Kusto

.create-or-alter function LargeOrSmallSales(Cutoff:long, Op:string=">")
{
    cluster("help").database("ContosoSales").SalesTable
    | where (Op == ">" and SalesAmount >= Cutoff) or (Op == "<" and SalesAmount <= Cutoff)
}

After you create the function, you can test it using:

Kusto

LargeOrSmallSales(2000,">")
| summarize Sales=tolong(sum(SalesAmount)) by Country

You can also test it using:

Kusto

LargeOrSmallSales(20,"<")
| summarize Sales=tolong(sum(SalesAmount)) by Country

Using the function in Power BI

1. Connect to the cluster where you created the function.

2. In the Power Query navigator, select the function from the list of objects. The
connector analyzes the parameters and presents them above the data on the right
side of the navigator.
3. Add values to the parameters and then select Apply.

4. After the preview appears, select Transform Data.

5. Once in the Power Query editor, create two parameters, one for the cutoff value
and one for the operator.

6. Go back to the LargeOrSmallSales query and replace the hard-coded values with the
query parameters in the formula bar, as sketched in the example after these steps.

7. From the editor, create two static tables (Enter Data) to provide options for the two
parameters. For the cutoff, you can create a table with values like 10, 50, 100, 200,
500, 1000, 2000. For Op, create a table with the two text values < and >.

8. The two columns in the tables need to be bound to the query parameters using
the Bind to parameter selection.
The final report will include slicers for the two static tables and any visuals from the
summary sales.

The base table is filtered first and then aggregated.
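
To illustrate step 6, here's a minimal sketch of what the source step might look like after the hard-coded values are replaced with parameters. It assumes two Power Query parameters named CutoffParam (a number) and OpParam (a text value) were created in the editor; the exact step generated by the navigator can differ.

Power Query M

let
    // CutoffParam and OpParam are assumed Power Query parameters created in the editor
    Source = AzureDataExplorer.Contents("help", "ContosoSales",
        "LargeOrSmallSales(" & Number.ToText(CutoffParam) & ", '" & OpParam & "')", [])
in
    Source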

Using a query parameter in the connection details


Use a query parameter to filter information in the query and optimize query
performance.

In Advanced Editor:

1. Find the following section of the query:

Source = AzureDataExplorer.Contents("<Cluster>", "<Database>", "<Query>", [])

For example:

Power Query M

Source = AzureDataExplorer.Contents("Help", "Samples", "StormEvents |


where State == 'ALABAMA' | take 100", [])

2. Insert a query parameter into the Kusto Query Language (KQL) query.
If you paste a KQL query directly in the connection dialog, the query will be part of
the source step in Power Query. You can embed parameters as part of the query
using the advanced editor or when editing the source statement in the formula
bar. An example query could be StormEvents | where State == '" & State & "' | take 100.
State is a parameter, and at run time the query will be:

StormEvents | where State == 'ALABAMA' | take 100

3. If your query contains quotation marks, encode them correctly. For example, the
following query in KQL:

Kusto

"StormEvents | where State == "ALABAMA" | take 100"

will appear in the Advanced Editor as follows with two quotation marks:

Power Query M

"StormEvents | where State == ""ALABAMA"" | take 100"

If you are using a parameter, such as State, it should be replaced with the
following query, which contains three quotation marks:

Power Query M

"StormEvents | where State == """ & State & """ | take 100"

Use Value.NativeQuery for Azure Data Explorer features


To use an Azure Data Explorer feature that's not supported in Power Query, use the
Value.NativeQuery method in Power Query M. This method inserts a Kusto Query
Language fragment inside the generated query, and can also be used to give you more
control over the executed query.

The following example shows how to use the percentiles function in Azure Data
Explorer:

Power Query M

let
StormEvents = AzureDataExplorer.Contents(DefaultCluster,
DefaultDatabase){[Name = DefaultTable]}[Data],
Percentiles = Value.NativeQuery(StormEvents, "| summarize
percentiles(DamageProperty, 50, 90, 95) by State")
in
Percentiles

Don't use Power BI data refresh scheduler to issue control commands to Kusto
Power BI includes a data refresh scheduler that can periodically issue queries against a
data source. This mechanism shouldn't be used to schedule control commands to Kusto
because Power BI assumes all queries are read-only.
Azure Data Lake Storage Gen1
Article • 07/13/2023

7 Note

On February 29, 2024, Azure Data Lake Storage Gen1 will be retired. For more
information, go to the official announcement . If you use Azure Data Lake
Storage Gen1, make sure to migrate to Azure Data Lake Storage Gen2 prior to that
date. To learn how, go to Migrate Azure Data Lake Storage from Gen1 to Gen2.

Unless you already have an Azure Data Lake Storage Gen1 account, you can't create
new ones.

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Analysis Services

Authentication Types Supported Organizational Account

Function Reference Documentation AzureStorage.DataLake


AzureStorage.DataLakeContents

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
An Azure subscription. Go to Get Azure free trial .

An Azure Data Lake Storage Gen1 account. Follow the instructions at Get started
with Azure Data Lake Storage Gen1 using the Azure portal. This article assumes
that you've already created a Data Lake Storage Gen1 account, called myadlsg1,
and uploaded a sample data file (Drivers.txt) to it. This sample file is available for
download from Azure Data Lake Git Repository .

Capabilities supported
Import
Advanced options
Page size in bytes

Connect to Azure Data Lake Storage Gen1


1. Select the Azure Data Lake Storage Gen1 option in the Get Data selection, and
then select Connect. More information: Where to get data
2. In the Azure Data Lake Store dialog box, provide the URL to your Data Lake
Storage Gen1 account. Optionally, enter a value in Page Size in Bytes. Then select
OK.

3. If this is the first time you're connecting to this account, select Sign in to sign into
the Azure Data Lake Storage Gen1 account. You'll be redirected to your
organization's sign-in page. Follow the prompts to sign in to the account.

4. After you've successfully signed in, select Connect.

5. The Navigator dialog box shows the file that you uploaded to your Azure Data
Lake Storage Gen1 account. Verify the information and then select either
Transform Data to transform the data in Power Query or Load to load the data in
Power BI Desktop.
Connect using advanced options
Power Query provides an advanced option that you can add to your query if needed.

Page Size in Bytes: Used to break up large files into smaller pieces. The default page size is 4 MB.

See also
Azure Data Lake Storage Gen2
Azure Data Lake Storage Gen1 documentation
Azure Data Lake Storage Gen2
Article • 07/07/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Organizational Account


Account Key
Shared Access Signature (SAS) Key

Function Reference Documentation AzureStorage.DataLake


AzureStorage.DataLakeContents

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
An Azure subscription. Go to Get Azure free trial .

A storage account that has a hierarchical namespace. Follow the instructions at


Create a storage account to create one. This article assumes that you've created a
storage account named myadlsg2 .

Ensure you're granted one of the following roles for the storage account: Blob
Data Reader, Blob Data Contributor, or Blob Data Owner.

A sample data file named Drivers.txt located in your storage account. You can
download this sample from Azure Data Lake Git Repository , and then upload
that file to your storage account.
Capabilities supported
Import
File System View
CDM Folder View

Connect to Azure Data Lake Storage Gen2 from Power Query Desktop
1. Select the Azure Data Lake Storage Gen2 option in the Get Data selection, and
then select Connect. More information: Where to get data

2. In the Azure Data Lake Storage Gen2 dialog box, provide the URL to your Azure
Data Lake Storage Gen2 account, container, or subfolder using the container
endpoint format. URLs for Data Lake Storage Gen2 have the following pattern:

https://<accountname>.dfs.core.windows.net/<container>/<subfolder>

You can also select whether you want to use the file system view or the Common
Data Model folder view.

Select OK to continue.

3. If this is the first time you're using this URL address, you'll be asked to select the
authentication method.

If you select the Organizational account method, select Sign in to sign into your
storage account. You'll be redirected to your organization's sign-in page. Follow
the prompts to sign into the account. After you've successfully signed in, select
Connect.

If you select the Account key method, enter your account key and then select
Connect.

4. The Navigator dialog box shows all files under the URL you provided. Verify the
information and then select either Transform Data to transform the data in Power
Query or Load to load the data.
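
For reference, the connection built in these steps corresponds to a source step that uses the AzureStorage.DataLake function with the same container URL. The following is a minimal sketch with placeholder account and container names.

Power Query M

let
    // Placeholder account and container names; replace them with your own values
    Source = AzureStorage.DataLake("https://<accountname>.dfs.core.windows.net/<container>")
in
    Source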

Connect to Azure Data Lake Storage Gen2 from Power Query Online
1. Select the Azure Data Lake Storage Gen2 option in the Get Data selection, and
then select Connect. More information: Where to get data

2. In Connect to data source, enter the URL to your Azure Data Lake Storage Gen2
account. Refer to Limitations to determine the URL to use.

3. Select whether you want to use the file system view or the Common Data Model
folder view.
4. If needed, select the on-premises data gateway in Data gateway.

5. Select Sign in to sign into the Azure Data Lake Storage Gen2 account. You'll be
redirected to your organization's sign-in page. Follow the prompts to sign in to the
account.

6. After you've successfully signed in, select Next.

7. The Choose data page shows all files under the URL you provided. Verify the
information and then select Transform Data to transform the data in Power Query.

Limitations

Subfolder or file not supported in Power Query Online


Currently, in Power Query Online, the Azure Data Lake Storage Gen2 connector only
supports paths with container, and not subfolder or file. For example,
https://<accountname>.dfs.core.windows.net/<container> will work, while
https://<accountname>.dfs.core.windows.net/<container>/<filename> or
https://<accountname>.dfs.core.windows.net/<container>/<subfolder> will fail.

Refresh authentication
Microsoft doesn't support dataflow or dataset refresh using OAuth2 authentication
when the Azure Data Lake Storage Gen2 (ADLS) account is in a different tenant. This
limitation only applies to ADLS when the authentication method is OAuth2, that is, when
you attempt to connect to a cross-tenant ADLS using an Azure AD account. In this case,
we recommend that you use a different authentication method that isn't OAuth2/AAD,
such as the Key authentication method.
Proxy and firewall requirements
When you create a dataflow using a gateway, you might need to change some of your
proxy settings or firewall ports to successfully connect to your Azure data lake. If a
dataflow fails with a gateway-bound refresh, it might be due to a firewall or proxy issue
on the gateway to the Azure storage endpoints.

If you're using a proxy with your gateway, you might need to configure the
Microsoft.Mashup.Container.NetFX45.exe.config file in the on-premises data gateway.
More information: Configure proxy settings for the on-premises data gateway.

To enable connectivity from your network to the Azure data lake, you might need to
allow specific IP addresses on the gateway machine. For example, if your network
has any firewall rules in place that might block these attempts, you'll need to unblock
the outbound network connections for your Azure data lake. To allow the required
outbound addresses, use the AzureDataLake service tag. More information: Virtual
network service tags

Dataflows also support the "Bring Your Own" data lake option, which means you create
your own data lake, manage your permissions, and you explicitly connect it to your
dataflow. In this case, when you're connecting to your development or production
environment using an Organizational account, you must enable one of the following
roles for the storage account: Blob Data Reader, Blob Data Contributor, or Blob Data
Owner.

See also
Analyze data in Azure Data Lake Storage Gen2 by using Power BI
Introduction to Azure Data Lake Storage Gen2
Analyze data in Azure Data Lake Storage Gen2 by using Power BI
Article • 07/18/2023

In this article, you'll learn how to use Power BI Desktop to analyze and visualize data
that's stored in a storage account that has a hierarchical namespace (Azure Data Lake
Storage Gen2).

Prerequisites
Before you begin this tutorial, you must have the following prerequisites:

" An Azure subscription. Go to Get Azure free trial .


" A storage account that has a hierarchical namespace. Follow the instructions at
Create a storage account to create one. This article assumes that you've created a
storage account named contosoadlscdm .
" Ensure you are granted one of the following roles for the storage account: Blob
Data Reader, Blob Data Contributor, or Blob Data Owner.
" A sample data file named Drivers.txt located in your storage account. You can
download this sample from Azure Data Lake Git Repository , and then upload that
file to your storage account.
" Power BI Desktop. You can download this application from the Microsoft Download
Center .

Create a report in Power BI Desktop


1. Launch Power BI Desktop on your computer.

2. Follow the instructions in the Azure Data Lake Storage Gen2 connector article to
connect to the sample data.

3. From the Power Query Navigator, select Load.


4. After the data has been successfully loaded into Power BI, the following fields are
displayed in the Fields panel.

However, to visualize and analyze the data, you might prefer the data to be
available using the following fields.
In the next steps, you'll update the query to convert the imported data to the
desired format.

5. From the Home tab on the ribbon, select Transform Data. The Power Query editor
then opens, displaying the contents of the file.

6. In the Power Query editor, under the Content column, select Binary. The file will
automatically be detected as CSV and will contain the output as shown below. Your
data is now available in a format that you can use to create visualizations.

7. From the Home tab on the ribbon, select Close & Apply.
8. Once the query is updated, the Fields tab displays the new fields available for
visualization.
9. Now you can create a pie chart to represent the drivers in each city for a given
country/region. To do so, make the following selections.

From the Visualizations tab, select the symbol for a pie chart.
In this example, the columns you're going to use are Column 4 (name of the city)
and Column 7 (name of the country/region). Drag these columns from the Fields
tab to the Visualizations tab as shown below.
The pie chart should now resemble the one shown below.
10. If you select a specific country/region from the page level filters, the number of
drivers in each city of the selected country/region will be displayed. For example,
under the Visualizations tab, under Page level filters, select Brazil.
11. The pie chart is automatically updated to display the drivers in the cities of Brazil.
12. From the File menu, select Save to save the visualization as a Power BI Desktop file.

Publish report to Power BI service


After you've created the visualizations in Power BI Desktop, you can share it with others
by publishing it to the Power BI service. For instructions on how to do that, go to Publish
from Power BI Desktop.

See also
Azure Data Lake Storage Gen2
Azure HDInsight (HDFS)
Article • 01/24/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Customer Insights
Analysis Services

Authentication Types Supported Anonymous


Account Key

Function Reference Documentation Hdfs.Contents


Hdfs.Files

Capabilities Supported
Import

Connect to Azure HDInsight from Power Query Desktop
Power Query Desktop includes Excel, Power BI, and Analysis Services as experiences.

To make the connection to an Azure HDInsight account, follow these steps:

1. From Get Data, select the Azure category, select Azure HDInsight, and then select
Connect. More information: Where to get data
2. In the window that appears, enter the name of the storage account associated
with your HDInsight account. If you don't know the name of your storage account,
you can find it using the steps in the section below.

3. Select OK.

4. You can either select anonymous access, if your storage account is configured for
anonymous access, or you can select account key.
5. If you select anonymous access, there's nothing to enter, so select Connect.

6. If you select account key, add the storage account key for the Azure Storage
account associated with your HDInsight account and select Connect.

7. In Navigator, select one or more files or folders to import and use in your
application. Then select either Load to load the table, or Transform Data to open
the Power Query Editor where you can filter and refine the set of data you want to
use, and then load that refined set of data.

Connect to Azure HDInsight from Power Query Online
Power Query Online includes Power BI (Dataflows) and Customer Insights (Dataflows) as
experiences.

To make the connection, take the following steps:

1. Select the Azure HDInsight option in the connector selection. More information:
Where to get data

2. In the Azure HDInsight dialog that appears, enter the name of the storage
account associated with your HDInsight account. If you don't know the name of
your storage account, you can find it using the steps in the section below.

3. You can select an existing connection or a gateway. You can also either select
anonymous access, if your storage account is configured for anonymous access, or
you can select account key.

4. If you select anonymous access, there's nothing to enter, so select Next.

5. If you select account key, add the storage account key for the Azure Storage
account associated with your HDInsight account and select Next.
6. Select one or multiple tables to import and use, then select Transform Data to
transform data in the Power Query editor.

Copy your account key for Azure HDInsight


Your Azure HDInsight account key is just the Azure Storage account key for the blob
storage associated with your Azure HDInsight account. To retrieve your account key,
follow these steps:

1. Sign in to the Azure portal .

2. Locate your Azure HDInsight account and select Storage accounts in the left
menu. Then select your storage account.
3. In the storage account menu pane, under Security + networking, select Access
keys.

4. In the key1 section, locate the Key value. Select Show next to the key value.
5. Select the Copy to clipboard icon to copy the Key value.
Azure HDInsight on AKS Trino (Beta)
Article • 10/10/2023

Summary
Item Description

Release State Preview

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Organizational account

Prerequisites
An Azure HDInsight on AKS Trino cluster.

Capabilities supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Native SQL statement
Session properties
Set default catalog/schema

Connect to Azure HDInsight on AKS Trino from Power Query Desktop
To establish a connection, take the following steps:

1. In the Get Data experience, select Azure from the categories on the left, then select
Azure HDInsight on AKS Trino. More information: Where to get data
2. In the Azure HDInsight on AKS Trino dialog that appears, provide the Azure Trino
URL (Trino cluster URL).
3. Select the connectivity mode: DirectQuery (recommended for Trino big data) or
Import.

4. Select OK.

5. If you're connecting to this Trino cluster for the first time, select Sign in to
authenticate. Then select Connect.

6. In Navigator, expand the catalog and schema to reveal the table you want, then
either select Load to load the data or Transform Data to continue transforming the
data in Power Query Editor.

Connect to Azure HDInsight on AKS Trino from Power Query Online
To connect to an Azure HDInsight on AKS Trino cluster from Power Query Online, take
the following steps:

1. In the Get Data experience, search for and select Azure HDInsight on AKS Trino.
More information: Where to get data
2. In the options that appear, provide the Azure Trino URL.

3. If necessary, select the name of your on-premises data gateway.

4. To configure the connection options, select Advanced options. More information:
Connect using advanced options

5. If you're connecting to this Trino cluster for the first time, select Sign in.

6. Select Next to continue.

7. In Navigator, select the table you require, and then select Transform data.

Connect using advanced options


Both Power Query Desktop and Power Query Online provide a set of advanced options
that you can add to the query if needed. The following table lists all of the advanced
options you can set in Power Query Desktop and Power Query Online.

SQL statement: For information, go to Import data from a database using native database query.

Session properties: Allows Trino session properties to be passed with the connection to the Trino cluster. Session properties should specify key-value pairs separated by a colon, with each pair separated by a comma. Example: distributed_sort:true,colocated_join:false

Default catalog/schema: Providing a catalog and schema sets the context of the connection to a specific Trino catalog and schema.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to the Trino cluster.

Known issues and limitations


The following sections describe any known issues or limitations currently associated with
this connector.
Types Support
The Azure HDInsight on AKS Trino connector supports the following types. Other types,
such as uuid and P4HyperLogLog, need to be transformed or cast to be usable with the
connector.

bigint
integer
smallint
tinyint
real
double
decimal
boolean
char
varchar
date
timestamp
array
map
varbinary
Azure SQL database
Article • 07/13/2023

Summary
Item Description

Release state General Availability

Products supported Excel


Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication types Windows (Power BI Desktop, Excel, Power Query Online with
supported gateway)
Database (Power BI Desktop, Excel)
Microsoft Account (all)
Basic (Power Query Online)

Function reference docs Sql.Database


Sql.Databases

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
By default, Power BI installs an OLE DB driver for Azure SQL database. However, for
optimal performance, we recommend that the customer installs the SQL Server Native
Client before using the Azure SQL database connector. SQL Server Native Client 11.0
and SQL Server Native Client 10.0 are both supported in the latest version.

Capabilities supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Command timeout in minutes
Native SQL statement
Relationship columns
Navigate using full hierarchy
SQL Server failover support

Connect to Azure SQL database from Power Query Desktop
To connect to an Azure SQL database from Power Query Desktop, take the following
steps:

1. Select the Azure SQL database option in the connector selection.

2. In SQL Server database, provide the name of the server and database (optional).

3. Select either the Import or DirectQuery data connectivity mode.

4. Optionally, you can select and enter advanced options that will modify the
connection query, such as a command timeout or a native query (SQL statement).
More information: Connect using advanced options

5. Select OK.

6. If this is the first time you're connecting to this database, select the authentication
type, input your credentials, and select the level to apply the authentication
settings to. Then select Connect.
For more information about authentication methods, go to Authentication with a
data source.

7 Note

If the connection is not encrypted, you'll be prompted with the following message.

Select OK to connect to the database by using an unencrypted connection, or
follow the instructions in Enable encrypted connections to the Database Engine to
set up encrypted connections to Azure SQL database.

7. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in Power Query
Editor.
Connect to Azure SQL database from Power Query Online
To connect to an Azure SQL database from Power Query Online, take the following
steps:

1. Select the Azure SQL database option in the connector selection.

2. In Azure SQL database, provide the name of the server and database.
You can also select and enter advanced options that will modify the connection
query, such as a command timeout or a native query (SQL statement). More
information: Connect using advanced options

3. If this is the first time you're connecting to this database, select the authentication
kind and input your credentials.

4. If necessary, select the name of your on-premises data gateway.

5. If the connection is not encrypted, clear the Use Encrypted Connection check box.

6. Select Next to continue.

7. In Navigator, select the data you require, and then select Transform data.

Connect using advanced options


Both Power Query Desktop and Power Query Online provide a set of advanced options
that you can add to your query if needed.
The following table lists all of the advanced options you can set in Power Query Desktop
and Power Query Online.

Command timeout in minutes: If your connection lasts longer than 10 minutes (the default timeout), you can enter another value in minutes to keep the connection open longer. This option is only available in Power Query Desktop.

SQL statement: For information, go to Import data from a database using native database query.

Include relationship columns: If checked, includes columns that might have relationships to other tables. If this box is cleared, you won't see those columns.

Navigate using full hierarchy: If checked, the navigator displays the complete hierarchy of tables in the database you're connecting to. If cleared, the navigator displays only the tables whose columns and rows contain data.

Enable SQL Server Failover support: If checked, when a node in the Azure SQL failover group isn't available, Power Query moves from that node to another when failover occurs. If cleared, no failover occurs.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to your Azure SQL database.
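
For reference, these advanced options surface as optional fields of the Sql.Database function in M. The following is a minimal sketch; the server, database, and query values are placeholders, and only the command timeout and native query options are shown.

Power Query M

let
    // Placeholder server and database names; CommandTimeout and Query mirror the
    // "Command timeout in minutes" and "SQL statement" advanced options
    Source = Sql.Database("myserver.database.windows.net", "AdventureWorks",
        [CommandTimeout = #duration(0, 0, 20, 0), Query = "SELECT TOP (100) * FROM SalesLT.Customer"])
in
    Source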

Troubleshooting

Always Encrypted columns


Power Query doesn't support 'Always Encrypted' columns.
Azure Synapse Analytics (SQL DW)
Article • 08/16/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Database (Power BI Desktop, Excel)


Microsoft Account (all)
Basic (online service)
Service principal

Function Reference Documentation Sql.Database


Sql.Databases

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
By default, Power BI installs an OLE DB driver for Azure Synapse Analytics (SQL DW).
However, for optimal performance, we recommend that the customer installs the SQL
Server Native Client before using the Azure Synapse Analytics (SQL DW) connector. SQL
Server Native Client 11.0 and SQL Server Native Client 10.0 are both supported in the
latest version.

Capabilities Supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Command timeout in minutes
Native SQL statement
Relationship columns
Navigate using full hierarchy
SQL Server failover support
Enable cross database folding

Connect to Azure Synapse Analytics (SQL DW) from Power Query Desktop
To make the connection from Power Query Desktop:

1. Select the Azure Synapse Analytics SQL option in the connector selection.

2. In the SQL Server database dialog that appears, provide the name of the server
and database (optional). In this example, TestAzureSQLServer is the server name
and AdventureWorks2012 is the database.

3. Select either the Import or DirectQuery data connectivity mode.

You can also select and enter advanced options that will modify the connection
query, such as a command timeout or a native query (SQL statement). More
information: Connect using advanced options

4. Select OK.

5. If this is the first time you're connecting to this database, select the authentication
type, input your credentials, and select the level to apply the authentication
settings to. Then select Connect.

7 Note

Although the Windows authentication type is displayed, Azure Synapse
Analytics SQL doesn't support the Windows authentication type. Use either
the Database or the Microsoft account authentication types.

For more information about authentication methods, go to Authentication with a
data source.

If the connection is not encrypted, you'll be prompted with the following dialog.

Select OK to connect to the database by using an unencrypted connection, or
follow the instructions in Enable encrypted connections to the Database Engine to
set up encrypted connections to Azure Synapse Analytics (SQL DW).

6. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in Power Query
Editor.

Connect to Azure Synapse Analytics (SQL DW) from Power Query Online
To make the connection from Power Query Online:

1. Select the Azure Synapse Analytics (SQL DW) option in the connector selection.

2. In the Azure Synapse Analytics (SQL DW) dialog that appears, provide the name
of the server and database (optional). In this example, TestAzureSQLServer is the
server name and AdventureWorks2012 is the database.

You can also select and enter advanced options that will modify the connection
query, such as a command timeout or a native query (SQL statement). More
information: Connect using advanced options

3. If this is the first time you're connecting to this database, select the authentication
kind and input your credentials.
4. If required, select the name of your on-premises data gateway.

5. If the connection is not encrypted, clear the Use Encrypted Connection check box.

6. Select Next to continue.

7. In Navigator, select the data you require, and then select Transform data.

Connect using advanced options


Both Power Query Desktop and Power Query Online provide a set of advanced options
that you can add to your query if needed.

The following table lists all of the advanced options you can set in Power Query Desktop
and Power Query Online.

Command timeout in minutes: If your connection lasts longer than 10 minutes (the default timeout), you can enter another value in minutes to keep the connection open longer.

SQL statement: For information, go to Import data from a database using native database query.

Include relationship columns: If checked, includes columns that might have relationships to other tables. If this box is cleared, you won't see those columns.

Navigate using full hierarchy: If checked, the navigator displays the complete hierarchy of tables in the database you're connecting to. If cleared, the navigator displays only the tables whose columns and rows contain data.

Enable SQL Server Failover support: If checked, when a node in the Azure SQL failover group isn't available, Power Query moves from that node to another when failover occurs. If cleared, no failover occurs.

Enable cross database folding: This option is only available in Power Query Online.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to Azure Synapse Analytics.

Troubleshooting

Always Encrypted columns


Power Query doesn't support 'Always Encrypted' columns.
Azure Synapse Analytics workspace (Beta)
Article • 07/18/2023

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Organizational account

7 Note

This Azure Synapse Analytics workspace connector doesn't replace the Azure
Synapse Analytics (SQL DW) connector. This connector makes exploring data in
Synapse workspaces more accessible. Some capabilities aren't present in this
connector, including native query and DirectQuery support.

7 Note

This connector supports access to all data in your Synapse workspace, including
Synapse Serverless, Synapse on-demand, and Spark tables.

Prerequisites
Before you can sign in to Synapse workspaces, you must have access to Azure Synapse
Analytics Workspace.

Capabilities Supported
Import
Connect to Synapse workspace data from Power Query Desktop
To connect to Synapse workspace data:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Azure Synapse
Analytics workspace (Beta). Then select Connect.

2. If this is the first time you are connecting to this workspace, you'll be asked to sign
in to your Synapse account. To sign in, select Sign in.
3. In the Sign in with Microsoft window that appears, provide your credentials to
sign in to your Synapse account. Then select Next.

4. Once you've successfully signed in, select Connect.

Once the connection is established, you’ll see a list of the workspaces you have access
to. Drill through the workspaces, databases, and tables.

You can Load the selected table, which brings the entire table into Power BI Desktop, or
you can select Transform Data to edit the query, which opens the Power Query editor.
You can then filter and refine the set of data you want to use, and then load that refined
set of data into Power BI Desktop.

Troubleshooting

I don't see my Synapse workspace in the connector


The Synapse connector is using Azure role-based access control (RBAC) to find the
Synapse workspaces you have access to.

If your access is only defined in Synapse RBAC, you might not see the workspace.

Make sure your access is defined by Azure RBAC to ensure all Synapse workspaces are
displayed.
Azure Table Storage
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Customer Insights (Dataflows)
Analysis Services

Authentication Types Supported Account Key

Function Reference Documentation AzureStorage.Table

Capabilities Supported
Import

Connect to Azure Table Storage from Power Query Desktop
Power Query Desktop includes Excel, Power BI (Datasets), and Analysis Services as
experiences.

To make the connection to Azure Table Storage, follow these steps:

1. From Get Data, select the Azure category, select Azure Table Storage, and then
select Connect. More information: Where to get data
2. In the Azure Table Storage window that appears, enter the name or the URL of the
storage account where your table is located. Don't add the name of the table to
the URL.
3. Select OK.

4. Add the Azure table storage account key and select Connect.

5. In Navigator, select one or multiple tables to import and use in your application.
Then select either Load to load the table, or Transform Data to open the Power
Query Editor where you can filter and refine the set of data you want to use, and
then load that refined set of data.
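
For reference, the connection created in these steps corresponds to a source step that uses the AzureStorage.Table function with the storage account name or URL. The following is a minimal sketch with a placeholder account name.

Power Query M

let
    // Placeholder storage account URL; don't include the table name in the URL
    Source = AzureStorage.Table("https://<storageaccount>.table.core.windows.net")
in
    Source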

Connect to Azure Table Storage from Power Query Online
Power Query Online includes Power BI (Dataflows), Power Apps (Dataflows), and
Customer Insights (Dataflows) as experiences.

To make the connection, take the following steps:

1. Select the Azure Table Storage option in the connector selection. More
information: Where to get data

2. In the Azure Table Storage dialog that appears, enter the name or URL of the
Azure Storage account where the table is housed. Don't add the name of the table
to the URL.

3. Add your Azure table storage account key, and then select Next.

4. Select one or multiple tables to import and use, then select Transform Data to
transform data in the Power Query editor.
Copy your account key for Azure Table Storage
Your Azure Table Storage account key is the same as your Azure Blob storage account
key. To retrieve your Azure Table Storage account key to use while authenticating your
account in Power Query, follow these steps:

1. Sign in to the Azure portal .

2. Locate your Azure Blob Storage account where your table is housed.

3. In the storage account menu pane, under Security + networking, select Access
keys.

4. In the key1 section, locate the Key value. Select Show next to the key value.
5. Select the Copy to clipboard icon to copy the Key value.
Bitsight Security Ratings
Article • 07/18/2023

7 Note

The following connector article is provided by Bitsight, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the Bitsight website and use the support
channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported API key

Function Reference Document —

Prerequisites
A user must have a Bitsight Security Ratings product in order to access the Bitsight data
in Power BI. For more information on Bitsight Security Ratings, go to
https://www.bitsight.com/security-ratings .

Users must also have the March 2021 release of Power BI Desktop or later.

Capabilities supported
Import

Connect to Bitsight from Power BI Desktop


To connect to Bitsight:

1. Retrieve API token from Bitsight.

2. In Power BI Desktop, select Get Data from the Home ribbon, select More from the
drop down, and search for Bitsight.

3. If this is the first time you're getting the data through the Bitsight connector, a
prompt appears to inform you of the connection to a third-party service.

4. Place your Bitsight API token in Power BI. In the Window that appears, provide
your credentials.

5. Once the connection is established, you can preview and select multiple data
points in the Navigator dialog box to create an output.

You can Load the selected table, which brings the entire table into Power BI Desktop, or
you can select Transform Data to edit the query, which opens the Power Query editor.
You can then filter and refine the set of data you want to use, and then load that refined
set of data into Power BI Desktop.

Limitations and issues

Bitsight for Security Performance Management


You can only view data on one company at a time, and the connector assumes the
parent company by default. The parent company can be changed.
Bloomberg Data and Analytics
Article • 07/18/2023

7 Note

The following connector article is provided by Bloomberg, the owner of this


connector and a member of the Microsoft Power Query Connector Certification
Program. If you have questions regarding the content of this article or have
changes you would like to see made to this article, visit the Bloomberg website and
use the support channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Organizational account

Prerequisites
Your organization must subscribe to Bloomberg PORT Enterprise and you must be a
Bloomberg Anywhere user and have a Bloomberg biometric authentication device (B-Unit).

Capabilities Supported
Import

Connect to Bloomberg Data and Analytics


To connect to Bloomberg Data and Analytics:

1. In Power BI Desktop, select Home > Get Data.


2. Select Other from the categories on the left, select Bloomberg Data and Analytics,
and then select Connect.

3. If this is the first time you're connecting to the Bloomberg Data and Analytics
connector, a third-party notice will be displayed. Select Don't warn me again with
this connector if you don't want this message to be displayed again, and then
select Continue.

4. Enter a Bloomberg Query Language (BQL) query to specify what data you want to
get. To learn more about BQL, contact your Bloomberg Sales representative. Select
OK.
5. To sign in to your Bloomberg account, select Sign in.

6. In the window that appears, provide your credentials to sign in to your Bloomberg
account. If you entered an email address and a password, select Next.
7. Enter your B-Unit code and select Log In.
8. Once you've successfully signed in, select Connect.

Once the connection is established, you will see data available for preview in Navigator.
You can Load the selected table, or you can select Transform Data to edit the query,
which opens Power Query Editor. You can then filter and refine the set of data you want
to use, and then load that refined set of data into Power BI Desktop.
BQE Core
Article • 07/18/2023

7 Note

The following connector article is provided by BQE, the owner of this connector and
a member of the Microsoft Power Query Connector Certification Program. If you
have questions regarding the content of this article or have changes you would like
to see made to this article, visit the BQE website and use the support channels
there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported BQE Core Account

Function Reference Documentation —

Prerequisites
To use the BQE Core Power BI connector, you must have a BQE Core account with
username and password.

Capabilities supported
Import

Connect to BQE Core from Power Query


Desktop
To connect to BQE Core data:
1. Launch Power BI Desktop and enter the Get Data experience.

2. From the Other category, select BQEDataConnector, and then select Connect.

3. Select Sign In. You'll be prompted to sign in to Core.


4. In the sign in screen, enter your Core email and password. Select Login.

5. You'll then be prompted to select your Core company file.


a. Select the Core company file you want to use.
b. (Optional) If you select Remember my consent, the next time you connect to
this Core company file you won't need to grant permission again.
c. Select Grant Permission.
6. Select Connect, and then select a module. For reference, review the API Reference
under the Core API Documentation .

7. From the Navigator, select the tables to load, and then select Transform Data to
transform the data in Power Query.
CData Connect Cloud
Article • 07/18/2023

7 Note

The following connector article is provided by CData, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the CData website and use the support
channels there.

Summary
Item Description

Release State General Availability

Products Supported Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported CData account (OAuth)

Function Reference Docs —

Prerequisites
A CData Connect Cloud account
At least one connection configured in your CData Connect Cloud account

Capabilities supported
Import

Connect to CData Connect Cloud


Follow these steps to connect to your CData Connect Cloud account:

1. In Power BI Desktop, select Get Data.


2. In the search field, enter CData.

3. Select CData Connect Cloud in the list and then select Connect.

4. In the CData Connect Cloud connection window, select Sign in.


5. In the sign-in window, enter your CData Connect Cloud sign-in information, and
then select Continue.

6. Select Connect in the connection window.


The Navigator window appears and allows you to import your data.

Import data
With the Navigator window open, follow these steps to access your CData Connect
Cloud data:

1. Expand the CData Connect Cloud tree. Your connections appear as subtrees.

2. Select the data from each connection that you want to import.

3. Select Load or Transform Data.

Load instructs the connector to execute the underlying query to CData


Connect Cloud.
Transform Data launches the Power Query editor and allows you to adjust
the query before loading the data. When you're finished using the Power
Query editor, select Close & Apply in the navigation ribbon to apply your
changes and load the data.
Celonis EMS (Beta)
Article • 08/14/2023

7 Note

The following connector article is provided by Celonis, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the Celonis website and use the support
channels there.

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Organizational Account

Function Reference Documentation Web.Contents


Parquet.Document
Json.Document
Json.FromValue

Prerequisites
Before you can sign in to Celonis EMS , you must have a Celonis EMS account
(username/password).

Capabilities Supported
Import
Navigate using full hierarchy
Connect to Celonis EMS from Power Query Desktop
To make the connection, take the following steps:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Celonis EMS in
the connector selection and then select Connect.

2. The Celonis EMS dialog now appears, with an example URL. Enter your Celonis
EMS Team URL and then select OK.
3. Enter your Application Key or your Personal API Key generated in the Celonis
EMS.

4. Once you're done, select Connect.

5. Upon a successful connection, the Navigator opens with the list of packages
available in the given EMS team. Select the Knowledge Model Record you want to
import and then select Load.
Limitations and issues
You should be aware of the following limitations and issues associated with accessing
Celonis EMS data:

Celonis EMS has a built-in limit of 200,000 rows and 20 columns returned per record.

Only defined records can be imported. Autogenerated records are excluded here.
CloudBluePSA (Beta)
Article • 07/13/2023

7 Note

The following connector article is provided by CloudBluePSA, the owner of this


connector and a member of the Microsoft Power Query Connector Certification
Program. If you have questions regarding the content of this article or have
changes you would like to see made to this article, visit the CloudBluePSA website
and use the support channels there.

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported API Key

Function Reference Documentation CloudBluePSA Connector.Help

Prerequisites
Before you can use the CloudBluePSA connector, you must have a CloudBluePSA
instance (username/password) and an API key. Sign in to your PSA instance, which is
usually at a URL similar to YOUR_COMPANY_NAME.cloudbluepsa.io , and then navigate to
Setup > Employees > Find employees and add a new API user, which gives you an API
key.

Capabilities
Import
Connect to CloudBluePSA from Power Query Desktop
To connect to CloudBluePSA data:

1. Select Get Data from the Home ribbon in Power BI Desktop.

2. Select the Search box and start typing CloudBluePSA. Then select CloudBluePSA
from the list on the right, and select Connect.

3. In the Retrieve all pages of data window that appears, copy and paste the URL
generated on the GET side of the API endpoint of your choice. Then in Filter, copy
and paste the constructed filter on the same API endpoint. For example:
URL:
https://INSTANCE_URL/webapi/v1.3/tickets/getticketsreport

Filter:
{ "pageno": 1, "pagesize": 10, "filters": [{"field": "Type", "op": "cn", "data": "n"}],
"filterop": "AND", "sortcolumn": "Name", "sortorder": "Asc", "resulttype": "Json",
"includeccysymbols": true, "gridcolumns": "RefNumber,Type,Name" }

4. Select OK.

5. Provide your API key generated in CloudBluePSA and select Connect.

6. Select Transform Data. This selection opens the Power Query
editor.

7. Expand the Data column, then select OK.

8. Expand the Column1.1 column. This time, on the list of columns, all the grid
columns included in the filter are displayed. Select as many columns as required,
and then select OK. All the selected data is now displayed, and can be reshaped
and used to create reports as required.
9. Select Close & Apply. You can now start using your data.

Connect to CloudBluePSA from Power Query Online
To connect to CloudBluePSA:

1. From Choose data source, start typing in the search box: CloudBluePSA.

2. In Connect to data source, provide the URL and Filter as defined and generated in
your instance, as shown in the example inside each text box. Finally paste your API
key in the Account Key field.
3. Select Next.

4. In the Navigator screen, select the Expand button next to the Data column, and
then select OK.

5. Two new columns now appear. Select the Expand button next to the Column1.1
column and then select OK. You can now start using your data.
Additional Resources
You might also find the following CloudBluePSA information useful:

CloudBluePSA Help Center


Connecting Power BI to the API
API
Creating an API Key
Using the API
Common Data Service (Legacy)
Article • 07/13/2023

7 Note

The Common Data Service (Legacy) connector has been superseded by the Power
Query Dataverse connector. In most cases, we recommend that you use the
Dataverse connector instead of the Common Data Service (Legacy) connector.
However, there may be limited cases where it's necessary to choose the Common
Data Service (Legacy) connector. These cases are described in When to use the
Common Data Service (Legacy) connector.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Dynamics 365 Customer Insights

Authentication types Organizational account

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
You must have a Common Data Service (Legacy) environment with maker permissions to
access the portal, and read permissions to access data within tables.

Capabilities supported
Server URL
Advanced
Reorder columns
Add display column

Connect to Common Data Service (Legacy)


from Power BI Desktop
To connect to Common Data Service (Legacy) from Power BI Desktop:

1. Select Get data from the Home tab.

2. In the Get Data dialog box, select Power Platform > Common Data Service
(Legacy), and then select Connect.
3. Enter the Common Data Service (Legacy) environment URL of the data you want to
load. Use the format https://<yourenvironmentid>.crm.dynamics.com/. More
information: Finding your Dataverse environment URL

When the table is loaded in the Navigator dialog box, by default the columns in
the table are reordered in alphabetical order by the column names. If you don't
want the columns reordered, in the advanced settings enter false in Reorder
columns.

Also when the table is loaded, by default if the table contains any picklist fields, a
new column with the name of the picklist field with _display appended at the end
of the name is added to the table. If you don't want the picklist field display
column added, in the advanced settings enter false in Add display column.

When you've finished filling in the information, select OK.

4. If this attempt is the first time you're connecting to this site, select Sign in and
input your credentials. Then select Connect.

5. In Navigator, select the data you require, then either load or transform the data.
Connect to Common Data Service (Legacy)
from Power Query Online
To connect to Common Data Service (Legacy) from Power Query Online:

1. From the Data sources page, select Common Data Service (Legacy).

2. Enter the server URL address of the data you want to load.
3. If necessary, enter an on-premises data gateway if you're going to be using on-
premises data. For example, if you're going to combine data from Dataverse and
an on-premises SQL Server database.

4. Sign in to your organizational account.

5. When you've successfully signed in, select Next.

6. In the navigation page, select the data you require, and then select Transform
Data.

Finding your Common Data Service (Legacy)


environment URL
Open Power Apps . In the upper right of the Power Apps page, select the environment
you're going to connect to. Select the settings icon, and then select Advanced
settings.

In the new browser tab that opens, copy the root of the URL. This root URL is the unique
URL for your environment. The URL will be in the format of
https://<yourenvironmentid>.crm.dynamics.com/. Keep this URL somewhere handy so
you can use it later, for example, when you create Power BI reports.
When to use the Common Data Service (Legacy)
connector
Dataverse is the direct replacement for the Common Data Service connector. However,
there may be times when it's necessary to choose the Common Data Service (Legacy)
connector instead of the Dataverse connector:

There are certain Tabular Data Stream (TDS) data types that are supported in OData
when using Common Data Service (Legacy) that aren't supported in Dataverse. The
supported and unsupported data types are listed in How Dataverse SQL differs from
Transact-SQL.

All of these features will be added to the Dataverse connector in the future, at which
time the Common Data Service (Legacy) connector will be deprecated.

More information: Accessing large datasets

Limitations and issues

Dataverse OData API performance and throttling limits


For information about OData API performance and throttling limits for Dataverse
connections, go to Requests limits and allocations. These limitations apply to both the
Common Data Service (Legacy) connector (which uses the OData API as an
implementation detail) and the OData Feed connector when accessing the same
endpoint.
Entity retrieval rate
As a guideline, most default entities will be retrieved at a rate of approximately 500 rows
per second using the Common Data Service (Legacy) connector. Take this rate into
account when deciding whether you want to connect to Dataverse or export to data
lake. If you require faster retrieval rates, consider using the Export to data lake feature or
the Dataverse connector. More information: Alternative Dataverse connections

Alternative Dataverse connections


There are several alternative ways of extracting and migrating data from Dataverse:

Use the Azure Synapse Link feature in Power Apps to extract data from Dataverse
into Azure Data Lake Storage Gen2, which can then be used to run analytics. For
more information about the Azure Synapse Link feature, go to What is Azure
Synapse Link for Dataverse?.

Use the OData connector to move data in and out of Dataverse. For more
information on how to migrate data between Dataverse environments using the
dataflows OData connector, go to Migrate data between Dataverse environments
using the dataflows OData connector.

Use the Dataverse connector to access read-only data in Dataverse. For more
information about this feature, go to View table data in Power BI Desktop.

7 Note

Both the Dataverse connector and the OData APIs are meant to serve analytical
scenarios where data volumes are relatively small. The recommended approach for
bulk data extraction is “Azure Synapse Link”.

Querying display columns must include the field in the


query
When using this connector, if you're including <field> display columns, you must
include <field> in the query as well, or the <field>_display column might be null. For
example:

Power Query M

Table.SelectColumns(#"Navigation 1", {"statuscode", "statuscode_display"})


If statuscode isn't included, statuscode_display might be null, which can cause an error.

Accessing large datasets


Power BI datasets contained in Dataverse can be very large. If you're using the Power
Query Dataverse connector, any specific query that accesses the dataset must return less
than 80 MB of data. So you might need to query the data multiple times to access all of
the data in the dataset. Using multiple queries can take a considerable amount of time
to return all the data.

If you're using the Common Data Service (Legacy) connector, you can use a single query
to access all of the data in the dataset. This connector works differently and returns the
result in “pages” of 5 K records. Although the Common Data Service (Legacy) connector
is more efficient in returning large amounts of data, it can still take a significant amount
of time to return the result.

Instead of using these connectors to access large datasets, we recommend that you use
Azure Synapse Link to access large datasets. Using Azure Synapse Link is even more
efficient than either the Power Query Dataverse or Common Data Service (Legacy)
connectors, and it is specifically designed around data integration scenarios.
Databricks
Article • 07/13/2023

7 Note

The following connector article is provided by Databricks, the owner of this


connector and a member of the Microsoft Power Query Connector Certification
Program. If you have questions regarding the content of this article or have
changes you would like to see made to this article, visit the Databricks website and
use the support channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Azure Active Directory


Organizational Account
Personal Access Token

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
This connector is only for use with a Databricks SQL Warehouse running on AWS and
using OAuth. If you're using Azure Databricks, use the Azure Databricks connector. If
you aren't using OAuth with your Databricks SQL Warehouse (on AWS or GCP), use the
Azure Databricks connector too. Databricks Community Edition isn't supported.

Capabilities supported
Import
DirectQuery (Power BI Datasets)

Connect to Databricks from Power Query


Desktop
To connect to Databricks from Power Query Desktop, take the following steps:

1. In the Get Data experience, search for Databricks to shortlist the Databricks
connector. You should only use the Databricks connector here for your Databricks
SQL Warehouse data (running on AWS) if you're using OAuth for authentication.

2. Provide the Server hostname and HTTP Path for your Databricks SQL Warehouse.
Refer to Configure the Databricks ODBC and JDBC drivers for instructions to look
up your "Server hostname" and "HTTP Path". Enter this information accordingly.
You can optionally supply a default catalog and/or database under Advanced
options. Select OK to continue.
3. Provide your credentials to authenticate with your Databricks SQL Warehouse. You
have three options for credentials:

Username / Password. This option isn't available if your organization/account


uses 2FA/MFA.

Personal Access Token. Refer to Personal access tokens for instructions on


generating a Personal Access Token (PAT).

OAuth (OIDC). Sign in to your organizational account using the browser


popup.

7 Note
Once you enter your credentials for a particular Databricks SQL Warehouse,
Power BI Desktop caches and reuses those same credentials in subsequent
connection attempts. You can modify those credentials by going to File >
Options and settings > Data source settings. More information: Change the
authentication method

4. Once you successfully connect, the Navigator shows the data available to you on
the cluster. You can choose either Transform Data to transform the data using
Power Query or Load to load the data in Power Query Desktop.

Connect to Databricks data from Power Query


Online
To connect to Databricks from Power Query Online, take the following steps:

1. In the Get Data experience, select the Dataflow category. (Refer to Creating a
dataflow for instructions.) Shortlist the available Databricks connector with the
search box. Select the Databricks connector for your Databricks SQL Warehouse.
2. Enter the Server hostname and HTTP Path for your Databricks SQL Warehouse.
Refer to Configure the Databricks ODBC and JDBC drivers for instructions to look
up your "Server hostname" and "HTTP Path". You can optionally supply a default
catalog and/or database under Advanced options.

3. Provide your credentials to authenticate with your Databricks SQL Warehouse.


There are three options for credentials:

Basic. Use this option when authenticating with a user name and password.
This option isn't available if your organization/account uses 2FA/MFA.
Account Key. Use this option when authenticating using a Personal Access
Token. Refer to Personal access tokens for instructions on generating a
Personal Access Token (PAT).
Organizational account. Use this option when authenticating with OAuth.
Sign in to your organizational account using the browser popup.

4. Once you successfully connect, the Navigator appears and displays the data
available on the server. Select your data in the navigator. Then select Next to
transform the data in Power Query.
Dataverse
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights

Authentication types Organizational account

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
You must have a Dataverse environment.

You must have read permissions to access data within tables.

To use the Dataverse connector, the TDS endpoint setting must be enabled in your
environment. More information: Manage feature settings

To use the Dataverse connector, one of TCP ports 1433 or 5558 needs to be open to
connect. Port 1433 is used automatically. However, if port 1433 is blocked, you can use
port 5558 instead. To enable port 5558, you must append that port number to the
Dataverse environment URL, such as yourenvironmentid.crm.dynamics.com,5558. More
information: SQL Server connection issue due to closed ports

7 Note
If you are using Power BI Desktop and need to use port 5558, you must create a
source with the Dataverse environment URL, such as
yourenvironmentid.crm.dynamics.com,5558, in Power Query M.
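
For example, a minimal Power Query M source definition that appends port 5558 might look like the following sketch; the environment name is a placeholder.

Power Query M

let
    // Append port 5558 to the environment URL after a comma.
    Source = CommonDataService.Database("yourenvironmentid.crm.dynamics.com,5558")
in
    Source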

Capabilities supported
Server URL
Import
DirectQuery (Power BI Datasets)
Advanced
Include relationship columns

Connect to Dataverse from Power Query


Desktop

7 Note

The Power Query Dataverse connector is mostly suited towards analytics workloads,
not bulk data extraction. More information: Alternative Dataverse connections

To connect to Dataverse from Power Query Desktop:

1. Select the Dataverse option from Get Data. More information: Where to get data

2. If you're connecting to this site for the first time, select Sign in and input your
credentials. Then select Connect.

3. In Navigator, select the data you require, then either load or transform the data.
4. If you're using Power Query from Power BI Desktop, you'll be asked to select either
the Import or DirectQuery data connectivity mode. Then select OK.

Connect to Dataverse from Power Query Online


To connect to Dataverse from Power Query Online:
1. Select the Dataverse option in the Choose data source page. More information:
Where to get data

2. In the Connect to data source page, leave the server URL address blank. Leaving
the address blank lists all of the available environments you have permission to use
in the Power Query Navigator window.

7 Note

If you need to use port 5558 to access your data, you'll need to load a specific
environment with port 5558 appended at the end in the server URL address.
In this case, go to Finding your Dataverse environment URL for instructions
on obtaining the correct server URL address.

3. If necessary, enter an on-premises data gateway if you're going to be using on-


premises data. For example, if you're going to combine data from Dataverse and
an on-premises SQL Server database.

4. Sign in to your organizational account.


5. When you've successfully signed in, select Next.

6. In the navigation page, select the data you require, and then select Transform
Data.

Connect using advanced options


Power Query Online provides advanced options that you can add to your query if
needed.

The following table lists the advanced options you can set in Power Query Online.

Advanced Description
option

Include If checked, includes columns that might have relationships to other tables. If
relationship this box is cleared, you won’t see those columns. More information:
columns Performance issues related to relationship columns

Once you've selected the advanced options you require, select Next to connect to
Dataverse.

Finding your Dataverse environment URL


If you need to use port 5558 to connect to Dataverse, you'll need to find your Dataverse
environment URL. Open Power Apps . In the upper right of the Power Apps page,
select the environment you're going to connect to. Select the settings icon, and then
select Advanced settings.
In the new browser tab that opens, copy the root of the URL. This root URL is the unique
URL for your environment. The URL will be in the format of
https://<yourenvironmentid>.crm.dynamics.com/. Make sure you remove https:// and
the trailing / from the URL before pasting it to connect to your environment. Append
port 5558 to the end of the environment URL, for example
yourenvironmentid.crm.dynamics.com,5558.

When to use the Common Data Service


(Legacy) connector
Dataverse is the direct replacement for the Common Data Service connector. However,
there may be times when it's necessary to choose the Common Data Service (Legacy)
connector instead of the Dataverse connector:

There are certain Tabular Data Stream (TDS) data types that are supported in OData
when using Common Data Service (Legacy) that aren't supported in Dataverse. The
supported and unsupported data types are listed in How Dataverse SQL differs from
Transact-SQL.

All of these features will be added to the Dataverse connector in the future, at which
time the Common Data Service (Legacy) connector will be deprecated.

More information: Accessing large datasets

Limitations and issues

Dataverse performance and throttling limits


For information about performance and throttling limits for Dataverse connections, go
to Requests limits and allocations. These limitations apply to both the Dataverse
connector and the OData Feed connector when accessing the same endpoint.

Table retrieval rate


As a guideline, most default tables will be retrieved at a rate of approximately 500 rows
per second using the Dataverse connector. Take this rate into account when deciding
whether you want to connect to Dataverse or export to data lake. If you require faster
retrieval rates, consider using the Export to data lake feature or Tabular Data Stream
(TDS) endpoint. For more information, go to Alternative Dataverse connections.

Alternative Dataverse connections


There are several alternative ways of extracting and migrating data from Dataverse:

Use the Azure Synapse Link feature in Power Apps to extract data from Dataverse
into Azure Data Lake Storage Gen2, which can then be used to run analytics. For
more information about the Azure Synapse Link feature, go to What is Azure
Synapse Link for Dataverse?.

Use the OData connector to move data in and out of Dataverse. For more
information on how to migrate data between Dataverse environments using the
dataflows OData connector, go to Migrate data between Dataverse environments
using the dataflows OData connector.

7 Note

Both the Dataverse connector and the OData APIs are meant to serve analytical
scenarios where data volumes are relatively small. The recommended approach for
bulk data extraction is “Azure Synapse Link”.

SQL Server connection issue due to closed ports


When connecting with the Dataverse connector, you might encounter an Unable to
connect error indicating that a network or instance-specific error occurred while
establishing a connection to SQL Server. This error is likely caused by the TCP ports 1433
or 5558 being blocked during connection. To troubleshoot the blocked port error, go to
Blocked ports.
Using native database queries with Dataverse
You can connect to Dataverse using a custom SQL statement or a native database query.
While there's no user interface for this experience, you can enter the query using the
Power Query Advanced Editor. In order to use a native database query, a Database must
be specified as the Source.

Power Query M

Source = CommonDataService.Database([DATABASE URL])

Once a database source has been defined, you can specify a native query using the
Value.NativeQuery function.

Power Query M

myQuery = Value.NativeQuery(Source, [QUERY], null, [EnableFolding=true])

Altogether, the query looks like this.

Power Query M

let
Source = CommonDataService.Database("[DATABASE]"),
myQuery = Value.NativeQuery(Source, "[QUERY]", null,
[EnableFolding=true])
in
myQuery

Misspelling a column name might result in an error message about query folding
instead of an error about the missing column.
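
As a concrete illustration of this pattern, the following sketch retrieves two columns from the standard account table. The environment URL is a placeholder, and the table and column names assume the default Dataverse schema.

Power Query M

let
    Source = CommonDataService.Database("yourenvironmentid.crm.dynamics.com"),
    // Dataverse SQL (TDS endpoint) query; folding is enabled so transformations can be pushed to the source.
    myQuery = Value.NativeQuery(Source, "SELECT accountid, name FROM account", null, [EnableFolding=true])
in
    myQuery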

Accessing large datasets


Power BI datasets contained in Dataverse can be very large. If you're using the Power
Query Dataverse connector, any specific query that accesses the dataset must return less
than 80 MB of data. So you might need to query the data multiple times to access all of
the data in the dataset. Using multiple queries can take a considerable amount of time
to return all the data.

If you're using the Common Data Service (Legacy) connector, you can use a single query
to access all of the data in the dataset. This connector works differently and returns the
result in "pages" of 5 K records. Although the Common Data Service (Legacy) connector
is more efficient in returning large amounts of data, it can still take a significant amount
of time to return the result.

Instead of using these connectors to access large datasets, we recommend that you use
Azure Synapse Link to access large datasets. Using Azure Synapse Link is even more
efficient than either the Power Query Dataverse or Common Data Service (Legacy)
connectors, and it's specifically designed around data integration scenarios.

Performance issues related to relationship columns


Similar to the SQL Server connector, there's an option available to disable navigation
properties (relationship columns) in the Dataverse connector to improve performance.
This option can be set in the advanced options available in Power Query Online, or it can
be set using the CreateNavigationProperties=false parameter in the Dataverse
connector function.

Power Query M

Source = CommonDataService.Database("{crminstance}.crm.dynamics.com",
[CreateNavigationProperties=false]),
Dataflows
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights (Dataflows)

Authentication types Organizational account

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
You must have an existing Dataflow with maker permissions to access the portal, and
read permissions to access data from the dataflow.

Capabilities supported
Import
DirectQuery (Power BI Datasets)

7 Note

DirectQuery requires Power BI premium. More information: Premium features of


dataflows
Get data from Dataflows in Power BI Desktop
To get data from Dataflows in Power BI Desktop:

1. Select Get data from the Home tab.

2. In the Get Data dialog box, select Power Platform > Dataflows, and then select
Connect.

3. If this attempt is the first time you're connecting to this site, select Sign in and
input your credentials. Then select Connect.

4. In Navigator, select the Dataflow you require, then either load or transform the
data.
Get data from Dataflows in Power Query
Online
To get data from Dataflows in Power Query Online:

1. From the Data sources page, select Dataflows.

2. Adjust the connection name.


3. If necessary, enter an on-premises data gateway if you're going to be using on-
premises data. For example, if you're going to combine data from Dataflows and
an on-premises SQL Server database.

4. Sign in to your organizational account.

5. When you've successfully signed in, select Next.

6. In the navigation page, select the data you require, and then select Transform
Data.

Frequently asked questions


DirectQuery is not working for me in Power BI—what should I do?

To get DirectQuery to run, you need to have Power BI Premium and adjust a few
configuration items in your Power BI workspace. These actions are explained in the
dataflows premium features article.

My dataflow table doesn't show up in the dataflow connector in Power BI

You're probably using a Dataverse table as the destination for your standard dataflow.
Use the Dataverse/CDS connector instead or consider switching to an analytical
dataflow.

There's a difference in the data when I remove duplicates in dataflows—how can I


resolve this?
There could be a difference in data between design-time and refresh-time. We don't
guarantee which instance is being kept during refresh time. For information on how to
avoid inconsistencies in your data, go to Working with duplicates.

I'm getting data via the dataflow connector, but I'm receiving a 429 error code—how
can I resolve this?

An error code 429 usually means you've exceeded the limit of 1,000 requests per
minute. The error typically resolves itself if you wait a minute or two after the cooldown
period ends. The limit is in place to prevent dataflows and other Power BI functionality
from having degraded performance. Continued high load on the service can cause
further degradation, so reduce the number of requests to fewer than 1,000 per minute,
or adjust your script or model to stay within that limit. You should also avoid nested
joins that re-request dataflow data; instead, stage the data and perform merges within
your dataflow rather than in your dataset.

See also
Using the output of Dataflows from other Power Query experiences
Best practices for designing and developing complex dataflows
Best practices for reusing dataflows across environments and workspaces
Delta Sharing
Article • 07/13/2023

7 Note

The following connector article is provided by Databricks, the owner of this


connector and a member of the Microsoft Power Query Connector Certification
Program. If you have questions regarding the content of this article or have
changes you would like to see made to this article, visit the Databricks website and
use the support channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Key (Bearer Token)

Prerequisites
If you use Power BI Desktop you need to install the November release of Power BI
Desktop or later. Download the latest version .

The data provider sends an activation URL from which you can download a credentials
file that grants you access to the shared data.

After downloading the credentials file, open it with a text editor to retrieve the endpoint
URL and the token.

For detailed information about Delta Sharing, visit Access data shared with you using
Delta Sharing .

Capabilities supported
Import
Connect to Databricks Delta Sharing in Power
BI Desktop
To connect to Databricks using the Delta Sharing connector, use the following steps:

1. Open Power BI Desktop.

2. Navigate to the Get Data menu and search for Delta Sharing.

3. Select the connector and then select Connect.

4. Enter the endpoint URL retrieved from the credentials file in the Delta Sharing
Server URL field.

5. Optionally, in the Advanced Options tab you can set a Row Limit for the maximum
number of rows you can download. This is set to 1 million rows by default.

6. Select OK.

7. In the Authentication dialog box, enter the token retrieved from the credentials
file in the Bearer Token field.

8. Select Connect.

Limitations and considerations


This section describes any limitations or considerations of the Delta Sharing connector.

You need to make sure that the data loaded with the Delta Sharing connector fits in the
memory of your machine. To ensure this, the connector limits the number of imported
rows to the Row Limit set by the user.
Denodo
Article • 07/13/2023

7 Note

The following connector article is provided by Denodo, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the Denodo website and use the support
channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Basic


Windows

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
To use this connector, you must have installed the Denodo platform, and configured and
started its service. In case of a connection using an ODBC DSN, you must have correctly
configured the connection in the ODBC Data Source Administrator.

Capabilities supported
Import
DirectQuery (Power BI Datasets)
Connect to an ODBC data source from Power
Query Desktop
To make the connection, take the following steps:

1. In order to connect to data, select Get Data from the Home ribbon and select
Denodo in the Database section.

2. There are two ways to connect to the data source of your choice:

Through DSN (ODBC data source name)


Using a connection string

In the DSN or Connection String section of the Denodo Connector dialog box,
provide the Data source name (DSN) or the Connection String depending on the
type of connection you prefer.

When creating a Denodo-compatible connection string, you must take into


account that the Driver field must be omitted, as this is transparently set at
connection time by the connector itself.

The connection string must contain three mandatory parameters: SERVER, PORT
and DATABASE:

HTTP

SERVER=<Server name>;PORT=<Port number>;DATABASE=<Database name>

Additionally, it can contain an optional parameter: SSLmode:

HTTP

SERVER=<Server name>;PORT=<Port number>;DATABASE=<Database name>;SSLmode=<SSL mode>

Authentication parameters must be omitted, as authentication is configured in


later steps.

7 Note
When writing the connection string, it must be taken into account:

1. The connection string must keep the correct order of its parameters: SERVER,
PORT, DATABASE and SSLMode.
2. The name of these parameters must always be written in the same way. For
example, if you choose to write them in upper case, they must always be
written in upper case; if you decide to write them capitalized (writing the first
letter of a word in uppercase and the rest of the letters in lowercase) they
must always be written that way.

Doing otherwise could prevent Power BI from recognizing different Denodo
datasets in a report as belonging to the same Denodo data source and could, as a
consequence, cause Power BI to request separate authentication credentials for each
of them.

3. The second section, Enable debug mode, is an optional field that allows you to
add trace information to log files. These files are created by Power BI Desktop
when you enable tracing in the application using the Diagnostics tab in the
Options menu. Note that the default value for Enable debug mode is false, and in
this scenario, there will be no trace data in the log files from Denodo Power BI
custom connector.

4. The third section, Native Query, is an optional field where you can enter a query. If
this query field is used, the resulting dataset will be the result of the query instead
of a table or a set of tables.

You can write a query that queries only one of the databases that the datasource is
associated with.

SQL

SELECT title, name FROM film JOIN language ON film.language_id = language.language_id WHERE film.language_id = 1

If you want to write a query that queries more than one database, you have to
specify in the query the database that owns each table.

SQL

SELECT i_item_sk, country FROM sakila.country, ewd.item

5. The last section in Denodo Connector is Data connectivity mode, where you can
choose between Import mode or DirectQuery mode.
6. Once you're done, select OK.

7. Before showing the navigator window that displays a preview of the available data
in Denodo Virtual DataPort, you'll be asked for authentication. The Denodo Power
BI custom connector supports two authentication types: Windows and Basic.

Windows: When you choose to use Windows authentication, Power BI


Desktop connects to Virtual DataPort using Kerberos authentication.

In this case:

Kerberos authentication must be enabled in the Virtual DataPort server.

The Denodo Virtual DataPort database that the data source connects to
must be configured with the option ODBC/ADO.net authentication type
set to Kerberos.

Power BI Desktop must be running in the Windows domain, because the


ODBC driver requests the Kerberos ticket from the operating system's
ticket cache.

Make sure the Advanced Options page of the DSN configuration contains
all the needed configuration for using Kerberos as an authentication
method.
Basic: This authentication type allows you to connect Power BI Desktop to
your Virtual DataPort data using your Virtual DataPort server credentials.

8. Once you're done, select Connect.

9. In Navigator, select the data you need from the database you want and choose
Load, or choose Transform Data if you're going to modify the incoming data.
Connect to an ODBC data source from Power BI
service using the on-premises data gateway
To make the connection, take the following steps:

1. Configure the on-premises data gateway (enterprise gateway) that acts as a bridge,
providing quick and secure data transfer between on-premises data (data in your
Power BI Desktop application, not in the cloud) and the Power BI service.

2. Sign in and register your gateway. In the on-premises data gateway app, select the
Status tab to verify that your gateway is online and ready to be used.
3. Using the gateway settings page in Power BI service, create a data source for the
Denodo Power BI custom connector.
In order to create the data source, you have to specify the way to connect to the
data source of your choice:

Through DSN
Using a connection string

You also have to specify the authentication mode. The available authentication
methods are:

Windows: When you choose to use Windows authentication, Power BI service


connects to Virtual DataPort using Kerberos authentication. You need:

In Data Source Settings, enter the username and password to create the
Kerberos ticket.

Kerberos authentication must be enabled in the Virtual DataPort server.

The Denodo Virtual DataPort database that the data source connects to
must be configured with the option ODBC/ADO.net authentication type
set to Kerberos.

Make sure the Advanced Options page of the DSN configuration contains
all the needed configuration for using Kerberos as an authentication
method.
Basic: This authentication type allows you to create a data source in Power BI
service to connect to your Virtual DataPort data using your Virtual DataPort
server credentials.

4. If you use Windows authentication, under Advanced settings for the data source
you can enable the single sign-on (SSO) authentication schema in order to use the
same credentials of the user accessing your reports in Power BI for accessing the
required data in Denodo.
There are two options for enabling SSO: Use SSO via Kerberos for DirectQuery
queries and Use SSO via Kerberos for DirectQuery And Import queries. If you're
working with DirectQuery based reports, both options use the SSO credentials of
the user that signs in to the Power BI service. The difference comes when you work
with Import based reports. In this scenario, the former option uses the credentials
entered in the data source page (Username and Password fields), while the latter
uses the credentials of the dataset owner.

It's important to note that there are particular prerequisites and considerations
that you must take into account in order to use the Kerberos-based SSO. Some of
these essential requirements are:

Kerberos constrained delegation must be enabled for the Windows user


running the Microsoft Power BI Gateway, and configuration of both the local
Active Directory and Azure Active Directory environments should be
performed according to the instructions offered by Microsoft for this
purpose.

By default, the Microsoft Power BI Gateway sends the user principal name
(UPN) when it performs an SSO authentication operation. Therefore, you'll
need to review the attribute that you'll use as a login identifier in Denodo
Kerberos Authentication and, if it's different from userPrincipalName , adjust
the gateway settings according to this value.

The Microsoft Power BI Gateway configuration file called
Microsoft.PowerBI.DataMovement.Pipeline.GatewayCore.dll.config , stored at
\Program Files\On-premises data gateway has two properties called
ADUserNameLookupProperty and ADUserNameReplacementProperty that allow the
gateway to perform local Azure AD lookups at runtime. The
ADUserNameLookupProperty must specify against which attribute of the local
AD it must map the user principal name that comes from Azure AD. So, in this
scenario, ADUserNameLookupProperty should be userPrincipalName . Then, once
the user is found, the ADUserNameReplacementProperty value indicates the
attribute that should be used to authenticate the impersonated user (the
attribute that you'll use as the login identifier in Denodo).

You should also take into account that changes in this configuration file are at
the gateway level, and therefore will affect any source with which SSO
authentication is done through the Microsoft Power BI Gateway.

5. Once a data source is created for the Denodo connector, you can refresh Power BI
reports. To publish a report on powerbi.com, you need to:

Open the report in Power BI Desktop.


Select File > Publish > Publish to Power BI.
Save the report on the computer.
Select the workspace where you want to publish.

Troubleshooting

Loading data when a field in a Denodo view has more


than 42 relations with other views.
If a Denodo view being imported as a data source into Power BI has more than 42
relations with other views, Power BI might display the following error when accessing
the data transformation window:

Preview.Error: The type of the current preview value is too complex to display.

This error is due to a limitation in the Microsoft Power Query platform. In order to work
around it, select the failing data source (query) in the data transformation window and
access the advanced editor with View > Advanced Editor. Then edit the data source
expression in M language adding the following property to the options argument of
the Denodo.Contents function call:

CreateNavigationProperties=false

So your call would look similar to:

Source = Denodo.Contents(<dsn>, null, [CreateNavigationProperties=false])

This property will instruct Power BI not to try and generate navigation properties from
the relationships registered for the Denodo view accessed in this data source. So if you
need some of these relationships to be actually present in your Power BI data model,
you will need to manually register them afterwards.
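
Putting the workaround together, the edited data source expression in the Advanced Editor might look like the following sketch, where "DenodoODBC" is a placeholder DSN name.

Power Query M

let
    // Disabling navigation properties avoids the preview error for views with many relations.
    Source = Denodo.Contents("DenodoODBC", null, [CreateNavigationProperties=false])
in
    Source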
Digital Construction Works Insights
Article • 07/18/2023

Summary
Item Description

Release state General availability

Products supported Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication types supported Digital Construction Works JSON Web Token (JWT)

Function reference docs —

7 Note

The following connector article is provided by Digital Construction Works (DCW),


the owner of this connector and a member of the Microsoft Power Query
Connector Certification Program. If you have questions regarding the content of
this article or have changes you would like to see made to this article, visit the DCW
website and use the support channels there.

Prerequisites
Use of this connector requires a Digital Construction Works Integrations Platform
subscription. To learn more, go to
https://www.digitalconstructionworks.com/solutions/the-dcw-integrations-platform .
Visit https://www.digitalconstructionworks.com for company information.

Users of the Digital Construction Works (DCW) Integrations Platform can request a JSON
Web Token (JWT) from their project administrator in order to access data using the DCW
Insights connector. Users can then follow the documentation for the OData API to
connect to the datasets they want to use in Power BI.

Capabilities supported
Import
Connect to DCW Insights OData API from
Power Query Desktop
To connect to a DCW Insights project, take the following steps:

1. Under Get Data in Power BI Desktop, choose the Digital Construction Works
Insights connector from the Online Services.

2. In Insights Api Url, provide the URL to the OData API you want to connect to. You
need to use https , and you need your full project URL and product name included
in the URL. You can also enter query string parameters if the URL calls for them.

3. Select OK.

4. If this is the first time you're connecting to this endpoint, you'll be asked to enter
the JWT used to authorize you for this project. Then select Connect.
For more information about authentication methods, go to Authentication with a
data source.

7 Note

If the connection isn't specified to use https , you'll be prompted to update


your URL.

5. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in Power Query
editor.

Troubleshooting

Always Encrypted columns


Power Query doesn't support "Always Encrypted" columns.

OData.Feed
We use the following default settings when using OData.Feed:

Implementation = "2.0", MoreColumns = true, ODataVersion = 4
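
For reference, these defaults correspond to the options record of the standard OData.Feed function. A minimal hand-written equivalent might look like the following sketch; the URL is only a placeholder for your full project API URL.

Power Query M

let
    // Placeholder URL; use your full Insights OData API URL, including project and product name.
    Source = OData.Feed(
        "https://example.digitalconstructionworks.com/project/product/odata",
        null,
        [Implementation = "2.0", MoreColumns = true, ODataVersion = 4]
    )
in
    Source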


Dynamics 365 Online (legacy)
Article • 05/05/2023

This article provides basic information, prerequisites, and instructions on how to connect
to Dynamics 365 Online (legacy) from Power Query.

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Analysis Services

Authentication Types Supported Anonymous


Windows
Basic
Web API
Organizational account

Capabilities supported
Import

Connect to Dynamics 365 Online (legacy) from


Power Query Desktop
Power Query Desktop experiences include Power BI Desktop, Excel, and Analysis
Services.

To make the connection to Dynamics 365 Online (legacy), follow these steps:

1. From Get Data, select the Online Services category, select Dynamics 365 Online
(legacy), and then select Connect. More information: where to get data
2. In the window that appears, enter the server name of your Dynamics 365 Online
(legacy) instance. You can select Advanced to enter other URL parts.

7 Note

The URL format will be something like
https://<tenant>.crm.dynamics.com/api/data/v9.0.
The exact format of the URL you use to connect depends on your
region. For more information, see: Web API URL and versions
(docs.com).
If you select the Advanced option, you can append certain additional
parameters to the query to control what data is returned. For more
information, see: Query Data using the Web API (docs.com).
A minimal Power Query M sketch that uses this URL format with the
generic OData.Feed function appears after these steps.

3. Select OK.

4. If you're connecting to this data source for the first time, you can select one of
these authentication types: Anonymous, Windows, Basic, Web API, or
Organizational account. Enter your credentials and select Connect. The next time
you connect, it will remember your credentials.

5. In Navigator, select one or more files or folders to import and use in your
application. Then select either Load to load the table, or Transform Data to open
the Power Query editor where you can filter and refine the set of data you want to
use, and then load that refined set of data.
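
Because the Dynamics 365 Online (legacy) endpoint is an OData Web API URL, a roughly equivalent query can also be written with the generic OData.Feed function. This is only a sketch under that assumption; the tenant name is a placeholder.

Power Query M

let
    // The Web API URL format is https://<tenant>.crm.dynamics.com/api/data/v9.0
    Source = OData.Feed("https://yourtenant.crm.dynamics.com/api/data/v9.0")
in
    Source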
Eduframe Reporting (Beta)
Article • 07/18/2023

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Eduframe account

7 Note

The following connector article is provided by Drieam, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the Drieam website and use the support
channels there.

Prerequisites
Before you can sign in to Eduframe Reporting, you must have an Eduframe Admin
account (username/password) and have enabled the Eduframe Reporting integration. To
enable this integration, you can send an email to: [email protected].

Capabilities Supported
Import

Connect to Eduframe Reporting data


To connect to Eduframe Reporting data:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Online Services
from the categories on the left, select Eduframe (Beta), and then select Connect.
2. If this is the first time you're getting data through the Eduframe connector, a
preview connector notice will be displayed. Select Don't warn me again with this
connector if you don't want this message to be displayed again, and then select
Continue.

3. Enter the Eduframe domain that you want to access (this domain is the URL
without the .eduframe.nl suffix), and indicate whether you want to exclude personal
information. Then select OK.
4. To sign in to your Eduframe account, select Sign in.

5. In the window that appears, provide your credentials to sign in to your Eduframe
account and select Sign in.

6. Next, a window appears where you must approve this integration with Power BI.
Select Approve.
7. Once you've successfully signed in, select Connect.

8. In Navigator, select the information you want, then either select Load to load the
data or Transform Data to continue transforming the data in the Power Query
editor.

Limitations and issues


You should be aware of the following limitations and issues associated with accessing
Eduframe data.

There's a throttling limit of 100 API calls per 60 seconds.

Next steps
You can find additional information and templates for this connector on our
documentation page .
EQuIS
Article • 07/26/2023

7 Note

The following connector article is provided by EarthSoft, the owner of this


connector and a member of the Microsoft Power Query Connector Certification
Program. If you have questions regarding the content of this article or have
changes you would like to see made to this article, visit the EarthSoft website and
use the support channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Basic


Web API (API Token)
Organizational Account

Function Reference Documentation —

Prerequisites
To use the EQuIS connector, you must have a valid user account in an EQuIS Enterprise
site (version 7.0.0.19300 or later) that includes a REST API license. Your user account
must be a member of the REST API role. To verify user account configuration, go to the
Roles tab in your user profile and verify that you're a member of the REST API role.

Capabilities supported
Import

Connect to EQuIS from Power BI Desktop


To connect to an EQuIS Enterprise site from Power BI Desktop, take the following steps:

1. Select the EQuIS connector in the connector list, then select Connect.

2. Enter the URL of the EQuIS Enterprise site you're connecting to, then select OK.

3. Select the appropriate type of authentication:

Basic: Enter your EQuIS username and password for the given EQuIS
Enterprise site.
API Token: Enter an API Token that you generated in EQuIS Enterprise (visit
User Profile ).
Organizational Account: If your EQuIS Enterprise site is appropriately
configured, you may authenticate with Azure Active Directory

4. In Navigator, browse to the dataset or report you want to load, then select Load or
Transform Data. Visit Using EQuIS Data for more information about available
datasets.
Additional Information
For best functionality and performance, EarthSoft recommends that you use the
EQuIS connector with the latest build of EQuIS Enterprise.
When using reports in a facility group, non-administrator users must have
permission to all facilities contained in the facility group.
Only "grid" reports are available in the Navigator.
All datasets consumed by the EQuIS connector use camelCase for column names.
The current version of the EQuIS connector retrieves a dataset in a single API
request (this logic might be optimized in a future version of the connector).
Essbase
Article • 06/09/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)

Authentication Types Supported Basic (Username/Password)

Function Reference Documentation Essbase.Cubes

Prerequisites
Essbase 11.1.2.x version is supported.

Capabilities Supported
Import
Direct Query (Power BI Datasets)
Advanced options
Command timeout in minutes
Server
Application
MDX statement

Connect to Essbase from Power Query Desktop


To connect to an Essbase server:

1. Select the Essbase option in the Get Data experience.

2. Enter the URL to the Oracle Essbase Hyperion server. Typically, the URL looks like
http://[hostname]:[port number]/aps/XMLA . The components of the URL are:

The hostname (for example, yourservername.domain.com ) is the hostname or IP


address of the Oracle Hyperion Application Provider Service (APS) server for
your in-house system.

The port number (for example, 19000) is the port number the APS server is
listening to for XMLA requests.

The last portion of the URL, the path (that is, /aps/XMLA), is case-sensitive
and must be specified exactly as shown.

Some example URLs would be:

http://apsserver.company.com:19000/aps/XMLA —Using a fully qualified host
name with the default port 19000.

http://hypserver01:13080/aps/XMLA —Using a not fully qualified host name
and port 13080.

http://10.10.10.10/aps/XMLA —Using an IP address and port 80, changed
from the default 19000.

(One of these URLs is also used in the minimal M sketch after these steps.)

3. Select either the Import or DirectQuery data connectivity mode. More


information: Use DirectQuery in Power BI Desktop

Optionally, enter values in any advanced options that you want to use to modify
the connection query. More information: Connect using advanced options

4. The first time you connect to a data source (identified by each unique URL), you'll
be prompted to enter account credentials. Enter the User name and Password for
the connection. More information: Authentication with a data source
5. In Navigator, select the data you require. Then, either select Transform data to
transform the data in Power Query Editor, or Load to load the data in Power BI.
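
For reference, the connection created by these steps corresponds to the Essbase.Cubes function listed in the summary table. The following is only a minimal sketch with a placeholder APS URL; advanced options such as the command timeout or an MDX statement are omitted because their exact option names aren't shown in this article.

Power Query M

let
    // Placeholder APS server URL; the /aps/XMLA path is case-sensitive.
    Source = Essbase.Cubes("http://apsserver.company.com:19000/aps/XMLA")
in
    Source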

Connect using advanced options


Power Query provides a set of advanced options that you can add to your query if
needed. The following table lists all of the advanced options you can set in Power Query.

Advanced Description
option
Advanced Description
option

Command Lets you set the maximum time a command is allowed to run before Power BI
timeout in abandons the call. If the command timeout is reached, Power BI may retry two more
minutes times before completely abandoning the call. This setting is helpful for querying
large amounts of data. The default value of the command timeout is 140 seconds.

Server The name of the server where the optional MDX statement is to run. This value is
case sensitive.

Application The name of the application where the optional MDX statement is to run. This value
is case sensitive.

MDX Optionally provides a specific MDX statement to the Oracle Essbase server to
statement execute. Normally, Power BI interactively determines the measures and dimensions
of the cube to return. However, by specifying the MDX statement, the results of that
particular MDX statement will be loaded. When you specify the MDX statement, you
must also provide the Server (for example, essbaseserver-1 ) and Application (for
example, Sample ) advanced options to indicate where the MDX statement is to run.
Also, you can only use the MDX statement in conjunction with Data Connectivity
mode set to Import.

In general, the MDX generated by SmartView or one accepted by Essbase Admin isn't
100 percent compatible with Power BI.

Power BI requires measures to be specified on the 0 axis in an MDX query. In addition,
level numbering is reversed in XMLA: the least granular level is level 0, then level 1,
and so on in XMLA, but the opposite when working in Essbase directly. So if level
numbers are used explicitly in the MDX query, they need to be adjusted.
Using data source navigator when importing
data
When Data Connectivity mode is set to Import, the data source navigator loads the
servers configured for the APS server you've provided in the URL. Expanding a server
node in the tree reveals the available applications. Expanding an application node
reveals the available databases (also known as cubes). Expanding a database node
reveals the available measures and dimensions. The dimension can be further expanded
to reveal the levels in the hierarchy for each dimension.

Choose a measure and all (or specific) dimension levels by selecting the checkbox next
to the name. A preview of the data is provided in the pane on the right. You can select
the Load button to retrieve the data associated with the selection or select the
Transform Data button to set further filters on the data before loading it in Power BI.

Differences in display compared with Essbase


administration
When expanding a dimension hierarchy in the navigator, you might notice that it looks
different when compared to using the Essbase Administration Services control panel.
As an example, the following image shows the expanded Entity dimension hierarchy
from Essbase Administration Services.

While in the Power Query navigator, the same Entity being expanded appears like this:
Be aware that this look is a stylistic decision and that there are no differences in data.
The levels in the Power Query navigator correspond to the hierarchical level.

In the example above, Level 1 would contain “R_ReportingUnits”, “Adjustment Entity
Input” and “No_Entity”. Level 2 would contain “R_Americas”, “R_EMEA”, “R_AsiaPacific”,
“1_ReportingUnits_Adjustment”, “CALA_HFM_Input”, “CALA_Total”, and so on.

The reason is because the navigator in Power Query is limited to 10,000 members to
display, and there can be millions or billions of members underneath a hierarchy. Even
for the case of no member display limit (such as with Power Query Online), navigating
and selecting every individual member in a tree format with so many possible values
quickly becomes tedious and difficult to use.

So, the grouping of the hierarchical levels makes it easier to select what to import, and
the subsequent report generation can use filters to target only the members the end
user wants.

Using data source navigator for DirectQuery


When a Data Connectivity mode of DirectQuery is chosen, the data source navigator
loads the servers configured for the APS server you've provided in the URL. Expanding a
server node in the tree reveals the available applications. Expanding an application node
reveals the available databases (also known as cubes).
Known limitations
The Essbase connector doesn't support measure hierarchies. All measures are displayed
at the same level. You can still select all the measures that you need. The search field can
be used to narrow down the displayed measures if there are large numbers of measures.

Performance considerations
Interacting with Power BI in DirectQuery mode is very dynamic. When selecting a
checkbox to include a measure or dimension level in the visualization, Power BI Desktop
generates a query and sends it to the Oracle Essbase server to get the results. Power BI
is optimized to cache any repeated queries to improve performance. But if any new
query is generated, it's sent to the Oracle Essbase server to produce a new result.
Depending on the number of selected measures, dimension levels, and the filters
applied, the query might get sent more quickly than the Oracle Essbase server can
respond. To improve performance and increase responsiveness, consider the following
three methods to optimize your interaction with the Oracle Essbase server.
Query reductions options
There are three options to reduce the number of queries sent. In Power BI Desktop,
select the File tab, then select Options and settings > Options, and then select Query
reductions under the Current File section.

Selecting the Disabling cross highlighting/filtering by default option under Reduce


number of queries sent by disables cross highlighting/filtering by default. When
disabled, member lists in the filter don't get updated when filtering members in other
levels of the same dimension. Selecting the Slicer selections option under Show an
Apply button and only send queries once for section displays the Apply button when a
slicer selection is changed. Selecting the Filter selections option under Show an Apply
button and only send queries once for section displays the Apply button when a filter
selection is changed.

7 Note
These options apply only to the current file you are working on. Current File option
settings are saved with the file and restored when opening the same file.

Iterative filter application when adding dimension levels in import mode
When interacting in import mode with a multidimensional cube data source like Oracle's
Essbase, Power Query initially displays the measures, dimensions, and dimension levels
in the database Navigator dialog box. However, while Power BI makes it easy to select
and visualize data, it can, at times, lead to retrieving too much data from the server.

The following procedure demonstrates how to reduce the chances of retrieving more
data than is necessary when importing data into Power BI by iteratively applying filters
on dimension members at each level.

Connecting to the Oracle Essbase data source


1. Follow the instructions in Connect to Essbase from Power Query Desktop to
connect to an Essbase server using import mode.

2. Expand the tree to drill down to your desired server, application, and database
until it exposes the measures and dimensions for your database. For now, select
your measures and only one dimension level. Pick the most important dimension
level. In later steps, you'll build the result by incrementally adding more
dimension levels.

3. Select Load to import the selected measures and dimension level.


Editing queries to add more dimension levels


Now that you have a starting point, you can start adding more dimension levels and
applying filters as you go.

1. Select Edit Queries on the Power BI Desktop ribbon to start the process.

2. If you have members you want to filter on in the initial dimension, select the
column properties button to display the list of available dimension members at
this level. Select only the dimension members you need at this level and then
select OK to apply the filter.

3. The resulting data is now updated with the applied filter. Applied Steps now contains a new step (Filtered Rows) for the filter you set. You can select the settings button for the step to modify the filter at a later time.

4. Now you'll add a new dimension level. In this case, you're going to add the next
level down for the same dimension you initially chose. Select Add Items on the
ribbon to bring up the Navigator dialog box.

5. Navigate to the same dimension, but this time select the next level below the first
level. Then select OK to add the dimension level to the result.

6. The result grid now has the data from the new dimension level. Notice that
because you've applied a filter at the top level, only the related members in the
second level are returned.

7. You can now apply a filter to the second-level dimension as you did for the first
level.

8. In this way, each subsequent step ensures only the members and data you need
are retrieved from the server.

9. Now let's add a new dimension level by repeating the previous steps. Select Add
Items on the ribbon bar again.

10. Navigate to the dimension level you want, select it, and then select OK to add the
dimension level to the result.

11. The new dimension level is added to the result.

12. Apply a filter to this dimension level, as needed.


13. Observe the result.

Applying your changes and loading the data


1. When you've added all the dimension levels you want and have set all the required
filters, select Close in the upper right corner to close the editor.

2. Select Yes to apply your changes.

3. Wait for the changes to be applied.


4. Observe the new dimension levels in the Fields pane.

You're now ready to create reports and visualizations.

Iterative filter application when adding dimension levels in DirectQuery mode
When interacting in DirectQuery mode with a multidimensional cube data source (like
Oracle's Essbase), Power BI displays the cube's dimensions and levels in the Fields pane.

To view and filter based on dimension members:

1. Drag-and-drop a dimension level from the Fields pane over to the Filters pane. You
can drag the dimension level to the Add data fields here area under Filters on this
visual, Filters on this page, or Filters on all pages, depending on your needs.

2. Once a dimension's level is in the Filter pane and the filter type is set to Basic
filtering, you'll notice that the members of that dimension's level are displayed as
a list of available filters.

3. You can check the members you want to include in your result.

Or you can select the Select all option, then uncheck the members you don't want
to include in your result.
Type some characters in the search field for that filter to find members in the list.

4. When you have filters for two or more levels of the same dimension, you'll notice
that selecting members from a higher level in the dimension changes the members
available in the lower levels of that dimension.

This cross highlighting/filtering behavior can be disabled by checking the Disabling cross highlighting/filtering by default option, as described in Query reductions options.

5. When you've finished choosing the members you want in the dimension level filter,
it's a good time to add that dimension level to your visualization. Check the
matching dimension level in the Fields pane and it's then added to your current
visualization.

For more information about adding filters, go to Add a filter to a report in Power BI.

Troubleshooting
This section outlines common issues that you might come across, and includes
troubleshooting steps to address the issues.

Connection issues
Symptom 1

Power BI Desktop returns the error message "Unable to connect to the remote server".

Resolution

1. Ensure the Essbase Analytic Provider Services (APS) server is configured correctly
for the Provider Servers and Standalone Servers in the Essbase Administration
Service (EAS) console. More information: Configuring Essbase Clusters

2. Ensure that the URL is correct.

Check to ensure the hostname and/or IP address is correct.
Check to ensure the provided port is correct.
Check to ensure the http (not https) protocol is specified.
Check to ensure the case is correct for the /aps/XMLA path in the URL.

3. If there's a firewall between Power BI Desktop and the provided hostname, check
to ensure the provided hostname and port can pass outbound through your
firewall.

Validation

Trying to connect again no longer shows the error, and the cube and member list appears in the navigation pane. You can also select items and display them in the preview in Import mode.

Symptom 2

Power BI Desktop returns the error message "We couldn't authenticate with the
credentials provided. Please try again."

Resolution

Ensure the provided username and password are correct. Reenter their values carefully.
The password is case-sensitive.

Validation

After correcting the username and password, you should be able to display the members and values in the preview, or load the data.

Symptom 3

Power BI Desktop returns the error message "Data at the root level is invalid. Line 1,
position 1."

Resolution

Ensure the Essbase Analytic Provider Services (APS) server is configured correctly for the
Provider Servers and Standalone Servers in the Essbase Administration Service (EAS)
console. More information: Configuring Essbase Clusters .

Validation

Trying to connect again no longer shows the error, and the cube and member list is displayed in the navigation pane. You can also select items and display them in the preview in Import mode.

Symptom 4

Once successfully connected to the Oracle Essbase Analytic Provider Services (APS)
server, there are servers listed below the URL node in the data source navigator.
However, when you expand a server node, no applications are listed below that server
node.

Resolution
We recommend configuring the Oracle Hyperion server to define the provider and standalone servers through the Essbase Administration Service (EAS) console. Refer to the section Addendum: Registering Provider and Standalone Servers in Essbase Administration Service (EAS) Console.

Validation

Trying to connect again no longer shows the error, and you can see the cube and member list in the navigation pane. You can also select items and display them in the preview in Import mode.

Time out or large data issue


Symptom 1

Power Query returns the error message "The operation has timed out".

Resolution

1. Ensure the network is stable and there's a reliable network path to the Essbase
Analytic Provider Services (APS) server provided in the data source URL.

2. If there's a possibility that the query to the service could return a large amount of
data, specify a long (or longer) command timeout interval. If possible, add filters to
your query to reduce the amount of data returned. For example, select only
specific members of each dimension you want returned.

Validation

Retry loading the data. If the problem persists, try increasing the timeout interval or filtering the data further. If the problem still persists, try the resolution for Symptom 3.

Symptom 2

The query returns the error message "Internal error: Query is allocating too large
memory ( > 4GB) and cannot be executed. Query allocation exceeds allocation limits."

Resolution

The query you're trying to execute is producing results greater than the Oracle Essbase
server can handle. Supply or increase the filters on the query to reduce the amount of
data the server will return. For example, select specific members for each level of each
dimension or set numeric limits on the value of measures.

Validation
Retry loading the data. If the problem persists, try increasing the timeout interval or filtering the data further. If the problem still persists, try the resolution for Symptom 3.

Symptom 3

Essbase Analytic Provider Services (APS) or Essbase server indicates a large number of
connections with long running sessions.

Resolution

When the connectivity mode is DirectQuery, it's easy to select measures or dimension
levels to add to the selected visualization. However, each new selection creates a new
query and a new session to the Essbase Analytic Provider Services (APS)/Essbase server.
There are a few ways to ensure a reduced number of queries or to reduce the size of
each query result. Review Performance Considerations to reduce the number of times
the server is queried and to also reduce the size of query results.

Validation

Retry to load the data.

Key not matching when running MDX


Symptom

An MDX statement returns the error message "The key didn't match any rows in the
table".

Resolution

It's likely that the value or the case of the Server and Application fields doesn't match. Select the Edit button and correct the value and case of the Server and Application fields.

Validation

Retry to load the data.

Unable to get cube issue - MDX


Symptom

An MDX statement returns the error message "Unable to get the cube name from the
statement. Check the format used for specifying the cube name".
Resolution

Ensure the database name in the MDX statement's FROM clause is fully qualified with
the application and database name, for example, [Sample.Basic]. Select the Edit button
and correct the fully qualified database name in the MDX statement's FROM clause.

Validation

Retry to load the data.

Essbase Error (1260060) issue - MDX


Symptom

An MDX statement returns the error message "Essbase Error (1260060): The cube name
XXXX does not match with current application/database"

Resolution

Ensure the application name and the fully qualified database name in the FROM clause match. Select the Edit button and correct either the application name or the fully qualified database name in the MDX statement's FROM clause.

Validation

Retry to load the data.

Essbase Error (1200549): Repeated dimension [Measures] in MDX query
Symptom

Loading a dimension returns the error message "Essbase Error (1200549): Repeated
dimension [Measures] in MDX query".

Resolution

1. Sign in to the Essbase server, open the Essbase Administration Services Console
and sign in with an admin user (or whoever has permissions over the problematic
database).

2. Navigate to the Essbase server > application > database with the problematic
"Measures" dimension.

3. Unlock the outline of the database and edit it.


4. Determine which dimension should be the "Accounts" dimension type. Right-click
it and select Edit member properties….

5. Select the Dimension Type field and set it to Accounts. Select OK.

6. Verify and Save the outline.

Validation

Retry to load the dimension.


Excel
Article • 07/13/2023

Summary
Release State: General Availability

Products: Excel, Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2), Power Apps (Dataflows), Dynamics 365 Customer Insights, Analysis Services

Authentication Types Supported: Anonymous (online), Basic (online), Organizational account (online)

Function Reference Documentation: Excel.Workbook, Excel.CurrentWorkbook

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
To connect to a legacy workbook (such as .xls or .xlsb), the Access Database Engine
OLEDB (or ACE) provider is required. To install this provider, go to the download page
and install the relevant (32 bit or 64 bit) version. If you don't have it installed, you'll see
the following error when connecting to legacy workbooks:

The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine. The 32-bit (or 64-bit) version of the Access Database Engine OLEDB provider may be required to read this type of file. To download the client software, visit the following site: https://go.microsoft.com/fwlink/?LinkID=285987.
ACE can't be installed in cloud service environments. So if you're seeing this error in a
cloud host (such as Power Query Online), you'll need to use a gateway that has ACE
installed to connect to the legacy Excel files.

Capabilities Supported
Import

Connect to an Excel workbook from Power Query Desktop
To make the connection from Power Query Desktop:

1. Select the Excel option in the connector selection.

2. Browse for and select the Excel workbook you want to load. Then select Open.

If the Excel workbook is online, use the Web connector to connect to the
workbook.

3. In Navigator, select the workbook information you want, then either select Load to
load the data or Transform Data to continue transforming the data in Power Query
Editor.
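
For reference, the steps above generate an M query similar to the following sketch. The file path, sheet name, and use of the Sheet kind are illustrative assumptions; the actual values come from your navigator selection.

Power Query M

let
    // Load the workbook from a local path (hypothetical path)
    Source = Excel.Workbook(File.Contents("C:\SampleData.xlsx"), null, true),
    // Navigate to the sheet selected in the navigator (hypothetical sheet name)
    Sheet1 = Source{[Item = "Sheet1", Kind = "Sheet"]}[Data],
    // Promote the first row to column headers
    PromotedHeaders = Table.PromoteHeaders(Sheet1, [PromoteAllScalars = true])
in
    PromotedHeaders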
Connect to an Excel workbook from Power Query Online
To make the connection from Power Query Online:

1. Select the Excel option in the connector selection.

2. In the Excel dialog box that appears, provide the path to the Excel workbook.

3. If necessary, select an on-premises data gateway to access the Excel workbook.

4. If this is the first time you've accessed this Excel workbook, select the
authentication kind and sign in to your account (if needed).
5. In Navigator, select the workbook information you want, and then Transform Data
to continue transforming the data in Power Query Editor.

Suggested tables
If you connect to an Excel Workbook that doesn't specifically contain a single table, the
Power Query navigator will attempt to create a suggested list of tables that you can
choose from. For example, consider the following workbook example that contains data
from A1 to C5, more data from D8 to E10, and more from C13 to F16.
When you connect to the data in Power Query, the Power Query navigator creates two
lists. The first list contains the entire workbook sheet, and the second list contains three
suggested tables.

If you select the entire sheet in the navigator, the workbook is displayed as it appeared
in Excel, with all of the blank cells filled with null.

If you select one of the suggested tables, each individual table that Power Query was
able to determine from the layout of the workbook is displayed in the navigator. For
example, if you select Table 3, the data that originally appeared in cells C13 to F16 is
displayed.

7 Note

If the sheet changes enough, the table might not refresh properly. You might be
able to fix the refresh by importing the data again and selecting a new suggested
table.

Troubleshooting

Numeric precision (or "Why did my numbers change?")


When importing Excel data, you may notice that certain number values seem to change
slightly when imported into Power Query. For example, if you select a cell containing
0.049 in Excel, this number is displayed in the formula bar as 0.049. But if you import the
same cell into Power Query and select it, the preview details display it as
0.049000000000000002 (even though in the preview table it's formatted as 0.049).
What's going on here?

The answer is a bit complicated, and has to do with how Excel stores numbers using
something called binary floating-point notation. The bottom line is that there are certain
numbers that Excel can't represent with 100% precision. If you crack open the .xlsx file
and look at the actual value being stored, you'll see that in the .xlsx file, 0.049 is actually
stored as 0.049000000000000002. This is the value Power Query reads from the .xlsx,
and thus the value that appears when you select the cell in Power Query. (For more
information on numeric precision in Power Query, go to the "Decimal number" and
"Fixed decimal number" sections of Data types in Power Query.)

Connecting to an online Excel workbook


If you want to connect to an Excel document hosted in Sharepoint, you can do so via the
Web connector in Power BI Desktop, Excel, and Dataflows, and also with the Excel
connector in Dataflows. To get the link to the file:

1. Open the document in Excel Desktop.


2. Open the File menu, select the Info tab, and then select Copy Path.
3. Copy the address into the File Path or URL field, and remove the ?web=1 from the
end of the address.
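
For reference, connecting to a SharePoint-hosted workbook with the Web connector produces M similar to the following sketch. The URL and sheet name are hypothetical placeholders.

Power Query M

let
    // Use the copied path (with ?web=1 removed) as the URL (hypothetical URL)
    Source = Excel.Workbook(Web.Contents("https://contoso.sharepoint.com/sites/Team/Shared%20Documents/Report.xlsx"), null, true),
    // Navigate to a sheet (hypothetical sheet name)
    Sheet1 = Source{[Item = "Sheet1", Kind = "Sheet"]}[Data]
in
    Sheet1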

Legacy ACE connector


Power Query reads legacy workbooks (such as .xls or .xlsb) using the Access Database
Engine (or ACE) OLEDB provider. Because of this, you may come across unexpected
behaviors when importing legacy workbooks that don't occur when importing OpenXML
workbooks (such as .xlsx). Here are some common examples.

Unexpected value formatting


Because of ACE, values from a legacy Excel workbook might be imported with less
precision or fidelity than you expect. For example, imagine your Excel file contains the
number 1024.231, which you've formatted for display as "1,024.23". When imported into
Power Query, this value is represented as the text value "1,024.23" instead of as the
underlying full-fidelity number (1024.231). This is because, in this case, ACE doesn't
surface the underlying value to Power Query, but only the value as it's displayed in Excel.
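
If you need to work with such values numerically, one workaround is to parse the text back into a number after import, keeping in mind that the full-fidelity value still isn't available through ACE. The following is a minimal sketch using inline sample data; the column name and en-US culture are assumptions.

Power Query M

let
    // Sample of the display text ACE returns for formatted values (hypothetical data)
    Source = #table(type table [Amount = text], {{"1,024.23"}, {"2,048.50"}}),
    // Parse the displayed text back into a number, using the en-US thousands separator
    Parsed = Table.TransformColumns(Source, {{"Amount", each Number.FromText(_, "en-US"), type number}})
in
    Parsed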

Unexpected null values


When ACE loads a sheet, it looks at the first eight rows to determine the data types of
the columns. If the first eight rows aren't representative of the later rows, ACE may apply
an incorrect type to that column and return nulls for any value that doesn't match the
type. For example, if a column contains numbers in the first eight rows (such as 1000,
1001, and so on) but has non-numerical data in later rows (such as "100Y" and "100Z"),
ACE concludes that the column contains numbers, and any non-numeric values are
returned as null.
Inconsistent value formatting
In some cases, ACE returns completely different results across refreshes. Using the
example described in the formatting section, you might suddenly see the value 1024.231
instead of "1,024.23". This difference can be caused by having the legacy workbook
open in Excel while importing it into Power Query. To resolve this problem, close the
workbook.

Missing or incomplete Excel data


Sometimes Power Query fails to extract all the data from an Excel Worksheet. This failure
is often caused by the Worksheet having incorrect dimensions (for example, having
dimensions of A1:C200 when the actual data occupies more than three columns or 200
rows).

How to diagnose incorrect dimensions


To view the dimensions of a Worksheet:

1. Rename the xlsx file with a .zip extension.


2. Open the file in File Explorer.
3. Navigate into xl\worksheets.
4. Copy the xml file for the problematic sheet (for example, Sheet1.xml) out of the zip
file to another location.
5. Inspect the first few lines of the file. If the file is small enough, open it in a text
editor. If the file is too large to be opened in a text editor, run the following
command from a Command Prompt: more Sheet1.xml.
6. Look for a <dimension .../> tag (for example, <dimension ref="A1:C200" /> ).

If your file has a dimension attribute that points to a single cell (such as <dimension
ref="A1" /> ), Power Query uses this attribute to find the starting row and column of the

data on the sheet.

However, if your file has a dimension attribute that points to multiple cells (such as
<dimension ref="A1:AJ45000"/> ), Power Query uses this range to find the starting row

and column as well as the ending row and column. If this range doesn't contain all the
data on the sheet, some of the data won't be loaded.

How to fix incorrect dimensions

You can fix issues caused by incorrect dimensions by doing one of the following actions:
Open and resave the document in Excel. This action will overwrite the incorrect
dimensions stored in the file with the correct value.

Ensure the tool that generated the Excel file is fixed to output the dimensions
correctly.

Update your M query to ignore the incorrect dimensions. As of the December 2020
release of Power Query, Excel.Workbook now supports an InferSheetDimensions
option. When true, this option will cause the function to ignore the dimensions
stored in the Workbook and instead determine them by inspecting the data.

Here's an example of how to provide this option:

Excel.Workbook(File.Contents("C:\MyExcelFile.xlsx"), [DelayTypes = true, InferSheetDimensions = true])

Sluggish or slow performance when loading Excel data


Slow loading of Excel data can also be caused by incorrect dimensions. However, in this
case, the slowness is caused by the dimensions being much larger than they need to be,
rather than being too small. Overly large dimensions will cause Power Query to read a
much larger amount of data from the Workbook than is actually needed.

To fix this issue, you can refer to Locate and reset the last cell on a worksheet for
detailed instructions.

Poor performance when loading data from SharePoint


When retrieving data from Excel on your machine or from SharePoint, consider both the
volume of the data involved and the complexity of the workbook.

You'll notice performance degradation when retrieving very large files from SharePoint.
However, this is only one part of the problem. If you have significant business logic in an
Excel file being retrieved from SharePoint, this business logic may have to execute when
you refresh your data, which could cause complicated calculations. Consider
aggregating and pre-calculating data, or moving more of the business logic out of the
Excel layer and into the Power Query layer.

Errors when using the Excel connector to import CSV files


Even though CSV files can be opened in Excel, they're not Excel files. Use the Text/CSV
connector instead.
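
For reference, here's a minimal sketch of loading a CSV file through the Text/CSV connector; the file path, delimiter, and encoding are illustrative assumptions.

Power Query M

let
    // Read the CSV file (hypothetical path), assuming a comma delimiter and UTF-8 encoding
    Source = Csv.Document(File.Contents("C:\MyData.csv"), [Delimiter = ",", Encoding = 65001]),
    // Promote the first row to column headers
    PromotedHeaders = Table.PromoteHeaders(Source, [PromoteAllScalars = true])
in
    PromotedHeaders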
Error when importing "Strict Open XML Spreadsheet"
workbooks
You might see the following error when importing workbooks saved in Excel's "Strict
Open XML Spreadsheet" format:

DataFormat.Error: The specified package is invalid. The main part is missing.

This error happens when the ACE driver isn't installed on the host computer. Workbooks
saved in the "Strict Open XML Spreadsheet" format can only be read by ACE. However,
because such workbooks use the same file extension as regular Open XML workbooks
(.xlsx), we can't use the extension to display the usual "the Access Database Engine OLEDB provider may be required to read this type of file" error message.

To resolve the error, install the ACE driver. If the error is occurring in a cloud service,
you'll need to use a gateway running on a computer that has the ACE driver installed.

"File contains corrupted data" errors


You might see the following error when importing certain Excel workbooks.

DataFormat.Error: File contains corrupted data.

Usually this error indicates there is a problem with the format of the file.

However, sometimes this error can happen when a file appears to be an Open XML file
(such as .xlsx), but the ACE driver is actually needed in order to process the file. Go to
the Legacy ACE connector section for more information about how to process files that
require the ACE driver.
FactSet RMS (Beta)
Article • 07/13/2023

Summary
Release state: Beta

Products supported: Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2)

Authentication types supported: Basic

Function reference docs: https://developer.factset.com/api-catalog/irn-notes-api

7 Note

The following connector article is provided by FactSet, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the FactSet website and use the support
channels there.

Prerequisites
To start using the FactSet RMS connector, the following prerequisite steps need to be
completed.

Download Power BI
Ensure that you're using the latest version of Power BI, as the latest major update to
the FactSet Power BI data connector will only be available there. Any
subsequent major or minor version updates will only be available by upgrading
Power BI.

Subscription and authentication

To access FactSet’s IRN, the appropriate subscription is required. Refer to the FactSet Client Assistance page for more details.
With the subscription in place, the next step is to generate the API key from the Developer Portal. Follow the steps outlined in the FactSet API keys Authentication v1 documentation.

Capabilities supported
Import

Connect to FactSet RMS from Power Query Desktop
To import data using the FactSet RMS connector from Power Query Desktop, take the
following steps:

1. Open Power BI Desktop.

2. On the Power BI Desktop home page, select Get Data > More.

3. To connect to FactSet RMS, search for FactSet in Get Data and select the FactSet
RMS connector from the right-hand list.
4. In the authentication page, you'll be prompted to enter the Username - Serial and
the API key. Go to the FactSet Developer Portal for more instructions on setting up
an API Key.

5. The connector opens the Power Query navigator with a list of all provided
functions. Note that all functions might not be available, depending on your
available subscriptions. Your account team can assist with requirements for access
to additional products.

6. Use the Get* queries to look up parameters for your Notes and create new queries.
A form will populate in the query window with parameter fields to narrow your
universe and return the relevant data set of interest based on IRN Subject, Author,
Date Range, Recommendations and/or Sentiments. Note that the functions contain
Get* queries that are common for IRN Notes, Custom Symbols, and Meetings APIs.


The following list describes the Get functions in the connector.

GetNotes: Gets all the notes, including non-extended text custom fields in the specified date (startDate and endDate) range, and can be filtered on subjectId, authorId, recommendationId, sentimentId, and modifiedSince.

GetNote: Gets details of a note, including note body and extended text custom fields.

GetMeetings: Gets all the meetings, including non-extended text custom fields in the specified date (startDate and endDate) range, and can be filtered on modifiedSince.

GetMeeting: Gets details of a meeting, including meeting body and extended text custom fields.

GetCustomSymbols: Gets a list of all custom symbols in your IRN database, along with standard field data and non-extended text custom fields data, and can be filtered on CustomSymbolTypeName.

GetCustomSymbol: Gets details of a custom symbol, including symbol description and extended text custom fields.

7. Results will be returned as a table with notes and a custom field.


FHIR
Article • 07/25/2023

Fast Healthcare Interoperability Resources (FHIR® ) is a new standard for healthcare


data interoperability. Healthcare data is represented as resources such as Patient ,
Observation , Encounter , and so on, and a REST API is used for querying healthcare data

served by a FHIR server. The Power Query connector for FHIR can be used to import and
shape data from a FHIR server.

If you don't have a FHIR server, you can provision the Azure API for FHIR.

Summary
Release State: General Availability

Products: Power BI (Datasets), Power BI (Dataflows), Fabric (Dataflow Gen2), Power Apps (Dataflows), Dynamics 365 Customer Insights

Authentication Types Supported: Anonymous, Organizational account

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities Supported
Import

Prerequisites
You must have a FHIR Data Reader role on the FHIR server to read data from the server.
More information: Assign roles for the FHIR service
Connect to a FHIR server from Power Query Desktop
To make a connection to a FHIR server, take the following steps:

1. Select the Get Data button.

2. Select Other > FHIR, and then select Connect.


3. Enter the URL for your FHIR server.

You can optionally enter an initial query for the FHIR server, if you know exactly
what data you're looking for.

Select OK to proceed.
4. Decide on your authentication scheme.

The connector supports "Anonymous" for FHIR servers with no access controls (for
example, public test servers like http://test.fhir.org/r4 ) or Azure Active
Directory authentication. You must have a FHIR Data Reader role on the FHIR
server to read data from the server. Go to FHIR connector authentication for
details.

5. Select the resources you're interested in.

Select Transform Data to shape the data.

6. Shape the data as needed, for example, expand the postal code.
7. Save the query when shaping is complete.

8. Create dashboards with data, for example, make a plot of the patient locations
based on postal code.
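
For reference, the navigation above produces an M query similar to the following sketch. The server URL matches the example used elsewhere in this documentation, and selecting the Patient resource is an assumption.

Power Query M

let
    // Connect to the FHIR server; the second argument is an optional initial query
    Source = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null),
    // Navigate to the selected resource (Patient assumed here)
    Patient = Source{[Name = "Patient"]}[Data]
in
    Patient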

Connect to a FHIR server from Power Query Online
To make a connection to a FHIR server, take the following steps:

1. In Choose data source, search for FHIR, and then select the FHIR connector. More
information: Where to get data

2. In the FHIR dialog, enter the URL for your FHIR server.

You can optionally enter an initial query for the FHIR server, if you know exactly
what data you're looking for.
3. If necessary, include the name of your on-premises data gateway.

4. Select the Organizational account authentication kind, and select Sign in. Enter
your credentials when asked. You must have a FHIR Data Reader role on the FHIR
server to read data from the server.

5. Select Next to proceed.

6. Select the resources you're interested in.

Select Transform data to shape the data.

7. Shape the data as needed, for example, expand the postal code.

8. Save the query when shaping is complete.

7 Note

In some cases, query folding can't be obtained purely through data shaping
with the graphical user interface (GUI), as shown in the previous image. To
learn more about query folding when using the FHIR connector, see FHIR
query folding.

Next Steps
In this article, you've learned how to use the Power Query connector for FHIR to access
FHIR data. Next, explore the authentication features of the Power Query connector for
FHIR.

FHIR connector authentication

FHIR® and the FHIR Flame icon are the registered trademarks of HL7 and are used
with the permission of HL7. Use of the FHIR trademark does not constitute
endorsement of this product by HL7.
FHIR connector authentication
Article • 07/25/2023

This article explains authenticated access to FHIR servers using the Power Query
connector for FHIR. The connector supports anonymous access to publicly accessible
FHIR servers and authenticated access to FHIR servers using Azure Active Directory
authentication. The Azure API for FHIR is secured with Azure Active Directory.

7 Note

If you are connecting to a FHIR server from an online service, such as Power BI
service, you can only use an organizational account.

Anonymous access
There are many publicly accessible FHIR servers. To enable testing with these public servers, the Power Query connector for FHIR supports the "Anonymous" authentication scheme. For example, to access the public https://server.fire.ly server:

1. Enter the URL of the public Firely server.

2. Select Anonymous authentication scheme.


After that, follow the steps to query and shape your data.
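
As a minimal sketch, a query against this public server might look like the following (the Patient resource is an assumed selection):

Power Query M

let
    // Anonymous access to the public Firely test server
    Source = Fhir.Contents("https://server.fire.ly", null),
    // Navigate to the Patient resource (assumed selection)
    Patient = Source{[Name = "Patient"]}[Data]
in
    Patient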

Azure Active Directory (organizational) authentication
The Power Query connector for FHIR supports OAuth authentication for FHIR servers
that are secured with Azure Active Directory .

To use Azure Active Directory authentication, select Organizational account when connecting.

There are some restrictions to be aware of:

The expected Audience for the FHIR server must be equal to the base URL of the
FHIR server. For the Azure API for FHIR, you can set this when you provision the
FHIR service or later in the portal.

If your FHIR server doesn't return a WWW-Authenticate challenge header with an authorization_uri field on failed authorization, you must use an organizational account to sign in. You can't use a guest account in your Azure Active Directory tenant. For the Azure API for FHIR, you must use an Azure Active Directory organizational account.
If your FHIR service isn't the Azure API for FHIR (for example, if you're running the
open source Microsoft FHIR server for Azure ), you'll have registered an Azure
Active Directory resource application for the FHIR server. You must pre-authorize
the Power BI client application to be able to access this resource application.

The client ID for the Power BI client is a672d62c-fc7b-4e81-a576-e60dc46e951d .

The Power Query (for example, Power BI) client will only request a single scope:
user_impersonation . This scope must be available and the FHIR server can't rely on

other scopes.

Next steps
In this article, you've learned how to use the Power Query connector for FHIR
authentication features. Next, explore query folding.

FHIR Power Query folding


FHIR query folding
Article • 07/25/2023

Power Query folding is the mechanism used by a Power Query connector to turn data
transformations into queries that are sent to the data source. This allows Power Query to
off-load as much of the data selection as possible to the data source rather than
retrieving large amounts of unneeded data only to discard it in the client. The Power
Query connector for FHIR includes query folding capabilities, but due to the nature of
FHIR search , special attention must be given to the Power Query expressions to
ensure that query folding is performed when possible. This article explains the basics of
FHIR Power Query folding and provides guidelines and examples.

FHIR and query folding


Suppose you are constructing a query to retrieve "Patient" resources from a FHIR server
and you are interested in patients born before the year 1980. Such a query could look
like:

Power Query M

let
Source = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null),
Patient1 = Source{[Name="Patient"]}[Data],
#"Filtered Rows" = Table.SelectRows(Patient1, each [birthDate] <
#date(1980, 1, 1))
in
#"Filtered Rows"

Instead of retrieving all Patient resources from the FHIR server and filtering them in the
client (Power BI), it's more efficient to send a query with a search parameter to the FHIR
server:

GET https://myfhirserver.azurehealthcareapis.com/Patient?birthdate=lt1980-
01-01

With such a query, the client would only receive the patients of interest and would not
need to discard data in the client.
In the example of a birth date, the query folding is straightforward, but in general it is
challenging in FHIR because the search parameter names don't always correspond to
the data field names and frequently multiple data fields will contribute to a single search
parameter.

For example, let's consider the Observation resource and the category field. The Observation.category field is a CodeableConcept in FHIR, which has a coding field, which has system and code fields (among other fields). Suppose you're interested in vital-signs only; you would be interested in Observations where Observation.category.coding.code = "vital-signs", but the FHIR search would look something like https://myfhirserver.azurehealthcareapis.com/Observation?category=vital-signs.

To be able to achieve query folding in the more complicated cases, the Power Query
connector for FHIR matches Power Query expressions with a list of expression patterns
and translates them into appropriate search parameters. The expression patterns are
generated from the FHIR specification.

This matching with expression patterns works best when any selection expressions (filtering) are done as early as possible in the data transformation steps, before any other shaping of the data.

7 Note

To give the Power Query engine the best chance of performing query folding, you
should do all data selection expressions before any shaping of the data.

Query folding example


To illustrate efficient query folding, we'll walk through the example of getting all vital
signs from the Observation resource. The intuitive way to do this would be to first
expand the Observation.category field and then expand Observation.category.coding
and then filter. The query would look something like this:

Power Query M

// Inefficient Power Query


let
Source = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null),
Observation = Source{[Name="Observation"]}[Data],
ExpandCategory = Table.ExpandTableColumn(Observation, "category",
{"coding"}, {"category.coding"}),
ExpandCoding = Table.ExpandTableColumn(ExpandCategory,
"category.coding", {"system", "code"}, {"category.coding.system",
"category.coding.code"}),
FilteredRows = Table.SelectRows(ExpandCoding, each
([category.coding.code] = "vital-signs"))
in
FilteredRows

Unfortunately, the Power Query engine no longer recognizes that as a selection pattern that maps to the category search parameter. But if you restructure the query to:

Power Query M

// Efficient Power Query allowing folding


let
Source = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null),
Observation = Source{[Name="Observation"]}[Data],
FilteredObservations = Table.SelectRows(Observation, each
Table.MatchesAnyRows([category], each Table.MatchesAnyRows([coding], each
[code] = "vital-signs"))),
ExpandCategory = Table.ExpandTableColumn(FilteredObservations,
"category", {"coding"}, {"category.coding"}),
ExpandCoding = Table.ExpandTableColumn(ExpandCategory,
"category.coding", {"system", "code"}, {"category.coding.system",
"category.coding.code"})
in
ExpandCoding

The search query /Observation?category=vital-signs will be sent to the FHIR server, which will reduce the amount of data that the client will receive from the server.

While the first and the second Power Query expressions will result in the same data set,
the latter will, in general, result in better query performance. It's important to note that
the second, more efficient, version of the query can't be obtained purely through data
shaping with the graphical user interface (GUI). It's necessary to write the query in the
"Advanced Editor".

The initial data exploration can be done with the GUI query editor, but it's
recommended that the query be refactored with query folding in mind. Specifically,
selective queries (filtering) should be performed as early as possible.

Finding folding patterns


The Power Query connector for FHIR will only be able to perform query folding if the
Power Query expressions map to known search parameters as defined by the FHIR
specification. If you're wondering if query folding is possible, we recommend that you
consult the FHIR specification . Each resource will list a set of search parameters
towards the bottom of the specification page. You can also consult the folding query
patterns page for examples of how to write foldable Power Query Expressions for FHIR.

Debugging query folding


If you're trying to determine if a given Power Query expression is being folded and what
the resulting FHIR search expression is, you can start Fiddler while shaping queries in
Power BI Desktop.

Summary
Query folding provides more efficient Power Query expressions. A properly crafted
Power Query will enable query folding and thus off-load much of the data filtering
burden to the data source.

Next steps
In this article, you've learned how to use query folding in the Power Query connector for
FHIR. Next, explore the list of FHIR Power Query folding patterns.

FHIR Power Query folding patterns


FHIR query folding patterns
Article • 07/25/2023

This article describes Power Query patterns that will allow effective query folding in
FHIR. It assumes that you are familiar with with using the Power Query connector for
FHIR and understand the basic motivation and principles for Power Query folding in
FHIR.

How to use this document


The list of examples in this document is not exhaustive and does not cover all the search
parameters that queries will fold to. However, we provide examples of the types of
queries and parameters you might encounter. When you are constructing a filter query
expression, consider whether the parameter you would like to filter on is:

A primitive type (like Patient.birthDate )


A complex type, which would be a record in Power Query (like Patient.meta )
An array of primitive types, which would be a list in Power Query (like
Patient.meta.profile )

An array of complex types, which would be a table in Power Query (like


Observation.code.coding , which has a number of columns)

Then consult the list of examples below. There are also examples of combining these types of filtering patterns in multi-level, nested filtering statements. Finally, this article provides more complicated filtering expressions that fold to composite search parameters.

In each example you'll find a filtering expression ( Table.SelectRows ) and right above
each filtering statement a comment // Fold: ... explaining what search parameters
and values the expression will fold to.

Filtering on primitive types


Root properties are at the root of a resource and are typically of a primitive type (string, date, and so on), but they can also be coding fields (for example, Encounter.class). This section shows examples of searching different types of primitive root-level properties.

Filtering patients by birth date:

M
let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "birthdate=lt1980-01-01"
FilteredPatients = Table.SelectRows(Patients, each [birthDate] <
#date(1980, 1, 1))
in
FilteredPatients

Filtering Patients by birth date range using and , only the 1970s:

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "birthdate=ge1970-01-01&birthdate=lt1980-01-01"
FilteredPatients = Table.SelectRows(Patients, each [birthDate] <
#date(1980, 1, 1) and [birthDate] >= #date(1970, 1, 1))
in
FilteredPatients

Filtering Patients by birthdate using or , not the 1970s:

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "birthdate=ge1980-01-01,lt1970-01-01"
FilteredPatients = Table.SelectRows(Patients, each [birthDate] >=
#date(1980, 1, 1) or [birthDate] < #date(1970, 1, 1))
in
FilteredPatients

Alternative search for active patients:

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "active=true"
FilteredPatients = Table.SelectRows(Patients, each [active])
in
FilteredPatients

Alternative search for patients where active not true (could include missing):

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "active:not=true"
FilteredPatients = Table.SelectRows(Patients, each [active] <> true)
in
FilteredPatients

Filtering to keep only male patients:

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "gender=male"
FilteredPatients = Table.SelectRows(Patients, each [gender] = "male")
in
FilteredPatients

Filtering to keep only patients that are not male (includes other):

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "gender:not=male"
FilteredPatients = Table.SelectRows(Patients, each [gender] <> "male")
in
FilteredPatients

Filtering Observations with status final (code):

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "status=final"
FilteredObservations = Table.SelectRows(Observations, each [status] =
"final")
in
FilteredObservations

Filtering on complex types


Filtering on last updated:

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "_lastUpdated=2010-12-31T11:56:02.000+00:00"
FilteredPatients = Table.SelectRows(Patients, each [meta][lastUpdated] =
#datetimezone(2010, 12, 31, 11, 56, 2, 0, 0))
in
FilteredPatients

Filtering Encounter based on class system and code (coding):

let
Encounters =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Encounter" ]}[Data],

// Fold: "class=s|c"
FilteredEncounters = Table.SelectRows(Encounters, each [class][system] =
"s" and [class][code] = "c")
in
FilteredEncounters

Filtering Encounter based on code (coding):

let
Encounters =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Encounter" ]}[Data],
// Fold: "class=c"
FilteredEncounters = Table.SelectRows(Encounters, each [class][code] =
"c")
in
FilteredEncounters

Filtering Encounter based on class system only (coding):

let
Encounters =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Encounter" ]}[Data],

// Fold: "class=s|"
FilteredEncounters = Table.SelectRows(Encounters, each [class][system] =
"s")
in
FilteredEncounters

Filter Observations based on Observation.subject.reference (reference):

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "subject=Patient/1234"
FilteredObservations = Table.SelectRows(Observations, each [subject]
[reference] = "Patient/1234")
in
FilteredObservations

Filter Observations based on variations in Observation.subject.reference (reference):

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "subject=1234,Patient/1234,https://myfhirservice/Patient/1234"
FilteredObservations = Table.SelectRows(Observations, each [subject]
[reference] = "1234" or [subject][reference] = "Patient/1234" or [subject]
[reference] = "https://myfhirservice/Patient/1234")
in
FilteredObservations

Filtering on Quantity equal value (quantity):

let
ChargeItems =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"ChargeItem" ]}[Data],

// Fold: "quantity=1"
FilteredChargeItems = Table.SelectRows(ChargeItems, each [quantity]
[value] = 1)
in
FilteredChargeItems

Filtering on Quantity greater than value (quantity):

let
ChargeItems =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"ChargeItem" ]}[Data],

// Fold: "quantity=gt1.001"
FilteredChargeItems = Table.SelectRows(ChargeItems, each [quantity]
[value] > 1.001)
in
FilteredChargeItems

Filtering on Quantity with value system and code (quantity):

let
ChargeItems =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"ChargeItem" ]}[Data],

// Fold: "quantity=lt1.001|s|c"
FilteredChargeItems = Table.SelectRows(ChargeItems, each [quantity]
[value] < 1.001 and [quantity][system] = "s" and [quantity][code] = "c")
in
FilteredChargeItems

Filtering on period, starts after (period):


M

let
Consents = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Consent" ]}[Data],

// Fold: "period=sa2010-01-01T00:00:00.000+00:00"
FiltertedConsents = Table.SelectRows(Consents, each [provision][period]
[start] > #datetimezone(2010, 1, 1, 0, 0, 0, 0, 0))
in
FiltertedConsents

Filtering on period, ends before (period):

let
Consents = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Consent" ]}[Data],

// Fold: "period=eb2010-01-01T00:00:00.000+00:00"
FiltertedConsents = Table.SelectRows(Consents, each [provision][period]
[end] < #datetimezone(2010, 1, 1, 0, 0, 0, 0, 0))
in
FiltertedConsents

Filtering text field of complex types:

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "code:text=t"
FilteredObservations = Table.SelectRows(Observations, each [code][text]
= "t")
in
FilteredObservations

Filtering on text field (starts with):

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],
// Fold: "code:text=t"
FilteredObservations = Table.SelectRows(Observations, each
Text.StartsWith([code][text], "t"))
in
FilteredObservations

Filtering on lists properties


Filtering Patients on profile:

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "_profile=http://myprofile"
FilteredPatients = Table.SelectRows(Patients, each
List.MatchesAny([meta][profile], each _ = "http://myprofile"))
in
FilteredPatients

Filtering AllergyIntolerance on category:

let
AllergyIntolerances =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"AllergyIntolerance" ]}[Data],

// Fold: "category=food"
FilteredAllergyIntolerances = Table.SelectRows(AllergyIntolerances, each
List.MatchesAny([category], each _ = "food"))
in
FilteredAllergyIntolerances

Filtering AllergyIntolerance on missing category:

let
AllergyIntolerances =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"AllergyIntolerance" ]}[Data],

// Fold: "category:missing=true"
FilteredAllergyIntolerances = Table.SelectRows(AllergyIntolerances, each
List.MatchesAll([category], each _ = null))
in
FilteredAllergyIntolerances

Filtering AllergyIntolerance on simpler form of missing category:

let
AllergyIntolerances =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"AllergyIntolerance" ]}[Data],

// Fold: "category:missing=true"
FilteredAllergyIntolerances = Table.SelectRows(AllergyIntolerances, each
[category] = null)
in
FilteredAllergyIntolerances

Filtering on table properties


Filtering Patients on exact family name:

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "family:exact=Johnson"
FilteredPatients = Table.SelectRows(Patients, each
Table.MatchesAnyRows([name], each [family] = "Johnson"))
in
FilteredPatients

Filtering on Patients where family name starts with:

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "family=John"
FilteredPatients = Table.SelectRows(Patients, each
Table.MatchesAnyRows([name], each Text.StartsWith([family], "John")))
in
FilteredPatients

Filtering Patients on family name starts with John or Paul :

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "family=John,Paul"
FilteredPatients = Table.SelectRows(Patients, each
Table.MatchesAnyRows([name], each Text.StartsWith([family], "John") or
Text.StartsWith([family], "Paul")))
in
FilteredPatients

Filtering Patients on family name starts with John and given starts with Paul :

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "family=John&given=Paul"
FilteredPatients = Table.SelectRows(
Patients,
each
Table.MatchesAnyRows([name], each Text.StartsWith([family],
"John")) and
Table.MatchesAnyRows([name], each List.MatchesAny([given], each
Text.StartsWith(_, "Paul"))))
in
FilteredPatients

Filtering on Goal due date:

let
Goals = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Goal" ]}[Data],

// Fold: "target-date=gt2020-03-01"
FilteredGoals = Table.SelectRows(Goals, each
Table.MatchesAnyRows([target], each [due][date] > #date(2020,3,1)))
in
FilteredGoals

Filtering Patient on identifier:

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "identifier=s|v"
FilteredPatients = Table.SelectRows(Patients, each
Table.MatchesAnyRows([identifier], each [system] = "s" and _[value] = "v"))
in
FilteredPatients

Filtering on Observation code (CodeableConcept):

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "code=s|c"
FilteredObservations = Table.SelectRows(Observations, each
Table.MatchesAnyRows([code][coding], each [system] = "s" and [code] = "c"))
in
FilteredObservations

Filtering on Observation code and text (CodeableConcept):

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "code:text=t&code=s|c"
FilteredObservations = Table.SelectRows(Observations, each
Table.MatchesAnyRows([code][coding], each [system] = "s" and [code] = "c")
and [code][text] = "t")
in
FilteredObservations
Filtering multi-level nested properties
Filtering Patients on family name starts with John and given starts with Paul :

let
Patients = Fhir.Contents("https://myfhirserver.azurehealthcareapis.com",
null){[Name = "Patient" ]}[Data],

// Fold: "family=John&given=Paul"
FilteredPatients =
Table.SelectRows(
Patients,
each
Table.MatchesAnyRows([name], each Text.StartsWith([family],
"John")) and
Table.MatchesAnyRows([name], each List.MatchesAny([given],
each Text.StartsWith(_, "Paul"))))
in
FilteredPatients

Filtering only vital signs from Observations:

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "category=vital-signs"
FilteredObservations = Table.SelectRows(Observations, each
Table.MatchesAnyRows([category], each Table.MatchesAnyRows([coding], each
[code] = "vital-signs")))
in
FilteredObservations

Filtering Observations on category coding with system and code:

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "category=s|c"
FilteredObservations = Table.SelectRows(Observations, each
Table.MatchesAnyRows([category], each Table.MatchesAnyRows([coding], each
[system] = "s" and [code] = "c")))
in
FilteredObservations

Filtering Observations on multiple categories (OR):

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "category=s1|c1,s2|c2"
FilteredObservations =
Table.SelectRows(
Observations,
each
Table.MatchesAnyRows(
[category],
each
Table.MatchesAnyRows(
[coding],
each
([system] = "s1" and [code] = "c1") or
([system] = "s2" and [code] = "c2"))))
in
FilteredObservations

Filtering nested list in table:

let
AuditEvents =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"AuditEvent" ]}[Data],

// Fold: "policy=http://mypolicy"
FilteredAuditEvents = Table.SelectRows(AuditEvents, each
Table.MatchesAnyRows([agent], each List.MatchesAny([policy], each _ =
"http://mypolicy")))
in
FilteredAuditEvents

Filtering with composite search parameters


FHIR has composite search parameters that allow filtering on multiple fields on a
complex type within a resource or at the root of the resource at the same time. For
example, one can search for Observations with a specific code and a specific value (a code-value-quantity search parameter). The Power Query connector for FHIR will attempt to recognize filtering expressions that map to such composite search parameters. This section lists some examples of these patterns. In the context of analyzing FHIR data, it is especially the composite search parameters on the Observation resource that are of interest.

Filtering Observations on code and value quantity, body height greater than 150:

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "code-value-quantity=http://loinc.org|8302-2$gt150"
FilteredObservations = Table.SelectRows(Observations, each
Table.MatchesAnyRows([code][coding], each [system] = "http://loinc.org" and
[code] = "8302-2") and [value][Quantity][value] > 150)
in
FilteredObservations

Filtering on Observation component code and value quantity, systolic blood pressure
greater than 140:

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "component-code-value-quantity=http://loinc.org|8480-6$gt140"
FilteredObservations = Table.SelectRows(Observations, each
Table.MatchesAnyRows([component], each Table.MatchesAnyRows([code][coding],
each [system] = "http://loinc.org" and [code] = "8480-6") and [value]
[Quantity][value] > 140))
in
FilteredObservations

Filtering on multiple component code value quantities (AND), diastolic blood pressure
greater than 90 and systolic blood pressure greater than 140:

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "component-code-value-quantity=http://loinc.org|8462-
4$gt90&component-code-value-quantity=http://loinc.org|8480-6$gt140"
FilteredObservations =
Table.SelectRows(
Observations,
each
Table.MatchesAnyRows(
[component],
each
Table.MatchesAnyRows([code][coding], each [system] =
"http://loinc.org" and [code] = "8462-4") and [value][Quantity][value] > 90)
and
Table.MatchesAnyRows([component], each
Table.MatchesAnyRows([code][coding], each [system] = "http://loinc.org" and
[code] = "8480-6") and [value][Quantity][value] > 140))
in
FilteredObservations

Filtering on multiple component code value quantities (OR), diastolic blood pressure
greater than 90 or systolic blood pressure greater than 140:

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "component-code-value-quantity=http://loinc.org|8462-
4$gt90,http://loinc.org|8480-6$gt140"
FilteredObservations =
Table.SelectRows(
Observations,
each
Table.MatchesAnyRows(
[component],
each
(Table.MatchesAnyRows([code][coding], each [system]
= "http://loinc.org" and [code] = "8462-4") and [value][Quantity][value] >
90) or
Table.MatchesAnyRows([code][coding], each [system]
= "http://loinc.org" and [code] = "8480-6") and [value][Quantity][value] >
140 ))
in
FilteredObservations
Filtering Observations on code value quantities on root of resource or in component
array:

let
Observations =
Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name =
"Observation" ]}[Data],

// Fold: "combo-code-value-quantity=http://loinc.org|8302-2$gt150"
FilteredObservations =
Table.SelectRows(
Observations,
each
(Table.MatchesAnyRows([code][coding], each [system] =
"http://loinc.org" and [code] = "8302-2") and [value][Quantity][value] >
150) or
(Table.MatchesAnyRows([component], each
Table.MatchesAnyRows([code][coding], each [system] = "http://loinc.org" and
[code] = "8302-2") and [value][Quantity][value] > 150)))
in
FilteredObservations

Summary
Query folding turns Power Query filtering expressions into FHIR search parameters. The
Power Query connector for FHIR recognizes certain patterns and attempts to identify
matching search parameters. Recognizing those patterns will help you write more
efficient Power Query expressions.

Next steps
In this article, we reviewed some classes of filtering expressions that will fold to FHIR
search parameters. Next read about establishing relationships between FHIR resources.

FHIR Power Query relationships


FHIR Relationships
Article • 07/25/2023

This article describes how to establish relationships between tables that have been
imported using the Power Query connector for FHIR.

Introduction
FHIR resources are related to each other, for example, an Observation that references a
subject ( Patient ):

JSON

{
    "resourceType": "Observation",
    "id": "1234",
    "subject": {
        "reference": "Patient/456"
    }

    // ... Other fields
}

Some of the resource reference fields in FHIR can refer to multiple different types of
resources (for example, Practitioner or Organization ). To facilitate an easier way to
resolve references, the Power Query connector for FHIR adds a synthetic field to all
imported resources called <referenceId> , which contains a concatenation of the
resource type and the resource ID.

To establish a relationship between two tables, you can connect a specific reference field
on a resource to the corresponding <referenceId> field on the resource you would like
it linked to. In simple cases, Power BI will even detect this for you automatically.
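As an optional illustration, the following minimal M sketch expands the Observation subject column so
that subject.reference is available as its own column before the relationship is defined. The server
URL matches the placeholder used elsewhere in these articles, and the expanded column name is only an
example.

Power Query M

let
    Observations =
        Fhir.Contents("https://myfhirserver.azurehealthcareapis.com", null){[Name = "Observation"]}[Data],

    // Expand the nested subject record so that subject.reference becomes its own column,
    // which can then be related to the <referenceId> field on the Patient table
    ExpandedSubject =
        Table.ExpandRecordColumn(Observations, "subject", {"reference"}, {"subject.reference"})
in
    ExpandedSubject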

Establishing FHIR relationships in Power BI


In this section, we'll show an example of establishing a relationship between the
Observation.subject.reference field and the <referenceId> field on Patient .

1. When importing data, select the Patient and Observation tables:


Then select Transform Data.

2. Expand the subject column on Observation to reveal subject.reference :

After expanding, you should see the list of subject references:


3. Make any other modifications you need to the query and save the modified query.

4. Select Manage Relationships in the Power BI client:

5. Establish the relationship. In this simple example, Power BI will likely have detected
the relationship automatically:

If not, you can add it manually:

You can edit the details of the relationship:



Summary
Resources in FHIR are related. These relationships need to be established on data
imported with the Power Query connector for FHIR. The <referenceId> field is a
synthetic field added to all imported FHIR data that will help establish the relationships.

Next steps
In this article, you've learned how to establish relationships between tables imported
with the Power Query connector for FHIR. Next, explore query folding with the Power
Query connector for FHIR.

FHIR Power Query folding


Folder
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Windows

Function Reference Documentation Folder.Contents, Folder.Files

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities supported
Folder path
Combine
Combine and load
Combine and transform

Connect to a folder from Power Query Desktop


To connect to a folder from Power Query Desktop:

1. Select the Folder option in the connector selection.

2. Enter the path to the folder you want to load, or select Browse to browse to the
folder you want to load. Then select OK.
When you select the folder you want to use, file information about all of the
files in that folder is displayed. File information about any files in any
subfolders is also displayed.

3. Select Combine & Transform Data to combine the data in the files of the selected
folder and load the data in the Power Query Editor for editing. Select Combine &
Load to load the data from all of the files in the folder directly into your app. Or
select Transform Data to load the folder data as-is in the Power Query Editor.

7 Note
The Combine & Transform Data and Combine & Load buttons are the easiest ways
to combine data found in the files of the folder you specify. You could also use the
Load button (in Power BI Desktop only) or the Transform Data buttons to combine
the files as well, but that requires more manual steps.

Connect to a folder from Power Query Online


To connect to a folder from Power Query Online:

1. Select the Folder option in the connector selection.

2. Enter the path to the folder you want to load.

3. Enter the name of an on-premises data gateway that you'll use to access the
folder.

4. Select the authentication kind to connect to the folder. If you select the Windows
authentication kind, enter your credentials.

5. Select Next.

6. In the Navigator dialog box, select Combine to combine the data in the files of the
selected folder and load the data into the Power Query Editor for editing. Or select
Transform data to load the folder data as-is in the Power Query Editor.
Troubleshooting

Combining files
When you combine files using the folder connector, all the files in the folder and its
subfolders are processed the same way, and the results are then combined. The way the
files are processed is determined by the example file you select. For example, if you
select an Excel file and choose a table called "Table1", then all the files will be treated as
Excel files that contain a table called "Table1".

To ensure that combining the files works properly, make sure that all the files in the
folder and its subfolders have the same file format and structure. If you need to exclude
some of the files, first select Transform data instead of Combine and filter the table of
files in the Power Query Editor before combining.
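
For example, a minimal sketch of filtering the file list before combining might look like the
following; the folder path and the .xlsx extension are placeholders for illustration.

Power Query M

let
    // List every file in the folder and its subfolders (placeholder path)
    Source = Folder.Files("C:\Data\SalesReports"),
    // Keep only the Excel workbooks so that every remaining file matches the example file's structure
    FilteredFiles = Table.SelectRows(Source, each Text.Lower([Extension]) = ".xlsx")
in
    FilteredFiles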

For more information about combining files, go to Combine files in Power Query.
Funnel
Article • 07/18/2023

7 Note

The following connector article is provided by Funnel, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the Funnel website and use the support
channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Funnel Workspace

Prerequisites
To use the Funnel connector, you need a Funnel subscription. Funnel helps you collect
data from all your marketing platforms, transform it, and send it to the destinations you
want, like Power BI (https://funnel.io/ ).

In the Funnel App, go to your Workspace, navigate to the Microsoft Power BI page in
the left navigation (if you can't see it, please contact us). Follow the instructions on the
page. You need to create a "Data Share" that contains the fields you want to expose in
Power BI.

Capabilities Supported
Import

Connect to Funnel from Power Query Desktop


To connect to Funnel:

1. Select Online Services, find Funnel from the product-specific data connector list,
and then select Connect.

2. Select Sign in to sign in to your Funnel Workspace.


3. Sign in with your Google user connected to Funnel or use your Funnel credentials.

4. Once you've successfully signed in select Connect to continue.

5. In the Navigator dialog box, choose one or more Data Shares from your
Workspaces to import your data.
For each Data Share, you can enter the number of rolling months of data you want.

7 Note

The default number of months is 12. If today is 22.03.2022, then you'll get
data for the period 01.04.2021 - 22.03.2022.

You can then either select Load to load the data or select Transform Data to
transform the data.

Known issues and limitations


If a Data Share contains too much data for one month, the connector will get a
TOO_MANY_ROWS error.

For more information, read our Power BI knowledge base articles .


Google Analytics
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Customer Insights (Dataflows)

Authentication Types Supported Google Account

Function Reference Documentation GoogleAnalytics.Accounts

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

7 Note

Effective July 2021, Google will discontinue support for sign-ins to Google accounts
from embedded browser frameworks. Due to this change, you will need to
update your Power BI Desktop version to June 2021 to support signing in to
Google.

7 Note

This connector uses V4 of the Google Analytics API.

Prerequisites
Before you can sign in to Google Analytics, you must have a Google Analytics account
(username/password).
Capabilities Supported
Import
Google Analytics 4 (Data API)

Connect to Google Analytics data from Power Query Desktop
To connect to Google Analytics data:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Online Services
from the categories on the left, and then select Google Analytics. Then select
Connect.
2. If this is the first time you're getting data through the Google Analytics connector,
a third-party notice is displayed. Select Don't warn me again with this connector if
you don't want this message to be displayed again. Then select Continue.

3. To connect to Google Analytics data through the legacy "Universal Analytics" API,
select Implementation 1.0. To connect to Google Analytics data through the new
Google Analytics Data API with support for Google Analytics 4, select
Implementation 2.0 (Beta).

4. To sign in to your Google Analytics account, select Sign in.

5. In the Sign in with Google window that appears, provide your credentials to sign
in to your Google Analytics account. You can either supply an email address or
phone number. Then select Next.
6. Enter your Google Analytics password and select Next.
7. When asked if you want Power BI Desktop to access your Google account, select
Allow.
8. Once you've successfully signed in, select Connect.

Once the connection is established, you’ll see a list of the accounts you have access to.
Drill through the account, properties, and views to see a selection of values, categorized
in display folders.

You can Load the selected table, which brings the entire table into Power BI Desktop, or
you can select Transform Data to edit the query, which opens Power Query Editor. You
can then filter and refine the set of data you want to use, and then load that refined set
of data into Power BI Desktop.

Connect to Google Analytics data from Power Query Online
To connect to Google Analytics data:

1. Select Google Analytics from the Power Query - Choose data source page.

2. From the connection page, enter a connection name and choose an on-premises
data gateway if necessary.

3. Select Sign in to sign in to your Google account.


4. In the Sign in with Google window that appears, provide your credentials to sign
in to your Google Analytics account. You can either supply an email address or
phone number. Then select Next.

7 Note

Currently, the Google Analytics sign-in dialog boxes indicate that you are
signing in to Power Query Desktop. This wording will be changed in the
future.

5. Enter your Google Analytics password and select Next.


6. When asked if you want Power BI Desktop to access your Google account, select
Allow.
7. Once you've successfully signed in, select Next.

Once the connection is established, you’ll see a list of the accounts you have
access to. Drill through the account, properties, and views to see a selection of
values, categorized in display folders.

8. Select Transform data to edit the query in Power Query Editor. You can then filter
and refine the set of data you want to use, and then load that refined set of data
into Power Apps.

Limitations and issues


You should be aware of the following limitations and issues associated with accessing
Google Analytics data.
Google Analytics quota limits for Power BI
The standard limitations and quotas for Google Analytics API requests are documented in
Limits and Quotas on API Requests. However, Power BI Desktop and Power BI service
allow you to use the following enhanced number of queries.

Queries per day: 1,500,000


Queries per 100 seconds: 4,000

Troubleshooting

Validating Unexpected Data


When date ranges are very large, Google Analytics will return only a subset of values.
You can use the process described in this section to understand what dates are being
retrieved, and manually edit them. If you need more data, you can append multiple
queries with different date ranges. If you're not sure you're getting back the data you
expect to see, you can also use Data Profiling to get a quick look at what's being
returned.

To make sure that the data you're seeing is the same as you would get from Google
Analytics, you can execute the query yourself in Google's interactive tool. To understand
what data Power Query is retrieving, you can use Query Diagnostics to understand what
query parameters are being sent to Google Analytics.

If you follow the instructions for Query Diagnostics and run Diagnose Step on any
Added Items, you can see the generated results in the Diagnostics Data Source Query
column. We recommend running this with as few additional operations as possible on
top of your initial connection to Google Analytics, so you can confirm that any missing data
reflects what Google Analytics returned rather than data lost in a Power Query transform.

Depending on your query, the row containing the emitted API call to Google Analytics
may not be in the same place. But for a simple Google Analytics only query, you'll
generally see it as the last row that has content in that column.

In the Data Source Query column, you'll find a record with the following pattern:

Request:
GET https://www.googleapis.com/analytics/v3/data/ga?ids=ga:<GA
Id>&metrics=ga:users&dimensions=ga:source&start-date=2009-03-12&end-
date=2020-08-11&start-index=1&max-results=1000&quotaUser=<User>%40gmail.com
HTTP/1.1

<Content placeholder>

Response:
HTTP/1.1 200 OK
Content-Length: -1

<Content placeholder>

From this record, you can see you have your Analytics view (profile) ID , your list of
metrics (in this case, just ga:users ), your list of dimensions (in this case, just referral
source), the start-date and end-date , the start-index , max-results (set to 1000
for the editor by default), and the quotaUser .

You can copy these values into the Google Analytics Query Explorer to validate that
the same data you're seeing returned by your query is also being returned by the API.

If your error is around a date range, you can easily fix it. Go into the Advanced Editor.
You'll have an M query that looks something like this (at a minimum—there may be
other transforms on top of it).

Power Query M

let
Source = GoogleAnalytics.Accounts(),
#"<ID>" = Source{[Id="<ID>"]}[Data],
#"UA-<ID>-1" = #"<ID>"{[Id="UA-<ID>-1"]}[Data],
#"<View ID>" = #"UA-<ID>-1"{[Id="<View ID>"]}[Data],
#"Added Items" = Cube.Transform(#"<View ID>",
{
{Cube.AddAndExpandDimensionColumn, "ga:source", {"ga:source"},
{"Source"}},
{Cube.AddMeasureColumn, "Users", "ga:users"}
})
in
#"Added Items"

You can do one of two things. If you have a Date column, you can filter on the Date. This
is the easier option. If you don't care about breaking it up by date, you can Group
afterwards.

If you don't have a Date column, you can manually manipulate the query in the
Advanced Editor to add one and filter on it. For example:

Power Query M
let
Source = GoogleAnalytics.Accounts(),
#"<ID>" = Source{[Id="<ID>"]}[Data],
#"UA-<ID>-1" = #"<ID>"{[Id="UA-<ID>-1"]}[Data],
#"<View ID>" = #"UA-<ID>-1"{[Id="<View ID>"]}[Data],
#"Added Items" = Cube.Transform(#"<View ID>",
{
{Cube.AddAndExpandDimensionColumn, "ga:date", {"ga:date"},
{"Date"}},
{Cube.AddAndExpandDimensionColumn, "ga:source", {"ga:source"},
{"Source"}},
{Cube.AddMeasureColumn, "Organic Searches",
"ga:organicSearches"}
}),
#"Filtered Rows" = Table.SelectRows(#"Added Items", each [Date] >=
#date(2019, 9, 1) and [Date] <= #date(2019, 9, 30))
in
#"Filtered Rows"

Next steps
Google Analytics Dimensions & Metrics Explorer
Google Analytics Core Reporting API
Google BigQuery
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Customer Insights (Dataflows)

Authentication Types Supported Organizational account


Service account

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

7 Note

Effective July 2021, Google will discontinue support for sign-ins to Google accounts
from embedded browser frameworks. Due to this change, you will need to
update your Power BI Desktop version to June 2021 to support signing in to
Google.

Prerequisites
You'll need a Google account or a Google service account to sign in to Google BigQuery.

Capabilities supported
Import
DirectQuery (Power BI Datasets)
Connect to Google BigQuery data from Power Query Desktop
To connect to Google BigQuery from Power Query Desktop, take the following steps:

1. In the Get Data experience, search for and select Google BigQuery.

2. If you want to use any advanced options, select Advanced options. Otherwise,
select OK to continue. More information: Connect using advanced options

3. The Google BigQuery connector supports connecting through an organizational
account or a service account sign-in. In this example, you'll use the organizational
account to sign in. Select Sign In to continue.
You can also sign in using a Google service account. In this case, select Service
Account Login and enter your service account email and your service account
JSON key file contents. Then select Connect.

4. A Sign in with Google dialog appears. Select your Google account and approve
connecting to Power BI Desktop.

5. Once signed in, select Connect to continue.


6. Once you successfully connect, a Navigator window appears and displays the data
available on the server. Select your data in the navigator. Then select either
Transform Data to transform the data in Power Query or Load to load the data in
Power BI Desktop.

Connect to Google BigQuery data from Power Query Online
To connect to Google BigQuery from Power Query Online, take the following steps:

1. In the Get Data experience, select the Database category, and then select Google
BigQuery.
2. In the Google BigQuery Database dialog, you may need to either create a new
connection or select an existing connection. If you're using on-premises data,
select an on-premises data gateway. Then select Sign in.

3. A Sign in with Google dialog appears. Select your Google account and approve
connecting.

7 Note

Although the sign in dialog box says you'll continue to Power BI Desktop once
you've signed in, you'll be sent to your online app instead.
4. If you want to use any advanced options, select Advanced options. More
information: Connect using advanced options

5. Once signed in, select Next to continue.


6. Once you successfully connect, a Navigator window appears and displays the data
available on the server. Select your data in the navigator. Then select Next to
transform the data in Power Query.

Connect using advanced options


Both Power Query Desktop and Power Query Online provide a set of advanced options
that you can add to your query if needed.
The following table lists all of the advanced options you can set in Power Query Desktop
and Power Query Online.

Billing Project ID: A project against which Power Query will run queries. Permissions and
billing are tied to this project. If no Billing Project ID is provided, by default the first
available project returned by Google APIs will be used.

Use Storage Api: A flag that enables using the Storage API of Google BigQuery. This
option is true by default. This option can be set to false to not use the Storage API and
use REST APIs instead.

Connection timeout duration: The standard connection setting (in seconds) that controls
how long Power Query waits for a connection to complete. You can change this value if
your connection doesn't complete before 15 seconds (the default value).

Command timeout duration: How long Power Query waits for a query to complete and
return results. The default depends on the driver default. You can enter another value in
minutes to keep the connection open longer.

Project ID: The project that you want to run native queries on. This option is only
available in Power Query Desktop.

SQL statement: For information, go to Import data from a database using native
database query. In this version of native database query functionality, you need to use
fully qualified table names in the format Database.Schema.Table, for example SELECT *
FROM DEMO_DB.PUBLIC.DEMO_TABLE. This option is only available in Power Query Desktop.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to your Google BigQuery data.
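
In M, these settings are passed to GoogleBigQuery.Database as an options record. The following
minimal sketch shows the Billing Project ID and Use Storage Api options discussed in this article;
the project ID value is a placeholder, and other option names may differ.

Power Query M

let
    // BillingProject and UseStorageApi correspond to the Billing Project ID and
    // Use Storage Api advanced options; "my-billing-project" is a placeholder value
    Source = GoogleBigQuery.Database([BillingProject = "my-billing-project", UseStorageApi = false])
in
    Source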

Limitations and considerations


This section describes any limitations or considerations of the Google BigQuery
connector.

Connecting to Google BigQuery in Power BI Desktop


There are a few limits and considerations to keep in mind when using the Google
BigQuery connector with Power BI.

Connector availability
The Google BigQuery connector is available in Power BI Desktop and in the Power BI
service. In the Power BI service, the connector can be accessed using the Cloud-to-Cloud
connection from Power BI to Google BigQuery.

"Access Denied" error


When you try to connect to Google BigQuery from Power BI Desktop, you might get the
following error message:

Datasource.Error: ODBC: ERROR [HY000][Microsoft][BigQuery] (100) Error interacting
with REST API: Access Denied: Project <project name>: The user <user name> doesn't have
bigquery.jobs.create permissions in project <project name>.

In this case, you might need to enter a Billing Project ID in the Billing Project advanced
option in the Power Query Connection settings.

In addition, if you also create a report in Power BI service using a gateway, you might
still get this error. In this case, you must manually include the Billing Project ID in the M
code for the connection using the Power Query editor or the Power Query formula bar.
For example:

Source = GoogleBigQuery.Database([BillingProject="Include-Billing-Project-Id-
Here"])

Nested fields
For performance, Google BigQuery does well with large datasets that are denormalized,
flattened, and nested.

The Google BigQuery connector supports nested fields, which are loaded as text
columns in JSON format.

Users should select Transform Data and then use the JSON parsing capabilities in the
Power Query Editor to extract the data.

1. Under the Transform ribbon tab, in the Text Column category, select Parse, and then
select JSON.
2. Extract the JSON record fields using the Expand Column option.
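
As a rough illustration of these two steps in M, the following sketch parses a JSON text column and
expands two of its fields. The inline table and the address, city, and zip names are hypothetical
stand-ins for a table loaded through the connector.

Power Query M

let
    // Hypothetical stand-in for a BigQuery table whose nested field arrives
    // as a JSON-formatted text column named "address"
    Orders = #table(
        {"order_id", "address"},
        {{1, "{""city"":""Seattle"",""zip"":""98101""}"}}),
    // Step 1: parse the JSON text into records (the Parse > JSON transform)
    Parsed = Table.TransformColumns(Orders, {{"address", Json.Document}}),
    // Step 2: expand the record fields into ordinary columns (the Expand Column option)
    Expanded = Table.ExpandRecordColumn(Parsed, "address", {"city", "zip"}, {"address.city", "address.zip"})
in
    Expanded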

Setting up a Google service account


For more information on setting up or using Google service accounts, go to Creating
and managing service account keys in the Google docs.
Authenticating through a Google service account
When you authenticate through a Google service account in Power BI Desktop, there's a
specific credential format that's required by the connector.

Service Account Email: must be in email format


Service Account JSON key file contents: once this JSON key is downloaded, all new
lines must be removed from the file so that the contents are in one line. Once the
JSON file is in that format, the contents can be pasted into this field.

When you authenticate through a Google service account in Power BI service or Power
Query Online, users need to use "Basic" authentication. The Username field maps to the
Service Account Email field above, and the Password field maps to the Service Account
JSON key file contents field above. The format requirements for each credential remain
the same across Power BI Desktop, Power BI service, and Power Query Online.

Unable to authenticate with Google BigQuery Storage API
The Google BigQuery connector uses Google BigQuery Storage API by default. This
feature is controlled by the advanced option called UseStorageApi. You might encounter
issues with this feature if you use granular permissions. In this scenario, you might see
the following error message or fail to get any data from your query:

ERROR [HY000] [Microsoft][BigQuery] (131) Unable to authenticate with Google BigQuery Storage API. Check your account permissions

You can resolve this issue by adjusting the user permissions for the BigQuery Storage
API correctly. These storage API permissions are required to access data correctly with
the BigQuery Storage API:

bigquery.readsessions.create: Creates a new read session via the BigQuery Storage API.
bigquery.readsessions.getData: Reads data from a read session via the BigQuery Storage API.
bigquery.readsessions.update: Updates a read session via the BigQuery Storage API.

These permissions typically are provided in the BigQuery.User role. More information,
Google BigQuery Predefined roles and permissions

If the above steps don't resolve the problem, you can disable the BigQuery Storage API.
Unable to use DateTime type data in Direct Query mode
There's a known issue where the DateTime type isn't supported through Direct Query.
Selecting a column with the DateTime type will cause an "Invalid query" error or a visual
error.
Google BigQuery (Azure AD) (Beta)
Article • 08/03/2023

Summary
Item Description

Release State Beta

Products Supported Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)

Authentication Types Supported Azure Active Directory

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
You need an Azure Active Directory account to sign in to Google BigQuery (Azure AD).

Capabilities supported
Import
DirectQuery (Power BI Datasets)

Connect to Google BigQuery (Azure AD) data from Power Query Desktop
To connect to Google BigQuery (Azure AD) from Power Query Desktop, take the
following steps:

1. In the Get Data experience, search for and select Google BigQuery (Azure AD).
More information: Where to get data
2. Specify a Billing Project ID, which is required for the use of this connector. If you
want to use any advanced options, select Advanced options. Otherwise, select
either Import or DirectQuery, and then select OK to continue. More information:
Connect using advanced options

3. The Google BigQuery (Azure AD) connector supports connecting through an Azure
Active Directory account. Select Sign In to continue.
4. Once signed in, select Connect to continue.

5. Once you successfully connect, a Navigator window appears and displays the data
available on the server. Select your data in the navigator. Then select either
Transform Data to transform the data in Power Query or Load to load the data in
Power BI Desktop.

Connect to Google BigQuery (Azure AD) data from Power Query Online
To connect to Google BigQuery (Azure AD) from Power Query Online, take the following
steps:

1. In the Get Data experience, do a search for Google, and then select Google
BigQuery (Azure AD). More information: Where to get data
2. In the Google BigQuery (Azure AD) dialog, you may need to either create a new
connection or select an existing connection. If you're creating a new connection,
enter the Billing Project ID. If you're using on-premises data, select an on-
premises data gateway.

3. If you want to use any advanced options, select Advanced options. More
information: Connect using advanced options

4. Once signed in, select Next to continue.

5. Once you successfully connect, a Navigator window appears and displays the data
available on the server. Select your data in the navigator. Then select Next to
transform the data in Power Query.

Connect using advanced options


Both Power Query Desktop and Power Query Online provide a set of advanced options
that you can add to your query if needed.

The following table lists all of the advanced options you can set in Power Query Desktop
and Power Query Online.

Use Storage Api: A flag that enables using the Storage API of Google BigQuery. This
option is true by default. This option can be set to false to not use the Storage API and
use REST APIs instead.

Connection timeout duration: The standard connection setting (in seconds) that controls
how long Power Query waits for a connection to complete. You can change this value if
your connection doesn't complete before 15 seconds (the default value).

Command timeout duration: How long Power Query waits for a query to complete and
return results. The default depends on the driver default. You can enter another value in
minutes to keep the connection open longer.

Audience Uri

ProjectID: The project that you want to run native queries on. This option is only
available in Power Query Desktop.

Native query: For information, go to Import data from a database using native database
query. In this version of native database query functionality, you need to use fully
qualified table names in the format Database.Schema.Table, for example SELECT * FROM
DEMO_DB.PUBLIC.DEMO_TABLE. This option is only available in Power Query Desktop.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to your Google BigQuery data.

Limitations and considerations


This section describes any limitations or considerations of the Google BigQuery (Azure
AD) connector.

OIDC configurations
The Google BigQuery (Azure AD) connector utilizes Azure AD JWT tokens to connect
Azure AD-based authentication with Google's Workforce Federation feature. So, the
setup on the authentication side must be an OIDC-based setup to align with the Azure
AD JWT tokens. Reach out to your Google BigQuery point-of-contact for further
information on authentication setup and support on the Google side.

Connecting to Google BigQuery (Azure AD) in Power BI Desktop
There are a few limits and considerations to keep in mind when using the Google
BigQuery connector with Power BI.

Nested fields

For performance, Google BigQuery does well with large datasets that are denormalized,
flattened, and nested.

The Google BigQuery (Azure AD) connector supports nested fields, which are loaded as
text columns in JSON format.

Users should select Transform Data and then use the JSON parsing capabilities in the
Power Query editor to extract the data.

1. Under the Transform ribbon tab, in the Text Column category, select Parse, and then
select JSON.
2. Extract the JSON record fields using the Expand Column option.
Unable to authenticate with Google BigQuery Storage API
The Google BigQuery (Azure AD) connector uses Google BigQuery Storage API by
default. This feature is controlled by the advanced option called UseStorageApi. You
might encounter issues with this feature if you use granular permissions. In this scenario,
you might see the following error message or fail to get any data from your query:

ERROR [HY000] [Microsoft][BigQuery] (131) Unable to authenticate with Google BigQuery Storage API. Check your account permissions

You can resolve this issue by adjusting the user permissions for the BigQuery Storage
API correctly. These storage API permissions are required to access data correctly with
the BigQuery Storage API:

bigquery.readsessions.create: Creates a new read session via the BigQuery Storage API.
bigquery.readsessions.getData: Reads data from a read session via the BigQuery Storage API.
bigquery.readsessions.update: Updates a read session via the BigQuery Storage API.

These permissions typically are provided in the BigQuery.User role. More information,
Google BigQuery Predefined roles and permissions

If the above steps don't resolve the problem, you can disable the BigQuery Storage API.

Unable to use DateTime type data in DirectQuery mode


There's a known issue where the DateTime type isn't supported through DirectQuery.
Selecting a column with the DateTime type causes an "Invalid query" error or a visual
error.

Enable Azure AD single sign-on (SSO) for Google BigQuery
We support Azure AD SSO both through the Power BI service (cloud) and through the
on-premises data gateway. For more information about enabling Azure AD SSO for all
connectors, go to Overview of single sign-on (SSO) for on-premises data gateways in
Power BI. Confirm directly with your Google contact that your system and accounts are
set up correctly for Azure AD SSO prior to attempting to connect in Power BI.
Azure AD single sign-on (SSO) through Power BI service
To configure a new connection in Power BI service:

1. In Power BI service, select Admin portal from the settings list.

2. Enable the Google BigQuery SSO option.


Azure AD single sign-on (SSO) for Google BigQuery with
an on-premises data gateway
Before you can enable Azure AD SSO for Google BigQuery, you must first enable Azure
AD SSO for all data sources that support Azure AD SSO with an on-premises data
gateway. Also, ensure you've upgraded to at least the December 2022 release of the on-
premises data gateway:

1. In Power BI service, select Admin portal from the settings list.

2. Under Tenant settings, enable Azure AD Single-Sign On (SSO) for Gateway.

Once you've enabled Azure AD SSO for all data sources, then enable Azure AD SSO for
Google BigQuery:

1. Enable the Google BigQuery SSO option.

2. Select Manage gateways from the settings list.


3. Select a gateway, and then select Choose Data Source.

4. Under the Data Source Settings tab, enter a value in Billing Project ID. The Billing
Project ID parameter is required when using Azure AD and needs to be specified
in Advanced settings. Also, select Use SSO via Azure AD for DirectQuery queries.
Google Sheets
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Organizational account

Function Reference Documentation —

Prerequisites
Before you can use the Google Sheets connector, you must have a Google account and
have access to the Google Sheet you're trying to connect to.

Capabilities Supported
Import

Connect to Google Sheets data from Power Query Desktop
To connect to Google Sheets from Power Query Desktop, take the following steps:

1. In the Get Data experience, search for and select Google Sheets.
2. You'll be prompted for a Google Sheets URL. Copy and paste the URL from your
browser address bar into the input prompt.

3. The Google Sheets connector supports connecting through an organizational
(Google) account. Select Sign In to continue.
4. A Sign in with Google dialog appears in an external browser window. Select your
Google account and approve connecting to Power BI Desktop.

5. Once signed in, select Connect to continue.


6. Once you successfully connect, a Navigator window appears and displays the data
available on the server. Select your data in the navigator. Then select either
Transform Data to transform the data in Power Query or Load to load the data in
Power BI Desktop.

Limitations and considerations


This section describes any limitations or considerations of the Google Sheets connector.

Shared drive support


This connector does support connecting to shared drives.

Multiple connections
This connector uses a different ResourcePath for every Google Sheet URL. You'll need to
authenticate to every new resource path and URL, but you might not need to sign into
Google multiple times if the previous sessions remain active.

Understanding URL parsing


The connector first checks for the signature of the URL, ensuring it starts with
https://docs.google.com/spreadsheets/d/ . The connector then parses the Google

Spreadsheet ID from the URL to include in the Google Sheets API call. The rest of the
URL isn't used. Each Google Sheet connection is tied to the submitted URL, which will
act as the ResourcePath.
Hadoop File (HDFS)
Article • 01/24/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)

Authentication Types Supported Anonymous


Windows
Microsoft account

Function Reference Documentation Hdfs.Contents


Hdfs.Files

Capabilities Supported
Import

Connect to Hadoop Files (HDFS) from Power Query Desktop
Power Query Desktop experiences include Excel and Power BI.

To make the connection to Hadoop File (HDFS), follow these steps:

1. From Get Data, select the Other category, select Hadoop File (HDFS), and then
select Connect. More information: where to get data
2. In the window that appears, enter the server name of your Hadoop File (HDFS)
instance.

3. Select OK.

4. Select either anonymous access, Windows access, or Microsoft account, then select
the level to apply the settings to, and connect. For more information, go to
Authentication with a data source.
5. Select either Load to load the table, or Transform Data to open the Power Query
Editor where you can filter and refine the set of data you want to use, and then
load that refined set of data.
Hive LLAP
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Basic (Username/Password)


Windows

Function Reference Documentation —

Prerequisites
An Apache Hive LLAP username and password.

Capabilities Supported
Import
Direct Query (Power BI Datasets)
Thrift Transport Protocol
HTTP
Standard

Connect to Hive LLAP data from Power Query Desktop
To connect to an Apache Hive LLAP server:

1. Select the Hive LLAP option from Get Data.

2. Enter the URL to the Apache Hive LLAP server. You can also enter an optional port
number. Typically, the URL looks like http://[hostname]:[port number] . The
components of the URL are:
The hostname (for example, hivellaphttp.southcentralus.contoso.com ) is the
hostname or IP address of the Apache Hive server.
The port number (for example, 10500) is the port number for the Apache Hive
server. If the port number isn't specified, the default value is 10501 for the
HTTP transport protocol and 10500 for the standard transport protocol.

3. In Thrift Transport Protocol, select either Standard for TCP mode, or HTTP for
HTTP mode.

4. Select either the Import or DirectQuery data connectivity mode. More


information: Use DirectQuery in Power BI Desktop

5. Select OK to continue.

6. The first time you connect to a data source (identified by each unique URL), you'll
be prompted to enter account credentials. Select the appropriate type of
authentication and enter your credentials for the connection.

Windows: Select this authentication type if you want to connect using Windows
authentication.
Basic: Select this authentication type if you want to connect using Apache
Hive LLAP authentication. Enter your Apache Hive LLAP User name and
Password.

More information: Authentication with a data source.


7. Select Connect to connect to the Apache Hive LLAP data.

8. In Navigator, select the data you require. Then select either Transform data to
transform the data in Power Query Editor or Load to load the data in Power BI
Desktop.

Connect to Hive LLAP data from Power Query Online
To connect to the Apache Hive LLAP server:

1. Select the Hive LLAP option in the Power Query - Choose data source page.
2. Enter the URL to the Apache Hive LLAP server. You can also enter an optional port
number. Typically, the URL looks like http://[hostname]:[port number] . The
components of the URL are:

The hostname (for example, hivellaphttp.southcentralus.contoso.com) is the
hostname or IP address of the Apache Hive server.
The port number (for example, 10500) is the port number for the Apache Hive
server. If the port number isn't specified, the default value is 10501 for the
HTTP transport protocol and 10500 for the standard transport protocol.

3. In Thrift Transport Protocol, select either Standard for TCP mode, or HTTP for
HTTP mode.

4. If necessary, select the name of your on-premises data gateway.

5. If you're connecting to this Hive LLAP data for the first time, select the type of
credentials for the connection in Authentication kind.

6. Enter your credentials.


7. Select Use Encrypted Connection if you want to use an encrypted connection, or
clear the option if you want to use an unencrypted connection.

8. Select Next to continue.

9. In Navigator, select the data you require, then select Transform data to transform
the data in the Power Query editor.

Kerberos-based single sign-on (SSO) for Hive LLAP
The Hive LLAP connector now supports Kerberos-based single sign-on (SSO).

To use this feature:

1. Sign in to your Power BI account, and navigate to the Gateway management page.

2. Add a new data source under the gateway cluster you want to use.

3. Select the connector in the Data Source Type list.

4. Expand the Advanced Settings section.

5. Select the option to Use SSO via Kerberos for DirectQuery queries or Use SSO via
Kerberos for DirectQuery and Import queries.
More information, Configure Kerberos-based SSO from Power BI service to on-premises
data sources

Troubleshooting

SSL Error in Power BI Desktop


You might come across the following "SSL_connect" error after entering the
authentication information for the connector and selecting Connect.
If this error occurs:

1. In Power BI Desktop, select File > Options and settings > Data source settings.

2. In Data source settings, select the Hive LLAP source you created, and then select
Edit Permissions.

3. In Edit Permissions, under Encryption, clear the Encrypt connections check box.
4. Select OK, and then in Data source settings, select Close.

5. Redo the steps in Connect to Hive LLAP data from Power Query Desktop.

SSL Error in Power BI Service


After publishing a report in Power BI service, you need to update the credentials to the
Hive data source in the cloud to refresh in Power BI service. After filling in the
credentials, you might get the error:

If you get this error and you see the following message in Fiddler trace, this is an SSL
issue.

ODBC: ERROR [HY000] [Microsoft][ThriftExtension] (14) Unexpected response from server during a HTTP connection: Could not refill buffer
SSL is disabled by default when connecting directly from powerbi.com. To enable it, you
can use the on-premises data gateway, where you can enable encryption to connect and
refresh reports from the Power BI service. Enable HTTPS on Gateway
IBM Db2 database
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights

Authentication Types Supported Basic


Database
Windows

Function Reference Documentation DB2.Database

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
By default, the IBM Db2 database connector uses the Microsoft driver to connect to
your data. If you choose to use the IBM driver in the advanced options in Power Query
Desktop, you must first install the IBM Db2 driver for .NET on the machine used to
connect to the data. The name of this driver changes from time to time, so be sure to
install the IBM Db2 driver that works with .NET. For instructions on how to download,
install, and configure the IBM Db2 driver for .NET, go to Download initial Version 11.5
clients and drivers . More information: Driver limitations, Ensure the IBM Db2 driver is
installed

Capabilities Supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Driver (IBM or Microsoft)
Command timeout in minutes
Package collection
SQL statement
Include relationship columns
Navigate using full hierarchy

Connect to an IBM Db2 database from Power Query Desktop
To make the connection, take the following steps:

1. Select the IBM Db2 database option from Get Data.

2. Specify the IBM Db2 server to connect to in Server. If a port is required, specify it
by using the format ServerName:Port, where Port is the port number. Also, enter
the IBM Db2 database you want to access in Database. In this example, the server
name and port are TestIBMDb2server.contoso.com:4000 and the IBM Db2 database
being accessed is NORTHWD2 .

3. If you're connecting from Power BI Desktop, select either the Import or DirectQuery
data connectivity mode. The rest of these example steps use the
Import data connectivity mode. To learn more about DirectQuery, go to Use
DirectQuery in Power BI Desktop.

7 Note
By default, the IBM Db2 database dialog box uses the Microsoft driver during
sign in. If you want to use the IBM driver, open Advanced options and select
IBM. More information: Connect using advanced options

If you select DirectQuery as your data connectivity mode, the SQL statement
in the advanced options will be disabled. DirectQuery currently does not
support query push down on top of a native database query for the IBM Db2
connector.

4. Select OK.

5. If this is the first time you're connecting to this IBM Db2 database, select the
authentication type you want to use, enter your credentials, and then select
Connect. For more information about authentication, go to Authentication with a
data source.

By default, Power Query attempts to connect to the IBM Db2 database using an
encrypted connection. If Power Query can't connect using an encrypted
connection, an "unable to connect" dialog box will appear. To connect using an
unencrypted connection, select OK.
6. In Navigator, select the data you require, then either select Load to load the data
or Transform Data to transform the data.

Connect to an IBM Db2 database from Power Query Online
To make the connection, take the following steps:

1. Select the IBM Db2 database option in the Power Query - Connect to data source
page.

2. Specify the IBM Db2 server to connect to in Server. If a port is required, specify it
by using the format ServerName:Port, where Port is the port number. Also, enter
the IBM Db2 database you want to access in Database. In this example, the server
name and port are TestIBMDb2server.contoso.com:4000 and the IBM Db2 database
being accessed is NORTHWD2

3. Select the name of your on-premises data gateway.


7 Note

You must select an on-premises data gateway for this connector, whether the
IBM Db2 database is on your local network or online.

4. If this is the first time you're connecting to this IBM Db2 database, select the type
of credentials for the connection in Authentication kind. Choose Basic if you plan
to use an account that's created in the IBM Db2 database instead of Windows
authentication.

5. Enter your credentials.

6. Select Use Encrypted Connection if you want to use an encrypted connection, or
clear the option if you want to use an unencrypted connection.

7. Select Next to continue.

8. In Navigator, select the data you require, then select Transform data to transform
the data in Power Query Editor.
Connect using advanced options
Power Query provides a set of advanced options that you can add to your query if
needed.

The following table lists all of the advanced options you can set in Power Query.

Driver: Determines which driver is used to connect to your IBM Db2 database. The
choices are IBM and Microsoft (default). If you select the IBM driver, you must first
ensure that the IBM Db2 driver for .NET is installed on your machine. This option is only
available in Power Query Desktop. More information: Ensure the IBM Db2 driver is
installed

Command timeout in minutes: If your connection lasts longer than 10 minutes (the
default timeout), you can enter another value in minutes to keep the connection open
longer.

Package collection: Specifies where to look for packages. Packages are control structures
used by Db2 when processing an SQL statement, and will be automatically created if
necessary. By default, this option uses the value NULLID. Only available when using the
Microsoft driver. More information: DB2 packages: Concepts, examples, and common
problems

SQL statement: For information, go to Import data from a database using native
database query.

Include relationship columns: If checked, includes columns that might have relationships
to other tables. If this box is cleared, you won't see those columns.

Navigate using full hierarchy: If checked, the navigator displays the complete hierarchy of
tables in the database you're connecting to. If cleared, the navigator displays only the
tables whose columns and rows contain data.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to your IBM Db2 database.
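
In M, the connection these dialogs build corresponds to a DB2.Database call. The following minimal
sketch reuses the server, port, and database names from the example earlier in this article; the
Query option is shown only as an assumption of how a native SQL statement could be supplied, and the
table name is a placeholder.

Power Query M

let
    // Server is in ServerName:Port format and the database name follows the earlier example.
    // The Query option below is an assumption illustrating a native database query;
    // omit the options record to browse tables through the navigator instead.
    Source = DB2.Database(
        "TestIBMDb2server.contoso.com:4000",
        "NORTHWD2",
        [Query = "SELECT * FROM MYSCHEMA.MYTABLE"])
in
    Source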

Issues and limitations

Driver limitations
The Microsoft driver is the same one used in Microsoft Host Integration Server, called
the "ADO.NET Provider for DB2". The IBM driver is the IBM Db/2 driver that works with
.NET. The name of this driver changes from time to time, so be sure it's the one that
works with .NET, which is different from the IBM Db2 drivers that work with OLE/DB,
ODBC, or JDBC.

You can choose to use either the Microsoft driver (default) or the IBM driver if you're
using Power Query Desktop. Currently, Power Query Online only uses the Microsoft
driver. Each driver has its limitations.

Microsoft driver
Doesn't support Transport Layer Security (TLS)
IBM driver
The IBM Db2 database connector, when using the IBM Db2 driver for .NET,
doesn't work with Mainframe or IBM i systems
Doesn't support DirectQuery
Microsoft provides support for the Microsoft driver, but not for the IBM driver. However,
if your IT department already has it set up and configured on your machines, your IT
department should know how to troubleshoot the IBM driver.

Native queries not supported in DirectQuery


When you select DirectQuery as the data connectivity mode in Power Query Desktop,
the SQL statement text box in the advanced options is disabled. It's disabled because
the Power Query IBM Db2 connector doesn’t currently support query push down on top
of a native database query.

Troubleshooting

Ensure the IBM Db2 driver is installed


If you choose to use the IBM Db2 driver for Power Query Desktop, you first have to
download, install, and configure the driver on your machine. To ensure the IBM Db2
driver has been installed:

1. Open Windows PowerShell on your machine.

2. Enter the following command:

[System.Data.Common.DbProviderFactories]::GetFactoryClasses() | ogv

3. In the dialog box that opens, you should see the following name in the
InvariantName column:

IBM.Data.DB2

If this name is in the InvariantName column, the IBM Db2 driver has been installed and
configured correctly.

SQLCODE -805 and SQLCODE -551 error codes


When attempting to connect to an IBM Db2 database, you may sometimes come across
the common error SQLCODE -805, which indicates the package isn't found in the NULLID
or other collection (specified in the Power Query Package connection configuration).
You may also encounter the common error SQLCODE -551, which indicates you can't
create packages because you lack package binding authority.
Typically, SQLCODE -805 is followed by SQLCODE -551, but you'll see only the second
exception. In reality, the problem is the same. You lack the authority to bind the package
to either NULLID or the specified collection.

Typically, most IBM Db2 administrators don't provide bind package authority to end
users—especially in an IBM z/OS (mainframe) or IBM i (AS/400) environment. Db2 on
Linux, Unix, or Windows is different in that user accounts have bind privileges by default,
which create the MSCS001 (Cursor Stability) package in the user’s own collection (name
= user login name).

If you don't have bind package privileges, you'll need to ask your Db2 administrator for
package binding authority. With this package binding authority, connect to the
database and fetch data, which will auto-create the package. Afterwards, the
administrator can revoke the packaging binding authority. Also, afterwards, the
administrator can "bind copy" the package to other collections—to increase
concurrency, to better match your internal standards for where packages are bound, and
so on.

When connecting to IBM Db2 for z/OS, the Db2 administrator can do the following
steps.

1. Grant authority to bind a new package to the user with one of the following
commands:

GRANT BINDADD ON SYSTEM TO <authorization_name>


GRANT PACKADM ON <collection_name> TO <authorization_name>

2. Using Power Query, connect to the IBM Db2 database and retrieve a list of
schemas, tables, and views. The Power Query IBM Db2 database connector will
auto-create the package NULLID.MSCS001, and then grant execute on the package
to public.

3. Revoke authority to bind a new package to the user with one of the following
commands:

REVOKE BINDADD FROM <authorization_name>


REVOKE PACKADM ON <collection_name> FROM <authorization_name>

When connecting to IBM Db2 for Linux, Unix, or Windows, the Db2 administrator can do
the following steps.

1. GRANT BINDADD ON DATABASE TO USER <authorization_name>.

2. Using Power Query, connect to the IBM Db2 database and retrieve a list of
schemas, tables, and views. The Power Query IBM Db2 connector will auto-create
the package NULLID.MSCS001, and then grant execute on the package to public.

3. REVOKE BINDADD ON DATABASE FROM USER <authorization_name>.

4. GRANT EXECUTE ON PACKAGE <collection.package> TO USER <authorization_name>.

When connecting to IBM Db2 for i, the Db2 administrator can do the following steps.

1. WRKOBJ QSYS/CRTSQLPKG. Type "2" to change the object authority.

2. Change authority from *EXCLUDE to PUBLIC or <authorization_name>.

3. Afterwards, change authority back to *EXCLUDE.

SQLCODE -360 error code


When attempting to connect to the IBM Db2 database, you may come across the
following error:

Microsoft Db2 Client: The host resource could not be found. Check that the Initial
Catalog value matches the host resource name. SQLSTATE=HY000 SQLCODE=-360

This error message indicates that the value you entered for the database name isn't
correct.

SQLCODE -1336 error code


The specified host could not be found.

Double check the name, and confirm that the host is reachable. For example, use ping in
a command prompt to attempt to reach the server and ensure the IP address is correct,
or use telnet to communicate with the server.

SQLCODE -1037 error code


Host is reachable, but is not responding on the specified port.

The port is specified at the end of the server name, separated by a colon. If omitted, the
default value of 50000 is used.
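For reference, if you author the connection in M, the port is simply appended to the
server value. The following is a minimal sketch with hypothetical server and database
names:

Power Query M

let
    // Hypothetical values: ":50001" overrides the default port of 50000, and "SAMPLE" is the database name
    Source = DB2.Database("db2server.contoso.com:50001", "SAMPLE")
in
    Source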

To find the port Db2 is using for Linux, Unix, and Windows, run this command (on Linux
and Unix, use grep in place of findstr):

db2 get dbm cfg | findstr SVCENAME


Look in the output for an entry for SVCENAME (and SSL_SVCENAME for TLS-encrypted
connections). If this value is a number, that's the port. Otherwise, cross-reference the
value with the system's "services" table, usually found at /etc/services on Linux and
Unix, or at c:\windows\system32\drivers\etc\services on Windows.

The following screenshot shows the output of this command in Linux/Unix.

The following screenshot shows the output of this command in Windows.

Determine database name


To determine the database name to use:

1. On IBM i, run DSPRDBDIRE .


2. One of the entries will have a Remote Location of *LOCAL. This entry is the one to
use.

Determine port number


The Microsoft driver connects to the database using the Distributed Relational Database
Architecture (DRDA) protocol. The default port for DRDA is port 446. Try this value first.

To find out for certain which port the DRDA service is running on:

1. Run the IBM i command WRKSRVTBLE .

2. Scroll down until you find the entries for DRDA.


3. To confirm that the DRDA service is up and listening on that port, run NETSTAT .
4. Choose either option 3 (for IPv4) or option 6 (for IPv6).

5. Press F14 to see the port numbers instead of names, and scroll until you see the
port in question. It should have an entry with a state of “Listen”.
More information
HIS - Microsoft OLE DB Provider for DB2
Impala database
Article • 07/13/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights

Authentication Types Supported Anonymous


Database
Windows

Function Reference Documentation —

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities Supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Connection timeout duration
Command timeout duration

Connect to an Impala database from Power Query Desktop
To connect to an Impala database, take the following steps:

1. Select Get Data from the Home ribbon in Power BI Desktop.


2. Select Database from the categories on the left, select Impala on the right, and
then select Connect.

3. In the Impala window that appears, type or paste the name of your Impala server
into the box. You can Import data directly into Power BI or you can use
DirectQuery. Learn more about using DirectQuery. Then select OK.
4. When prompted, enter your credentials or connect anonymously. The Impala
connector supports Anonymous, Database (user name and password), and
Windows authentication.

For more information about authentication methods, go to Authentication with a data source.

7 Note

If the connection isn't encrypted, you'll be prompted with the following message. Select OK if you want to use an unencrypted connection.
5. After you connect, a Navigator window appears and displays the data that's
available on the server. Either select Load to load the data or Transform Data to
continue transforming the data in the Power Query editor.

Connect to an Impala database from Power Query Online
To connect to an Impala database, take the following steps:

1. Select the Impala option in the connector selection.


2. In Connect to data source, provide the name of the server and a port number if
required.

3. If required, select the name of your on-premises data gateway.

4. If this is the first time you're connecting to this Impala database, select the type of
credentials for the connection in Authentication kind.

5. Enter your credentials.

6. Select Use Encrypted Connection if you want to use an encrypted connection, or clear the option if you want to use an unencrypted connection.

7. Select Next to continue.

8. In Navigator, select the data you require, then select Transform data to transform
the data in the Power Query editor.

Connect using advanced options


Power Query provides a set of advanced options that you can add to your query if
needed. The following table lists all of the advanced options you can set in Power Query.
Advanced option Description

Connection Specifies the maximum time Power Query will wait for a connection to
timeout duration complete. You can enter another value to keep the connection open longer.

Command timeout Specifies the maximum time a command is allowed to run before Power
duration Query abandons the call.
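If you author the query in M, these dialog settings are typically captured in the options
record passed to Impala.Database. The following sketch uses a hypothetical server name,
and the option names are assumptions based on the pattern used by other Power Query
database connectors, so verify them against the code Power Query generates for your query:

Power Query M

let
    // Hypothetical server; the option names below are assumptions, not taken from this article
    Source = Impala.Database("impalaserver.contoso.com", [
        ConnectionTimeout = #duration(0, 0, 1, 0),  // wait up to 1 minute for the connection
        CommandTimeout = #duration(0, 0, 10, 0)     // allow a command to run up to 10 minutes
    ])
in
    Source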

Considerations and limitations


Here are a few considerations and limitations to keep in mind with the Impala
connector:

The Impala connector is supported on the on-premises data gateway, using any of
the three supported authentication mechanisms.
The Impala connector uses the Impala driver, which limits the size of string types to
32K by default.
JSON
Article • 07/18/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Anonymous


Basic (Web only)
Organizational Account
Web API (Web only)
Windows

Function Reference Documentation Json.Document

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities supported
Import

Load a local JSON file from Power Query Desktop
To load a local JSON file:

1. Select the JSON option in the Get Data selection. This selection launches a local
file browser where you can select your JSON file.
2. Select Open to open the file.

Loading the JSON file will automatically launch the Power Query Editor. Power Query
uses automatic table detection to seamlessly flatten the JSON data into a table. From
the editor, you can then continue to transform the data if you want, or you can just close
and apply. More information: Automatic table detection from JSON files
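The equivalent M, written by hand, might look like the following sketch. The path is
hypothetical, and the Table.FromRecords step assumes the file contains a JSON array of
records; the connector's automatic table detection handles other shapes for you:

Power Query M

let
    // Hypothetical local path; Json.Document parses the file contents into M values
    Source = Json.Document(File.Contents("C:\data\example.json")),
    // Assumes the parsed value is a list of records; convert it into a table for further shaping
    AsTable = Table.FromRecords(Source)
in
    AsTable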

Load a local JSON file from Power Query Online
To load a local JSON file:

1. From the Data sources page, select JSON.


2. Enter the path to the local JSON file.

3. Select an on-premises data gateway from Data gateway.

4. If authentication is required, enter your credentials.

5. Select Next.

Loading the JSON file will automatically launch the Power Query Editor. Power Query
uses automatic table detection to seamlessly flatten the JSON data into a table. From
the editor, you can then continue to transform the data if you want, or you can just save
and close to load the data. More information: Automatic table detection from JSON files

Load from the web


To load a JSON file from the web, select the Web connector, enter the web address of
the file, and follow any credential prompts.
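Written by hand, the same approach is a short M expression; the URL below is a placeholder:

Power Query M

let
    // Placeholder URL; Web.Contents downloads the file and Json.Document parses it
    Source = Json.Document(Web.Contents("https://example.com/data.json"))
in
    Source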

Automatic table detection from JSON files


Importing data from JSON files (or Web APIs) can be challenging for end users. Here is
an example of JSON file with multiple levels of nested data.

With the addition of automatic table detection capabilities, using the JSON connector in
Power Query will automatically apply transformation steps to flatten the JSON data into
a table. Previously, users had to flatten records and lists manually.
Troubleshooting
If you see the following message, it might be because the file is invalid, for example, it's
not really a JSON file, or is malformed. Or you might be trying to load a JSON Lines file.

If you're trying to load a JSON Lines file, the following sample M code converts all JSON
Lines input to a single flattened table automatically:

Power Query M

let
    // Read the file into a list of lines
    Source = Table.FromColumns({Lines.FromBinary(File.Contents("C:\json-lines-example.json"), null, null)}),
    // Transform each line using Json.Document
    #"Transformed Column" = Table.TransformColumns(Source, {"Column1", Json.Document})
in
    #"Transformed Column"

You'll then need to use an Expand operation to combine the lines together.
KQL Database (Preview)
Article • 07/27/2023

Summary
Item Description

Release State Preview

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)

Authentication Types Supported Organizational Account

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
You must have read permissions on the KQL database.

Capabilities supported
Import
DirectQuery (Power BI Datasets)

Connect to a KQL database from Power Query Desktop
To connect to a KQL database from Power Query Desktop:

1. In Get Data, select Microsoft Fabric (preview) or search for KQL, select KQL
Database, and then select Connect. More information: Where to get data
2. In KQL Database, fill in the cluster and any optional fields, such as Database.
To get the cluster URI, navigate to your KQL database in the Fabric service and
copy the Query URI.

Then select OK.

3. If this attempt is the first time you're connecting to this site, select Sign in and
input your credentials. Then select Connect.
4. In Navigator, select the tables you require, then either load or transform the data.

Connect to a KQL database from Power Query Online
To connect to a KQL database from Power Query Online:

1. In Choose data source, search for KQL, and then select KQL Database. More
information: Where to get data

2. In Connect to data source, fill in the cluster and any optional fields, such as
Database.
To get the cluster URI, navigate to your KQL database in the Fabric service and
copy the Query URI.

3. If necessary, enter an on-premises data gateway if you're going to be using on-premises data. For example, if you're going to combine data from dataflows and an on-premises SQL Server database.

4. Sign in to your organizational account.


5. When you've successfully signed in, select Next.

6. In Choose data, select the data you require, and then select Transform Data.

Connect using advanced options


Both Power Query Desktop and Power Query Online provide a set of advanced options
that you can add to your query if needed.

The following table lists all of the advanced options you can set in Power Query Desktop
and Power Query Online.

Advanced option Description

Limit query result The maximum number of records to return in the result.
record number

Limit query result The maximum data size in bytes to return in the result.
data size in Bytes

Disable result-set Enable or disable result truncation by using the notruncation request
truncation option.

Additional Set Sets query options for the duration of the query. Query options control
Statements how a query executes and returns results. Multiple Set statements can be
separated by semicolons.
LinkedIn Sales Navigator (Beta)
Article • 01/24/2023

Summary
Item Description

Release State Beta

Products Power BI (Datasets)

Authentication Types Supported Organizational Account

Function Reference Documentation —

Prerequisites
A LinkedIn Sales Navigator account. If you don't already have an account, sign up for a
free trial .

Capabilities supported
Import

Connect to LinkedIn Sales Navigator


To connect to LinkedIn Sales Navigator data:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Online Services
from the categories on the left, then scroll until you see LinkedIn Sales Navigator
(Beta).
Select Connect to continue.

2. You'll be advised that you're connecting to a third-party connector that's still under
development.
3. When you select Continue, you're prompted to specify which data you want.

4. In the LinkedIn Sales Navigator window that appears, select which data you want
to return, either All contacts or Selected contacts from the first drop-down
selector. You can then specify the start and end dates to constrain the data it
receives to a particular time window.

5. Once you've provided the information, Power BI Desktop connects to the data
associated with your LinkedIn Sales Navigator contract. Use the same email
address you use to sign in to LinkedIn Sales Navigator through the website.
6. When you connect successfully, you're prompted to select the required data from
your LinkedIn Sales Navigator contract from the Navigator.

Once you've selected the data you require, either select Transform Data to
continue transforming the data in the Power Query editor, or select Load to load
the data into Power BI Desktop. Once in Power BI Desktop, you can create
whatever reports you like with your LinkedIn Sales Navigator data.

Getting help
If you run into problems when connecting to your data, contact LinkedIn Sales
Navigator support .
Mailchimp (Deprecated)
Article • 01/24/2023

Summary
Item Description

Release State Deprecated

Products -

Authentication Types Supported -

Function Reference Documentation -

Deprecation
This connector is deprecated, and support for it will end soon. We recommend that you
transition off existing connections that use this connector, and don't use this connector
for new connections.
Microsoft Azure Consumption Insights
(Beta) (Deprecated)
Article • 02/17/2023

Summary
Item Description

Release State Deprecated

Products —

Authentication Types Supported Organizational account

Function Reference Documentation —

Deprecation

7 Note

This connector is deprecated because of end of support for the Microsoft Azure
Consumption Insights service. We recommend that users transition off existing
connections using this connector, and don't use this connector for new
connections.

Transition instructions
Users are instructed to use the certified Microsoft Azure Cost Management connector as
a replacement. The table and field names are similar and should offer the same
functionality.

Timeline
The Microsoft Azure Consumption Insights service will stop working in December 2021.
Users should transition off the Microsoft Azure Consumption Insights connector to the
Microsoft Azure Cost Management connector by December 2021.
Microsoft Exchange
Article • 01/24/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Analysis Services

Authentication Types Supported Exchange account


Microsoft account

Function Reference Documentation Exchange.Contents

Capabilities Supported
Import

Connect to Microsoft Exchange from Power Query Desktop
Power Query Desktop includes Excel, Power BI (Datasets), and Analysis Services as
experiences.

To make the connection to Microsoft Exchange, follow these steps:

1. From Get Data, select the Other category, select Microsoft Exchange, and then
select Connect. More information: Where to get data
2. In the Microsoft Exchange window that appears, enter the mailbox address for the
account you would like to access.

3. Select OK.

4. Choose either Exchange account sign in and provide your credentials, or Microsoft
account and sign in.
You can also use a User Principal Name (UPN). It looks similar to an email address.
Typical format is user@domain_name.

5. In Navigator, select the data to import and use in your application. Then select
either Load to load the table, or Transform Data to open the Power Query Editor
where you can filter and refine the set of data you want to use, and then load that
refined set of data.
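In M, the connection is made with the Exchange.Contents function listed earlier. The
following is a minimal sketch with a hypothetical mailbox address; the navigation step
assumes the standard Mail entry in the navigation table:

Power Query M

let
    // Hypothetical mailbox; Exchange.Contents returns a navigation table of mailbox folders
    Source = Exchange.Contents("user@contoso.com"),
    // Pick the Mail table from the navigation table (assumes the standard folder name)
    Mail = Source{[Name = "Mail"]}[Data]
in
    Mail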
Microsoft Exchange Online
Article • 07/18/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Exchange account


Microsoft account

Function Reference Documentation Exchange.Contents

Capabilities Supported
Import

Connect to Microsoft Exchange Online from Power Query Desktop
Power Query Desktop includes Excel, Power BI (Datasets), and Analysis Services as
experiences.

To make the connection to Microsoft Exchange Online, follow these steps:

1. From Get Data, select the Online Services category, select Microsoft Exchange
Online, and then select Connect. More information: Where to get data
2. In the Microsoft Exchange Online window that appears, enter the mailbox address
for the account you would like to access.

3. Select OK.

4. Choose either Exchange account sign in and provide your credentials, or Microsoft
account and sign in.
You can also use a User Principal Name (UPN). It looks similar to an email address.
Typical format is user@domain_name.

5. In Navigator, select the data to import and use in your application. Then select
either Load to load the table, or Transform Data to open the Power Query Editor
where you can filter and refine the set of data you want to use, and then load that
refined set of data.
Connect to Microsoft Exchange Online from
Power Query Online
Power Query Online includes Power BI (Dataflows) and Customer Insights (Dataflows) as
experiences.

To make the connection, take the following steps:

1. Select the Microsoft Exchange Online option in the connector selection. More
information: Where to get data

2. Enter the mailbox address for the account you would like to access, and any other
connection details if necessary. Select Next.

3. Select the tables to import, then select Transform Data.


4. If you'd like, you can filter and refine your data set in this window before loading.
When you're ready, select Save & close.
Microsoft Graph Security (Deprecated)
Article • 08/14/2023

Summary
Item Description

Release State Deprecated

Products —

Authentication Types Supported —

Function Reference Documentation —

Deprecation

7 Note

This connector is deprecated. We recommend users explore alternative solutions for retrieving analytics data based on Graph, such as Microsoft Graph data connect. More information: Lack of Support for Microsoft Graph in Power Query
MicroStrategy for Power BI
Article • 07/07/2023

7 Note

The following connector article is provided by MicroStrategy, the owner of this connector and a member of the Microsoft Power Query Connector Certification Program. If you have questions regarding the content of this article or need to modify it, visit the MicroStrategy website and use the support channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Basic


OAuth 2.0

Prerequisites
Your organization must have a configured MicroStrategy environment. The user account
on the MicroStrategy environment must have access to the Power BI connector.

Capabilities Supported
Import
Data refresh

Connect to MicroStrategy using Power BI Desktop
To connect to MicroStrategy using Power BI Desktop:

1. In Power BI Desktop, select Home > Get Data.


2. From the categories on the left, select Other > MicroStrategy for Power BI. Then
select Connect.

3. If this is the first time you're connecting to the MicroStrategy for Power BI
connector, a third-party notice appears. Select Don't warn me again with this
connector, and then select Continue.

4. Enter a valid MicroStrategy Library URL, for example, https://.../MicroStrategyLibrary .

7 Note

If you want to utilize OIDC authentication, you must add a #OIDCMode string
to the end of the URL.

OIDC authentication is available starting in version 2.4.5 of the connector.


5. (optional) Select the authentication mode. This option only affects basic
authentication.

6. (optional) Fill out the advanced options, as necessary.

7. Select OK.

8. Select the authentication mode you want to use.

a. Standard/LDAP

i. Provide user and password.

ii. Select Connect.

b. Library/OIDC

) Important

To utilize OIDC authentication, you must add the https://oauth.powerbi.com/views/oauthredirect.html URL to the list of reply URLs in your IdP settings.

i. Select Sign in. A popup appears with the external sign-in site (either
MicroStrategyLibrary or OIDC provider).

ii. Follow the required steps to authenticate with the chosen method.

iii. After the popup closes, select Connect.

9. Choose the report or cube you want to import to Power BI by navigating through
the Navigation Table.
Refresh MicroStrategy data using Power BI
Online

7 Note

The scheduled refresh functionality isn't available for Library authentication.

To refresh MicroStrategy data using Power BI Online:

1. Publish the dataset imported with Power BI Desktop using the MicroStrategy for
Power BI connector.

2. Open the Settings page for the published dataset.

3. If this is the first time you're connecting to this database, select the authentication
type and enter your credentials.

4. If necessary, select the name of your on-premises data gateway.

Now follow the steps required to set up the scheduled refresh/refresh in Power BI
Online.
MongoDB Atlas SQL interface
Article • 07/25/2023

7 Note

The following connector article is provided by MongoDB, the owner of this connector and a member of the Microsoft Power Query Connector Certification Program. If you have questions regarding the content of this article or have changes you would like to see made to this article, visit the MongoDB website and use the support channels there.

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Database (Username/Password)

Prerequisites
To use the MongoDB Atlas SQL connector, you must have an Atlas federated database
setup.

Obtaining connection information for your federated database instance
1. Navigate to your federated database instance. In Atlas, select Data Federation
from the left navigation panel.
2. Select Connect to open the federated database instance connection modal.
3. Select Connect using the Atlas SQL Interface.
4. Select Power BI Connector.
5. Copy your federated database name and MongoDB URI. You'll need them in a later
step.
7 Note

If some or all of your data comes from an Atlas cluster, you must use MongoDB
version 5.0 or greater for that cluster to take advantage of Atlas SQL.

We also recommend that you install the MongoDB Atlas SQL ODBC Driver before using
the MongoDB Atlas SQL connector.

Capabilities Supported
Import

Connect to MongoDB Atlas federated database using Atlas SQL interface from Power Query Desktop
To connect using the Atlas SQL interface:

1. Select Get Data from the Home ribbon in Power BI Desktop.

2. Select Database from the categories on the left, select MongoDB Atlas SQL, and
then select Connect.
3. If you're connecting to the MongoDB Atlas SQL connector for the first time, a
third-party notice is displayed. Select "Don't warn me again with this connector"
if you don't want this message to be displayed again.

Select Continue.

4. In the MongoDB Atlas SQL window that appears, fill in the following values:

The MongoDB URI. Required. Use the MongoDB URI obtained in the
prerequisites. Make sure that it doesn't contain your username and password.
URIs containing username and/or passwords are rejected.
Your federated Database name. Required
Use the name of the federated database obtained in the prerequisites.

Select OK.
5. Enter your Atlas MongoDB Database access username and password and select
Connect.

7 Note

Once you enter your username and password for a particular Atlas federated
database, Power BI Desktop uses those same credentials in subsequent
connection attempts. You can modify those credentials by going to File >
Options and settings > Data source settings.

6. In Navigator, select one or multiple elements to import and use in Power BI


Desktop. Then select either Load to load the table in Power BI Desktop, or
Transform Data to open the Power Query editor where you can filter and refine the
set of data you want to use, and then load that refined set of data into Power BI
Desktop.

Connect to MongoDB Atlas federated database using Atlas SQL interface from Power Query Online
To connect using the Atlas SQL interface:
1. Select MongoDB Atlas SQL from the Power Query - Choose data source page.

2. On the Connection settings page, fill in the following values:

The MongoDB URI. Required. Use the MongoDB URI obtained in the
prerequisites. Make sure that it doesn't contain your username and password.
URIs containing username and/or passwords are rejected.
Your federated Database name. Required
Use the name of the federated database obtained in the prerequisites.
Enter a Connection name.
Choose a Data gateway.
Enter your Atlas MongoDB Database access username and password and
select Next.

3. In the Navigator screen, select the data you require, and then select Transform
data. This selection opens the Power Query editor so that you can filter and refine
the set of data you want to use.
Troubleshooting
When the connection can't be established successfully, the generic error message The
driver returned invalid (or failed to return) SQL_DRIVER_ODBC_VER: 03.80 is

displayed. Start by checking your credentials and that you have no network issues
accessing your federated database.

Next steps
You might also find the following information useful:

Query with Atlas SQL


Set Up and Query Data Federation
Schema Management
SQL Reference
MySQL database
Article • 09/29/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Windows (Power BI Desktop, Excel, online service with
gateway)
Database (Power BI Desktop, Excel)
Basic (online service with gateway)

Function Reference Documentation MySQL.Database

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
You need to install the Oracle MySQL Connector/NET package prior to using this
connector in Power BI Desktop. This component must also be installed on the machine
running the on-premises data gateway in order to use this connector in Power Query
Online (dataflows) or Power BI service. The MySQL connector requires the MySQL
Connector/NET package to be correctly installed. To determine if the package has
installed correctly, open a PowerShell window and run the following command:

[System.Data.Common.DbProviderFactories]::GetFactoryClasses()|ogv
If the package is installed correctly, the MySQL Data Provider is displayed in the
resulting dialog. For example:

If the package doesn't install correctly, work with your MySQL support team or reach out
to MySQL.

Capabilities Supported
Import
Advanced options
Command timeout in minutes
Native SQL statement
Relationship columns
Navigate using full hierarchy

Connect to MySQL database from Power Query Desktop
To make the connection, take the following steps:

1. Select the MySQL database option in the connector selection. More information:
Where to get data

2. In the MySQL database dialog, provide the name of the server and database.
3. Select the Database authentication type and input your MySQL credentials in the
User name and Password boxes.

4. Select the level to apply your credentials to.

5. Once you're done, select OK.

7 Note

If the connection isn't encrypted, you'll be prompted with the following dialog.
Select OK to connect to the database by using an unencrypted connection, or
follow the instructions to set up encrypted connections to SQL Server.

6. In Navigator, select the data you require, then either load or transform the data.

Connect to MySQL database from Power Query Online
To make the connection, take the following steps:

1. Select the MySQL database option in the connector selection. More information:
Where to get data

2. In the MySQL database dialog, provide the name of the server and database.
3. If necessary, include the name of your on-premises data gateway.

4. Select the Basic authentication kind and input your MySQL credentials in the
Username and Password boxes.

5. If your connection isn't encrypted, clear Use Encrypted Connection.

6. Select Next to connect to the database.

7. In Navigator, select the data you require, then select Transform data to transform
the data in Power Query Editor.

Connect using advanced options


Power Query provides a set of advanced options that you can add to your query if
needed.

The following table lists all of the advanced options you can set in Power Query
Desktop.

Advanced option Description

Command If your connection lasts longer than 10 minutes (the default timeout), you
timeout in can enter another value in minutes to keep the connection open longer.
minutes
Advanced option Description

SQL statement For information, go to Import data from a database using native database
query.

Include If checked, includes columns that might have relationships to other tables. If
relationship this box is cleared, those columns aren't included.
columns

Navigate using If checked, the navigator displays the complete hierarchy of tables in the
full hierarchy database you're connecting to. If cleared, the navigator displays only the
tables whose columns and rows contain data.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to your MySQL database.
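If you author the query in M, the same settings appear as entries in the options record
passed to MySQL.Database. The following sketch uses hypothetical server and database
names; the option names are assumptions drawn from the MySQL.Database function
reference, so check the code Power Query generates for your own query:

Power Query M

let
    // Hypothetical server and database; option names are assumptions, not taken from this article
    Source = MySQL.Database("mysqlserver.contoso.com", "sakila", [
        CommandTimeout = #duration(0, 0, 20, 0),  // allow commands to run up to 20 minutes
        CreateNavigationProperties = true         // include relationship columns
    ])
in
    Source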
OData Feed
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Anonymous


Windows
Basic (requires Gateway)
Web API
Organizational Account

Function Reference Documentation OData.Feed, ODataOmitValues.Nulls

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities supported
Basic
Advanced
URL parts
Open type columns
Select related tables

7 Note
Microsoft Graph is not supported. More information: Lack of Support for Microsoft
Graph in Power Query

Load data from an OData Feed in Power Query Desktop
To load data from an OData Feed in Power Query Desktop:

1. Select the OData or OData Feed option in the connector selection.

2. Choose the Basic button and enter a URL address in the text box. This URL should
be the root of the OData service you want to connect to. For example, enter
http://services.odata.org/V4/northwind/northwind.svc/ . Then select OK.

If the URL address you enter is invalid, a warning icon will appear next to the
URL textbox.

3. If this is the first time you're connecting using the OData Feed, select the
authentication type, input your credentials (if necessary), and select the level to
apply the authentication settings to. Then select Connect.

4. From the Navigator dialog, you can select a table, then either transform the data
in the Power Query Editor by selecting Transform Data, or load the data by
selecting Load.

If you have multiple tables that have a direct relationship to one or more of the
already selected tables, you can select the Select Related Tables button. When you
do, all tables that have a direct relationship to one or more of the already selected
tables will be imported as well.

Load data from an OData Feed in Power Query Online
To load data from an OData Feed in Power Query Online:

1. Select the OData or OData Feed option in the connector selection.

2. In the OData dialog that appears, enter a URL in the text box.
3. If this is the first time you're connecting using the OData Feed, select the
authentication kind and enter your credentials (if necessary). Then select Next.

4. From the Navigator dialog, you can select a table, then transform the data in the
Power Query Editor by selecting Transform Data.

If you have multiple tables that have a direct relationship to one or more of the
already selected tables, you can select the Select Related Tables button. When you
do, all tables that have a direct relationship to one or more of the already selected
tables will be imported as well.

Connecting to Microsoft Graph


Connecting to Microsoft Graph REST APIs from Power Query isn't recommended or
supported. More information: Lack of support for Microsoft Graph in Power Query

Known Issues and Limitations

Joins
Due to the architecture of OData and other web connectors, joins can be non-
performant. While you have the option to use navigation columns when merging
between tables from an OData source, you don't have this option when merging with
non-OData sources.

If you're seeing performance issues when merging an OData source, you should apply
Table.Buffer to your OData query in the advanced editor, before you merge the data.
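For example, a buffering step might look like the following sketch. The Orders entity set is illustrative; substitute the table you actually merge:

Power Query M

let
    Source = OData.Feed("https://services.odata.org/V4/northwind/northwind.svc/"),
    // Navigate to a hypothetical entity set; the Navigator generates a step like this for you
    Orders = Source{[Name = "Orders", Signature = "table"]}[Data],
    // Buffer the table in memory so the subsequent merge doesn't repeatedly query the service
    BufferedOrders = Table.Buffer(Orders)
in
    BufferedOrders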

Test Connection issues


In cases where you're passing in a URL to the OData connector that's not just the service
root (for example, if you have a filter on the URL), when you set up refresh in the service
you should select Skip Test Connection.

When you enter credentials for an OData service into Power BI service (for example,
after publishing a PBIX that uses OData.Feed ), Power BI service will test the credentials
but will ignore any query options that were specified in the M query. These query
options might have been specified directly in the formula (for example, using the
formula bar or advanced editor), or might have been added by the Power Query editor
by default. You can find the full list of these query options in OData.Feed.

Authenticating to arbitrary services


Some services support the ability for the OData connector to authenticate with
OAuth/AAD authentication out of the box. However, this won't work in most cases.

When attempting to authenticate, if the following error occurs:

We were unable to connect because this credential type isn’t supported for this
resource. Please choose another credential type.
Contact the service owner. They'll either need to change the authentication
configuration or build a custom connector.

Maximum URL length


If you're using the OData feed connector to connect to a SharePoint list, SharePoint
online list, or Project Online, the maximum URL length for these connections is
approximately 2100 characters. Exceeding the character limit results in a 401 error. This
maximum URL length is built in the SharePoint front end and can't be changed.

To get around this limitation, start with the root OData endpoint and then navigate and
filter inside Power Query. Power Query filters this URL locally when the URL is too long
for SharePoint to handle. For example, start with:

OData.Feed("https://contoso.sharepoint.com/teams/sales/_api/ProjectData")

instead of

OData.Feed("https://contoso.sharepoint.com/teams/sales/_api/ProjectData/Projects?select=_x0031_MetricName...etc...")
Connect with data by using Power BI
and OData queries
Article • 02/24/2023

Azure DevOps Services | Azure DevOps Server 2022 - Azure DevOps Server 2019

Using OData queries is the recommended approach for pulling data into Power BI.
OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines best
practices for building and consuming REST APIs. To learn more, see OData
documentation.

To get started quickly, check out the Overview of sample reports that use OData queries.
For information about other approaches, see Power BI integration overview.

Power BI can run OData queries, which can return a filtered or aggregated set of data to
Power BI. OData queries have two advantages:

All filtering is done server-side. Only the data you need is returned, which leads to
shorter refresh times.
You can pre-aggregate data server-side. An OData query can carry out
aggregations such as work item rollup and build failure rates. The aggregations are
accomplished server-side, and only the aggregate values are returned to Power BI.
With pre-aggregation, you can carry out aggregations across large data sets,
without needing to pull all the detail data into Power BI.

In this article, you learn how to:

" Write and test OData queries.


" Run an OData query from Power BI.

Prerequisites
To view Analytics data and query the service, you need to be a member of a project
with Basic access or greater. By default, all project members are granted
permissions to query Analytics and define Analytics views.
To learn about other prerequisites regarding service and feature enablement and
general data tracking activities, see Permissions and prerequisites to access
Analytics.
Use Visual Studio Code to write and test OData
queries
The easiest way to write and test OData is to use Visual Studio Code with the OData
extension . Visual Studio Code is a free code editor available on Windows, Mac, and
Linux. The OData extension provides syntax highlighting and other functions that are
useful for writing and testing queries.

Install Visual Studio Code


To download and install Visual Studio Code, go to the Visual Studio Code download
page .

Install the OData extension


Open Visual Studio Code, select Extensions, and then search for odata. In the results list,
select vscode-odata, and then install it.

Create an OData file


In Visual Studio Code, create and save a file called filename.odata . You may name it
whatever you want, but it must have a .odata extension to enable the OData extension
functionality.

Write the OData query


Write the OData query. For example queries, review the Overview of sample reports
using OData queries.

The following query returns the top 10 work items under a specific area path. Replace
{organization}, {project}, and {area path} with your values.

https://analytics.dev.azure.com/{organization}/{project}/_odata/v3.0-preview/WorkItems?
$select=WorkItemId,Title,WorkItemType,State,CreatedDate
&$filter=startswith(Area/AreaPath,'{area path}')
&$orderby=CreatedDate desc
&$top=10
To query across projects, omit /{project} entirely.

For more information about how to write OData queries against Analytics, see OData
query quick reference.

After you've written the query in Visual Studio Code, you should see syntax highlighting:

Test the OData query


To test the OData query, place your cursor anywhere in the query text and select View >
Command Palette. In the search box, type odata to bring up all the OData commands:

Select OData: Open. This action combines the multiline query into a one-line URL and
opens it in your default browser.

The OData query result set is in JSON format. To view the results, install the JSON
Formatter extension for your browser. Several options are available for both Chrome
and Microsoft Edge.

If the query has an error, the Analytics service returns an error in JSON format. For
example, this error states that the query has selected a field that doesn't exist:
After you've verified that the query works correctly, you can run it from Power BI.

Run the OData query from Power BI

Combine the multiline OData query into a single-line query
Before you use the query in Power BI, you must convert the multiline OData query into a
single-line query. The simplest way to do so is to use Visual Studio Code with the
OData extension and use the OData: Combine command.

7 Note

In your filename.odata file, you might want to first create a copy of the multiline
query text and then run OData: Combine on the copy. You do this because there's
no way to convert the single-line query back to a readable multiline query.

In Visual Studio Code, place your cursor anywhere in the query text, and then select View
> Command Palette. In the search box, type odata and then, in the results list, select
OData: Combine.

The multiline query is converted into a single-line query.

Copy the entire line for use in the next section.

Run the query from Power BI


Open Power BI, select Get Data, and then select the OData feed connector. For more
information, see Create a Power BI report with an OData query.

In the OData feed window, in the URL box, paste the OData query that you copied in
the preceding section, and then select OK.

Power BI displays a preview page:


Specify query options
On the preview page, select Edit to open the Power Query Editor.

In the ribbon, select Advanced Editor.


In the Query pane, scroll horizontally to view the [Implementation="2.0"] parameter.

Replace [Implementation="2.0"] with the following string:

[Implementation="2.0",OmitValues = ODataOmitValues.Nulls,ODataVersion = 4]
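In the full M expression, this record is passed as the third argument to OData.Feed. The following sketch uses a placeholder URL; substitute your combined single-line query:

Power Query M

let
    Source = OData.Feed(
        "https://analytics.dev.azure.com/{organization}/{project}/_odata/v3.0-preview/WorkItems",
        null,
        [Implementation = "2.0", OmitValues = ODataOmitValues.Nulls, ODataVersion = 4]
    )
in
    Source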

7 Note

Replacing this parameter helps prevent throttling errors. Power Query attempts to resolve
null values as errors by generating an additional query for every null value it encounters.
This can result in thousands of queries, which will quickly exceed your usage threshold,
beyond which your user account will be throttled.

To prevent this issue, the replacement:

Instructs Power BI to reference OData v4.

Instructs the Analytics service to omit any values that are null, which improves
query performance.

7 Note

The following action is required for Power BI to successfully run an OData query
against the Azure DevOps Analytics Service.

Select OK to close the Advanced Editor and return to the Power BI Power Query
Editor. You can use Power Query Editor to perform these optional actions:

Rename the "Query1" query as something more specific.


Transform columns to a specific type. Power BI auto-detects the type, but you
might want to convert a column to a specific data type.
Add computed columns.
Remove columns.
Expand columns into specific fields.

Create a report by using the data


Select Close & Apply to save your settings and pull the data into Power BI. After the
data is refreshed, you can create a report as you would normally in Power BI.

Related articles
Sample Power BI Reports by using OData queries
Data available from Analytics
Grant permissions to access Analytics
Power BI integration overview
ODBC
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Database (Username/Password)


Windows
Default or Custom

Function Reference Documentation Odbc.DataSource


Odbc.Query

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
Before you get started, make sure you've properly configured the connection in the
Windows ODBC Data Source Administrator. The exact process here depends on the
driver.

Capabilities Supported
Import
Advanced options
Connection string (non-credential properties)
SQL statement
Supported row reduction clauses

Connect to an ODBC data source from Power Query Desktop
To make the connection, take the following steps:

1. Select the ODBC option in the Get Data selection.

2. In From ODBC, select the data source name (DSN) from the Data source name
(DSN) drop-down box. In this example, a DSN name of SQL Server Database is
used.

You can also choose Advanced options to enter more optional connection
information. More information: Connect using advanced options

3. Once you're done, select OK.

4. If this is the first time you're connecting to this database, select the authentication
type and input your credentials when prompted.
The authentication types available are:

Default or Custom: Select this authentication type when you don't specify
any credentials if you're using DSN configured with a username and
password. Or, if you need to include credentials as connection string
properties.
Windows: Select this authentication type if you want to connect using
Windows authentication. Optionally, include any connection string properties
you need.
Database: Select this authentication type to use a username and password to
access a data source with an ODBC driver. Optionally, include any connection
string properties you need. This is the default selection.

More information: Authentication with a data source

5. Once you're done, select Connect.

6. In the Navigator, select the database information you want, then either select Load
to load the data or Transform Data to continue transforming the data in Power
Query Editor.

Connect to an ODBC data source from Power Query Online
To make the connection, take the following steps:

1. From the Data sources page, select ODBC.


2. In the ODBC page, enter your ODBC connection string. In the following example,
the connection string is dsn=SQL Server Database .

3. If needed, select an on-premises data gateway in Data gateway.

4. Choose the authentication kind to sign in, and then enter your credentials.

5. Select Next.

6. In the Navigator, select the database information you want, and then select
Transform data to continue transforming the data in Power Query Editor.

Connect using advanced options


Power Query provides a set of advanced options that you can add to your query if
needed.
Advanced Description
option

Connection Provides an optional connection string that can be used instead of the Data
string (non- source name (DSN) selection in Power BI Desktop. If Data source name (DSN) is
credential set to (None), you can enter a connection string here instead. For example, the
properties) following connection strings are valid: dsn=<myDSN> or driver=
<myDriver>;port=<myPortNumber>;server=<myServer>;database=
<myDatabase>;. To escape special characters, use { } characters. Keys for
connection strings vary between different ODBC drivers. Consult your ODBC
driver provider for more information about valid connection strings.

SQL statement Provides a SQL statement, depending on the capabilities of the driver. Ask your
vendor for more information, or go to Import data from a database using native
database query.

Supported Enables folding support for Table.FirstN. Select Detect to find supported row
row reduction reduction clauses, or select from one of the drop-down options (TOP, LIMIT and
clauses OFFSET, LIMIT, or ANSI SQL-compatible). This option isn't applicable when using
a native SQL statement. Only available in Power Query Desktop.
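If you author the query in M, the connection string goes straight into Odbc.DataSource
(or Odbc.Query for a native SQL statement). A minimal sketch using the same hypothetical
DSN as above:

Power Query M

let
    // "SQL Server Database" is the example DSN from the steps above; HierarchicalNavigation
    // groups the tables by schema in the navigation table
    Source = Odbc.DataSource("dsn=SQL Server Database", [HierarchicalNavigation = true])
in
    Source

To pass a native SQL statement instead, Odbc.Query("dsn=SQL Server Database", "select ...") can be used, subject to the capabilities of the driver.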

Known issues and limitations

Connection string attributes


If a DSN is specified in the ODBC connection string, the attributes after the DSN
specification aren't included. If you want to use additional attributes, update them in the
DSN itself, as opposed to in the connection string.
OLE DB
Article • 02/09/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Analysis Services

Authentication Types Supported Default or Custom
Windows
Database

Function Reference Documentation OleDb.DataSource, OleDb.Query

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities Supported
Import

Connect to OLE DB from Power Query Desktop


Power Query Desktop includes Excel, Power BI (Datasets), and Analysis Services as
experiences.

To make the connection to OLE DB, follow these steps:

1. From Get Data, select the Other category, select OLE DB, and then select Connect.
More information: Where to get data
2. In the OLE DB window that appears, enter your connection string. Optionally, you
can provide other SQL query information in the Advanced options tab.

 Tip

Avoid using credential or security properties in this connection string, as you'll enter them in the next menu.

3. Select OK.

4. Choose the kind of authentication you'd like to use: Default or Custom, Windows,
or Database.

Default or Custom: Enter custom or security related connection string


properties, things like security strings or passwords that shouldn't be added
to a query. For example 'UID=MyUserId;
RefreshToken=MySecretRefreshToken; Encryption=true'
Windows: Sign in with your current windows credentials, or provide a
different Windows username and password. You can also provide any other
connection string properties, if needed.
Database: Provide your database username and password. You can also
provide any other connection string properties, if needed.

5. In Navigator, review and select data from your database. Then select either Load
to load the table, or Transform Data to open the Power Query Editor where you
can filter and refine the set of data you want to use, and then load that refined set
of data.
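For reference, the same connection written in M uses OleDb.DataSource. The connection
string below is hypothetical, and (as noted in the tip above) credentials are supplied
through the credential prompt rather than the connection string:

Power Query M

let
    // Hypothetical connection string; keep credentials out of it and enter them when prompted
    Source = OleDb.DataSource("provider=SQLOLEDB.1;data source=myServer;initial catalog=AdventureWorks")
in
    Source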
OpenSearch Project (Beta)
Article • 07/18/2023

7 Note

The following connector article is provided by OpenSearch, the owner of this connector and a member of the Microsoft Power Query Connector Certification Program. If you have questions regarding the content of this article or have changes you would like to see made to this article, visit the OpenSearch website and use the support channels there.

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Basic

Prerequisites
Microsoft Power BI Desktop
OpenSearch
OpenSearch SQL ODBC driver

Capabilities supported
Import
DirectQuery (Power BI Datasets)

Connect to OpenSearch Project


1. Open Power BI Desktop.
2. Select Home > Get Data > More > Other, select OpenSearch Project, and then
select Connect.

3. You'll get a warning for using a third-party service. Select Continue.

4. Enter host and port values and select your preferred SSL option. Then select OK.

5. Select an authentication option. Enter credentials if necessary, and then select Connect.

6. Select the required table. Data preview will then be loaded.

7. Select Load.

8. Select the required columns for creating a graph.

Troubleshooting
If you get an error indicating the driver wasn't installed, install the OpenSearch SQL
ODBC Driver .

If you get a connection error:

1. Check if the host and port values are correct.


2. Check if the authentication credentials are correct.
3. Check if the server is running.
Oracle database
Article • 10/10/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Windows (desktop/online)


Database (desktop)
Basic (online)
Azure Active Directory (desktop/online)

Function Reference Documentation Oracle.Database

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
Supported Oracle versions:

Oracle Database Server 12c (12.1.0.2) and later


Oracle Autonomous Database - all versions

Before you can connect to an Oracle database using Power Query, you need to install
the Oracle Client for Microsoft Tools (OCMT).

To connect to an Oracle database with the on-premises data gateway, 64-bit OCMT
must be installed on the computer running the gateway. For more information, go to
Manage your data source - Oracle.
Capabilities Supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Command timeout in minutes
SQL statement
Include relationship columns
Navigate using full hierarchy

Download and install Oracle Client for Microsoft Tools
Oracle Client for Microsoft Tools installs and configures Oracle Data Provider for .NET
(ODP.NET) to support 32-bit and 64-bit Microsoft tool connections with Oracle on-
premises and cloud databases, including Oracle Autonomous Database (ADB). OCMT is
a graphical installer that automates the Oracle Database Client setup process. It
supports connecting with Power BI Desktop, Power BI service, Fabric (Dataflow Gen2),
Excel, SQL Server Analysis Services, SQL Server Data Tools, SQL Server Integration
Services, SQL Server Reporting Services, and BizTalk Server.

OCMT is free software. It can be downloaded from the Oracle Client for Microsoft Tools
page. For 64-bit Power BI Desktop and Power BI service, use 64-bit OCMT. For 32-bit
Power BI Desktop, use 32-bit OCMT.

Even if you already have an Oracle Client or ODP.NET installed on your Power BI client,
it's highly recommended you use the OCMT installer to properly complete all the
configuration steps Power BI requires to work with Oracle database.

OCMT 64-bit and 32-bit downloads


OCMT installation and setup instructions

Connect to an on-premises Oracle database from Power Query Desktop
To make the connection, take the following steps:

1. Select the Oracle database option in the connector selection.

2. Specify the Oracle net service name/TNS alias or Easy Connect (Plus) connection
string to connect to in Server. Easy Connect is the simplest option: set the Server
value to your Oracle Database server's Hostname/ServiceName, where ServiceName is
the global database name. The following screenshot uses a net service name. (A sample
M expression appears after these steps.)

3. If you're connecting from Power BI Desktop, select either the Import or DirectQuery data connectivity mode. The rest of these example steps use the Import data connectivity mode. To learn more about DirectQuery, go to Use DirectQuery in Power BI Desktop.

4. If you're connecting to this Oracle database for the first time, select the
authentication type you want to use, and then enter your credentials. The
authentication types available are:

Windows authentication (Windows)


Username and password (Database)
Azure Active Directory (Microsoft account)

For more information about authentication, go to Authentication with a data source.
5. In Navigator, select the data you require, then either select Load to load the data
or Transform Data to transform the data.

Connect to an on-premises Oracle database from Power Query Online
To make the connection, take the following steps:

1. Install and set up an on-premises data gateway.

2. From Power Query Online, select the Oracle database option in the data sources
selection.

3. In the Oracle database dialog that appears, specify the Oracle net service
name/TNS alias, Easy Connect Plus connection string, or connect descriptor to
connect to in Server.

4. Provide a Connection name, such as "testoracleserver".

5. Select the name of your on-premises data gateway.

7 Note
You must select an on-premises data gateway for this connector, whether the
Oracle database is on your local network or on a web site.

6. If you're connecting to this Oracle database for the first time, select the type of
credentials for the connection in Authentication kind. Choose Basic if you plan to
sign in with an Oracle username and password. Choose Windows when using
Windows operating system authentication and with both the Oracle client and
server running on Windows.

7. Enter your credentials.

8. Select Next to continue.

9. In Navigator, select the data you require, then select Transform data to transform
the data in Power Query Editor.

Connect to an Oracle Autonomous Database

7 Note

Currently, you can connect to an Oracle Autonomous Database from Excel, Power
BI Desktop, Power BI service, Fabric (Dataflow Gen2), Power Apps, SQL Server
Analysis Services, and BizTalk Server using the procedures in this section. These
tools use unmanaged ODP.NET to connect. Other Microsoft tools, including SQL
Server Data Tools, SQL Server Integration Services, and SQL Server Reporting
Services, use managed ODP.NET to connect to Oracle Autonomous Database using
largely similar procedures.

To connect Power BI to an Oracle Autonomous Database, you need the following
accounts and apps:

An Oracle.com account (Oracle.com Account Signup)
An Oracle Cloud account (Oracle Cloud Account Signup)
An Oracle Autonomous Database (Get an Always Free Autonomous Database)
Power BI Desktop (Get Power BI Desktop) or Power BI service account (Licensing the Power BI service for users in your organization)
On-premises data gateway if using Power BI service (Download and install a standard gateway)

Download your client credentials


The first step in setting up a connection to the Oracle Autonomous database is to
download your client credentials.

To download your client credentials:

1. In your Oracle Autonomous database details page, select DB Connection.

2. From the Database Connection page, select Download Wallet.

3. Enter a password you would like to use with this wallet, confirm the password, then
select Download.
Configure Oracle ADB credentials
1. On your Windows machine, go to the folder where you downloaded your Oracle
ADB credentials from Download your client credentials.

2. Unzip the credentials into the directory you specified in OCMT as the Oracle
Configuration File Directory. In this example, the credentials are extracted to
c:\data\wallet\wallet_contosomart.

7 Note
The tnsnames.ora file defines your Oracle Autonomous Database address and
connection information.

3. Open sqlnet.ora in an editor, such as Notepad.

4. Under WALLET_LOCATION, change the path to your wallet folder under the
Directory option. In this example:

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY=c:\data\wallet\Wallet_ContosoMart)))

5. Save and close the sqlnet.ora file.

Open the tnsnames.ora file in the wallet folder. The file contains a list of ADB net
service names that you can connect to. In this example, the names are
contosomart_high, contosomart_low, and contosomart_medium. Your ADB net service
names are different.

Connect Power BI Desktop to Oracle ADB


1. Open Power BI Desktop.

2. Select Get Data.

3. From Get Data, select Database > Oracle database.

4. Enter the net service name of the Oracle Autonomous Database server you want to
connect to. In this example, the Server is contosomart_high. Then select OK.
5. If you're signing in to this server from Power BI Desktop for the first time, you're
asked to enter your credentials. Select Database, then enter the user name and
password for the Oracle database. The credentials you enter here are the user
name and password for the specific Oracle Autonomous Database you want to
connect to. In this example, the database's initial administrator user name and
password are used. Then select Connect.

7 Note

You can use Azure Active Directory authentication to sign in to Oracle
Autonomous Database via the Microsoft account option.

At this point, the Navigator appears and displays the connection data.
You might also come across one of several errors because the configuration hasn't been
properly set up. These errors are discussed in Troubleshooting.

One error that might occur in this initial test takes place in Navigator, where the
database appears to be connected, but contains no data. Instead, an Oracle: ORA-
28759: failure to open file error appears in place of the data.
If this error occurs, be sure that the wallet folder path you supplied in sqlnet.ora is the
full and correct path to the wallet folder.

Configure the gateway


1. Install and set up an on-premises data gateway.

2. In Power BI service, select the gear icon in the upper right-hand side, then select
Manage gateways.

3. In Add Data Source, select Add data sources to use the gateway.

4. In Data Source Name, enter the name you want to use as the data source setting.

5. In Data Source Type, select Oracle.


6. In Server, enter the net service name of the Oracle Autonomous Database server
to connect to.

7. In Authentication Method, select Basic.

8. Enter the user name and password for the Oracle Autonomous Database. In this
example, the default database administrator user name (ADMIN) and password are
used.

9. Select Add.

If everything has been installed and configured correctly, a Connection Successful
message appears. You can now connect to the Oracle Autonomous Database using the
same steps described in Connect to an on-premises Oracle database from Power Query
Online.

Connect using advanced options


Power Query Desktop and Power Query Online provide a set of advanced options that
you can add to your query if needed.

The following table lists all of the advanced options you can set in Power Query Desktop
and Power Query Online.
Command timeout in minutes: If your connection lasts longer than 10 minutes (the default timeout), you can enter another value in minutes to keep the connection open longer. This option is only available in Power Query Desktop.

SQL statement: For information, go to Import data from a database using native database query.

Include relationship columns: If checked, includes columns that might have relationships to other tables. If this box is cleared, these columns don't appear.

Navigate using full hierarchy: If checked, the navigator displays the complete hierarchy of tables in the database you're connecting to. If cleared, the navigator displays only the tables whose columns and rows contain data.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to your Oracle database.
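
These advanced options can also be expressed directly in M. The following is a minimal sketch only: the Easy Connect server value and the SQL text are placeholders, and the option record field names (CommandTimeout, HierarchicalNavigation, Query) are assumptions based on other Power Query database connectors rather than settings documented in this article.

// A minimal sketch (hypothetical server and query); option names are assumptions.
let
    Source = Oracle.Database(
        "myserver.contoso.com/CONTOSODB",            // Easy Connect: Hostname/ServiceName
        [
            CommandTimeout = #duration(0, 0, 20, 0), // keep the connection open for 20 minutes
            HierarchicalNavigation = true,           // corresponds to "Navigate using full hierarchy"
            Query = "select * from hr.employees"     // corresponds to the "SQL statement" option
        ]
    )
in
    Source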

Known issues and limitations


Power BI sessions can still be active on your Oracle database for approximately 30
minutes after a dataset refresh to that Oracle database. Only after approximately 30
minutes do those sessions become inactive/removed on the Oracle database. This
behavior is by design.

Troubleshooting
You might come across any of several errors from Oracle when the naming syntax is
either incorrect or not configured properly:

ORA-12154: TNS: could not resolve the connect identifier specified.
ORA-12514: TNS: listener does not currently know of service requested in connect descriptor.
ORA-12541: TNS: no listener.
ORA-12170: TNS: connect timeout occurred.
ORA-12504: TNS: listener was not given the SERVICE_NAME in CONNECT_DATA.

These errors might occur if the Oracle tnsnames.ora database connect descriptor is
misconfigured, the net service name provided is misspelled, or the Oracle database
listener isn't running or not reachable, such as a firewall blocking the listener or
database port. Be sure you're meeting the minimum installation prerequisites. More
information: Prerequisites

Visit the Oracle Database Error Help Portal to review common causes and resolutions
for the specific Oracle error you encounter. Enter your Oracle error in the portal search
bar.

If you downloaded Power BI Desktop from the Microsoft Store, you might be unable to
connect to Oracle databases because of an Oracle driver issue. If you come across this
issue, the error message returned is: Object reference not set. To address the issue, do
the following:

Download Power BI Desktop from the Download Center instead of Microsoft Store.

If the Object reference not set error message occurs in Power BI when you connect to an
Oracle database using the on-premises data gateway, follow the instructions in Manage
your data source - Oracle.

If you're using Power BI Report Server, consult the guidance in the Oracle Connection
Type article.

Next steps
Optimize Power Query when expanding table columns
Palantir Foundry
Article • 07/14/2023

7 Note

The following connector article is provided by Palantir, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the Palantir website and use the support
channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Organizational Account (Power BI Desktop)


Personal Access Token

Prerequisites
This connector works with any active Palantir Foundry environment. Ensure you've
completed the following setup steps before using the connector:

1. Download and install the Foundry datasets ODBC driver .


2. If using OAuth authentication, ensure your Foundry administrators have enabled
the "Power BI®" third-party application in Control Panel. See the Foundry third-
party apps documentation for more details.

Capabilities supported
Import
DirectQuery (Power BI Datasets)
Connect to Palantir Foundry from Power Query Desktop
To connect to Palantir Foundry from Power Query Desktop, take the following steps:

1. In the Get Data experience, select the Palantir Foundry option in the connector
selection.

2. In Connection Settings, provide the Base URL of your Foundry environment. For
example, https://<subdomain>.palantirfoundry.com/ . Optionally, provide a Dataset
RID and Branch.

3. Select either the Import or DirectQuery data connectivity mode.

4. Select OK.

5. If you're connecting to Foundry for the first time, select either the Foundry OAuth
(recommended) or Foundry Token authentication type. After signing in (Foundry
OAuth) or entering a token (Foundry Token), select Connect.
For more details on these authentication options, go to Foundry's Power BI:
Authenticate with Foundry documentation.

6. In Navigator, select the dataset(s) you want, then select either Load to load the
data or Transform Data to continue transforming the data in the Power Query
editor.

Connect to Palantir Foundry from Power Query Online

7 Note

Before you begin, ensure you have access to an on-premises gateway with an
existing connection to Foundry.

To connect to Palantir Foundry from Power Query Online, take the following steps:

1. Select the Palantir Foundry option in the connector selection.

2. In Connection Settings, provide the Base URL that matches a connection already
configured on your on-premises data gateway. For example,
https://<subdomain>.palantirfoundry.com/. Optionally, provide a Dataset RID and Branch.

Ensure that the Connection dropdown shows the name of your on-premises gateway.

3. Select Next to continue.

4. In Navigator, select the data you require, and then select Transform data.

Troubleshooting
If you encounter issues connecting to Foundry, refer to the following resources in
Palantir Foundry's documentation for troubleshooting steps:

Power BI Connector: FAQs


Troubleshooting issues when configuring ODBC & JDBC Drivers
Parquet
Article • 07/14/2023

Summary
Item Description

Release state General Availability

Products supported Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights

Authentication types supported Anonymous (online)


Account key (online)
Basic (online)
Organizational account (online)
Shared access signature (SAS) (online)
Web API (online)
Windows (online)

Function reference docs —

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities supported
Import
Basic
Advanced

Connect to a Parquet file from Power Query Desktop
To connect to a Parquet file from Power Query Desktop, take the following steps:
1. Select the Parquet option in the connector selection. More information: Where to
get data

2. In Parquet, provide the URL for the location of the Parquet file. Enter a path and
filename if you're connecting to a local file. You can also select Advanced and build
the URL from parts. In the example used in this article, the Parquet file is located in
Azure Blob Storage.

3. Select OK.

4. If you're connecting to this data source for the first time, select the authentication
type, input your credentials, and select the level to apply the authentication
settings to. Then select Connect.

From Power Query Desktop, select one of the following authentication methods:

Anonymous
Account key
Shared access signature (SAS)

For more information about authentication methods, go to Authentication with a data source.

5. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in Power Query
Editor.

Connect to a Parquet file from Power Query Online
To connect to a Parquet file from Power Query Online, take the following steps:

1. Select the Parquet option in the connector selection.

2. In Parquet, provide the URL for the location of the Parquet file, or enter a path and
filename if you're connecting to a local file.
3. If you're connecting to a local file, select the name of your on-premises data
gateway. If the data is online, you don't need to provide an on-premises data
gateway.

4. If you're connecting to this data source for the first time, select the authentication
kind and input your credentials. From Power Query Online, select one of the
following authentication kinds:

Anonymous (online)
Account key (online)
Windows (local file)

5. Select Next to continue to the Power Query editor where you can then begin to
transform your data.

Known issues and limitations


The Power Query Parquet connector only supports reading files from the local
filesystem, Azure Blob Storage, and Azure Data Lake Storage Gen2.

It might be possible to read small files from other sources using the Binary.Buffer
function to buffer the file in memory. However, if the file is too large you're likely to get
the following error:

Error: Parquet.Document cannot be used with streamed binary values.


Using the Binary.Buffer function in this way may also affect performance.
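
As an illustration of this workaround, the following sketch buffers a small Parquet file fetched over HTTP before passing it to Parquet.Document. The URL is a placeholder, and the approach only works if the file fits comfortably in memory.

// A minimal sketch (hypothetical URL): buffer a small Parquet file in memory so that
// Parquet.Document receives a non-streamed binary value.
let
    RawBinary = Web.Contents("https://example.com/data/small-file.parquet"),
    Buffered = Binary.Buffer(RawBinary),    // may be slow or fail for large files
    Result = Parquet.Document(Buffered)
in
    Result
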
PDF
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights

Authentication Types Supported Anonymous (online)


Basic (online)
Organizational account (online)
Windows (online)

Function Reference Documentation Pdf.Tables

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

7 Note

PDF is not supported in Power BI Premium.

Prerequisites
None.

Capabilities Supported
Import
Connect to a PDF file from Power Query Desktop
To make the connection from Power Query Desktop:

1. Select the PDF option in the connector selection.

2. Browse for and select the PDF file you want to load. Then select Open.

If the PDF file is online, use the Web connector to connect to the file.

3. In Navigator, select the file information you want, then either select Load to load
the data or Transform Data to continue transforming the data in Power Query
Editor.
Connect to a PDF file from Power Query Online
To make the connection from Power Query Online:

1. Select the PDF option in the connector selection.

2. In the PDF dialog box that appears, either provide the file path or the URL to the
location of the PDF file. If you're loading a local file, you can also select Upload file
(Preview) to browse to the local file or drag and drop the file.

3. If necessary, select an on-premises data gateway to access the PDF file.

4. If this is the first time you've accessed this PDF file, select the authentication kind
and sign in to your account (if needed).

5. In Navigator, select the file information you want, and then select Transform Data
to continue transforming the data in Power Query Editor.
Limitations and considerations

Accessing large PDF files


You can use the following strategies to improve performance and reduce timeouts when
you access large PDF files. These strategies require that you edit your usage of the
Pdf.Tables function in either the formula bar or advanced editor.

Try selecting pages one at a time or one small range at a time using the StartPage
or EndPage options, iterating over the entire document as needed.
If the PDF document is one single, huge table, the MultiPageTables option can be
collecting very large intermediate values, so disabling it might help.

A full list of available options can be found in Pdf.Tables.
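
For example, a query that reads only the first ten pages and turns off multi-page table stitching might look like the following sketch; the file path is a placeholder.

// A minimal sketch (hypothetical file path): limit Pdf.Tables to a small page range and
// disable MultiPageTables to reduce the size of intermediate values.
let
    Source = Pdf.Tables(
        File.Contents("C:\Samples\LargeReport.pdf"),
        [StartPage = 1, EndPage = 10, MultiPageTables = false]
    )
in
    Source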

Handling multi-line rows


In cases where multi-line rows aren't properly identified, you might be able to clean up
the data using UI operations or custom M code. For example, you could copy
misaligned data to adjacent rows using Table.FillDown, or group and combine adjacent
rows using Table.Group.
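
For instance, if a key value lands on its own row and the remaining cells of that record spill onto the rows beneath it, a sketch like the following can fill the key back down. The table Id and column names are hypothetical and depend on how Pdf.Tables splits your document.

// A minimal sketch (hypothetical table Id and column names): fill key values down into
// the misaligned rows below them.
let
    Source = Pdf.Tables(File.Contents("C:\Samples\Invoices.pdf")),
    Table001 = Source{[Id = "Table001"]}[Data],
    FilledDown = Table.FillDown(Table001, {"Invoice Number", "Customer"})
in
    FilledDown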

Power BI dataflows in a Premium capacity


When working with the PDF connector on dataflows in a Premium capacity, the PDF
connector doesn't properly make the connection. To enable the PDF connector to work
on dataflows in a Premium capacity, configure that dataflow to use a gateway, and
confirm the connection to that dataflow goes through the gateway.
Planview OKR (Beta)
Article • 10/20/2023

Summary
Item Description

Release State Preview

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Planview Admin account

Prerequisites
Before you can sign in to Planview OKR, you must have a Planview Admin account.

Capabilities Supported
Import

Connect to Planview OKR data


To connect to Planview OKR data:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Online Services
from the categories on the left, select Planview OKR (Beta), and then select
Connect.
2. If you're getting data through the Planview OKR connector for the first time, a
preview connector notice is displayed. Select Don't warn me again with this
connector if you don't want this message to be displayed again, and then select
Continue.

3. Enter the Planview OKR OData URL location that you want to access, and then
select OK.
4. To sign in to your Planview Admin account, select Sign in.

5. In the Planview OKR window that appears, provide your credentials to sign in to
your Planview OKR account.
6. Select Sign in.

7. Once you've successfully signed in, select Connect.

8. In Navigator, select the information you want, then either select Load to load the
data or Transform Data to continue transforming the data in the Power Query
editor.

Load data from Planview OKR in Power Query Online
To load data from Planview OKR in Power Query Online:

1. Select the Planview OKR (Beta) option in the connector selection.

2. In the dialog that appears, enter the Planview OKR URL location in the text box, and
then fill in the rest of the details.

3. If you're connecting using Planview OKR for the first time, you need to sign in to
your Planview Admin account.
4. After you sign in, select Next.

5. In Navigator, select the data you require, then select Transform data to transform
the data in the Power Query editor.
PostgreSQL
Article • 08/09/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Database (Username/Password)

Function Reference Documentation PostgreSQL.Database

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
As of the December 2019 release, NpgSQL 4.0.10 shipped with Power BI Desktop and no
additional installation is required. GAC installation overrides the version provided with
Power BI Desktop, which will be the default. Refreshing is supported both through the
cloud in the Power BI service and also on premises through the on-premises data
gateway. To refresh data from the Power BI service without an on-premises data gateway,
PostgreSQL must be hosted in a manner that allows direct connection from the Power BI
services on Azure. This is natively supported for PostgreSQL hosted in Microsoft Azure.
For other hosting environments, consult your hosting provider about configuring your
PostgreSQL for direct access from the internet. If PostgreSQL is configured so that it
can't be directly accessed from the internet (recommended for security), you'll need to
use an on-premises data gateway for refreshes. In the Power BI service, NpgSQL 4.0.10
will be used, while on-premises refresh will use the local installation of NpgSQL, if
available, and otherwise use NpgSQL 4.0.10.

For Power BI Desktop versions released before December 2019, you must install the
NpgSQL provider on your local machine. To install the NpgSQL provider, go to the
releases page , search for v4.0.10, and download and run the .msi file. The provider
architecture (32-bit or 64-bit) needs to match the architecture of the product where you
intend to use the connector. When installing, make sure that you select NpgSQL GAC
Installation to ensure NpgSQL itself is added to your machine.

We recommend NpgSQL 4.0.10. NpgSQL 4.1 and up won't work due to .NET version
incompatibilities.

For Power Apps, you must install the NpgSQL provider on your local machine. To install
the NpgSQL provider, go to the releases page and download the relevant version.
Download and run the installer (the NpgSQL-[version number].msi) file. Ensure you
select the NpgSQL GAC Installation and on completion restart your machine for this
installation to take effect.

Capabilities Supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Command timeout in minutes
Native SQL statement
Relationship columns
Navigate using full hierarchy

Connect to a PostgreSQL database from Power Query Desktop
Once the matching Npgsql provider is installed, you can connect to a PostgreSQL
database. To make the connection, take the following steps:

1. Select the PostgreSQL database option in the connector selection.

2. In the PostgreSQL database dialog that appears, provide the name of the server
and database.

3. Select either the Import or DirectQuery data connectivity mode.

4. If this is the first time you're connecting to this database, input your PostgreSQL
credentials in the User name and Password boxes of the Database authentication
type. Select the level to apply the authentication settings to. Then select Connect.
For more information about using authentication methods, go to Authentication
with a data source.

7 Note

If the connection is not encrypted, you'll be prompted with the following message.

Select OK to connect to the database by using an unencrypted connection, or
follow the instructions in Enable encrypted connections to the Database Engine to
set up encrypted connections to the PostgreSQL database.

5. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in Power Query
Editor.
Connect to a PostgreSQL database from Power Query Online
To make the connection, take the following steps:

1. Select the PostgreSQL database option in the connector selection.

2. In the PostgreSQL database dialog that appears, provide the name of the server
and database.
3. Select the name of the on-premises data gateway you want to use.

4. Select the Basic authentication kind and input your PostgreSQL credentials in the
Username and Password boxes.

5. If your connection isn't encrypted, clear Use Encrypted Connection.

6. Select Next to connect to the database.

7. In Navigator, select the data you require, then select Transform data to transform
the data in Power Query Editor.

Connect using advanced options


Power Query Desktop provides a set of advanced options that you can add to your
query if needed.
The following table lists all of the advanced options you can set in Power Query
Desktop.

Command timeout in minutes: If your connection lasts longer than 10 minutes (the default timeout), you can enter another value in minutes to keep the connection open longer. This option is only available in Power Query Desktop.

SQL statement: For information, go to Import data from a database using native database query.

Include relationship columns: If checked, includes columns that might have relationships to other tables. If this box is cleared, you won't see those columns.

Navigate using full hierarchy: If checked, the navigator displays the complete hierarchy of tables in the database you're connecting to. If cleared, the navigator displays only the tables whose columns and rows contain data.

Once you've selected the advanced options you require, select OK in Power Query
Desktop to connect to your PostgreSQL database.

Native query folding


By default, native query folding is enabled. Operations that are capable of folding will be
applied on top of your native query according to normal Import or Direct Query logic.
Native Query folding isn't applicable with optional parameters present in
Value.NativeQuery().
In the rare case that folding doesn't work with native query folding enabled, you can
disable it. To disable native query folding, set the EnableFolding flag to false for
Value.NativeQuery() in the advanced editor.

Sample: Value.NativeQuery(target as any, query, null, [EnableFolding=false])
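
In context, a complete query might look like the following sketch; the server, database, and SQL text are placeholders.

// A minimal sketch (hypothetical server, database, and query): run a native PostgreSQL
// query with folding of subsequent steps disabled.
let
    Source = PostgreSQL.Database("myserver.contoso.com", "salesdb"),
    Orders = Value.NativeQuery(
        Source,
        "select order_id, order_date, amount from public.orders",
        null,
        [EnableFolding = false]
    )
in
    Orders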

Troubleshooting
Your native query may throw the following error:

We cannot fold on top of this native query. Please modify the native query or
remove the 'EnableFolding' option.

A basic troubleshooting step is to check if the query in Value.NativeQuery() throws the
same error with a limit 1 clause around it:

select * from (query) _ limit 1
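
For example, if the failing native query is select * from public.orders, a test query like the following sketch (server, database, and query are placeholders) shows whether the wrapped form produces the same error.

// A minimal sketch (hypothetical server, database, and query): wrap the original native
// query in a subselect with limit 1 to isolate folding problems.
let
    Source = PostgreSQL.Database("myserver.contoso.com", "salesdb"),
    Test = Value.NativeQuery(
        Source,
        "select * from (select * from public.orders) _ limit 1",
        null,
        [EnableFolding = true]
    )
in
    Test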


Profisee
Article • 07/18/2023

7 Note

The following connector article is provided by Profisee, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the Profisee support site and use the support
channels there.

Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Organizational account

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
Before using the Profisee connector, you must have:

A Profisee account (username and password)


A running Profisee instance (version 2022.R1 or higher)

Capabilities supported
Import

Connect to Profisee in Power BI Desktop


To use the Profisee connector:

1. Open Power BI Desktop.

2. Select the Get Data option in the Home ribbon to open the Get Data dialog.

3. Enter Profisee into the search box or select Online Services and select Profisee
from the list, then select Connect.

4. Enter the URL for your Profisee instance, then select OK.

5. If it's your first time connecting, you'll be prompted to sign in using an
authentication service.

6. Select Connect.

7. Once connected, the Navigator is displayed. This display lists all entities in your
Profisee instance. You can scroll through the navigator to locate specific entities, or
search for entities by name using the search bar.
8. Select the entities that you want to import into Power BI. You can preview the data
and choose to either first Transform Data if you want to edit the attribute columns,
apply filters, and so on, or Load the data directly into Power BI Desktop.

9. Once loaded, the entities appear in the model view, and you can view the
attributes ready for use in Power BI in the Fields dialog.
7 Note

Relationships in Profisee aren't created in the model in Power BI. After the entities
are loaded, you can view the model and create or modify relationships as desired.
QuickBooks Online (Beta)
Article • 01/24/2023

The Power BI QuickBooks Online connector enables connecting to your QuickBooks
Online account and viewing, analyzing, and reporting on your company QuickBooks
data in Power BI.

Summary
Item Description

Release State Beta

Products Power BI (Datasets)

Authentication Types Supported QuickBooks Online account

2 Warning

QuickBooks Online has deprecated support for Internet Explorer 11, which Power
Query Desktop uses for authentication to online services. To be able to log in to
QuickBooks Online from Power BI Desktop, go to Enabling Microsoft Edge
(Chromium) for OAuth Authentication in Power BI Desktop.

Prerequisites
To use the QuickBooks Online connector, you must have a QuickBooks Online account
username and password.

The QuickBooks Online connector uses the QuickBooks ODBC driver. The QuickBooks
ODBC driver is shipped with Power BI Desktop and no additional installation is required.

Capabilities Supported
Import

Connect to QuickBooks Online


To connect to QuickBooks Online:

1. In the Get Data dialog box, enter QuickBooks in the Search box, select
QuickBooks Online (Beta) from the product-specific data connector list, and then
select Connect.

2. Select Continue in the Preview connector message.

3. Select Sign in to sign in to your QuickBooks Online account.


4. In the following dialog, enter your QuickBooks credentials. You might be required to
provide a two-factor authentication (2FA) code as well.
5. In the following dialog, select a company and then select Next.

6. Once you've successfully signed in, select Connect.

7. In the Navigator dialog box, select the QuickBooks tables you want to load. You
can then either load or transform the data.
Known issues
Beginning on August 1, 2020, Intuit will no longer support Microsoft Internet Explorer 11
(IE 11) for QuickBooks Online. When you use OAuth2 for authorizing QuickBooks Online,
after August 1, 2020, only the following browsers will be supported:

Microsoft Edge
Mozilla Firefox
Google Chrome
Safari 11 or newer (Mac only)

For more information, see Alert: Support for IE11 deprecating on July 31, 2020 for
Authorization screens .

For information about current Microsoft Edge support in Power BI Desktop, go to


Enabling Microsoft Edge (Chromium) for OAuth Authentication in Power BI Desktop.
Salesforce Objects
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Salesforce account

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

2 Warning

By default, Salesforce does not support Internet Explorer 11, which is used as part
of the authentication experience to online services in Power Query Desktop. Please
opt-in for extended support for accessing Lightning Experience Using Microsoft
Internet Explorer 11 . You may also want to review Salesforce documentation on
configuring Internet Explorer . At this time, users will be impaired from
authenticating, but stored credentials should continue to work until their existing
authentication tokens expire. To resolve this, go to Enabling Microsoft Edge
(Chromium) for OAuth Authentication in Power BI Desktop.

Prerequisites
To use the Salesforce Objects connector, you must have a Salesforce account username
and password.
Also, Salesforce API access should be enabled. To verify access settings, go to your
personal Salesforce page, open your profile settings, and search for and make sure the
API Enabled checkbox is selected. Note that Salesforce trial accounts don't have API
access.

Capabilities Supported
Production
Custom
Custom domains
CNAME record redirects
Relationship columns

Connect to Salesforce Objects from Power Query Desktop
To connect to Salesforce Objects data:

1. Select Salesforce Objects from the product-specific data connector list, and then
select Connect.

2. In Salesforce Objects, choose the Production URL if you use the Salesforce
production URL ( https://www.salesforce.com ) to sign in.

You can also select Custom and enter a custom URL to sign in. This custom URL
might be a custom domain you've created within Salesforce, such as
https://contoso.salesforce.com . You can also use the custom URL selection if

you're using your own CNAME record that redirects to Salesforce. Note that
lightning URLs aren't supported.
Also, you can select Include relationship columns. This selection alters the query
by including columns that might have foreign-key relationships to other tables. If
this box is unchecked, you won’t see those columns.

Once you've selected the URL, select OK to continue.

3. Select Sign in to sign in to your Salesforce account.

Once you've successfully signed in, select Connect.

4. If this is the first time you've signed in using a specific app, you'll be asked to verify
your authenticity by entering a code sent to your email address. You'll then be
asked whether you want the app you're using to access the data. For example,
you'll be asked if you want to allow Power BI Desktop to access your Salesforce
data. Select Allow.

5. In the Navigator dialog box, select the Salesforce Objects you want to load. You
can then either select Load to load the data or select Transform Data to transform
the data.

Connect to Salesforce Objects from Power Query Online
To connect to Salesforce Objects data:

1. Select Salesforce objects from the product-specific data connector list, and then
select Connect.

2. In Salesforce objects, choose the URL you want to use to connect. Select the
Production URL if you use the Salesforce production URL
( https://www.salesforce.com ) to sign in.
7 Note

Currently, you may need to select the Custom URL, enter
https://www.salesforce.com in the text box, and then select Production to
connect to your data.

You can also select Custom and enter a custom URL to sign in. This custom URL
might be a custom domain you've created within Salesforce, such as
https://contoso.salesforce.com . You can also use the custom URL selection if

you're using your own CNAME record that redirects to Salesforce.

Also, you can select Include relationship columns. This selection alters the query by
including columns that might have foreign-key relationships to other tables. If this
box is unchecked, you won’t see those columns.

3. If this is the first time you've made this connection, select an on-premises data
gateway, if needed.

4. Select Sign in to sign in to your Salesforce account. Once you've successfully
signed in, select Next.

5. In the Navigator dialog box, select the Salesforce Objects you want to load. Then
select Transform Data to transform the data.
Specifying a Salesforce API version
We require you to specify a supported Salesforce API version to use the Salesforce
connector. You can do so by modifying the query using the Power Query advanced
editor. For example, Salesforce.Data("https://login.salesforce.com/",
[ApiVersion=48]) .

If you specify a version that isn't supported by Salesforce, you'll encounter an error
message indicating that you have specified an unsupported ApiVersion.

For more information on Salesforce API versions and support, visit the Salesforce
website .
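
Put together in a full query, this might look like the following sketch. The object name Account and the navigation step are illustrative assumptions; only the ApiVersion pattern is taken from this article.

// A minimal sketch (hypothetical object name): pin the API version and navigate to a
// single Salesforce object from the returned navigation table.
let
    Source = Salesforce.Data("https://login.salesforce.com/", [ApiVersion = 54.0]),
    Account = Source{[Name = "Account"]}[Data]
in
    Account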

Known issues and limitations


If there are missing tables in the results from the Salesforce Objects connector, one
cause might be that the ApiVersion must be specified or updated to a version
that's supported by Salesforce. To specify or update the ApiVersion, use the
Advanced Editor and explicitly add the ApiVersion parameter. For example:

Source = Salesforce.Data("https://login.salesforce.com/", [ApiVersion = 54.0])

ApiVersions are periodically deprecated by Salesforce, so ensure that you're
specifying a Salesforce-supported ApiVersion.

There's a limit on the number of fields a query to Salesforce can contain. The limit
varies depending on the type of the columns, the number of computed columns,
and so on. When you receive the Query is either selecting too many fields or
the filter conditions are too complicated error, it means that your query

exceeds the limit. To avoid this error, use the Select Query advanced option and
specify fields that you really need.

Salesforce session settings can block this integration. Ensure that the setting Lock
sessions to the IP address from which they originated is disabled.

Salesforce API access should be enabled. To verify access settings, go to profile
settings for the current user and search for the "API Enabled" checkbox.

Salesforce trial accounts don't have API access.

Custom fields of type "Picklist (Multi-Select)" are not supported by "Create record"
and "Update record" operations.
Lightning URLs aren't supported.

Salesforce limits the number of concurrent queries that can be executed by a
single account. The 'INVALID_QUERY_LOCATOR' error is a Salesforce limit, and
can be encountered when there are multiple concurrent queries against Salesforce
using the same account. This includes all client applications (not just the
Power Query connector) using the Salesforce API. To reduce the likelihood of this
error, ensure that the number of Salesforce queries is kept to a minimum, use
available options (such as dataflows) to stage the data from Salesforce, and
confirm that other uses of the Salesforce API with the same account aren't
scheduled concurrently.

For more information about Salesforce internal API limits, go to Salesforce Developer
Limits and Allocations Quick Reference .
Salesforce Reports
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Salesforce account

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

2 Warning

By default, Salesforce does not support Internet Explorer 11, which is used as part
of the authentication experience to online services in Power Query Desktop. Please
opt-in for extended support for accessing Lightning Experience Using Microsoft
Internet Explorer 11 . You may also want to review Salesforce documentation on
configuring Internet Explorer . At this time, users will be impaired from
authenticating, but stored credentials should continue to work until their existing
authentication tokens expire. To resolve this, go to Enabling Microsoft Edge
(Chromium) for OAuth Authentication in Power BI Desktop.

Prerequisites
To use the Salesforce Reports connector, you must have a Salesforce account username
and password.
Also, Salesforce API access should be enabled. To verify access settings, go to your
personal Salesforce page, open your profile settings, and search for and make sure the
API Enabled checkbox is selected. Note that Salesforce trial accounts don't have API
access.

Capabilities Supported
Production
Custom
Custom domains
CNAME record redirects

Connect to Salesforce Reports from Power Query Desktop
To connect to Salesforce Reports data:

1. Select Salesforce Reports from the product-specific data connector list, and then
select Connect.

2. In Salesforce Reports, choose the Production URL if you use the Salesforce
production URL ( https://www.salesforce.com ) to sign in.

You can also select Custom and enter a custom URL to sign in. This custom URL
might be a custom domain you've created within Salesforce, such as
https://contoso.salesforce.com . You can also use the custom URL selection if

you're using your own CNAME record that redirects to Salesforce.

Once you've selected the URL, select OK to continue.


3. Select Sign in to sign in to your Salesforce account.

Once you've successfully signed in, select Connect.

4. If this is the first time you've signed in using a specific app, you'll be asked to verify
your authenticity by entering a code sent to your email address. You'll then be
asked whether you want the app you're using to access the data. For example,
you'll be asked if you want to allow Power BI Desktop to access your Salesforce
data. Select Allow.

5. In the Navigator dialog box, select the Salesforce Reports you want to load. You
can then either select Load to load the data or select Transform Data to transform
the data.

Connect to Salesforce Reports from Power Query Online
To connect to Salesforce Reports data:

1. Select Salesforce reports from the product-specific data connector list, and then
select Connect.

2. In Salesforce reports, choose the URL you want to use to connect. Select the
Production URL if you use the Salesforce production URL
( https://www.salesforce.com ) to sign in.
7 Note

Currently, you may need to select the Custom URL, enter
https://www.salesforce.com in the text box, and then select Production to
connect to your data.

You can also select Custom and enter a custom URL to sign in. This custom URL
might be a custom domain you've created within Salesforce, such as
https://contoso.salesforce.com . You can also use the custom URL selection if

you're using your own CNAME record that redirects to Salesforce.

Also, you can select Include relationship columns. This selection alters the query by
including columns that might have foreign-key relationships to other tables. If this
box is unchecked, you won’t see those columns.

3. If this is the first time you've made this connection, select an on-premises data
gateway, if needed.

4. Select Sign in to sign in to your Salesforce account. Once you've successfully
signed in, select Next.

5. In the Navigator dialog box, select the Salesforce Reports you want to load. Then
select Transform Data to transform the data.

Specifying a Salesforce API version


We require you to specify a supported Salesforce API version to use the Salesforce
connector. You can do so by modifying the query using the Power Query Advanced
Editor. For example, Salesforce.Data("https://login.salesforce.com/",
[ApiVersion=48]) .

If you specify a version that isn't supported by Salesforce, you'll encounter an error
message indicating that you have specified an unsupported ApiVersion.

For more information on Salesforce API versions and support, visit the Salesforce
website .

Known issues and limitations


There's a limit on the number of fields a query to Salesforce can contain. The limit
varies depending on the type of the columns, the number of computed columns,
and so on. When you receive a Query is either selecting too many fields or
the filter conditions are too complicated error, it means that your query

exceeds the limit. To avoid this error, use the Select Query advanced option and
specify fields that you really need.

Salesforce session settings can block this integration. Ensure that the setting Lock
sessions to the IP address from which they originated is disabled.

The number of rows you can access in Salesforce Reports is limited by Salesforce
to 2000 rows. As a workaround for this issue, you can use the Salesforce Objects
connector in Power BI Desktop to retrieve all the rows from individual tables and
recreate reports you’d like. The Object connector doesn’t have the 2000-row limit.

Salesforce API access should be enabled. To verify access settings, go to profile
settings for the current user and search for the "API Enabled" checkbox.

Salesforce trial accounts don't have API access.

Lightning URLs aren't supported.

For more information about Salesforce internal API limits, go to Salesforce Developer
Limits and Allocations Quick Reference .
SAP Business Warehouse Application
Server
Article • 07/14/2023

7 Note

The SAP Business Warehouse (BW) Application Server connector is now certified for
SAP BW/4HANA as of June 2020.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Analysis Services

Authentication Types Supported Windows (desktop)


Database (desktop)
Basic (online)

Function Reference Documentation SapBusinessWarehouse.Cubes


Sapbusinesswarehouseexecutionmode.DataStream
SapBusinessWarehouseExecutionMode.BasXml
SapBusinessWarehouseExecutionMode.BasXmlGzip

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites

) Important
Version 1.0 of the SAP BW Application Server connector has been deprecated. New
connections will use Implementation 2.0 of the SAP BW Application Server
connector. All support for version 1.0 will be removed from the connector in the
near future.

You'll need an SAP account to sign in to the website and download the drivers. If you're
unsure, contact the SAP administrator in your organization. The drivers need to be
installed on the gateway machine.

BW 7.3, BW 7.5, and BW/4HANA 2.0 are supported.

To use the SAP BW Application Server connector in Power BI Desktop or Power Query
Online, you must install the SAP .NET Connector 3.0. Access to the download requires a
valid S-user. Contact your SAP Basis team to get the SAP .NET Connector 3.0. You can
download the SAP .NET Connector 3.0 from SAP. The connector comes in 32-bit and
64-bit versions. Choose the version that matches your Power BI Desktop installation. For
Power Query Online, choose the 64-bit version. Currently, the website lists two versions
for .NET 4.0 framework:

SAP Connector for Microsoft .NET 3.0.23.0 for Windows 32 bit (x86) as a zip file
(6,928 KB), May 28, 2020
SAP Connector for Microsoft .NET 3.0.23.0 for Windows 64 bit (x64) as a zip file
(7,225 KB), May 28, 2020

When you install, in Optional setup steps, make sure you select Install assemblies to
GAC.
7 Note

Be sure to use SAP .NET Connector 3.0. The SAP BW Application Server connector
doesn't currently support SAP .NET Connector 3.1.

Capabilities Supported
Import
Direct Query (Power BI Datasets)
Advanced
Language code
Execution mode
Batch size
MDX statement
Enable characteristic structures

Connect to an SAP BW Application Server from Power Query Desktop
To connect to an SAP BW Application Server:
1. From the Home tab of Power BI Desktop, select Get Data > SAP Business
Warehouse Application Server.

2. Enter the server name, system number, and client ID of the SAP BW Application
Server you want to connect to. This example uses SAPBWTestServer as the server
name, a system number of 00 , and a client ID of 837 .

The rest of this example describes how to import your data into Power Query
Desktop, which is the default setting for Data Connectivity mode. If you want to
use DirectQuery to load your data, go to Connect to SAP Business Warehouse by
using DirectQuery in Power BI.

If you want to use any of the advanced options for this connector to fine-tune your
query, go to Use advanced options.

When you've finished filling in the relevant information, select OK.

3. When accessing the database for the first time, the SAP BW Application Server
requires database user credentials. Power Query Desktop offers two authentication
modes for SAP BW connections—user name/password authentication (Database),
and Windows authentication (single sign-on). SAML authentication isn't currently
supported. Select either Windows or Database. If you select Database
authentication, enter your user name and password. If you select Windows
authentication, go to Windows Authentication and single sign-on to learn more
about the requirements for Windows authentication.
Then select Connect.

For more information about authentication, go to Authentication with a data source.

4. From the Navigator dialog box, select the items you want to use. When you select
one or more items from the server, the Navigator dialog box creates a preview of
the output table. For more information about navigating the SAP BW Application
Server query objects in Power Query, go to Navigate the query objects.

5. From the Navigator dialog box, you can either transform the data in the Power
Query Editor by selecting Transform Data, or load the data by selecting Load.
Connect to an SAP BW Application Server from Power Query Online
To connect to an SAP BW Application Server from Power Query Online:

1. From the Data sources page, select SAP BW Application Server.

2. Enter the server name, system number, and client ID of the SAP BW Application
Server you want to connect to. This example uses SAPBWTestServer as the server
name, a system number of 00 , and a client ID of 837 .

3. Select the on-premises data gateway you want to use to connect to the data.

4. Set Authentication Kind to Basic. Enter your user name and password.

5. You can also select from a set of advanced options to fine-tune your query.

6. Select Next to connect.

7. From the Navigator dialog box, select the items you want to use. When you select
one or more items from the server, the Navigator dialog box creates a preview of
the output table. For more information about navigating the SAP BW Application
Server query objects in Power Query, go to Navigate the query objects.
8. From the Navigator dialog box, you can transform the data in the Power Query
Editor by selecting Transform Data.

Connect using router strings


SAP router is an SAP program that acts as an intermediate station (proxy) in a network
connection between SAP systems, or between SAP systems and external networks. SAP
router controls the access to your network, and, as such, is a useful enhancement to an
existing firewall system (port filter). Figuratively, the firewall forms an impenetrable
"wall" around your network. However, since some connections need to penetrate this
wall, a "gate" has to be made in the firewall. SAP router assumes control of this gate. In
short, SAP router provides you with the means of controlling access to your SAP system.

Router strings in Power Query


To establish a connection to an SAP BW system through an SAP router, you can use
router strings in place of the server name in the Power Query connection settings.

The syntax used for the router string is:

<SAP router 1><SAP router 2>.....<SAP router n><destination : application server>

Where:

<SAP router> = /H/<sap router host name>/S/<sap router port>

<destination> = /H/<application server host name>/S/<application server port or service name>

7 Note
/S/<port> can be omitted if the port is the default port (3299).

The following table contains examples of some router strings.

Application Server (FQDN for application server): /H/saprouter-maq/S/3299/H/bwmpas.mycompany.com/S/3214
Application Server (SAP Router port provided): /H/saprouter-maq/S/3299/H/bwmpas/S/3214
Application Server (SAP Router port omitted): /H/10.11.23.33/H/bwmpas/S/3214
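
To use one of these router strings from the formula bar, pass it in place of the server name. The following is a sketch only: the parameter order (server, system number, client ID, options) and the LanguageCode option name are assumptions, while the system number and client ID values match the example used earlier in this article.

// A minimal sketch (router string replaces the server name; parameter order and the
// LanguageCode option name are assumptions).
SapBusinessWarehouse.Cubes(
    "/H/saprouter-maq/S/3299/H/bwmpas.mycompany.com/S/3214",
    "00",    // system number
    "837",   // client ID
    [LanguageCode = "EN"]
)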

Considerations
Router strings can include passwords, prefixed by either /P/ or /W/ . Passwords
aren't supported in Power Query router strings as this could be unsafe. Using a
password will result in an error.

Router strings also allow the use of symbolic SAP system names, prefixed with /R/ .
This type of string isn't supported in Power Query.

In Power Query, you can use the "router string" syntax to specify a custom port, so
router strings with a single station are allowed. Router strings can then be
identified as starting with either /H/ or /M/ . Any other input is assumed to be a
server name/IP address.

Next steps
Navigate the query objects
SAP Business Warehouse fundamentals
Use advanced options
SAP Business Warehouse connector troubleshooting
SAP Business Warehouse Message
Server
Article • 07/14/2023

7 Note

The SAP Business Warehouse (BW) Message Server connector is now certified for
SAP BW/4HANA as of June 2020.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)

Authentication Types Supported Windows (desktop)


Database (desktop)
Basic (online)

Function Reference Documentation SapBusinessWarehouse.Cubes


Sapbusinesswarehouseexecutionmode.DataStream
SapBusinessWarehouseExecutionMode.BasXml
SapBusinessWarehouseExecutionMode.BasXmlGzip

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites

) Important
Version 1.0 of the SAP BW Message Server connector has been deprecated. New
connections will use Implementation 2.0 of the SAP BW Message Server connector.
All support for version 1.0 will be removed from the connector in the near future.

You'll need an SAP account to sign in to the website and download the drivers. If you're
unsure, contact the SAP administrator in your organization.

To use the SAP BW Message Server connector in Power BI Desktop or Power Query
Online, you must install the SAP .NET Connector 3.0. Access to the download requires a
valid S-user. Contact your SAP Basis team to get the SAP .NET Connector 3.0. You can
download the SAP .NET Connector 3.0 from SAP. The connector comes in 32-bit and
64-bit versions. Choose the version that matches your Power BI Desktop installation. For
Power Query Online, choose the 64-bit version. Currently, the website lists two versions
for .NET 4.0 framework:

SAP Connector for Microsoft .NET 3.0.23.0 for Windows 32 bit (x86) as a zip file
(6,928 KB), May 28, 2020
SAP Connector for Microsoft .NET 3.0.23.0 for Windows 64 bit (x64) as a zip file
(7,225 KB), May 28, 2020

When you install, in Optional setup steps, make sure you select Install assemblies to
GAC.

7 Note
Be sure to use SAP .NET Connector 3.0. The SAP BW Application Server connector
doesn't currently support SAP .NET Connector 3.1.

Capabilities Supported
Import
Direct Query (Power BI Datasets)
Advanced
Language code
Execution mode
Batch size
MDX statement
Enable characteristic structures

Connect to an SAP BW Message Server from Power Query Desktop
To connect to an SAP BW Message Server:

1. From the Home tab of Power BI Desktop, select Get Data > SAP Business
Warehouse Message Server.

2. Enter the server, system number, client ID, and logon group of the SAP BW
Message Server you want to connect to. This example uses SAPBWTestServer as the
server name, a system number of 100 , a client ID of 837 , and a logon group of
PURCHASING .

7 Note

You can also use router strings to connect to your data. More information:
Connect using router strings

The rest of this example describes how to import your data into Power Query
Desktop, which is the default setting for Data Connectivity mode. If you want to
use DirectQuery to load your data, see Connect to SAP Business Warehouse by
using DirectQuery in Power BI.

If you want to use any of the advanced options for this connector to fine-tune your
query, go to Use advanced options.
When you've finished filling in the relevant information, select OK.

3. When accessing the database for the first time, the SAP BW Message Server
requires database user credentials. Power Query Desktop offers two authentication
modes for SAP BW connections—user name/password authentication (Database),
and Windows authentication (single sign-on). SAML authentication isn't currently
supported. Select either Windows or Database. If you select Database
authentication, enter your user name and password. If you select Windows
authentication, go to Windows Authentication and single sign-on to learn more
about the requirements for Windows authentication.

Then select Connect.

For more information about authentication, go to Authentication with a data source.

4. From the Navigator dialog box, select the items you want to use. When you select
one or more items from the server, the Navigator dialog box creates a preview of
the output table. For more information about navigating the SAP BW Message
Server query objects in Power Query, go to Navigate the query objects.

5. From the Navigator dialog box, you can either transform the data in the Power
Query Editor by selecting Transform Data, or load the data by selecting Load.

Connect to an SAP BW Message Server from Power Query Online
To connect to an SAP BW Message Server from Power Query Online:

1. From the Data sources page, select SAP BW Message Server.

2. Enter the server, system number, client ID, and logon group of the SAP BW Message
Server you want to connect to. This example uses SAPBWTestServer as the server
name, a system number of 100 , a client ID of 837 , and a logon group of
PURCHASING .

3. Select the on-premises data gateway you want to use to connect to the data.

4. Set Authentication Kind to Basic. Enter your user name and password.
5. You can also select from a set of advanced options to fine-tune your query.

6. Select Next to connect.

7. From the Navigator dialog box, select the items you want to use. When you select
one or more items from the server, the Navigator dialog box creates a preview of
the output table. For more information about navigating the SAP BW Message
Server query objects in Power Query, go to Navigate the query objects.

8. From the Navigator dialog box, you can transform the data in the Power Query
Editor by selecting Transform Data.
Connect using router strings
SAP router is an SAP program that acts as an intermediate station (proxy) in a network
connection between SAP systems, or between SAP systems and external networks. SAP
router controls the access to your network, and, as such, is a useful enhancement to an
existing firewall system (port filter). Figuratively, the firewall forms an impenetrable
"wall" around your network. However, since some connections need to penetrate this
wall, a "gate" has to be made in the firewall. SAP router assumes control of this gate. In
short, SAP router provides you with the means of controlling access to your SAP system.

Router strings in Power Query


To establish a connection to an SAP BW system through an SAP router, you can use
router strings in place of the server name in the Power Query connection settings.

The syntax used for the router string is:

<SAP router 1><SAP router 2>.....<SAP router n><destination : message server>

Where:

<SAP router> = /H/<sap router host name>/S/<sap router port>

<destination> = /M/<message server host name>/S/<message server port or service name>/G/<logon group>

7 Note

/S/<port> can be omitted if the port is the default port (3299).


The following examples show some router strings.

Message Server destination, SAP Router with message server port:
/H/saprouter-maq/S/3299/M/bwmpms.mycompany.com/S/3214

Message Server destination, SAP Router with logon group:
/H/saprouter-maq/S/3299/M/bwmpms.mycompany.com/S/3214/G/space
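
For illustration only, the following is a minimal M sketch of supplying a router string in place of the server name. The router string, system number, and client ID are hypothetical, and the call uses the application server form shown elsewhere in these articles; the exact function and remaining parameters depend on which SAP BW connector you're using.

let
    // Hypothetical router string: SAP router "saprouter-maq" in front of a
    // message server destination with logon group "space".
    RouterString = "/H/saprouter-maq/S/3299/M/bwmpms.mycompany.com/S/3214/G/space",
    // The router string goes where the server name would normally go; the
    // other arguments follow the application server example used elsewhere
    // in these articles and may differ for your connector.
    Source = SapBusinessWarehouse.Cubes(RouterString, "00", "837",
        [Implementation = "2.0"])
in
    Source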

Considerations
Router strings can include passwords, prefixed by either /P/ or /W/ . Passwords
aren't supported in Power Query router strings as this could be unsafe. Using a
password will result in an error.

Router strings also allow the use of symbolic SAP system names, prefixed with /R/ .
This type of string isn't supported in Power Query.

In Power Query, you can use the "router string" syntax to specify a custom port, so
router strings with a single station are allowed. Router strings can then be
identified as starting with either /H/ or /M/ . Any other input is assumed to be a
server name/IP address.

To allow you to use the same router strings you use in other tools, the /G/ option
in the router string is supported. When provided, it should match the value
specified in the "Logon group" parameter.

If a message server port is specified, it will be sent. Under these circumstances, the
SystemId is omitted from the connection string as it’s no longer required.

However, you must still provide a value for SystemId even though it won't be used
to establish the connection.

See also
Navigate the query objects
SAP Business Warehouse fundamentals
Use advanced options
SAP Business Warehouse connector troubleshooting
SAP BW fundamentals
Article • 01/24/2023

This article describes basic terminology used when describing interactions between the
SAP BW server and Power Query. It also includes information about tools that you may
find useful when using the Power Query SAP BW connector.

Integration Architecture
From a technical point of view, the integration between applications and SAP BW is
based on the so-called Online Analytical Processing (OLAP) Business Application
Programming Interfaces (BAPI).

The OLAP BAPIs are delivered with SAP BW and provide 3rd-parties and developers with
standardized interfaces that enable them to access the data and metadata of SAP BW
with their own front-end tools.

Applications of all types can be connected with an SAP BW server using these methods.

The OLAP BAPIs are implemented in SAP BW as RFC-enabled function modules and are
invoked by applications over SAP’s RFC protocol. This requires the NetWeaver RFC
Library or SAP .NET Connector to be installed on the application's machine.

The OLAP BAPIs provide methods for browsing metadata and master data, and also for
passing MDX statements for execution to the MDX Processor.

The OLAP Processor is responsible for retrieving, processing, and formatting the data
from the SAP BW source objects, which are further described in SAP BW data source and
Data objects in SAP BW.
SAP Business Explorer and other SAP tools use a more direct interface to the SAP BW
OLAP Processor called Business Intelligence Consumer Services, commonly known as
BICS. BICS isn't available for 3rd party tools.

SAP BW data sources


The OLAP BAPIs provide 3rd party applications with access to SAP BW InfoProviders and
BEx Queries.

Typically, when a 3rd party tool like Power Query connects using the OLAP BAPIs, SAP
BW first responds with a list of catalogs available in the SAP BW system.

There's one catalog with the technical name $INFOCUBE that contains all InfoProviders in
the SAP BW system. This catalog is shown as a node in the navigator of Power Query. By
expanding this node in the navigator, you can select from the available InfoProviders in
the SAP BW system.

The other catalogs represent InfoProviders for which at least one Query exists. By
expanding one of these nodes in the navigator, you can select from the available queries
associated with the InfoProvider.

BEx Queries offer some advantages and additional functionality to create customized
data sources to meet end-user requirements. For example, you can parameterize queries
with variables that can limit the data set to what's important to the end user. Or, you can
recalculate key figures using formulas.
Although BEx Queries have advantages as data sources (go to Performance
considerations), you don't need a Query for every report. You'll need to weigh the cost
of developing and maintaining additional Queries against their reporting requirements.

Data objects in SAP BW


SAP BW comes with built-in tools for creating data models based on different data
objects. It's helpful to have a rudimentary understanding of how data is represented in
SAP BW and the terminology. The main data objects in SAP BW are briefly introduced
here:

InfoProvider is the generic term for a Business Intelligence (BI) object into which
data is loaded or which provides views of data. InfoProviders can be queried with
client tools, such as Business Explorer (or BEx) and also with Power Query.

InfoProviders can be seen as uniform data providers from the viewpoint of a query
definition. Their data can therefore be analyzed in a uniform way.

InfoCube is a type of InfoProvider. An InfoCube describes, from an analysis point of
view, a self-contained dataset for a business-oriented area, for example Purchasing.
You can analyze an InfoCube directly as an InfoProvider with analysis and reporting
tools, including Power BI or Power Platform apps.

An InfoCube consists of a set of relational tables that are arranged according to an
enhanced star schema. This means there's a (large) fact table that contains the key
figures for the InfoCube, and also several (smaller) dimension tables that surround
it.

Key figure is an operational attribute that indicates a numerical measure such as
amount, weight, quantity, and so on.

Dimension is a grouping of related characteristics under a single generic term. For
example, the Customer dimension could be made up of the Customer Number, the
Customer Group, and the levels of the customer hierarchy.

A Sales dimension could contain the characteristics Sales Person, Sales Group, and
Sales Office.

A Time dimension could have the characteristics Day (in the form YYYYMMDD),
Week (in the form YYYY.WW), Month (in the form YYYY.MM), Year (in the form
YYYY) and Fiscal Period (in the form YYYY.PPP).

Characteristics refer to master data with their attributes and text descriptions, and
in some cases hierarchies. The characteristics of an InfoCube are stored in
dimensions.

For example, the Customer dimension could have the characteristics Sold-to-party,
Ship-to-party, and Payer.

The characteristic Sold-to-party could have the attributes Country, Region, City,
Street, and Industry. The text description of the characteristic would be the Name
of the Sold-to-party.

In MDX query terms, the attributes of characteristics are also referred to as properties.

InfoObjects is the generic term for all characteristics and key figures. All
InfoObjects are maintained independently of the InfoCube in SAP BW. InfoObjects
are the smallest units of Business Intelligence (BI). Using InfoObjects, information
can be stored and mapped in a structured form. This is required for constructing
InfoProviders. InfoObjects with attributes or texts can themselves be InfoProviders.

DataStore Object (DSO) serves as a storage location for consolidated and cleansed
transaction data or master data on a document (atomic) level. Unlike the
multidimensional data in InfoCubes, the data in DataStore objects is stored in
transparent, flat database tables. The system doesn't create separate fact tables or
dimension tables for DSOs. Data in DSOs can be evaluated using a BEx query.

MultiProviders are a special type of InfoProvider that combine data from several
InfoProviders. They're then available for reporting. MultiProviders don't contain
any data, their data comes exclusively from the InfoProviders upon which they're
based. MultiProviders can be based upon any combination of InfoProviders,
including InfoCubes, DataStore Objects, InfoObjects, or InfoSets.

InfoSets are a special type of InfoProvider that doesn't store data physically.
InfoSets describe data that's based on joining the tables of other InfoProviders like
DataStore Objects, standard InfoCubes, or InfoObjects with master data
characteristics. InfoSets can be useful when you have to build a report spanning
two or more different data targets in SAP BW.

Composite Providers are a new data object in SAP BW systems that run on HANA, that
is, SAP BW 7.5 or BW/4HANA. A composite provider is based on a JOIN or UNION of
other InfoProviders or Analytic Indexes. Data in Composite Providers can be evaluated
using a BEx query.

See also
Navigate the query objects
Navigate the query objects
Article • 01/24/2023

After you connect to your SAP BW instance, the Navigator dialog box will show a list of
available catalogs in the selected server.

You'll see one catalog folder with the name $INFOCUBE. This folder contains all
InfoProviders in the SAP BW system.

The other catalog folders represent InfoProviders in SAP BW for which at least one
query exists.

The Navigator dialog box displays a hierarchical tree of data objects from the connected
SAP BW system. The following table describes the types of objects.

Symbol Description

Your SAP BW server


Catalog—either $INFOCUBE or an InfoProvider

InfoCube or a BEx Query

Key figure

Characteristic

Characteristic level

Property (Attribute)

Hierarchy
7 Note

The navigator shows InfoCubes and BEx queries. For BEx queries, you may need to
go into Business Explorer, open the desired query and check Allow External Access
to this Query: By OLE DB for OLAP for the query to be available in the navigator.
7 Note

In Power BI Desktop, objects below an InfoCube or BEx Query node, such as the key
figures, characteristics, and properties are only shown in Import connectivity mode,
not in DirectQuery mode. In DirectQuery mode, all the available objects are
mapped to a Power BI model and will be available for use in any visual.

In the navigator, you can select from different display options to view the available
query objects in SAP BW:

Only selected items: This option limits the objects shown in the list to just the
selected items. By default, all query objects are displayed. This option is useful for a
review of the objects that you included in your query. Another approach to viewing
selected items is to select the column names in the preview area.
Enable data previews: This value is the default. This option allows you to control
whether a preview of the data should be displayed on the right-hand side in the
Navigator dialog box. Disabling data previews reduces the amount of server
interaction and response time. In Power BI Desktop, data preview is only available
in Import connectivity mode.

Technical names: SAP BW supports the notion of technical names for query
objects, as opposed to the descriptive names that are shown by default. Technical
names uniquely identify an object within SAP BW. With the option selected, the
technical names will appear next to the descriptive name of the object.

Characteristic hierarchies
A characteristic will always have at least one characteristic level (Level 01), even when no
hierarchy is defined on the characteristic. The Characteristic Level 01 object contains all
members for the characteristic as a flat list of values.

Characteristics in SAP BW can have more than one hierarchy defined. For those
characteristics, you can only select one hierarchy or the Level 01 object.

For characteristics with hierarchies, the properties selected for that characteristic will be
included for each selected level of the hierarchy.

Measure properties
When you pick a measure, you have an option to select the units/currency, formatted
value, and format string. For example, selecting the formatted value for COGS helps
you follow the same formatting standard across all your reports.

7 Note

Measure properties are not available in Power BI Desktop in DirectQuery mode.

Flattening of multi-dimensional data


Based on the selected objects and properties in the navigator, Power Query constructs
an MDX statement that is sent for execution to SAP BW. The MDX statement returns a
flattened data set that can be further manipulated using the Power Query Editor.

Power Query uses a newer interface that is available in SAP BW version 7.01 or higher.
The interface reduces memory consumption and the result set is not restricted by the
number of cells.

The flattened data set is aggregated in SAP BW at the level of the selected
characteristics and properties.

Even with these improvements, the resulting dataset can become very large and time-
consuming to process.

Performance recommendation

Only include the characteristics and properties that you ultimately need. Aim for higher
levels of aggregation, that is, do you need Material-level details in your report, or is
MaterialGroup-level enough? What hierarchy levels are required in Power BI? Try to
create smaller datasets, with higher levels of aggregation, or multiple smaller datasets,
that can be joined together later.

Query parameters
Queries in SAP BW can have dynamic filters defined that allow you to restrict the data
set that's returned by the query. In the BEx Query Designer, this type of dynamic filter
can be defined with what's called a Characteristic Restriction and assigning a Variable to
that restriction. Variables on a query can be required or optional, and they're available to
the user in the navigator.

When you select an SAP BW query with characteristic restrictions in the Power Query
navigator, you'll see the variables displayed as parameters above the data preview area.

Using the Show selector, you can display all parameters that are defined on the query,
or just the required ones.

The query shown in the previous image has several optional parameters, including one
for Material Group. You can select one or more material groups to only return
purchasing information for the selected values, that is, casings, motherboards, and
processors. You can also type the values directly into the values field. For variables with
multiple entries, comma-separated values are expected, in this example it would look
like [0D_MTLGROUP].[201], [0D_MTLGROUP].[202], [0D_MTLGROUP].[208] .

The value # means unassigned; in the example any data record without an assigned
material group value.
Performance recommendation

Filters based on parameter values get processed in the SAP BW data source, not in
Power BI. This type of processing can have performance advantages for larger datasets
when loading or refreshing SAP BW data into Power BI. The time it takes to load data
from SAP BW into Power BI increases with the size of the dataset, for example, the
number of columns and rows in the flattened result set. To reduce the number of
columns, only select the key figures, characteristics, and properties in the navigator that
you eventually want to see.

Similarly, to reduce the number of rows, use the available parameters on the query to
narrow the dataset, or to split up a larger dataset into multiple, smaller datasets that can
be joined together in the Power BI Desktop data model.

In many cases, it may also be possible to work with the author of the BEx Query in SAP
BW to clone and modify an existing query and optimize it for performance by adding
additional characteristic restrictions or removing unnecessary characteristics.

Loading SAP data into Power Query


Once you've selected the SAP data set you want in the navigator, you can import the
data into Power Query Editor. Select Transform Data to launch the Power Query Editor,
where you can perform additional data transformation and filtering steps.

In the example above, a parameter was used to only bring back records with a Material
Group of casings, motherboards, and processors.

In Power Query Desktop, you can also select Load to bring the entire data set from SAP
BW into Power BI Desktop. Power BI Desktop will take you to the Report view where you
can begin visualizing the data or make further modifications using the Data or
Relationships views.

See also
Transform and filter an SAP BW dataset
Transform and filter an SAP BW dataset
Article • 01/24/2023

With Power Query Editor, you can apply additional data transformations and filtering
steps before you bring the dataset from SAP BW into the Power BI Desktop or Microsoft
Power Platform data model.

In Power Query Editor, the Applied Steps for the query are shown in the Query Settings
pane on the right. To modify or review a step, select the gear icon next to a step.

For example, if you select the gear icon next to Added Items, you can review the
selected data objects in SAP BW, or modify the specified query parameters. This way it's
possible to filter a dataset using a characteristic that isn't included in the result set.

You can apply additional filters on the dataset by selecting the drop-down menu for one
of the columns.
Another easy way to set a filter is to right-click on one of the values in the table, then
select Member Filters or Text Filters.

For example, you could filter the dataset to only include records for Calendar
Year/Month FEB 2003, or apply a text filter to only include records where Calendar
Year/Month contains 2003.

Not every filter will get folded into the query against SAP BW. You can determine if a
filter is folded into the query by examining the icon in the top-left corner of the data
table, directly above the number 1 of the first data record.

If the icon is a cube, then the filter is applied in the query against the SAP BW system.
If the icon is a table, then the filter isn't part of the query and only applied to the table.
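
As a rough illustration, the following sketch shows what a text filter step can look like in M. The server details, cube objects, and column name mirror the sample query used elsewhere in these articles and are only placeholders; whether the step folds is indicated by the icon described above.

let
    Source = SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837",
        [Implementation = "2.0"]),
    #"$INFOCUBE" = Source{[Name="$INFOCUBE"]}[Data],
    #"$0D_DECU" = #"$INFOCUBE"{[Id="$0D_DECU"]}[Data],
    #"Added Items" = Cube.Transform(#"$0D_DECU",
    {
        {Cube.AddAndExpandDimensionColumn, "[0CALDAY]",
            {"[0CALDAY].[LEVEL01]"}, {"Calendar day.Calendar day Level 01"}},
        {Cube.AddMeasureColumn, "Billed Quantity", "[Measures].[0D_INV_QTY]"}
    }),
    // Text filter: keep only rows whose calendar-day caption contains "2003".
    #"Filtered Rows" = Table.SelectRows(#"Added Items",
        each Text.Contains([Calendar day.Calendar day Level 01], "2003"))
in
    #"Filtered Rows"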

Behind the UI of Power Query Editor, code is generated based on the M formula
language for data mashup queries.

You can view the generated M code with the Advanced Editor option in the View tab.
To see a description for each function or to test it, right-click on the existing SAP BW
query in the Queries pane and select Create Function. In the formula bar at the top,
enter:

= <function name>

where <function name> is the name of the function you want to see described. The
following example shows the description of the Cube.Transform function.

The following examples show some descriptions of various cube functions:


Cube.Transform: Applies the list of functions or transforms on the cube.

Cube.ApplyParameter: Applies the specified values for a parameter.

Cube.DisplayFolders: Returns a nested tree of objects representing the display folder hierarchy of the cube.

Cube.Parameters: Returns a table with the set of parameters that can be applied to the cube.

Cube.Dimensions: Returns a table with the set of dimensions for the cube.

Cube.Measures: Returns a table with the set of measures for the cube.
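
As a hedged sketch of how these functions combine in a generated query, the following applies a value to a query variable inside Cube.Transform. The variable identifier "[!V000001]" and the member value are hypothetical placeholders; check the gear icon on the Added Items step to see the identifiers your query actually uses, since the exact argument shape can differ.

let
    Source = SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837",
        [Implementation = "2.0"]),
    #"$INFOCUBE" = Source{[Name="$INFOCUBE"]}[Data],
    #"$0D_DECU" = #"$INFOCUBE"{[Id="$0D_DECU"]}[Data],
    #"Added Items" = Cube.Transform(#"$0D_DECU",
    {
        // Hypothetical variable identifier and member value; real names come
        // from the BEx query definition shown in the navigator.
        {Cube.ApplyParameter, "[!V000001]", {"[0D_MTLGROUP].[201]"}},
        {Cube.AddMeasureColumn, "Billed Quantity", "[Measures].[0D_INV_QTY]"}
    })
in
    #"Added Items"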

See also
Power Query M formula language reference
Implementation details
Implementation details
Article • 03/08/2023

This article describes conversion information and specific features available in
Implementation 2.0 of the Power Query SAP Business Warehouse connector.

) Important

Version 1.0 of the SAP Business Warehouse connector has been deprecated. New
connections will use Implementation 2.0 of the SAP Business Warehouse connector.
All support for version 1.0 will be removed from the connector in the near future.
Use the information in this article to update existing version 1.0 reports so they can
use Implementation 2.0 of this connector.

New options for Implementation 2.0


Implementation 2.0 supports the following options:

ExecutionMode specifies the MDX interface used to execute queries on the server.
The following options are valid:

SapBusinessWarehouseExecutionMode.BasXml

SapBusinessWarehouseExecutionMode.BasXmlGzip

SapBusinessWarehouseExecutionMode.DataStream

The default value is SapBusinessWarehouseExecutionMode.BasXmlGzip .

Using SapBusinessWarehouseExecutionMode.BasXmlGzip may improve
performance when experiencing high latency for large datasets.

BatchSize specifies the maximum number of rows to retrieve at a time when
executing an MDX statement. A small number translates into more calls to the
server while retrieving a large dataset. A large number of rows may improve
performance, but could cause memory issues on the SAP BW server. The default
value is 50000 rows.

EnableStructures indicates whether characteristic structures are recognized. The
default value for this option is false. This option affects the list of objects available
for selection, and isn't supported in native query mode.
The ScaleMeasures option has been deprecated in this implementation. The behavior is
now the same as setting ScaleMeasures to false, always showing unscaled values.
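
As a rough sketch, assuming the M option record uses the same names as the settings described above, a connection that sets these options explicitly might look like the following; the server name, system number, and client ID are placeholders.

let
    // Options record combining the Implementation 2.0 settings described above;
    // BatchSize and EnableStructures are shown with their documented defaults.
    Source = SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837",
        [
            Implementation = "2.0",
            ExecutionMode = SapBusinessWarehouseExecutionMode.BasXmlGzip,
            BatchSize = 50000,
            EnableStructures = false
        ])
in
    Source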

Additional improvements for Implementation 2.0
The following list describes some of the additional improvements that come with the
new implementation:

Improved performance.
Ability to retrieve several million rows of data, and fine-tuning through the batch
size parameter.
Ability to switch execution modes.
Support for compressed mode. Especially beneficial for high latency connections
or large datasets.
Improved detection of Date variables.
Expose Date (ABAP type DATS) and Time (ABAP type TIMS) dimensions as dates
and times respectively, instead of text values. More information: Support for typed
dates in SAP BW
Better exception handling. Errors that occur in BAPI calls are now surfaced.
Column folding in BasXml and BasXmlGzip modes. For example, if the generated
MDX query retrieves 40 columns but the current selection only needs 10, this
request will be passed onto the server to retrieve a smaller dataset.

Changing existing reports to use Implementation 2.0


Changing existing reports to use Implementation 2.0 is only possible in import mode.
Follow these steps:

1. Open an existing report, select Edit Queries in the ribbon, and then select the SAP
Business Warehouse query to update.

2. Right-click the query and select Advanced Editor.

3. In the Advanced Editor, change the SapBusinessWarehouse.Cubes call as follows:

Determine whether the query already contains an options record. If it does, add the
Implementation="2.0" option and remove the ScaleMeasures option, if present. If the
query doesn't already contain an options record, add one that sets
Implementation="2.0". A before-and-after sketch of this change follows.
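
Here's a minimal before-and-after sketch of the change, reusing the sapbwtestserver example from this article; the parameters and the ScaleMeasures setting shown in the "before" call are illustrative, and your own SapBusinessWarehouse.Cubes call will differ.

// Before: an options record that still sets ScaleMeasures (illustrative).
let
    Source = SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837",
        [ScaleMeasures = false])
in
    Source

// After: ScaleMeasures removed and Implementation 2.0 requested.
let
    Source = SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837",
        [Implementation = "2.0"])
in
    Source

// If the original call had no options record at all, simply add one:
// SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837")
// becomes
// SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837", [Implementation = "2.0"])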

Every effort has been made to make Implementation 2.0 of the SAP BW connector
compatible with version 1. However, there may be some differences because of the
different SAP BW MDX execution modes being used. To resolve any discrepancies, try
switching between execution modes.

Support for typed dates in SAP BW


Implementation 2.0 of the SAP BW connector includes support for typed dates and
times. If you query a report that has dimensions with ABAP types, DATS, or TIMS, they
can now be output as dates instead of text.

The limitations for using this functionality are:

Only available in Implementation 2.0 of the SAP BW connector.


Only available in Import mode.
The account used to connect to the SAP BW server should have enough
permissions to call BAPI_IOBJ_GETDETAIL.
let
    Source = SapBusinessWarehouse.Cubes("sapbwtestserver", "00", "837",
        [ExecutionMode=SapBusinessWarehouseExecutionMode.BasXmlGzip, Implementation="2.0"]),
    #"$INFOCUBE" = Source{[Name="$INFOCUBE"]}[Data],
    #"$0D_DECU" = #"$INFOCUBE"{[Id="$0D_DECU"]}[Data],
    #"Added Items" = Cube.Transform(#"$0D_DECU",
    {
        {Cube.AddAndExpandDimensionColumn, "[0CALDAY]", {"[0CALDAY].[LEVEL01]"}, {"Calendar day.Calendar day Level 01"}},
        {Table.AddColumn, "Calendar day.Calendar day Level 01.Key", each Cube.AttributeMemberProperty([Calendar day.Calendar day Level 01], "[20CALDAY]")},
        {Cube.AddMeasureColumn, "Billed Quantity", "[Measures].[0D_INV_QTY]"}
    })
in
    #"Added Items"

You'll need to add the key in to access the typed date. For example, if there's a
dimension attribute called [0CALDAY], you'll need to add the key [20CALDAY] to get the
typed value.

In the example above, this means that:

Calendar day.Calendar day Level 01 [0CALDAY] will be text (a caption). (Added by
default when the dimension is added.)
Calendar day.Calendar day Level 01.Key [20CALDAY] will be a date (must be
manually selected).

To manually add the key in Import mode, just expand Properties and select the key.
The key column will be of type date, and can be used for filtering. Filtering on this
column will fold to the server.
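
For example, a filter on the typed key column, appended to the query shown above, might look like the following sketch; the date literal is arbitrary.

// One more step appended to the query above (return this step instead of
// #"Added Items"); because the Key column is a true date, the comparison
// can fold to the SAP BW server.
#"Filtered Rows" = Table.SelectRows(#"Added Items",
    each [Calendar day.Calendar day Level 01.Key] >= #date(2003, 1, 1))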

Support for SAP BW features


The following list describes all SAP BW features that aren't fully supported, or that
behave differently, when using the Power Query SAP BW connector.

Local calculations: Local calculations defined in a BEx Query will change the numbers as
displayed through tools like BEx Analyzer. However, they aren't reflected in the numbers
returned from SAP through the public MDX interface.

As such, the numbers seen in Power Query won't necessarily match those for a
corresponding visual in an SAP tool.

For instance, when connecting to a query cube from a BEx query that sets the
aggregation to be Cumulated (for example, running sum), Power Query would get back
the base numbers, ignoring that setting. An analyst could then apply a running sum
calculation locally in, for example, Power BI, but would need to exercise caution in how
the numbers are interpreted if this isn't done.

Aggregations: In some cases (particularly when dealing with multiple currencies), the
aggregate numbers returned by the SAP public interface don't match those shown by
SAP tools.

As such, the numbers seen in Power Query won't necessarily match those for a
corresponding visual in an SAP tool.

For instance, totals over different currencies would show as "*" in BEx Analyzer, but the
total would get returned by the SAP public interface, without any information that such
an aggregate number is meaningless. Thus the number (aggregating, say, $, EUR, and
AUD) would get displayed by Power Query.

Currency formatting: Any currency formatting (for example, $2,300 or 4000 AUD) isn't
reflected in Power Query.

Units of measure: Units of measure (for example, 230 KG) aren't reflected in Power Query.

Key versus text (short, medium, long): For an SAP BW characteristic like CostCenter, the
navigator will show a single item Cost Center Level 01. Selecting this item will include
the default text for Cost Center in the field list. Also, the Key value, Short Name, Medium
Name, and Long Name values are available for selection in the Properties node for the
characteristic (if maintained in SAP BW).

Note that this only applies to Import connectivity mode. For DirectQuery mode, only the
default text will be included in the data set.

Attributes: The attributes of a characteristic will be available for selection in the
Properties for the characteristic. This only applies to Import connectivity mode. For
DirectQuery mode, attributes won't be available.

Multiple hierarchies of a characteristic: In SAP, a characteristic can have multiple
hierarchies. Then in tools like BEx Analyzer, when a characteristic is included in a query,
the user can select the hierarchy to use.

In Power BI, the various hierarchies can be seen in the field list as different hierarchies
on the same dimension. However, selecting multiple levels from two different hierarchies
on the same dimension will result in empty data being returned by SAP.

Treatment of ragged hierarchies: SAP BW supports ragged hierarchies, where levels can
be missed, for example:

Continent
Americas
Canada
USA
Not Assigned
Australia

In Power BI, this appears with (Blank) at the missing level:

Continent
Americas
Canada
USA
Not Assigned
(Blank)
Australia

Scaling factor/reverse sign: In SAP, a key figure can have a scaling factor (for example,
1000) defined as a formatting option, meaning that all displays will be scaled by that
factor.

It can similarly have a property set that reverses the sign. Use of such a key figure in
Power BI (in a visual, or as part of a calculation) will result in the unscaled number being
used (and the sign isn't reversed). The underlying scaling factor isn't available. In Power
BI visuals, the scale units shown on the axis (K, M, B) can be controlled as part of the
visual formatting.

Hierarchies where levels appear/disappear dynamically: Initially when connecting to SAP
BW, the information on the levels of a hierarchy will be retrieved, resulting in a set of
fields in the field list. This is cached, and if the set of levels changes, then the set of
fields doesn't change until Refresh is invoked.

This is only possible in Power BI Desktop. Such a Refresh to reflect changes to the levels
cannot be invoked in the Power BI service after Publish.

Default filter: A BEx query can include Default Filters, which will be applied automatically
by SAP BEx Analyzer. These aren't exposed, and so the equivalent usage in Power Query
won't apply the same filters by default.

Hidden Key figures: A BEx query can control visibility of Key Figures, and those that are
hidden won't appear in SAP BEx Analyzer. This isn't reflected through the public API, and
so such hidden key figures will still appear in the field list. However, they can then be
hidden within Power Query.

Numeric formatting: Any numeric formatting (number of decimal positions, decimal
point, and so on) won't automatically be reflected in Power Query. However, it's possible
to then control such formatting within Power Query.

Hierarchy versioning: SAP BW allows different versions of a hierarchy to be maintained,
for example, the cost center hierarchy in 2007 versus 2008. Only the latest version will be
available in Power Query, as information on versions isn't exposed by the public API.

Time-dependent hierarchies: When using Power Query, time-dependent hierarchies are
evaluated at the current date.

Currency conversion: SAP BW supports currency conversion, based on rates held in the
cube. Such capabilities aren't exposed by the public API, and are therefore not available
in Power Query.

Sort Order: The sort order (by Text, or by Key) for a characteristic can be defined in SAP.
This sort order isn't reflected in Power Query. For example, months might appear as
"April", "Aug", and so on.

It isn't possible to change this sort order in Power Query.

Technical names: In the navigator, the characteristic/measure names (descriptions) and
technical names can both be displayed using the Display Options selector. The field list
contains the characteristic/measure names (descriptions).

End user language setting: The locale used to connect to SAP BW is set as part of the
connection details, and doesn't reflect the locale of the final report consumer.

Text Variables: SAP BW allows field names to contain placeholders for variables (for
example, "$YEAR$ Actuals") that would then get replaced by the selected value. For
example, the field appears as "2016 Actuals" in BEx tools, if the year 2016 were selected
for the variable.

The column name in Power Query won't be changed depending on the variable value,
and so would appear as "$YEAR$ Actuals". However, the column name can then be
changed in Power Query.

Customer Exit Variables: Customer Exit variables aren't exposed by the public API, and
are therefore not supported by Power Query.

Performance Considerations
The following list provides a summary of suggestions to improve performance for data
load and refresh from SAP BW.

Limit characteristics and properties (attribute) selection: The time it takes to load data
from SAP BW into Power Query increases with the size of the dataset, that is, the number
of columns and rows in the flattened result set. To reduce the number of columns, only
select the characteristics and properties in the navigator that you eventually want to see
in your report or dashboard.

Make use of parameters: Using filters/parameters contributes to reducing the size of the
result set, which significantly improves query runtimes. Parameters are especially
valuable when used with large dimensions, where there are many members, such as
customers, materials, or document numbers.

Limit number of key figures: Selecting many key figures from a BEx query/BW model can
have a significant performance impact during query execution because of the time being
spent on loading metadata for units. Only include the key figures that you need in Power
Query.

Split up very large queries into multiple, smaller queries: For very large queries against
InfoCubes or BEx queries, it may be beneficial to split up the query. For example, one
query might be getting the key figures, while another query (or several other queries) is
getting the characteristics data. You can join the individual query results in Power Query.

Avoid Virtual Providers (MultiProviders or InfoSets): VirtualProviders are similar to
structures without persistent storage. They are useful in many scenarios, but can show
slower query performance because they represent an additional layer on top of actual
data.

Avoid use of navigation attributes in BEx query: A query with a navigation attribute has
to run an additional join, compared with a query with the same object as a characteristic,
in order to arrive at the values.

Use RSRT to monitor and troubleshoot slow running queries: Your SAP Admin can use
the Query Monitor in SAP BW (transaction RSRT) to analyze performance issues with SAP
BW queries. Review SAP note 1591837 for more information.

Avoid Restricted Key Figures and Calculated Key Figures: Both are computed during
query execution and can slow down query performance.

Consider using incremental refresh to improve performance: Power BI refreshes the
complete dataset with each refresh. If you're working with a large volume of data,
refreshing the full dataset on each refresh may not be optimal. In this scenario, you can
use incremental refresh, so you're refreshing only a subset of data. For more details, go
to Incremental refresh in Power BI.

Comparison to Analysis for Office (AFO)


There are fundamental differences between the Analysis for Office (AFO) tool and the
Power Query SAP Business Warehouse connector, so the outputs of data might differ.
AFO doesn't use MDX, but instead uses a proprietary protocol developed by SAP that
isn't available for 3rd parties, such as the Power Query SAP Business Warehouse
connector. SAP only certifies connectors that use the MDX interface. AFO uses a
multidimensional querying strategy, which navigates the data differently whereas the
Power Query SAP Business Warehouse connector needs to flatten the data so it can be
represented as a table. So even though it's the same data, it's queried, represented, and
ultimately outputted differently.

See also
SAP Business Warehouse Application Server
SAP Business Warehouse Message Server
Import vs. DirectQuery for SAP BW
Import vs. DirectQuery for SAP BW
Article • 01/24/2023

7 Note

This article discusses the differences between Import and DirectQuery modes in
Power BI Desktop. For a description of using Import mode in Power Query Desktop
or Power Query Online, go to the following sections:

SAP BW Application Server connector:

Connect to an SAP BW Application Server from Power Query Desktop


Connect to an SAP BW Application Server from Power Query Online

SAP BW Message Server connector:

Connect to an SAP BW Message Server from Power Query Desktop


Connect to an SAP BW Message Server from Power Query Online

With Power Query, you can connect to a wide variety of data sources, including online
services, databases, different file formats, and others. If you are using Power BI Desktop,
you can connect to these data sources in two different ways: either import the data into
Power BI, or connect directly to data in the source repository, which is known as
DirectQuery. When you connect to an SAP BW system, you can also choose between
these two connectivity modes. For a complete list of data sources that support
DirectQuery, refer to Power BI data sources.

The main differences between the two connectivity modes are outlined here, as well as
guidelines and limitations, as they relate to SAP BW connections. For additional
information about DirectQuery mode, go to Using DirectQuery in Power BI.

Import Connections
When you connect to a data source with Power BI Desktop, the navigator will allow you
to select a set of tables (for relational sources) or a set of source objects (for
multidimensional sources).

For SAP BW connections, you can select the objects you want to include in your query
from the tree displayed. You can select an InfoProvider or BEx query for an InfoProvider,
expand its key figures and dimensions, and select specific key figures, characteristics,
attributes (properties), or hierarchies to be included in your query.

The selection defines a query that will return a flattened data set consisting of columns
and rows. The selected characteristics levels, properties and key figures will be
represented in the data set as columns. The key figures are aggregated according to the
selected characteristics and their levels. A preview of the data is displayed in the
navigator. You can edit these queries in Power Query prior to loading the data, for
example to apply filters, or aggregate the data, or join different tables.

When the data defined by the queries is loaded, it will be imported into the Power BI in-
memory cache.

As you start creating your visuals in Power BI Desktop, the imported data in the cache
will be queried. The querying of cached data is very fast and changes to the visuals will
be reflected immediately.

However, the user should take care when building visuals that further aggregate the
data, when dealing with non-additive measures. For example, if the query imported each
Sales Office, and the Growth % for each one, then if the user built a visual that will Sum
the Growth % values across all Sales Offices, that aggregation will be performed locally,
over the cached data. The result wouldn't be the same as requesting the overall Growth
% from SAP BW, and is probably not what's intended. To avoid such accidental
aggregations, it's useful to set the Default Summarization for such columns to Do not
summarize.

If the data in the underlying source changes, it won't be reflected in your visuals. It will
be necessary to do a Refresh, which will reimport the data from the underlying source
into the Power BI cache.

When you publish a report (.pbix file) to the Power BI service, a dataset is created and
uploaded to the Power BI server. The imported data in the cache is included with that
dataset. While you work with a report in the Power BI service, the uploaded data is
queried, providing a fast response time and interactivity. You can set up a scheduled
refresh of the dataset, or re-import the data manually. For on-premises SAP BW data
sources, it's necessary to configure an on-premises data gateway. Information about
installing and configuring the on-premises data gateway can be found in the following
documentation:

On-premises data gateway documentation


Manage gateway data source in Power BI
Data source management in Power Platform
DirectQuery Connections
The navigation experience is slightly different when connecting to an SAP BW source in
DirectQuery mode. The navigator will still display a list of available InfoProviders and BEx
queries in SAP BW, however no Power BI query is defined in the process. You'll select the
source object itself, that is, the InfoProvider or BEx query, and see the field list with the
characteristics and key figures once you connect.

For SAP BW queries with variables, you can enter or select values as parameters of the
query. Select the Apply button to include the specified parameters in the query.

Instead of a data preview, the metadata of the selected InfoCube or BEx Query is
displayed. Once you select the Load button in Navigator, no data will be imported.

You can make changes to the values for the SAP BW query variables with the Edit
Queries option on the Power BI Desktop ribbon.
As you start creating your visuals in Power BI Desktop, the underlying data source in SAP
BW is queried to retrieve the required data. The time it takes to update a visual depends
on the performance of the underlying SAP BW system.

Any changes in the underlying data won't be immediately reflected in your visuals. It will
still be necessary to do a Refresh, which will rerun the queries for each visual against the
underlying data source.

When you publish a report to the Power BI service, it will again result in the creation of a
dataset in the Power BI service, just as for an import connection. However, no data is
included with that dataset.

While you work with a report in the Power BI service, the underlying data source is
queried again to retrieve the necessary data. For DirectQuery connections to your SAP
BW and SAP HANA systems, you must have an on-premises data gateway installed and
the data source registered with the gateway.

For SAP BW queries with variables, end users can edit parameters of the query.

7 Note

For the end user to edit parameters, the dataset needs to be published to a
premium workspace, in DirectQuery mode, and single sign-on (SSO) needs to be
enabled.
General Recommendations
You should import data to Power BI whenever possible. Importing data takes advantage
of the high-performance query engine of Power BI and provides a highly interactive and
fully featured experience over your data.

However, DirectQuery provides the following advantages when connecting to SAP BW:

Provides the ability to access SAP BW data using SSO, to ensure that security
defined in the underlying SAP BW source is always applied. When accessing SAP
BW using SSO, the user’s data access permissions in SAP will apply, which may
produce different results for different users. Data that a user isn't authorized to
view will be trimmed by SAP BW.

Ensures that the latest data can easily be seen, even if it's changing frequently in
the underlying SAP BW source.

Ensures that complex measures can easily be handled, where the source SAP BW is
always queried for the aggregate data, with no risk of unintended and misleading
aggregates over imported caches of the data.

Avoids caches of data being extracted and published, which might violate data
sovereignty or security policies that apply.

Using DirectQuery is generally only feasible when the underlying data source can
provide interactive queries for the typical aggregate query within seconds and is able to
handle the query load that will be generated. Additionally, the list of limitations that
accompany use of DirectQuery should be considered, to ensure your goals can still be
met.

If you're working with either very large datasets or encounter slow SAP BW query
response time in DirectQuery mode, Power BI provides options in the report to send
fewer queries, which makes it easier to interact with the report. To access these options
in Power BI Desktop, go to File > Options and settings > Options, and select Query
reduction.
You can disable cross-highlighting throughout your entire report, which reduces the
number of queries sent to SAP BW. You can also add an Apply button to slicers and
filter selections. You can make as many slicer and filter selections as you want, but no
queries will be sent to SAP BW until you select the Apply button. Your selections will
then be used to filter all your data.

These changes will apply to your report while you interact with it in Power BI Desktop, as
well as when your users consume the report in the Power BI service.

In the Power BI service, the query cache for DirectQuery connections is updated on a
periodic basis by querying the data source. By default, this update happens every hour,
but it can be configured to a different interval in dataset settings. For more information,
go to Data refresh in Power BI.

Also, many of the general best practices described in Using DirectQuery in Power BI
apply equally when using DirectQuery over SAP BW. Additional details specific to SAP
BW are described in Connect to SAP Business Warehouse by using DirectQuery in Power
BI.

See also
Windows authentication and single sign-on
Windows authentication and single
sign-on
Article • 01/24/2023

7 Note

The following information about Windows authentication and single sign-on
applies only to Power Query Desktop. For more information about using Windows
authentication and single sign-on in Power Query Online, go to Overview of
single sign-on (SSO) for gateways in Power BI.

For Windows-based authentication and single sign-on functionality, your SAP BW server
must be configured for sign in using Secure Network Communications (SNC). SNC is a
mechanism provided by the SAP system that enables application-level security on data
exchanged between a client, such as Power BI Desktop, and the SAP BW server. SNC
works with different external security products and offers features that the SAP system
doesn't directly provide, including single sign-on.

In addition to your SAP BW server being configured for SNC sign in, your SAP user
account needs to be configured with an SNC name (transaction SU01 in your SAP
system).

For more detailed information, go to Secure Network Communication, and the chapter
Single Sign-On Configuration in this document.

Secure Login is a software solution by SAP that allows customers to benefit from the
advantages of SNC without having to set up a public-key infrastructure (PKI). Secure
Login allows users to authenticate with Windows Active Directory credentials.

Secure Login requires the installation of the Secure Login Client on your Power BI
Desktop machine. The installation package is named SAPSetupSCL.EXE and can be
obtained from the SAP Service Marketplace (requires SAP customer credentials).

For further information, go to Secure Login .

1. In the SAP Business Warehouse server dialog box, select the Windows tab.

2. Select to either use your current Windows credentials or specify alternate Windows
credentials.
3. Enter the SNC Partner Name. This name is the configured SNC name in the SAP
BW server’s security token. You can retrieve the SNC name with transaction RZ11
(Profile Parameter Maintenance) in SAPGUI and parameter name snc/identity/as.

For X.509 certificate security tokens, the format is:

p:<X.509 Distinguished Name>

Example (values are case-sensitive): p:CN=BW0, OU=BI, O=MyOrg, C=US

For Kerberos security tokens, the format is:

p:CN=<service_User_Principal_Name>

Example (values are case-sensitive): p:[email protected]

4. Select the SNC Library that your SAP BW environment has been configured for.

The SNC_LIB or SNC_LIB_64 option will check the corresponding environment
variable on your machine and use the DLL that's specified there.

The NTLM and KERBEROS options will expect the corresponding DLL to be in
a folder that's been specified in the PATH variable on your local machine. The
libraries for 32-bit systems are GSSNTLM.DLL (for NTLM) and GSSKRB5.DLL
(for Kerberos). The libraries for 64-bit systems are GX64NTLM.DLL (for NTLM)
and GX64KRB5.DLL (for Kerberos).

The Custom option allows for the use of a custom developed library.

Validate the settings with your SAP Administrator.


5. Select Connect.

See also
Use advanced options
Use advanced options
Article • 01/24/2023

When you create a connection to an SAP Business Warehouse server, you can optionally
specify a language code, execution mode, batch size, and an MDX Statement. Also, you
can select whether you want to enable characteristic structures.

7 Note

Although the images in this article illustrate the advanced options in the SAP
Business Warehouse Application Server connector, they work the same way in the
SAP Business Warehouse Message Server connector.

Language code
You can optionally specify a language code when establishing a connection to the SAP
BW server.
The expected value is a two-letter language code as defined in the SAP system. In Power
Query Desktop, select the Help icon (question mark) next to the Language Code field for
a list of valid values.

After you set the language code, Power Query displays the descriptive names of the
data objects in SAP BW in the specified language, including the field names for the
selected objects.

7 Note

Not all listed languages might be configured in your SAP BW system, and object
descriptions might not be translated in all languages.

If no language code is specified, the default locale from the Options dialog will be used
and mapped to a valid SAP language code. To view or override the current locale in
Power BI Desktop, open the File > Options and settings > Options dialog box and
select Current File > Regional settings. To view or override the current locale in Power
Query Online, open the Home > Options > Project options dialog box. If you do
override the locale, your setting gets persisted in your M query and would be honored if
you copy-paste your query from Power Query Desktop to Power Query Online.

Execution mode
The Execution mode option specifies the MDX interface that's used to execute queries on
the server. The following options are valid:

BasXml: Specifies the bXML flattening mode option for MDX execution in SAP
Business Warehouse.

BasXmlGzip: Specifies the Gzip compressed bXML flattening mode option for MDX
execution in SAP Business Warehouse. This option is recommended for low latency
or high volume queries, and is the default value for the execution mode option.

DataStream: Specifies the DataStream flattening mode option for MDX execution
in SAP Business Warehouse.

Batch size
Specifies the maximum number of rows to retrieve at a time when executing an MDX
statement. A small number translates into more calls to the server when retrieving a
large dataset. A large number of rows may improve performance, but could cause
memory issues on the SAP BW server. The default value is 50000 rows.

MDX Statement
7 Note

The MDX statement option is not available in Power Query Online.

Instead of using the navigator to browse through and select from available data objects
in SAP BW, a user who's familiar with the MDX query language can specify an MDX
statement for direct execution in SAP BW. However, be aware that no further query
folding will be applied when using a custom MDX statement.

The statement for the example used here would look as shown in the following sample,
based on the technical names of the objects and properties in SAP BW.

SELECT {[0EFUZM0P10X72MBPOYVBYIMLB].[0EFUZM0P10X72MBPOYVBYISWV]} ON COLUMNS,
NON EMPTY CROSSJOIN(
    CROSSJOIN([0D_MATERIAL].[LEVEL01].MEMBERS, [0D_PUR_ORG].[LEVEL01].MEMBERS),
    [0D_VENDOR].[LEVEL01].MEMBERS)
DIMENSION PROPERTIES
    [0D_MATERIAL].[20D_MATERIAL],
    [0D_MATERIAL].[50D_MATERIAL],
    [0D_PUR_ORG].[20D_PUR_ORG],
    [0D_PUR_ORG].[50D_PUR_ORG],
    [0D_VENDOR].[20D_VENDOR],
    [0D_VENDOR].[50D_VENDOR]
ON ROWS FROM [0D_PU_C01/0D_PU_C01_Q0013]
The SAP BW connector will display a preview of the data that is returned by the MDX
statement. You can then either select Load to load the data (Power Query Desktop only),
or select Transform Data to further manipulate the data set in the Power Query Editor.

To validate and troubleshoot an MDX statement, SAP BW provides the MDXTEST
transaction for SAP GUI for Windows users. Further, the MDXTEST transaction can be a
useful tool for analyzing server errors or performance concerns as a result of processing
that occurs within the SAP BW system.

For more detailed information on this transaction, go to MDX Test Environment .

MDXTEST can also be used to construct an MDX statement. The transaction screen
includes panels on the left that assist the user in browsing to a query object in SAP BW
and generating an MDX statement.

The transaction offers different execution modes/interfaces for the MDX statement.
Select Flattening (basXML) to mimic how Power Query would execute the query in SAP
BW. This interface in SAP BW creates the row set dynamically using the selections of the
MDX statement. The resulting dynamic table that's returned to Power Query Desktop
has a very compact form that reduces memory consumption.

The transaction will display the result set of the MDX statement and useful runtime
metrics.
Enable characteristic structures
The Enable characteristic structures selection changes the way characteristic structures
are displayed in the navigator. A structure is an SAP BW object that can be used when
building BEx queries.

If the Enable characteristic structures selection is clear (default), the connector
produces a cartesian product of each dimension on the structure with each available
measure. If it's selected, the connector produces only the available measures.

See also
Navigate the query objects
Transform and filter SAP BW dataset
SAP Business Warehouse connector troubleshooting
SAP Business Warehouse connector
troubleshooting
Article • 01/24/2023

This article provides troubleshooting situations (and possible solutions) for working with
the SAP Business Warehouse (BW) connector.

Collect SAP BW advanced traces

7 Note

Collecting a trace of a query sent to the SAP BW server requires some options and
settings that can only be provided by using Power BI Desktop. If you don't already
have a copy of Power BI Desktop, you can obtain a copy at the Microsoft
Download Center . You can set all of the required options and settings for
advanced traces using this free version.

Many times when an error occurs, it may be advantageous to collect a trace of the query
that was sent to the SAP BW server and its response. The following procedure shows
how to set up advanced traces for issues that occur using the SAP BW connector.

1. Close Power BI Desktop if it’s running.

2. Create a new environment variable:

a. From the Windows Control Panel, select System > Advanced System Settings.

You could also open a Command Prompt and enter sysdm.cpl.

b. In System Properties, select the Advanced tab, and then select Environment
Variables.

c. In Environment Variables, under System Variables, select New.

d. In New System Variable, under Variable name, enter PBI_EnableSapBwTracing,
and under Variable value, enter true.

e. Select OK.

When this advanced tracing is activated, an additional folder called SapBw will be
created in the Traces folder. See the rest of this procedure for the location of the
Traces folder.

3. Open Power BI Desktop.

4. Clear the cache before capturing.


a. In Power BI desktop, select the File tab.
b. Select Options and settings > Options.
c. Under Global settings, choose Data Load.
d. Select Clear Cache.

5. While you're still in Options and settings, enable tracing.


a. Under Global settings, choose Diagnostics.
b. Select Enable tracing.

6. While you're still in Options and settings > Global > Diagnostics, select Open
crash dump/traces folder. Ensure the folder is clear before capturing new traces.

7. Reproduce the issue.

8. Once done, close Power BI Desktop so the logs are flushed to disk.

9. You can view the newly captured traces under the SapBw folder (the Traces folder
that contains the SapBw folder is shown by selecting Open crash dump/traces
folder on the Diagnostics page in Power BI Desktop).

10. Make sure you deactivate this advanced tracing once you’re done, by either
removing the environment variable or setting PBI_EnableSapBwTracing to false.

Collect SAP BW advanced traces with CPIC traces
If you're investigating authentication or single sign-on (SSO) issues, use the same
procedure as described in Collect SAP BW advanced traces, except in step 2d, enter the
following additional system variables and values:

CPIC_TRACE—3
CPIC_TRACE_DIR—a valid folder, for example: E:\traces\CPIC

The rest of the procedure remains the same. You can view the CPIC traces in the folder
you specified in the CPIC_TRACE_DIR environment variable. You can also view the
regular traces under the SapBw folder.

Also make sure you deactivate this advanced tracing once you're done, by either removing the environment variables or setting PBI_EnableSapBwTracing to false and CPIC_TRACE to 0.

Perform clean installation of SAP .NET connector
If it becomes necessary to reinstall the SAP .NET connector:

1. Remove (uninstall) the SAP .NET Connector.

2. After removing, verify that the SAP .NET Connector isn't installed in the Global
Assembly Cache (GAC), by making sure the following paths do NOT exist or do
NOT contain DLLs:

32 bit GAC:

C:\Windows\Microsoft.NET\assembly\GAC_32\sapnco\v4.0_3.0.0.42__50436dc
a5c7f7d23

C:\Windows\Microsoft.NET\assembly\GAC_32\sapnco_utils\v4.0_3.0.0.42__504
36dca5c7f7d23

64 bit GAC:

C:\Windows\Microsoft.NET\assembly\GAC_64\sapnco\v4.0_3.0.0.42__50436dc
a5c7f7d23

C:\Windows\Microsoft.NET\assembly\GAC_64\sapnco_utils\v4.0_3.0.0.42__504
36dca5c7f7d23

3. Verify that the binaries aren't in Program Files. Make sure the following locations
do NOT exist or are empty:

C:\Program Files\SAP\SAP_DotNetConnector3_Net40_x64

C:\Program Files (x86)\sap\SAP_DotNetConnector3_Net40_x86

4. Reinstall the connector, and remember to select the Install assemblies to GAC option. We recommend that you use the latest version, 3.0.23.

Troubleshooting error messages

SAP BW ErrorCode method not found

Method not found: 'Int32 SAP.Middleware.Connector.RfcBaseException.get_ErrorCode()'

This error is thrown when an error occurs on the SAP BW server and the SAP .NET
connector tries to retrieve information about that error. However, this error may be
hiding the real error. This error can occur when:

Using an old version of the SAP .NET connector.

Multiple versions of the SAP .NET connector are installed.

The SAP .NET connector was installed twice, once in the Global Assembly Cache
(GAC) and once not in the GAC.

Follow the instructions under Perform clean installation of SAP .NET connector to
reinstall the connector.

Reinstalling won't solve the underlying problem, but it will surface the actual error message.

Exception: The type initializer for 'Microsoft.Mashup.Engine1.Library.SapBusinessWarehouse.SapBwMicrosoftProviderFactoryService' threw an exception.
Follow the instructions under Perform clean installation of SAP .NET connector to reinstall the connector.

This connector requires one or more additional components
If you receive this error message, use the following troubleshooting steps:

1. Verify that the version of the SAP .NET connector is installed in the correct bit
length. If you have Power BI Desktop 64-bit installed, make sure you installed the
64-bit SAP .NET connector.

2. Verify that, while installing the SAP .NET Connector, the Install assemblies to GAC option was checked. To verify that the assemblies are installed in the GAC, open Windows Explorer and go to:

C:\Windows\Microsoft.NET\assembly\GAC_64\sapnco

For example, the full path might be:


C:\Windows\Microsoft.NET\assembly\GAC_64\sapnco\v4.0_3.0.0.42__50436dca
5c7f7d23\sapnco.dll

If you installed the 32-bit version of the SAP .NET connector, it would be
C:\Windows\Microsoft.NET\assembly\GAC_32\sapnco\v4.0_3.0.0.42__50436dca5c7f7d23
\sapnco.dll (and you’d need a 32-bit version of Power BI Desktop).

Another way to check the GAC is to use gacutil (one of the options for disabling strong
name signing). You’d need to run it from a 64-bit command prompt. You can check the
contents of the GAC by opening a command prompt, navigating to the gacutil.exe path
and executing:

gacutil -l

For example, in the output you should see:

sapnco, Version=3.0.0.42, Culture=neutral, PublicKeyToken=50436dca5c7f7d23,


processorArchitecture=AMD64 sapnco_utils, Version=3.0.0.42, Culture=neutral,
PublicKeyToken=50436dca5c7f7d23, processorArchitecture=AMD64

"No RFC authorization for function …"


Implementation 2.0 requires access to the following BAPIs. To resolve, contact the SAP
Basis team and request permissions to these BAPIs and RFCs for the user.

Connectivity:
RFC_PING
RFC_METADATA_GET

MDX execution:
RSR_MDX_CREATE_OBJECT
BAPI_MDDATASET_CREATE_OBJECT
BAPI_MDDATASET_SELECT_DATA
BAPI_MDDATASET_DELETE_OBJECT
RSR_MDX_GET_AXIS_INFO
RSR_MDX_GET_AXIS_DATA
RSR_MDX_GET_CELL_DATA
BAPI_MDDATASET_GET_AXIS_INFO
BAPI_MDDATASET_GET_AXIS_DATA
BAPI_MDDATASET_GET_CELL_DATA

ExecutionMode flattening:
RSR_MDX_GET_FLAT_DATA
RSR_MDX_GET_FS_DATA
BAPI_MDDATASET_GET_FLAT_DATA
BAPI_MDDATASET_GET_FS_DATA

ExecutionMode streaming:
BAPI_MDDATASET_GET_STREAMDATA
BAPI_MDDATASET_GET_STREAMINFO

ExecutionMode BasXml:
RSR_MDX_BXML_GET_DATA
RSR_MDX_BXML_GET_GZIP_DATA
RSR_MDX_BXML_GET_INFO
RSR_MDX_BXML_SET_BINDING

Metadata:
BAPI_MDPROVIDER_GET_DIMENSIONS
BAPI_MDPROVIDER_GET_CATALOGS
BAPI_MDPROVIDER_GET_CUBES
BAPI_MDPROVIDER_GET_MEASURES
BAPI_MDPROVIDER_GET_HIERARCHYS
BAPI_MDPROVIDER_GET_LEVELS
BAPI_MDPROVIDER_GET_PROPERTIES
BAPI_MDPROVIDER_GET_MEMBERS
BAPI_MDPROVIDER_GET_VARIABLES

Information:
BAPI_IOBJ_GETDETAIL (required for typed dimensions (DATS, TIMS))
BAPI_USER_GET_DETAIL (only used for flattening interface)
RFC_READ_TABLE (required for catalog names and certain variable values calls)

Might be called by the underlying SAP .NET connector:


RFC_GET_FUNCTION_INTERFACE
FUNCTION_IMPORT_INTERFACE
DDIF_FIELDINFO_GET
SYSTEM_FINISH_ATTACH_GUI
BGRFC_DEST_CONFIRM
BGRFC_CHECK_UNIT_STATE_SERVER
BGRFC_DEST_SHIP
ARFC_DEST_SHIP
RFC_FUNCTION_SEARCH
RFC_SYSTEM_INFO
RFC_SET_REG_SERVER_PROPERTY
RFC_DOCU
SEO_GET_CLIF_REMOTE
SYSTEM_PREPARE_ATTACH_GUI
API_CLEAR_TID
ARFC_DEST_CONFIRM

Method not found 'Int32 SAP.Middleware.Connector.RfcBaseException.get_ErrorCode()'
Verify that the SAP .NET connector is installed properly. Refer to Perform clean
installation of SAP .NET connector.

This error appears when the installed version in the GAC is lower than the expected
3.0.18.0 version. SAP Note 2417315 discusses this scenario.

Connection string properties set by the connector


When both SNC Partner Name and SNC Library are provided, the SAP BW Application
Server connector (implementation 2.0) will set these properties in the connection string:

SNC_MODE—SncModeApply
SNC_LIB—with the library path specified; if it's an environment variable, it's
expanded at this point
SNC_PARTNERNAME—with the value provided
SNC_QOP = RfcConfigParameters.RfcSncQOP.Default

These are used for both SAP BW Application Server and SAP BW Message Server
connections.

For both connection types, the connector sets:

LANG (Language)
CLIENT

For the SAP BW Application Server connection, the connector sets:

ASHOST (AppServerHost)
SYSNR (SystemNumber)

For SAP BW Message Server connections, the connector sets:

MSHOST (MessageServerHost)
SYSID (SystemID)
GROUP (LogonGroup)

Invalid MDX command with <internal>


This error comes directly from the SAP BW server. Implementation 1.0 of the connector, based on SAP NetWeaver RFC, didn't expose these errors to the user, returning an empty result set instead.

This issue is discussed in the following SAP Notes. Access to these notes requires an S-
user. Contact your SAP Basis team to apply the relevant fixes for this issue.

1084454—MDX: System error "Invalid MDX command with <internal>"


1282785—MDX: System error "Invalid MDX command with <internal>"
401607—Errors at MDX command with CROSSJOIN in the slicer
1786009—Invalid MDX when using a member formula with special char

Additionally, for other similar errors, you can review the contents of the following SAP
notes, and apply them as appropriate for your environment:

1142664—MDX: Composite SAP Note about performance improvements


1156101—MDX: Composite SAP Note about incorrect data

Issues and limitations

Changing variable names on an SAP cube puts a DirectQuery report in a broken, unrecoverable state
The following symptoms occur:

Error message— [Expression.Error] The import [XXXX] matches no exports.

In the logs— Message: [Expression.Error] The key didn't match any rows in the
table.

StackTrace:

at Microsoft.Mashup.Engine1.Runtime.TableValue.get_Item(Value key)
at
Microsoft.Mashup.Engine1.Library.Cube.CubeParametersModule.Cube.ApplyPa
rameterFunctionValue.GetParameterValue(CubeValue cubeValue, Value
parameter)
at
Microsoft.Mashup.Engine1.Library.Cube.CubeParametersModule.Cube.ApplyPa
rameterFunctionValue.TypedInvoke(TableValue cube, Value parameter,
Value arguments)
Detail: [Key = [Id = \"[!V000004]\"], Table = #table({...}, {...})]

One possible workaround is to:

1. Make a copy of the PBIX file (as things might break).

2. Add an environment variable called PBI_AlwaysEnableQueryEditor with a value of true. This setting will allow access to the query editor even in DirectQuery mode.

7 Note

This environment variable is unsupported, so it should only be used as outlined here.

3. Right-click the "Cube" query and select Advanced Editor.

4. The query should have a line that starts with {Cube.ApplyParameter and references "[!V000004]" (the missing parameter). Remove that line. A sketch of what such a step can look like is shown after this procedure.

5. Select Done.

6. Close the Power Query Editor.

7. Refresh the affected visual.

If the above workaround doesn't work, the only alternative fix is for you to recreate the
report.
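
For reference, the following sketch shows roughly the kind of step you're looking for in the Advanced Editor. The server values, cube name, and variable value are hypothetical placeholders, not the exact query Power BI generates; only the Cube.ApplyParameter line that references the stale variable is the part to remove.

Power Query M

let
    // Placeholder values for illustration only; your generated query will differ.
    Source = SapBusinessWarehouse.Cubes("server", "system", "clientId", [Implementation = "2.0"]),
    bwCube = Source{[Name = "SomeCube"]}[Data],
    // A step like this is what step 4 asks you to remove: it applies the variable whose
    // identifier ("[!V000004]" in the error) no longer exists after the rename.
    #"Applied parameter" = Cube.ApplyParameter(bwCube, "[!V000004]", {"SomeValue"})
in
    #"Applied parameter"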

Numeric data from SAP BW

7 Note

The following information only applies when using Implementation 1.0 of the SAP
BW connector or Implementation 2.0 of the SAP BW connector with Flattening
mode (when ExecutionMode=67).

User accounts in SAP BW have default settings for how decimal or date/time values are
formatted when displayed to the user in the SAP GUI.
The default settings are maintained in the SAP system in the User Profile for an account,
and the user can view or change these settings in the SAP GUI with the menu path
System > User Profile > Own Data.

Power BI Desktop queries the SAP system for the decimal notation of the connected
user and uses that notation to format decimal values in the data from SAP BW.

SAP BW returns decimal data with either a , (comma) or a . (dot) as the decimal
separator. To specify which of those SAP BW should use for the decimal separator, the
driver used by Power BI Desktop makes a call to BAPI_USER_GET_DETAIL . This call returns
a structure called DEFAULTS , which has a field called DCPFM that stores Decimal Format
Notation. The field takes one of the following values:

' ' (space) = Decimal point is comma: N.NNN,NN


'X' = Decimal point is period: N,NNN.NN
'Y' = Decimal point is N NNN NNN,NN

Customers who reported this issue found that the call to BAPI_USER_GET_DETAIL fails for a particular user (which causes the incorrect data to be shown), with an error message similar to the following:

XML

You are not authorized to display users in group TI:


<item>
<TYPE>E</TYPE>
<ID>01</ID>
<NUMBER>512</NUMBER>
<MESSAGE>You are not authorized to display users in group
TI</MESSAGE>
<LOG_NO/>
<LOG_MSG_NO>000000</LOG_MSG_NO>
<MESSAGE_V1>TI</MESSAGE_V1>
<MESSAGE_V2/>
<MESSAGE_V3/>
<MESSAGE_V4/>
<PARAMETER/>
<ROW>0</ROW>
<FIELD>BNAME</FIELD>
<SYSTEM>CLNTPW1400</SYSTEM>
</item>

To solve this error, users must ask their SAP admin to grant the SAP BW user being used
in Power BI the right to execute BAPI_USER_GET_DETAIL . It’s also worth verifying that the
user has the required DCPFM value, as described earlier in this troubleshooting solution.

Connectivity for SAP BEx queries


You can perform BEx queries in Power BI Desktop by enabling a specific property, as
shown in the following image:

MDX interface limitation


A limitation of the MDX interface is that long variables lose their technical name and are
replaced by V00000#.

No data preview in Navigator window


In some cases, the Navigator dialog box doesn't display a data preview and instead
provides an object reference not set to an instance of an object error message.

SAP users need access to specific BAPI function modules to get metadata and retrieve
data from SAP BW's InfoProviders. These modules include:

BAPI_MDPROVIDER_GET_CATALOGS
BAPI_MDPROVIDER_GET_CUBES
BAPI_MDPROVIDER_GET_DIMENSIONS
BAPI_MDPROVIDER_GET_HIERARCHYS
BAPI_MDPROVIDER_GET_LEVELS
BAPI_MDPROVIDER_GET_MEASURES
BAPI_MDPROVIDER_GET_MEMBERS
BAPI_MDPROVIDER_GET_VARIABLES
BAPI_IOBJ_GETDETAIL

To solve this issue, verify that the user has access to the various MDPROVIDER modules
and BAPI_IOBJ_GETDETAIL . To further troubleshoot this or similar issues, you can enable
tracing. Select File > Options and settings > Options. In Options, select Diagnostics,
then select Enable tracing. Attempt to retrieve data from SAP BW while tracing is active,
and examine the trace file for more detail.

Memory Exceptions
In some cases, you might encounter one of the following memory errors:

Message: No more memory available to add rows to an internal table.

Message: [DataSource.Error] SAP Business Warehouse: The memory request for [number] bytes could not be complied with.

Message: The memory request for [number] bytes could not be complied with.

These memory exceptions are from the SAP BW server and are due to the server
running out of available memory to process the query. This might happen when the
query returns a large set of results or when the query is too complex for the server to
handle, for example, when a query has many crossjoins.
To resolve this error, the recommendation is to simplify the query or divide it into
smaller queries. If possible, push more aggregation to the server. Alternatively, contact
your SAP Basis team to increase the resources available in the server.

Loading text strings longer than 60 characters in Power BI Desktop fails
In some cases, you may find that text strings are being truncated to 60 characters in Power BI Desktop.

First, follow the instructions in 2777473 - MDX: FAQ for Power BI accessing BW or
BW/4HANA and see if that resolves your issue.

Because the Power Query SAP Business Warehouse connector uses the MDX interface
provided by SAP for 3rd party access, you'll need to contact SAP for possible solutions
as they own the layer between the MDX interface and the SAP BW server. Ask how "long
text is XL" can be specified for your specific scenario.

CHAR limit of 60 despite "long text" in SAP BW


There's a known limitation where "long text" characteristics in SAP BW appear with a 60
character limit in Power BI. This character limit is due to a limitation in the MDX
interface, and there's no known workaround available. SAP has documented this MDX
limitation in this SAP Note .
Migrating to implementation 2.0 when using DirectQuery
Due to the deprecation of implementation 1.0 of the SAP Business Warehouse connector, you might need to update your queries to take advantage of implementation 2.0. When you use DirectQuery, access to the query editor is restricted, so you can't easily transition to implementation 2.0 without recreating your entire query. The workaround is to add a system environment variable that allows access to the query editor. Note that the following steps aren't officially supported, and should only be used as outlined here.

1. Create a new environment variable either by navigating to File Explorer > This PC
> Properties > Advanced system settings > Environment Variables > System
Variables > New, or by opening a command prompt and entering sysdm.cpl and
then selecting New under System Variables.
2. Name the environment variable PBI_AlwaysEnableQueryEditor and set the value to true. This variable setting allows access to the query editor even in DirectQuery mode.
3. In Power BI Desktop, in the Home tab, select Transform Data to open the Power
Query editor.
4. Update the query to use implementation 2.0 by following these instructions,
starting with Step 2 in that article.

Your end query should look something like SapBusinessWarehouse.Cubes("server", "system", "clientId", [Implementation = "2.0"]).
SAP HANA database
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Analysis Services

Authentication Types Supported Basic


Database
Windows

Function Reference Documentation SapHana.Database

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
You'll need an SAP account to sign in to the website and download the drivers. If you're
unsure, contact the SAP administrator in your organization.

To use SAP HANA in Power BI Desktop or Excel, you must have the SAP HANA ODBC
driver installed on the local client computer for the SAP HANA data connection to work
properly. You can download the SAP HANA Client tools from SAP Development Tools ,
which contains the necessary ODBC driver. Or you can get it from the SAP Software
Download Center . In the Software portal, search for the SAP HANA CLIENT for
Windows computers. Since the SAP Software Download Center changes its structure
frequently, more specific guidance for navigating that site isn't available. For instructions
about installing the SAP HANA ODBC driver, go to Installing SAP HANA ODBC Driver on
Windows 64 Bits .
To use SAP HANA in Excel, you must have either the 32-bit or 64-bit SAP HANA ODBC
driver (depending on whether you're using the 32-bit or 64-bit version of Excel) installed
on the local client computer.

This feature is only available in Excel for Windows if you have Office 2019 or a Microsoft 365 subscription. If you're a Microsoft 365 subscriber, make sure you have the latest version of Office.

HANA 1.0 SPS 12 rev 122.09, HANA 2.0 SPS 3 rev 30, and BW/4HANA 2.0 are supported.

Capabilities Supported
Import
Direct Query (Power BI Datasets)
Advanced
SQL Statement

Connect to an SAP HANA database from Power Query Desktop
To connect to an SAP HANA database from Power Query Desktop:

1. Select Get Data > SAP HANA database in Power BI Desktop or From Database >
From SAP HANA Database in the Data ribbon in Excel.

2. Enter the name and port of the SAP HANA server you want to connect to. The
example in the following figure uses SAPHANATestServer on port 30015 .

By default, the port number is set to support a single container database. If your
SAP HANA database can contain more than one multitenant database container,
select Multi-container system database (30013). If you want to connect to a
tenant database or a database with a non-default instance number, select Custom
from the Port drop-down menu.

If you're connecting to an SAP HANA database from Power BI Desktop, you're also
given the option of selecting either Import or DirectQuery. The example in this
article uses Import, which is the default (and the only mode for Excel). For more
information about connecting to the database using DirectQuery in Power BI
Desktop, go to Connect to SAP HANA data sources by using DirectQuery in Power
BI.

You can also enter an SQL statement or enable column binding from Advanced options. More information: Connect using advanced options

Once you've entered all of your options, select OK.

3. If you're accessing a database for the first time, you'll be asked to enter your
credentials for authentication. In this example, the SAP HANA server requires
database user credentials, so select Database and enter your user name and
password. If necessary, enter your server certificate information.

Also, you may need to validate the server certificate. For more information about
using validate server certificate selections, see Using SAP HANA encryption. In
Power BI Desktop and Excel, the validate server certificate selection is enabled by
default. If you've already set up these selections in ODBC Data Source
Administrator, clear the Validate server certificate check box. To learn more about
using ODBC Data Source Administrator to set up these selections, go to Configure
SSL for ODBC client access to SAP HANA.

For more information about authentication, go to Authentication with a data


source.

Once you've filled in all required information, select Connect.

4. From the Navigator dialog box, you can either transform the data in the Power
Query editor by selecting Transform Data, or load the data by selecting Load.

Connect to an SAP HANA database from Power Query Online
To connect to SAP HANA data from Power Query Online:

1. From the Data sources page, select SAP HANA database.

2. Enter the name and port of the SAP HANA server you want to connect to. The
example in the following figure uses SAPHANATestServer on port 30015 .

3. Optionally, enter an SQL statement from Advanced options. More information: Connect using advanced options

4. Select the name of the on-premises data gateway to use for accessing the
database.

7 Note

You must use an on-premises data gateway with this connector, whether your
data is local or online.

5. Choose the authentication kind you want to use to access your data. You'll also
need to enter a username and password.

7 Note

Currently, Power Query Online only supports Basic authentication.

6. Select Use Encrypted Connection if you're using any encrypted connection, then
choose the SSL crypto provider. If you're not using an encrypted connection, clear
Use Encrypted Connection. More information: Enable encryption for SAP HANA
7. Select Next to continue.

8. From the Navigator dialog box, you can either transform the data in the Power
Query editor by selecting Transform Data, or load the data by selecting Load.

Connect using advanced options


Power Query provides a set of advanced options that you can add to your query if
needed.

The following table describes all of the advanced options you can set in Power Query.

Advanced option Description

SQL Statement More information: Import data from a database using native database query

Enable column binding Binds variables to the columns of an SAP HANA result set when fetching data. May potentially improve performance at the cost of slightly higher memory utilization. This option is only available in Power Query Desktop. More information: Enable column binding

ConnectionTimeout A duration that controls how long to wait before abandoning an attempt to make a connection to the server. The default value is 15 seconds.

CommandTimeout A duration that controls how long the server-side query is allowed to run before it's canceled. The default value is ten minutes.
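
If you prefer to set these options in the formula bar or Advanced Editor instead of the dialog, the following is a minimal sketch. The server address and duration values are placeholders; the option names match the advanced options described in the table above.

Power Query M

SapHana.Database(
    "myserver:30015",
    [
        Implementation = "2.0",
        EnableColumnBinding = true,
        ConnectionTimeout = #duration(0, 0, 0, 30),  // give up connecting after 30 seconds
        CommandTimeout = #duration(0, 0, 20, 0)      // cancel server-side queries after 20 minutes
    ]
)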

Supported features for SAP HANA


The following list shows the supported features for SAP HANA. Not all features listed
here are supported in all implementations of the SAP HANA database connector.

Both the Power BI Desktop and Excel connector for an SAP HANA database use the
SAP ODBC driver to provide the best user experience.

In Power BI Desktop, SAP HANA supports both DirectQuery and Import options.

Power BI Desktop supports HANA information models, such as Analytic and Calculation Views, and has optimized navigation.

With SAP HANA, you can also use SQL commands in the native database query
SQL statement to connect to Row and Column Tables in HANA Catalog tables,
which aren't included in the Analytic/Calculation Views provided by the Navigator
experience. You can also use the ODBC connector to query these tables.

Power BI Desktop includes Optimized Navigation for HANA Models.

Power BI Desktop supports SAP HANA Variables and Input parameters.

Power BI Desktop supports HDI-container-based Calculation Views.

The SapHana.Database function now supports connection and command timeouts. More information: Connect using advanced options

To access your HDI-container-based Calculation Views in Power BI, ensure that the HANA database users you use with Power BI have permission to access the
HDI runtime container that stores the views you want to access. To grant this
access, create a Role that allows access to your HDI container. Then assign the
role to the HANA database user you'll use with Power BI. (This user must also
have permission to read from the system tables in the _SYS_BI schema, as
usual.) Consult the official SAP documentation for detailed instructions on how
to create and assign database roles. This SAP blog post may be a good place
to start.

There are currently some limitations for HANA variables attached to HDI-based
Calculation Views. These limitations are because of errors on the HANA side.
First, it isn't possible to apply a HANA variable to a shared column of an HDI-
container-based Calculation View. To fix this limitation, upgrade to HANA 2
version 37.02 and onwards or to HANA 2 version 42 and onwards. Second,
multi-entry default values for variables and parameters currently don't show up
in the Power BI UI. An error in SAP HANA causes this limitation, but SAP hasn't
announced a fix yet.

Enable column binding


Data fetched from the data source is returned to the application in variables that the
application has allocated for this purpose. Before this can be done, the application must
associate, or bind, these variables to the columns of the result set; conceptually, this
process is the same as binding application variables to statement parameters. When the
application binds a variable to a result set column, it describes that variable - address,
data type, and so on - to the driver. The driver stores this information in the structure it
maintains for that statement and uses the information to return the value from the
column when the row is fetched.

Currently, when you use Power Query Desktop to connect to an SAP HANA database,
you can select the Enable column binding advanced option to enable column binding.

You can also enable column binding in existing queries or in queries used in Power
Query Online by manually adding the EnableColumnBinding option to the connection in
the Power Query formula bar or advanced editor. For example:

Power Query M

SapHana.Database("myserver:30015", [Implementation = "2.0",


EnableColumnBinding = true]),

There are limitations associated with manually adding the EnableColumnBinding option:

Enable column binding works in both Import and DirectQuery mode. However,
retrofitting an existing DirectQuery query to use this advanced option isn't
possible. Instead, a new query must be created for this feature to work correctly.
In SAP HANA Server version 2.0 or later, column binding is all or nothing. If some
columns can’t be bound, none will be bound, and the user will receive an
exception, for example, DataSource.Error: Column MEASURE_UNIQUE_NAME of type
VARCHAR cannot be bound (20002 > 16384) .

SAP HANA version 1.0 servers don't always report correct column lengths. In this
context, EnableColumnBinding allows for partial column binding. For some queries,
this could mean that no columns are bound. When no columns are bound, no
performance benefits are gained.

Native query support in the SAP HANA database connector
The Power Query SAP HANA database connector supports native queries. For
information about how to use native queries in Power Query, go to Import data from a
database using native database query.
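
As a quick illustration, a native query is passed to the connector through Value.NativeQuery. The following minimal sketch uses placeholder server and view names (the same placeholders the parameter examples later in this article use).

Power Query M

let
    // Placeholder server address and view name; replace with your own.
    Source = Value.NativeQuery(
        SapHana.Database("myhanaserver:30015", [Implementation = "2.0"]),
        "select * from ""_SYS_BIC"".""DEMO/CV_ALL_TYPES"""
    )
in
    Source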

Query folding on native queries


The Power Query SAP HANA database connector now supports query folding on native
queries. More information: Query folding on native queries

7 Note

In the Power Query SAP HANA database connector, native queries don't support
duplicate column names when EnableFolding is set to true.

Parameters in native queries


The Power Query SAP HANA database connector now supports parameters in native
queries. You can specify parameters in native queries by using the Value.NativeQuery
syntax.

Unlike other connectors, the SAP HANA database connector supports EnableFolding =
True and specifying parameters at the same time.

To use parameters in a query, you place question marks (?) in your code as placeholders. To specify a parameter, you provide the SqlType text value and a value for that SqlType in Value. Value can be any M value, but it must be assignable to the specified SqlType.
There are multiple ways of specifying parameters:

Providing just the values as a list:

Power Query M

{ "Seattle", 1, #datetime(2022, 5, 27, 17, 43, 7) }

Providing the values and the type as a list:

Power Query M

{ [ SqlType = "CHAR", Value = "M" ],


[ SqlType = "BINARY", Value = Binary.FromText("AKvN",
BinaryEncoding.Base64) ],
[ SqlType = "DATE", Value = #date(2022, 5, 27) ] }

Mix and match the two:

Power Query M

{ "Seattle", 1, [ SqlType = "SECONDDATE", Value = #datetime(2022, 5, 27, 17, 43, 7) ] }

SqlType follows the standard type names defined by SAP HANA. For example, the

following list contains the most common types used:

BIGINT
BINARY
BOOLEAN
CHAR
DATE
DECIMAL
DOUBLE
INTEGER
NVARCHAR
SECONDDATE
SHORTTEXT
SMALLDECIMAL
SMALLINT
TIME
TIMESTAMP
VARBINARY
VARCHAR

The following example demonstrates how to provide a list of parameter values.

Power Query M

let
Source = Value.NativeQuery(
SapHana.Database(
"myhanaserver:30015",
[Implementation = "2.0"]
),
"select ""VARCHAR_VAL"" as ""VARCHAR_VAL""
from ""_SYS_BIC"".""DEMO/CV_ALL_TYPES""
where ""VARCHAR_VAL"" = ? and ""DATE_VAL"" = ?
group by ""VARCHAR_VAL""
",
{"Seattle", #date(1957, 6, 13)},
[EnableFolding = true]
)
in
Source

The following example demonstrates how to provide a list of records (or mix values and
records):

Power Query M

let
Source = Value.NativeQuery(
SapHana.Database(Server, [Implementation="2.0"]),
"select
""COL_VARCHAR"" as ""COL_VARCHAR"",
""ID"" as ""ID"",
sum(""DECIMAL_MEASURE"") as ""DECIMAL_MEASURE""
from ""_SYS_BIC"".""DEMO/CV_ALLTYPES""
where
""COL_ALPHANUM"" = ? or
""COL_BIGINT"" = ? or
""COL_BINARY"" = ? or
""COL_BOOLEAN"" = ? or
""COL_DATE"" = ?
group by
""COL_ALPHANUM"",
""COL_BIGINT"",
""COL_BINARY"",
""COL_BOOLEAN"",
""COL_DATE"",
{
[ SqlType = "CHAR", Value = "M" ],
// COL_ALPHANUM - CHAR
[ SqlType = "BIGINT", Value = 4 ],
// COL_BIGINT - BIGINT
[ SqlType = "BINARY", Value = Binary.FromText("AKvN",
BinaryEncoding.Base64) ], // COL_BINARY - BINARY
[ SqlType = "BOOLEAN", Value = true ],
// COL_BOOLEAN - BOOLEAN
[ SqlType = "DATE", Value = #date(2022, 5, 27) ],
// COL_DATE - TYPE_DATE
} ,
[EnableFolding=false]
)
in
Source

Support for dynamic attributes


The way in which the SAP HANA database connector treats calculated columns has been
improved. The SAP HANA database connector is a "cube" connector, and there are
some sets of operations (add items, collapse columns, and so on) that happen in "cube"
space. This cube space is exhibited in the Power Query Desktop and Power Query Online
user interface by the "cube" icon that replaces the more common "table" icon.

Before, when you added a table column (or another transformation that internally adds
a column), the query would "drop out of cube space", and all operations would be done
at a table level. At some point, this drop out could cause the query to stop folding.
Performing cube operations after adding a column was no longer possible.

With this change, the added columns are treated as dynamic attributes within the cube.
Having the query remain in cube space for this operation has the advantage of letting
you continue using cube operations even after adding columns.

7 Note
This new functionality is only available when you connect to Calculation Views in
SAP HANA Server version 2.0 or higher.

The following sample query takes advantage of this new capability. In the past, you
would get a "the value is not a cube" exception when applying
Cube.CollapseAndRemoveColumns.

Power Query M

let
Source = SapHana.Database("someserver:someport",
[Implementation="2.0"]),
Contents = Source{[Name="Contents"]}[Data],
SHINE_CORE_SCHEMA.sap.hana.democontent.epm.models =
Contents{[Name="SHINE_CORE_SCHEMA.sap.hana.democontent.epm.models"]}[Data],
PURCHASE_ORDERS1 =
SHINE_CORE_SCHEMA.sap.hana.democontent.epm.models{[Name="PURCHASE_ORDERS"]}
[Data],
#"Added Items" = Cube.Transform(PURCHASE_ORDERS1,
{
{Cube.AddAndExpandDimensionColumn, "[PURCHASE_ORDERS]", {"
[HISTORY_CREATEDAT].[HISTORY_CREATEDAT].Attribute", "[Product_TypeCode].
[Product_TypeCode].Attribute", "[Supplier_Country].
[Supplier_Country].Attribute"}, {"HISTORY_CREATEDAT", "Product_TypeCode",
"Supplier_Country"}},
{Cube.AddMeasureColumn, "Product_Price", "[Measures].
[Product_Price]"}
}),
#"Inserted Year" = Table.AddColumn(#"Added Items", "Year", each
Date.Year([HISTORY_CREATEDAT]), Int64.Type),
#"Filtered Rows" = Table.SelectRows(#"Inserted Year", each
([Product_TypeCode] = "PR")),
#"Added Conditional Column" = Table.AddColumn(#"Filtered Rows",
"Region", each if [Supplier_Country] = "US" then "North America" else if
[Supplier_Country] = "CA" then "North America" else if [Supplier_Country] =
"MX" then "North America" else "Rest of world"),
#"Filtered Rows1" = Table.SelectRows(#"Added Conditional Column", each
([Region] = "North America")),
#"Collapsed and Removed Columns" =
Cube.CollapseAndRemoveColumns(#"Filtered Rows1", {"HISTORY_CREATEDAT",
"Product_TypeCode"})
in
#"Collapsed and Removed Columns"

Next steps
Enable encryption for SAP HANA
The following articles contain more information that you may find useful when connecting to an SAP HANA database.

Manage your data source - SAP HANA


Use Kerberos for single sign-on (SSO) to SAP HANA
Enable encryption for SAP HANA
Article • 01/24/2023

We recommend that you encrypt connections to an SAP HANA server from Power Query
Desktop and Power Query Online. You can enable HANA encryption using SAP's
proprietary CommonCryptoLib (formerly known as sapcrypto) library. SAP recommends
using CommonCryptoLib.

7 Note

SAP no longer supports OpenSSL, and as a result, Microsoft has also discontinued its support. Use CommonCryptoLib instead.

This article provides an overview of enabling encryption using CommonCryptoLib, and


references some specific areas of the SAP documentation. We update content and links
periodically, but for comprehensive instructions and support, always refer to the official
SAP documentation. Use CommonCryptoLib to set up encryption instead of OpenSSL;
for steps to do so, go to How to Configure TLS/SSL in SAP HANA 2.0 . For steps on
how to migrate from OpenSSL to CommonCryptoLib, go to SAP Note 2093286 (s-user
required).

7 Note

The setup steps for encryption detailed in this article overlap with the setup and
configuration steps for SAML SSO. Use CommonCryptoLib as your HANA server's
encryption provider, and make sure that your choice of CommonCryptoLib is
consistent across SAML and encryption configurations.

There are four phases to enabling encryption for SAP HANA. We cover these phases
next. More information: Securing the Communication between SAP HANA Studio and
SAP HANA Server through SSL

Use CommonCryptoLib
Ensure your HANA server is configured to use CommonCryptoLib as its cryptographic
provider.
Create a certificate signing request
Create an X509 certificate signing request for the HANA server.

1. Using SSH, connect to the Linux machine that the HANA server runs on as
<sid>adm.

2. Go to the Home directory /usr/sap/<sid>/home/.ssl. The hidden .ssl file already exists if the root CA has already been created.

If you don't already have a CA you can use, you can create a root CA yourself by
following the steps outlined in Securing the Communication between SAP HANA
Studio and SAP HANA Server through SSL .

3. Run the following command:

sapgenpse gen_pse -p cert.pse -r csr.txt -k GN-dNSName:<HOSTNAME with FQDN> "CN=<HOSTNAME with FQDN>"

This command creates a certificate signing request and private key. Fill in <HOSTNAME
with FQDN> with the host name and fully qualified domain name (FQDN).

Get the certificate signed


Get the certificate signed by a certificate authority (CA) that is trusted by the client(s)
you'll use to connect to the HANA server.

1. If you already have a trusted company CA (represented by CA_Cert.pem and CA_Key.pem in the following example), sign the certificate request by running the following command:

openssl x509 -req -days 365 -in csr.txt -CA CA_Cert.pem -CAkey CA_Key.pem -CAcreateserial -out cert.pem

2. Copy the new file, cert.pem, to the server.

3. Create the HANA server certificate chain:

sapgenpse import_own_cert -p cert.pse -c cert.pem

4. Restart the HANA server.

5. Verify the trust relationship between a client and the CA you used to sign the SAP
HANA server's certificate.

The client must trust the CA used to sign the HANA server's X509 certificate before
an encrypted connection can be made to the HANA server from the client's
machine.

There are various ways to ensure this trust relationship exists using Microsoft
Management Console (mmc) or the command line. You can import the CA's X509
certificate (cert.pem) into the Trusted Root Certification Authorities folder for the
user that will establish the connection, or into the same folder for the client
machine itself, if that is desirable.

You must first convert cert.pem into a .crt file before you can import the certificate
into the Trusted Root Certification Authorities folder.

Test the connection

7 Note

Before using the procedures in this section, you must be signed in to Power BI
using your admin account credentials.
Before you can validate a server certificate in the Power BI service online, you must have
a data source already set up for the on-premises data gateway. If you don't already have
a data source set up to test the connection, you'll have to create one. To set up the data
source on the gateway:

1. From the Power BI service, select the setup icon.

2. From the drop-down list, select Manage gateways.

3. Select the ellipsis (...) next to the name of the gateway you want to use with this
connector.

4. From the drop-down list, select Add data source.

5. In Data Source Settings, enter the data source name you want to call this new
source in the Data Source Name text box.

6. In Data Source Type, select SAP HANA.

7. Enter the server name in Server, and select the authentication method.

8. Continue following the instructions in the next procedure.

Test the connection in Power BI Desktop or the Power BI service.

1. In Power BI Desktop or in the Data Source Settings page of the Power BI service,
ensure that Validate server certificate is enabled before attempting to establish a
connection to your SAP HANA server. For SSL crypto provider, select
commoncrypto. Leave the SSL key store and SSL trust store fields blank.

Power BI Desktop
Power BI service

2. Verify that you can successfully establish an encrypted connection to the server
with the Validate server certificate option enabled, by loading data in Power BI
Desktop or refreshing a published report in Power BI service.

You'll note that only the SSL crypto provider information is required. However, your
implementation might require that you also use the key store and trust store. For more
information about these stores and how to create them, go to Client-Side TLS/SSL
Connection Properties (ODBC) .

Additional information
Server-Side TLS/SSL Configuration Properties for External Communication
(JDBC/ODBC)

Next steps
Configure SSL for ODBC client access to SAP HANA
Configure SSL for ODBC client access to SAP HANA
Article • 01/24/2023

If you're connecting to an SAP HANA database from Power Query Online, you may need
to set up various property values to connect. These properties could be the SSL crypto
provider, an SSL key store, and an SSL trust store. You may also require that the
connection be encrypted. In this case, you can use the ODBC Data Source Administrator
application supplied with Windows to set up these properties.

In Power BI Desktop and Excel, you can set up these properties when you first sign in
using the Power Query SAP HANA database connector. The Validate server certificate
selection in the authentication dialog box is enabled by default. You can then enter
values in the SSL crypto provider, SSL key store, and SSL trust store properties in this
dialog box. However, all of the validate server certificate selections in the authentication
dialog box in Power BI Desktop and Excel are optional. They're optional in case you want
to use ODBC Data Source Administrator to set them up at the driver level.

7 Note

You must have the proper SAP HANA ODBC driver (32-bit or 64-bit) installed
before you can set these properties in ODBC Data Source Administrator.

If you're going to use ODBC Data Source Administrator to set up the SSL crypto
provider, SSL key store, and SSL trust store in Power BI or Excel, clear the Validate server
certificate check box when presented with the authentication dialog box.

To use ODBC Data Source Administrator to set up the validate server certificate
selections:
1. From the Windows Start menu, select Windows Administrative Tools > ODBC
Data Sources. If you're using a 32-bit version of Power BI Desktop or Excel, open
ODBC Data Sources (32-bit), otherwise open ODBC Data Sources (64-bit).

2. In the User DSN tab, select Add.

3. In the Create New Data Source dialog box, select the HDBODBC driver, and then
select Finish.
4. In the ODBC Configuration for SAP HANA dialog box, enter a Data source name.
Then enter your server and database information, and select Validate the TLS/SSL
certificate.

5. Select the Advanced button.

6. In the Advanced ODBC Connection Property Setup dialog box, select the Add
button.
7. In the Add/Modify Connection Property dialog box, enter sslCryptoProvider in
the Property text box.

8. In the Value text box, enter the name of the crypto provider you'll be using: either
sapcrypto, commoncrypto, openssl, or mscrypto.

9. Select OK.

10. You can also add the optional sslKeyStore and sslTrustStore properties and values if
necessary. If the connection must be encrypted, add ENCRYPT as the property and
TRUE as the value.

11. In the Advanced ODBC Connection Property Setup dialog box, select OK.

12. To test the connection you’ve set up, select Test connection in the ODBC
Configuration for SAP HANA dialog box.

13. When the test connection has completed successfully, select OK.

For more information about the SAP HANA connection properties, see Server-Side
TLS/SSL Configuration Properties for External Communication (JDBC/ODBC) .

7 Note

If you select Validate server certificate in the SAP HANA authentication dialog box
in Power BI Desktop or Excel, any values you enter in SSL crypto provider, SSL key
store, and SSL trust store in the authentication dialog box will override any
selections you've set up using ODBC Data Source Administrator.

Next steps
SAP HANA database connector troubleshooting
Troubleshooting
Article • 01/24/2023

The following section describes some issues that may occur while using the Power
Query SAP HANA connector, along with some possible solutions.

Known issues and limitations


There are a few limitations to using SAP HANA, shown below:

NVARCHAR strings are truncated to a maximum length of 4000 Unicode


characters.
SMALLDECIMAL isn't supported.
VARBINARY isn't supported.
Valid Dates are between 1899/12/30 and 9999/12/31.

Error: This connector requires one or more additional components to be installed
The connector looks for the driver in the registry, so if the driver wasn’t properly
installed it won’t show up.

The registry key is:

HKEY_LOCAL_MACHINE\Software\ODBC\ODBCINST.INI\ODBC Drivers

If you’re on a 64-bit machine, but Excel or Power BI Desktop is 32-bit (like the
screenshots below), you can check for the driver in the WOW6432 node instead:

HKEY_LOCAL_MACHINE\Software\WOW6432Node\ODBC\ODBCINST.INI\ODBC Drivers

Note that the driver needs to match the bit version of your Excel or Power BI Desktop. If
you’re using:

32-bit Excel/Power BI Desktop, you'll need the 32-bit ODBC driver (HDBODBC32).
64-bit Excel/Power BI Desktop, you'll need the 64-bit ODBC driver (HDBODBC).

The driver is usually installed by running hdbsetup.exe.

Finally, the driver should also show up as "ODBC DataSources 32-bit" or "ODBC
DataSources 64-bit".
Collect SAP HANA ODBC Driver traces
To capture an SAP HANA trace:

1. Open a command-line window.

2. Depending on your installation, you may need to go to C:\Program Files instead of C:\Program Files (x86). The command might also be hdbodbc_cons.exe instead of hdbodbc_cons32.exe.

3. Type the following commands:

cd C:\Program Files (x86)\sap\hdbclient
hdbodbc_cons32.exe config trace api on
hdbodbc_cons32.exe config trace sql on
hdbodbc_cons32.exe config trace debug on
hdbodbc_cons32.exe config trace short on
hdbodbc_cons32.exe config trace packet 99999999999999
hdbodbc_cons32.exe config trace filename D:\tmp\odbctraces\hana-%p.html
hdbodbc_cons32.exe trace refresh
hdbodbc_cons32.exe show all

4. Open Power BI, clear the cache, and rerun the scenario.

5. Once done, zip up the traces:

From the Log File Path in the Tracing tab of the ODBC Data Source
Administrator.
From the HANA trace based on the path configured with the command
hdbodbc_cons32.exe config trace filename.

6. Disable tracing by using the following command:

hdbodbc_cons.exe trace off

When capturing an SAP HANA trace, note the following considerations:

The trace commands should be run as the user that will be running the Mashup
process that accesses the SAP HANA server.
The trace file path you specify should be writable by the user that runs the Mashup
process.

For example:
To capture non-SSO connections from gateway, make sure you use the gateway
service user. That is, run the command-line window as the gateway user when you
want to execute the hdbodbc_cons.exe calls. Make sure that the gateway server user
can write to the log file location you specify.
To capture SSO connections from Gateway, use the SSO user.

SAP HANA: insufficient privilege


This message might be because of:

The user legitimately not having enough privileges on the view they're trying to
access.

The following known issue:

Issue: Not able to connect to SAP HANA from Power BI Desktop using SAP client 2.0 37.02, but if you downgrade the client version to 1.00.120.128, it works.

ERROR MESSAGE: External error: ERROR [S1000] [SAP AG][LIBODBCHDB DLL][HDBODBC]

General error;258 insufficient privilege: [2950] user is not authorized

Response from SAP:

Unfortunately, this is an SAP issue so you'll need to wait for a fix from SAP.
SharePoint folder
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights

Authentication Types Supported Anonymous


Microsoft Account
Windows

Function Reference Documentation SharePoint.Contents


SharePoint.Files

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

7 Note

AAD/OAuth for SharePoint on-premises isn’t supported using the on-premises data
gateway.

Capabilities supported
Folder path
Combine
Combine and load
Combine and transform
Determine the site URL
When you're connecting to a SharePoint site, you'll be asked to enter the site URL. To
find the site URL that contains your SharePoint folder, first open a page in SharePoint.
From a page in SharePoint, you can usually get the site address by selecting Home in
the navigation pane, or the icon for the site at the top. Copy the address from your web
browser's address bar and save for later.

Connect to a SharePoint folder from Power Query Desktop
To connect to a SharePoint folder:

1. From Get Data, select SharePoint folder.

2. Paste the SharePoint site URL you copied in Determine the site URL to the Site URL
text box in the SharePoint folder dialog box. In this example, the site URL is
https://contoso.sharepoint.com/marketing/data. If the site URL you enter is invalid, a warning icon will appear next to the URL text box.
Select OK to continue.

3. If this is the first time you've visited this site address, select the appropriate
authentication method. Enter your credentials and choose which level to apply
these settings to. Then select Connect.

For more information about authentication methods and level settings, go to


Authentication with a data source.

4. When you select the SharePoint folder you want to use, the file information about
all of the files in that SharePoint folder are displayed. In addition, file information
about any files in any subfolders is also displayed.

5. Select Combine & Transform Data to combine the data in the files of the selected
SharePoint folder and load the data into the Power Query Editor for editing. Or
select Combine & Load to load the data from all of the files in the SharePoint
folder directly into your app.

7 Note

The Combine & Transform Data and Combine & Load buttons are the easiest ways
to combine data found in the files of the SharePoint folder you specify. You could
also use the Load button or the Transform Data buttons to combine the files as
well, but that requires more manual steps.

Connect to a SharePoint folder from Power Query Online
To connect to a SharePoint folder:

1. From the Data sources page, select SharePoint folder.

2. Paste the SharePoint site URL you copied in Determine the site URL to the Site URL
text box in the SharePoint folder dialog box. In this example, the site URL is
https://contoso.sharepoint.com/marketing/data.
3. If the SharePoint folder is on-premises, enter the name of an on-premises data
gateway.

4. Select the authentication kind, and enter any credentials that are required.

5. Select Next.

6. When you select the SharePoint folder you want to use, the file information about
all of the files in that SharePoint folder are displayed. In addition, file information
about any files in any subfolders is also displayed.

7. Select Combine to combine the data in the files of the selected SharePoint folder
and load the data into the Power Query Editor for editing.

7 Note

The Combine button is the easiest way to combine data found in the files of
the SharePoint folder you specify. You could also use the Transform Data
buttons to combine the files as well, but that requires more manual steps.

Troubleshooting

Combining files
All of the files in the SharePoint folder you select will be included in the data to be
combined. If you have data files located in a subfolder of the SharePoint folder you
select, all of these files will also be included. To ensure that combining the file data
works properly, make sure that all of the files in the folder and the subfolders have the
same schema.

In some cases, you might have multiple folders on your SharePoint site containing
different types of data. In this case, you'll need to delete the unnecessary files. To delete
these files:

1. In the list of files from the SharePoint folder you chose, select Transform Data.

2. In the Power Query editor, scroll down to find the files you want to keep.
3. In the example shown in the screenshot above, the required files are the last rows
in the table. Select Remove Rows, enter the value of the last row before the files to
keep (in this case 903), and select OK.

4. Once you've removed all the unnecessary files, select Combine Files from the
Home ribbon to combine the data from all of the remaining files.

For more information about combining files, go to Combine files in Power Query.
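
As an alternative to removing rows by position, you can filter the file list in the query itself before combining. The following is a minimal sketch that reuses the example site URL from earlier in this article; the filter condition (keeping only .csv files) is a hypothetical choice.

Power Query M

let
    Source = SharePoint.Files("https://contoso.sharepoint.com/marketing/data"),
    // Keep only the files you actually want to combine; adjust the condition to your data.
    #"Kept CSV files" = Table.SelectRows(Source, each [Extension] = ".csv")
in
    #"Kept CSV files"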

Filename special characters


If a filename contains certain special characters, it may lead to authentication errors
because of the filename being truncated in the URL. If you're getting unusual
authentication errors, make sure all of the filenames you're using don't contain any of
the following special characters.

# % $
If these characters are present in the filename, the file owner must rename the file so
that it does NOT contain any of these characters.
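
If you need to find which files are affected, a query like the following hedged sketch can list them. The site URL reuses the article's example, and the filter is only one way to express the check.

Power Query M

let
    Source = SharePoint.Files("https://contoso.sharepoint.com/marketing/data"),
    // List the files whose names contain any of the problematic characters so they can be renamed.
    #"Problem files" = Table.SelectRows(
        Source,
        each Text.Contains([Name], "#") or Text.Contains([Name], "%") or Text.Contains([Name], "$")
    )
in
    #"Problem files"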

Permissions
When requesting document library contents, you should have Read access to the
SharePoint site as well as the document library and any folders leading to the requested
file.

Azure Active Directory authentication


Azure Active Directory authentication (AAD) isn't supported for on-premises SharePoint
sources.
SharePoint list
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Anonymous


Windows
Microsoft Account

Function Reference Documentation SharePoint.Contents


SharePoint.Files
SharePoint.Tables

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

7 Note

AAD/OAuth for SharePoint on-premises isn’t supported using the on-premises data
gateway.

Capabilities supported
Site URL

Determine the site URL


When you're connecting to a SharePoint site, you'll be asked to enter the site URL. To
find the site URL that contains your SharePoint list, first open a page in SharePoint. From
a page in SharePoint, you can usually get the site address by selecting Home in the
navigation pane, or the icon for the site at the top. Copy the address from your web
browser's address bar and save for later.

Connect to a SharePoint list from Power Query Desktop
To connect to a SharePoint list:

1. From Get Data, select SharePoint List.

2. Paste the SharePoint site URL you copied in Determine the site URL to the Site URL
field in the open dialog box.

If the URL address you enter is invalid, a warning icon will appear next to the
Site URL textbox.
Select OK to continue.

3. If this is the first time you've visited this site address, select the appropriate authentication method. Enter your credentials and choose which level to apply these settings to. Then select Connect.

For more information about authentication methods and level settings, go to


Authentication with a data source.

4. From the Navigator, you can select a location, then either transform the data in
the Power Query editor by selecting Transform Data, or load the data by selecting
Load.
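
Behind the Get Data experience, the connection is expressed with the SharePoint.Tables function. The following is a minimal sketch that reuses the root site address from the troubleshooting example later in this article; after connecting, you pick the specific list in the Navigator.

Power Query M

// Sketch only: replace the URL with your own root site address.
SharePoint.Tables("https://contoso.sharepoint.com/teams/ObjectModel/")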
Connect to a SharePoint list from Power Query Online
To connect to a SharePoint list:

1. From the Data sources page, select SharePoint list.

2. Paste the SharePoint site URL you copied in Determine the site URL to the Site URL
field in the open dialog box.

3. Enter the name of an on-premises data gateway if needed.

4. Select the authentication kind, and enter any credentials that are required.

5. Select Next.

6. From the Navigator, you can select a location, then transform the data in the
Power Query editor by selecting Next.

Troubleshooting
Use root SharePoint address
Make sure you supply the root address of the SharePoint site, without any subfolders or documents. For example, use a link similar to the following: https://contoso.sharepoint.com/teams/ObjectModel/

Inconsistent behavior around boolean data


When using the SharePoint list connector, Boolean values are represented inconsistently
as TRUE/FALSE or 1/0 in Power BI Desktop and Power BI service environments. This may
result in wrong data, incorrect filters, and empty visuals.

This issue only happens when the Data Type is not explicitly set for a column in the
Query View of Power BI Desktop. You can tell that the data type isn't set by seeing the
"ABC 123" image on the column and "Any" data type in the ribbon as shown below.

The user can force the interpretation to be consistent by explicitly setting the data type
for the column through the Power Query Editor. For example, the following image
shows the column with an explicit Boolean type.
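
The same fix can also be written directly in M with Table.TransformColumnTypes, where type logical is M's Boolean type. In the following hedged sketch, the demo table and the column name Approved are hypothetical stand-ins for your list.

Power Query M

let
    // Demo table standing in for a SharePoint list; "Approved" arrived as the Any data type,
    // so its values can show up as TRUE/FALSE or 1/0.
    Source = #table({"Title", "Approved"}, {{"Item 1", true}, {"Item 2", 0}}),
    // Explicitly set the column to Boolean so both representations are interpreted consistently.
    #"Changed type" = Table.TransformColumnTypes(Source, {{"Approved", type logical}})
in
    #"Changed type"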

Using OData to access a SharePoint List


If you use an OData feed to access a SharePoint List, there's an approximately 2100
character limitation to the URL you use to connect. More information: Maximum URL
length

Azure Active Directory authentication


Azure Active Directory authentication (AAD) isn't supported for on-premises SharePoint
sources.

Next steps
Optimize Power Query when expanding table columns
SharePoint Online list
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights

Authentication Types Supported Anonymous


Windows
Microsoft Account

Function Reference Documentation SharePoint.Contents


SharePoint.Files
SharePoint.Tables

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities supported
Site URL

Determine the site URL


When you're connecting to a SharePoint site, you'll be asked to enter the site URL. To
find the site URL that contains your SharePoint Online list, first open a page in
SharePoint. From a page in SharePoint, you can usually get the site address by selecting
Home in the navigation pane, or the icon for the site at the top. Copy the address from
your web browser's address bar and save for later.
Connect to a SharePoint Online list from Power
Query Desktop
To connect to a SharePoint Online list:

1. From Get Data, select SharePoint Online list.

2. Paste the SharePoint site URL you copied in Determine the site URL to the Site URL
field in the open dialog box.

If the URL address you enter is invalid, a warning icon will appear next to the
Site URL textbox.

You can also select either the 1.0 implementation of this connector or the 2.0
implementation. More information: Connect to SharePoint Online list v2.0

Select OK to continue.
3. If this is the first time you've visited this site address, select the appropriate
authentication method. Enter your credentials and choose which level to apply these
settings to. Then select Connect.

For more information about authentication methods and level settings, go to
Authentication with a data source.

4. From the Navigator, you can select a location, then either transform the data in
the Power Query editor by selecting Transform Data, or load the data by selecting
Load.

Connect to a SharePoint Online list from Power Query Online
To connect to a SharePoint Online list:

1. From the Data sources page, select SharePoint Online list.


2. Paste the SharePoint site URL you copied in Determine the site URL to the Site URL
field in the open dialog box.

3. Enter the name of an on-premises data gateway if needed.

4. Select the authentication kind, and enter any credentials that are required.

5. Select Next.

6. From the Navigator, you can select a location, then transform the data in the
Power Query editor by selecting Transform data.

Connect to SharePoint Online list v2.0


In the October 2020 release of Power BI Desktop, we introduced an updated version of
the SharePoint Online list connector. This connector has improved APIs and greater
usability, but isn't backward compatible with the 1.0 connector version. The 2.0
connector version has been generally available since September 2022.
To access it, you'll enter the same connector screen through step 2 in Connect to a
SharePoint Online list from Power Query Desktop. However, make sure you select 2.0
under Implementation if it isn't already selected.

With this update to the connector, we're making available two different views for the
same data:

All
Default

The All view includes all user created and system defined columns.
The Default view is what you'll see when looking at the list online in whichever view
you've set as Default in your settings. If you edit this view to add or remove either user
created or system defined columns, or if you create a new view and set it as default,
these changes propagate through the connector.

7 Note

If you set the default view in your SharePoint site to Calendar view or Board view,
SharePoint only returns the columns shown in the selected view. In this scenario,
Power BI will not retrieve all the columns in the list, even if you choose the All
option. This behavior is by design.
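
In M, the implementation and view choices surface as options on the SharePoint.Tables function. The following is a minimal sketch of a 2.0 Source step; the site URL is a placeholder, and the option field names reflect the code the connector typically generates, so treat them as assumptions.

Power Query M

let
    Source = SharePoint.Tables(
        "https://contoso.sharepoint.com/teams/ObjectModel",
        [Implementation = "2.0", ViewMode = "All"]  // ViewMode can be "All" or "Default"
    )
in
    Source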

Troubleshooting

Use root SharePoint address


Make sure you supply the root address of the SharePoint site, without any subfolders or
documents. For example, use a link similar to
https://contoso.sharepoint.com/teams/ObjectModel/ .

Timezone issues
When using the SharePoint Online list (v1.0) connector, you may notice that timezone
data doesn't match what you would expect from your browser. The SharePoint web-
based client does a local timezone conversion based on the browser's knowledge of the
user's timezone.

The backend API for SharePoint uses UTC time and sends this UTC time directly to Power
BI. Power BI doesn't convert this UTC time, but reports it to the user.
To get the time into local time, the user must do the same conversion the SharePoint client
does. An example of the column operations that would do this is:

#"Changed Type" = Table.TransformColumnTypes(#"Renamed Columns",


{{"Datewithtime", type datetimezone}}),
#"Timezone Shifted" = Table.TransformColumns(#"Changed Type",
{"Datewithtime", DateTimeZone.ToLocal})

The first operation changes the type to datetimezone , and the second operation
converts it to the computer's local time.

SharePoint join limit


This issue is limited to the SharePoint Online list v2.0 connector

The SharePoint Online list v2.0 connector uses a different API than the v1.0 connector
and, as such, is subject to a maximum of 12 join operations per query, as documented in
the SharePoint Online documentation under List view lookup threshold. This issue will
manifest as SharePoint queries failing when more than 12 columns are accessed
simultaneously from a SharePoint list. However, you can work around this situation by
creating a default view with fewer than 12 lookup columns.

Using OData to access a SharePoint Online list


If you use an OData feed to access a SharePoint Online list, there's an approximately
2100 character limitation to the URL you use to connect. More information: Maximum
URL length
SingleStore (Beta)
Article • 07/18/2023

7 Note

The following connector article is provided by SingleStore, the owner of this
connector and a member of the Microsoft Power Query Connector Certification
Program. If you have questions regarding the content of this article or have
changes you would like to see made to this article, visit the SingleStore website and
use the support channels there.

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Basic


Windows

Capabilities Supported
Import
DirectQuery (Power BI Datasets)

Connect to SingleStore
To connect Microsoft Power BI Desktop to SingleStore DB or Managed Service:

1. In the Home ribbon, from the Get Data list, select More.

2. In the Get Data dialog, select SingleStore Direct Query Connector 1.0.
3. In the SingleStore database dialog box, enter the IP address or hostname of the
SingleStore cluster in Server. In Database, enter the database name.

Under Data Connectivity mode, select the Import or DirectQuery mode, and then
select OK.
4. In the SingleStore Direct Query Connector 1.0 dialog box, in the left pane, select
the authentication type (either Windows or Basic).

7 Note

SingleStoreDB Cloud users can only use Basic authentication.

For Windows authentication, set up a SingleStore server for Kerberos
authentication. Then select the Connect button.

7 Note

You need to run Power BI with the user account that maps to the
SingleStore DB user.

For Basic authentication, enter the username and password used to connect
to SingleStore, and then select the Connect button.
5. Once authenticated—for Import or DirectQuery mode—in the Navigator dialog
box, choose the desired tables and select the Load button.

You can now use Power BI to explore SingleStore data.

Create a Custom SQL Report

7 Note

Any user who creates a custom SQL report must have only read-only access to the
SingleStore databases.
To create a new custom SQL report:

1. Open Power BI Desktop.

2. In the Home ribbon, from the Get Data list, select Blank query.

3. In the Power Query Editor dialog, specify the query in the following format (a
complete example appears after these steps):

SingleStoreODBC.Query("<ip_address_or_hostname>", "<database>", "<query>")

Submit the query.

7 Note

If you're using the server for the first time, select Edit Credentials and enter
the credentials. Go to Step 4 in Connect to SingleStore for more information.

4. Verify the data to load, and select Close & Apply.

5. If you've worked with the dataset before and it's cached in memory, refresh the
report to reset the local cache. On the Home ribbon, select Refresh.

To update the existing custom SQL reports, select the Refresh button on the Home
ribbon.
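
The following sketch shows the query from step 3 as a complete blank query; the host, database name, and SQL text are placeholders.

Power Query M

let
    Source = SingleStoreODBC.Query(
        "10.50.0.12",  // IP address or hostname of the SingleStore cluster (placeholder)
        "sales",       // database name (placeholder)
        "SELECT city, SUM(amount) AS total FROM orders GROUP BY city"  // read-only query (placeholder)
    )
in
    Source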

Modify Credentials
To modify the credentials used to connect to SingleStore:

1. In the File ribbon, select Options and settings > Data source settings.

2. In the Data source settings dialog, select SingleStore DirectQuery Connector 1.0,
and then select Edit Permissions.
SIS-CC SDMX (Beta)
Article • 07/18/2023

7 Note

The following connector article is provided by the Statistical Information System
Collaboration Community (SIS-CC), the owner of this connector and a member of
the Microsoft Power Query Connector Certification Program. If you have questions
regarding the content of this article or have changes you would like to see made to
this article, visit the SIS-CC website and use the support channels there.

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Anonymous

Function Reference Documentation —

Prerequisites
Before you get started, make sure you've properly configured the URL from the service
provider’s API. The exact process here will depend on the service provider.

Capabilities supported
Import of SDMX-CSV 2.1 format. Other formats aren't supported.

Connection instructions
To connect to SDMX Web Service data:
1. Select Get Data from the Home ribbon in Power BI Desktop. Select All from the
categories on the left, and then select SIS-CC SDMX. Then select Connect.

2. Fill in the parameters:


a. In the Data query URL, enter an SDMX REST data query URL (the web service
must support the SDMX-CSV format).
b. In Display format, select one of the options:

Show codes and labels; example: FREQ: Frequency


Show codes; example: FREQ
Show labels; example: Frequency
Optionally, enter a language preference in Label language preference
using an IETF BCP 47 tag
3. If this is the first time you're connecting to the REST web service entered in the
previous step's Data query URL, this authentication step is displayed. Because the
connection is Anonymous, select Connect.

4. Select Load to import the data into Power BI, or Transform Data to edit the query
in Power Query Editor where you can refine the query before loading into Power
BI.

Limitations and issues


This version of the connector doesn't support importing data formats SDMX-ML or
SDMX-JSON.

Next steps
If you want to submit a feature request or contribute to the open-source project, then
go to the Gitlab project site .
Smartsheet
Article • 07/18/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Organizational account

Function Reference Documentation —

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities Supported
Import

Limitations and issues

Connecting to different regions of Smartsheet


The Smartsheet connector only supports connecting to https://app.smartsheet.com .
There are currently no plans to support other domains or regions.
Snowflake
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)

Authentication Types Supported Database (Username/Password), Azure AD

Function Reference Documentation —

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities Supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Specify a text value to use as Role name
Relationship columns
Connection timeout in seconds
Command timeout in seconds
Database
Native SQL statement

Connect to Snowflake data warehouse from Power Query Desktop
To make the connection to a Snowflake computing warehouse, take the following steps:
1. Select Get Data from the Home ribbon in Power BI Desktop, select Database from
the categories on the left, select Snowflake, and then select Connect.

2. In the Snowflake window that appears, enter the name of your Snowflake server in
Server and the name of your Snowflake computing warehouse in Warehouse.
3. Optionally, enter values in any advanced options that you want to use to modify
the connection query, such as a text value to use as a Role name or a command
timeout. More information: Connect using advanced options

4. Select OK.

5. To sign in to your Snowflake computing warehouse, enter your username and
password, and then select Connect.

7 Note

Once you enter your username and password for a particular Snowflake
server, Power BI Desktop uses those same credentials in subsequent
connection attempts. You can modify those credentials by going to File >
Options and settings > Data source settings. More information: Change the
authentication method

If you want to use the Microsoft account option, the Snowflake Azure Active
Directory (Azure AD) integration must be configured on the Snowflake side. More
information: Power BI SSO to Snowflake - Getting Started

6. In Navigator, select one or multiple elements to import and use in Power BI
Desktop. Then select either Load to load the table in Power BI Desktop, or
Transform Data to open the Power Query Editor where you can filter and refine the
set of data you want to use, and then load that refined set of data into Power BI
Desktop.
7. Select Import to import data directly into Power BI, or select DirectQuery, then
select OK. More information: Use DirectQuery in Power BI Desktop

7 Note

Azure Active Directory (Azure AD) Single Sign-On (SSO) only supports
DirectQuery.

Connect to a Snowflake database from Power Query Online
To make the connection, take the following steps:
1. Select the Snowflake option in the connector selection.

2. In the Snowflake dialog that appears, enter the name of the server and warehouse.

3. Enter any values in the advanced options you want to use. If there are any
advanced options not represented in the UI, you can edit them in the Advanced
Editor in Power Query later.

4. Enter your connection credentials, including selecting or creating a new
connection, which gateway you would like to use, and a username and password.

5. Select Next to connect to the database.

6. In Navigator, select the data you require, then select Transform data to transform
the data in Power Query Editor.
Connect using advanced options
Power Query provides a set of advanced options that you can add to your query if
needed.

The following table lists all of the advanced options you can set in Power Query.

Advanced option Description

Role name Specifies the role that the report uses via the driver. This role must be
available to the user, otherwise no role will be set.

Include relationship If checked, includes columns that might have relationships to other tables.
columns If this box is cleared, you won’t see those columns.

Connection timeout Specifies how long to wait for a response when interacting with the
in seconds Snowflake service before returning an error. Default is 0 (no timeout).

Command timeout in Specifies how long to wait for a query to complete before returning an
seconds error. Default is 0 (no timeout).

Database Specifies a specific database in the warehouse. This option is only
available in Power Query Desktop.

SQL Statement For information, go to Import data from a database using native database
query. This option is only available in Power Query Desktop.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to your Snowflake database.
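
In M, the connection surfaces as a call to the Snowflake.Databases function, with advanced options passed as an optional record. The following is a minimal sketch; the server, warehouse, and role values are placeholders, and the Role field name is an assumption based on the option table above.

Power Query M

let
    Source = Snowflake.Databases(
        "myorg-myaccount.snowflakecomputing.com",  // Snowflake server (placeholder)
        "COMPUTE_WH",                              // computing warehouse (placeholder)
        [Role = "ANALYST"]                         // role name; field name is an assumption
    )
in
    Source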

Limitations and known issues


Hyphens in database names
If a database name has a hyphen in it, you may encounter an ODBC: ERROR[42000] SQL
compilation error. This is a known issue, and there's currently no known workaround. A
fix is being investigated, and the documentation here will be updated when the fix is
ready.

Additional information
Connect to Snowflake in Power BI Service
Socialbakers (Beta)
Article • 07/14/2023

7 Note

The following connector article is provided by Socialbakers (now Emplifi), the owner
of this connector and a member of the Microsoft Power Query Connector
Certification Program. If you have questions regarding the content of this article or
have changes you would like to see made to this article, visit the Emplifi website
and use the support channels there.

Summary
Item Description

Release state Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Basic

Function Reference Documentation —

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
To use the Socialbakers (Emplifi) Connector, you must have Socialbakers (Emplifi)
credentials (Token and Secret). Contact the Emplifi Support team to get yours, if you
don't have them. The credentials allow the user to get the data and metrics from the
profiles the user has added to the Emplifi platform.

Capabilities Supported
Import

Connect to Socialbakers API from Power BI Desktop

Authentication
When the connector is started for the first time, authentication is needed. Enter your
Token and Secret to the new modal window. Credentials can be provided to you by the
Emplifi Support team.

Once you sign in, the authentication information is automatically stored by Power BI for
future use. It can be found under File > Options and settings > Data source settings >
Global permissions.

At any time, the permissions can be cleared (or edited) and new credentials can be
entered.
Navigator
Once authenticated, a Navigator window pops up. All possible data sources can be
selected here.
Not all data sources are the same. The differences are described later.

Example usage
1. Choose the Data Source you would like to work with by selecting the checkbox.

Parameters can be selected on the right section of the Navigator.


2. Enter the data and select the metrics in the Parameters section. For more details
about the metrics, visit the Socialbakers API documentation .

An example selection follows below:


Start Date: 07/01/2021
End Date: 07/30/2021
End Today: No
Metrics: engagement_rate (FB, IG, TW, YT, VK), likes(FB, IG, TW, YT)
Time dimension: day
Dimensions: profile
Profiles Facebook: Emplifi (<profile_id>)
Profiles Instagram: Emplifi (<profile_id>)
Profiles YouTube: Emplifi (<profile_id>)

3. Once all required parameters are selected, select Apply.

7 Note

Not all parameters are explicitly mandatory, but they could be needed for
specific selections. For example, the profile selection is optional, but you still
need to select some profiles to get any data.
4. Once all data and metrics are selected, use the Load button to load the data to the
report. It's also possible to Transform Data before loading it into the report.

7 Note

It's possible to select more than one data source by checking more of the
boxes, setting their parameters, and then selecting Load.

Facebook Ads Data Source


Facebook Ads is a slightly more specialized data source that differs from the other sources.
All other sources (for example, Aggregated Post Metrics, Post Metrics, Profile Metrics)
follow the same process as described above. Facebook Ads has an extra step to perform
before the Parameters are set.

When selected, Facebook Ads first displays a list of the last 12 months.
By selecting the specific month(s), you're narrowing down all your Facebook Ad
Accounts to the ones that were active in the selected time period.
You can select the specific Ad Accounts in the Parameters section under the "Accounts"
parameter, along with the Campaigns selection.

Troubleshooting
If any error occurs, check the documentation and make sure you're following the
guidelines of the API.

Additional instructions
It's possible to clear the parameter selection by choosing Clear.
If Transform Data is chosen, you can see all of the function documentation from
which it’s possible to gain more understanding of what is going on behind the
screen.
SoftOne BI (Beta)
Article • 07/14/2023

7 Note

The following connector article is provided by SoftOne, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the SoftOne website and use the support
channels there.

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Basic (Soft1/Atlantis Web Services)

Prerequisites
You'll need to have the Soft1 ERP/CRM or Atlantis ERP product installed with a licensed
SoftOne BI connector module. A web account must be configured in the application
with access to the SoftOne BI Connector service. This account information and your
installation serial number will be validated during authentication by the SoftOne BI
connector.

The SoftOne BI connector is supported from Soft1 Series 5 version 500.521.11424 or


later and Atlantis ERP version 3.3.2697.1 or later.

Capabilities supported
Import

Connection instructions
SoftOne provides many templates as Power BI template files (.pbit) that you can use or
customize, giving you a head start on your BI project, for example Sales & Collections
and Finance.

To connect in Power BI Desktop using a new report, follow the steps below. If you're
connecting from a report created using one of the SoftOne BI templates, see Using a
provided template later in this article.

Connect to your Soft1 or Atlantis data store from scratch


To load data from your installation with Power Query Desktop:

1. Select Get Data > More... > Online Services in Power BI Desktop and search for
SoftOne BI. Select Connect.
2. Select Sign in. An authentication form will display.

7 Note

If you enter incorrect credentials, you'll receive a message stating that
your sign in failed due to invalid credentials.
If the SoftOne BI Connector is not activated, or the Web Account that
you're using is not configured with the service, you'll receive a message
stating that access is denied because the selected module is not
activated.

3. After signing in with SoftOne Web Services, you can connect to your data store.
Selecting Connect will take you to the navigation table and display the available
tables from the data store from which you may select the data required.

4. In the navigator, you should now see the tables in your data store. Fetching the
tables can take some time.

You must have uploaded the data from your Soft1 or Atlantis installation (per the
product documentation) to see any tables. If you haven't uploaded your data, you
won't see any tables displayed in the Navigation Table.
In this case, you'll need to go back to your application and upload your data.

Using a provided template


1. Open the selected template. Power BI Desktop will attempt to load the data from
the data store and will prompt for credentials.

2. Select Sign in and enter your credentials (Serial number, username, and password).
3. Once you're authenticated, select Connect.

Power BI Desktop will fetch the data from the data store.

4. After the refresh has completed, you're ready to start customizing the report or to
publish it as is to the Power BI Service.
) Important

If you're working with more than one Soft1/Atlantis installation, then when
switching between data stores, you must clear the SoftOne BI credentials saved by
Power BI Desktop.
SolarWinds Service Desk (Beta)
Article • 07/14/2023

This connector lets you import incident records from SolarWinds Service Desk to
Microsoft Power BI. You can import records from the past two years. You'll have raw data
on the topics most relevant to your organization, which you can then analyze and review
by viewing it in a variety of formats, such as tables, graphs, and charts.

7 Note

The following connector article is provided by SolarWinds, the owner of this
connector and a member of the Microsoft Power Query Connector Certification
Program. If you have questions regarding the content of this article or want to
request changes to it, visit the SolarWinds Service Desk documentation website
and provide feedback at the bottom of the page.

Summary
Item Description

Release state Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported SolarWinds Service Desk JSON Web Token (JWT)

Prerequisites
Before you can use this connector to get SolarWinds Service Desk data, you must have a
SolarWinds Service Desk user who has set up Token authentication for API integration .

Capabilities Supported
Import

Connect to SolarWinds Service Desk


To connect to SolarWinds Service Desk:

1. In Power BI Desktop, select Get Data from Home. Select Other from the categories
on the left, select SolarWinds Service Desk, and then select Connect.

2. Sign in with the JSON web token you generated as described in prerequisites, and
then select Connect to verify your access to SolarWinds Service Desk.
3. In the Navigator dialog box, select the table you want to import. You can then
either load or transform the data.

Limitations and issues


Users should be aware of the following limitations and issues associated with accessing
SolarWinds Service Desk:

You can import only once per day; that is, once every 24 hours. If you attempt to
refresh prior to the allowed 24-hour cycle, you'll receive an error message.
There's no limit on the number of users who can pull data, but each user can
refresh only once every 24 hours.

Only incident data is imported, providing historical details from January 1, 2020 to
date. The incident fields that are imported are limited. If you need to import a field
that's not available, you can request that it be added. If you have any questions
about what can be imported or issues with the Power BI integration, email
[email protected].

Credential error in the Navigator


If a credential error occurs in the Navigator, clear your recent data source settings.

1. In Power BI Desktop, select File > Data source settings.


2. Select the SolarWinds Service Desk data source, and then select Clear permissions.

3. Establish the connection to the Navigator again.


SQL Server Analysis Services database
Article • 10/06/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power Apps (Dataflows)
Analysis Services

Authentication Types Supported Basic


Microsoft account
Organizational account
Windows

M Function Reference AnalysisServices.Database
AnalysisServices.Databases

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
Analysis Services must be installed along with your SQL Server. For information about
installing Analysis Services on your SQL Server, go to Install SQL Server Analysis Services.
This connector article assumes that you've already installed Analysis Services on your
SQL server and have an existing database on the server instance.

Capabilities Supported
Import
Connect live (Power BI Desktop)
Advanced options
MDX or DAX query
Connect to SQL Server Analysis Services
database from Power Query Desktop
To make the connection, take the following steps:

1. Select the SQL Server Analysis Services database option in the connector
selection. More information: Where to get data

2. In the SQL Server Analysis Services database dialog that appears, provide the
name of the server and database (optional).

7 Note

Only Power BI Desktop will display the Import and Connect live options. If
you're connecting using Power BI Desktop, selecting Connect live uses a live
connection to load the connected data directly to Power BI Desktop. In this
case, you can't use Power Query to transform your data before loading the
data to Power BI Desktop. For the purposes of this article, the Import option
is selected. For more information about using a live connection in Power BI
Desktop, go to Connect to Analysis Services tabular data in Power BI
Desktop.

3. Select OK.

4. If you're connecting to this database for the first time, select the authentication
type and input your credentials. Then select Connect.
5. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in the Power
Query editor.

Connect to SQL Server Analysis Services database from Power Query Online
To make the connection, take the following steps:

1. Select the SQL Server Analysis Services database option in the connector
selection. More information: Where to get data
2. In the Connect to data source page, provide the name of the server and database
(optional).

3. If needed, select an on-premises data gateway.

4. If you're connecting to this database for the first time, select the authentication
kind and input your credentials.

5. Select Next to continue.

6. In Navigator, select the data you require, and then select Transform data.
Connect using advanced options
Power Query provides an advanced option that you can add to your query if needed.

Advanced option Description

MDX or DAX Optionally provides a specific MDX or DAX statement to the SQL Server
statement Analysis Services database server to execute.

Once you've entered a value in the advanced option, select OK in Power Query Desktop
or Next in Power Query Online to connect to your SQL Server Analysis Services
database.
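
As an illustration, here's a hedged sketch of the equivalent M call with a DAX statement supplied through this advanced option; the server, database, and query text are placeholders, and the exact record the connector generates can differ.

Power Query M

let
    Source = AnalysisServices.Database(
        "ssas01.contoso.com",            // Analysis Services server (placeholder)
        "AdventureWorks Tabular",        // database (placeholder)
        [Query = "EVALUATE 'Customer'"]  // optional MDX or DAX statement
    )
in
    Source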

See also
Connect to Analysis Services tabular data in Power BI Desktop
Connect to SSAS multidimensional models in Power BI Desktop
Connect to datasets in the Power BI service from Power BI Desktop
SQL Server
Article • 08/03/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Database (Username/Password)


Windows
Azure Active Directory (if the specified server supports it)

M Function Reference Sql.Database


Sql.Databases

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
By default, Power BI installs an OLE DB driver for SQL Server. However, for optimal
performance, we recommend that you install the SQL Server Native Client
before using the SQL Server connector. SQL Server Native Client 11.0 and SQL Server
Native Client 10.0 are both supported in the latest version.

Capabilities Supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Command timeout in minutes
Native SQL statement
Relationship columns
Navigate using full hierarchy
SQL Server failover support

Connect to SQL Server database from Power Query Desktop
To make the connection, take the following steps:

1. Select the SQL Server database option in the connector selection.

2. In the SQL Server database dialog that appears, provide the name of the server
and database (optional).

3. Select either the Import or DirectQuery data connectivity mode (Power BI Desktop
only).

4. Select OK.

5. If this is the first time you're connecting to this database, select the authentication
type, input your credentials, and select the level to apply the authentication
settings to. Then select Connect.
7 Note

If the connection is not encrypted, you'll be prompted with the following
dialog.

Select OK to connect to the database by using an unencrypted connection, or
follow these instructions to set up encrypted connections to SQL Server.

6. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in Power Query
Editor.
Connect to SQL Server database from Power
Query Online
To make the connection, take the following steps:

1. Select the SQL Server database option in the connector selection.

2. In the SQL Server database dialog that appears, provide the name of the server
and database (optional).

3. If needed, select an on-premises data gateway.


4. If this is the first time you're connecting to this database, select the authentication
kind and input your credentials.

5. If the connection is not encrypted, and the connection dialog contains a Use
Encrypted Connection check box, clear the check box.

6. Select Next to continue.

7. In Navigator, select the data you require, and then select Transform data.

Connect using advanced options


Both Power Query Desktop and Power Query Online provide a set of advanced options
that you can add to your query if needed. The following table lists all of the advanced
options you can set in Power Query Desktop and Power Query Online.

Advanced option Description

Command timeout If your connection lasts longer than 10 minutes (the default timeout), you
in minutes can enter another value in minutes to keep the connection open longer.
This option is only available in Power Query Desktop.

SQL statement For information, go to Import data from a database using native database
query.

Include If checked, includes columns that might have relationships to other tables. If
relationship this box is cleared, you won’t see those columns.
columns

Navigate using full If checked, the Navigator displays the complete hierarchy of tables in the
hierarchy database you're connecting to. If cleared, Navigator displays only the tables
whose columns and rows contain data.
Advanced option Description

Enable SQL Server If checked, when a node in the SQL Server failover group isn't available,
Failover support Power Query moves from that node to another when failover occurs. If
cleared, no failover will occur.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to your SQL Server database.
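
For reference, here's a hedged sketch of how a few of these options appear on the Sql.Database function listed in the summary; the server, database, and query text are placeholders.

Power Query M

let
    Source = Sql.Database(
        "sqlprod01.contoso.com",  // server (placeholder)
        "AdventureWorks",         // database (placeholder)
        [
            Query = "SELECT TOP 100 * FROM Sales.SalesOrderHeader",  // native SQL statement
            CommandTimeout = #duration(0, 0, 20, 0),                 // command timeout as a duration
            CreateNavigationProperties = false                       // corresponds to "Include relationship columns"
        ]
    )
in
    Source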

Known issues and limitations

Certificate errors
When using the SQL Server database connector, if encryption is disabled and the SQL
Server certificate isn't trusted on the client (Power BI Desktop or on-premises data
gateway), you'll experience the following error.

A connection was successfully established with the server, but then an error
occurred during the login process. (provider: SSL Provider, error: 0 - The
certificate chain was issued by an authority that is not trusted.)

Always Encrypted columns


Power Query doesn't support 'Always Encrypted' columns.

Azure Active Directory authentication


Azure Active Directory authentication is only supported in the SQL Server connector if
the specified server also supports Azure Active Directory authentication. Otherwise, you
might encounter an error that says that "The OAuth authentication method is not
supported in this data source".

Note that in Power BI service, the Azure Active Directory authentication method shows
up as "OAuth2".

Next steps
Optimize Power Query when expanding table columns
Stripe (Deprecated)
Article • 01/24/2023

Summary
Item Description

Release State Deprecated

Products -

Authentication Types Supported -

Function Reference Documentation -

Deprecation
This connector is deprecated and will soon no longer be supported. We recommend that
you transition existing connections off this connector and don't use this connector for
new connections.
SumTotal
Article • 07/14/2023

7 Note

The following connector article is provided by SumTotal, the owner of this
connector and a member of the Microsoft Power Query Connector Certification
Program. If you have questions regarding the content of this article or have
changes you would like to see made to this article, visit the SumTotal website and
use the support channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication types SumTotal credentials using OAuth 2.0

Prerequisites
You must have a SumTotal hosted environment with standard permissions to access the
portal, and read permissions to access data in tables.

Capabilities supported
Import
Query Multiple OData endpoints
Advanced
Optionally filter records by RowVersionId parameter to get incremental data

Finding your SumTotal hosted URL


Copy the SumTotal hosted root URL in full. This root URL is the unique URL specific to
your instance. The URL will be in the format of
https://<yourcustomerdomain>.sumtotal.host/ . Make sure not to copy the rest of the
URL. Keep this URL somewhere handy so you can use it later.

Connect to SumTotal BI from Power BI Desktop

7 Note

The Power Query SumTotal connector is currently only suited towards OData API
endpoints. For more information, go to SumTotal's OData API functionality .

To connect to SumTotal from Power BI Desktop:

1. Select Get data from the Home tab.

2. In the Get Data dialog box, select Other > SumTotal, and then select Connect.
3. Enter the server URL address of the data you want to load.

7 Note

You'll be prompted with a script error, which loads JS/CSS scripts the login
form uses. Select Yes.

4. When the table is loaded in Navigator, you'll be presented with the list of OData
API entities that are currently supported by the connector. You can select to load
one or multiple entities.

5. When you've finished selecting entities, select Load or Transform Data.

7 Note
If this is the first time you're connecting to this site, select Sign in and input your
credentials. Then select Connect.

Known issues and limitations


This section describes any issues or limitations associated with the SumTotal connector.

SumTotal OData API performance and throttling limits


For information about OData API performance and throttling limits for SumTotal
connections, go to SumTotal's OData API functionality under the Important Notes
section. These limitations apply to both the SumTotal connector (which uses the OData
API as an implementation detail) and the 'actual' OData Hosted API when accessing the
same endpoints.

Table retrieval rate


As a guideline, most default tables are retrieved at approximately 1000 rows per second
using the SumTotal connector. If you require faster retrieval rates, consider using the
RowVersionId filter parameter. You can pass this parameter directly to the environment
hosted URL by appending it as a query string parameter. Full URL example with
rowVersionId parameter: https://{host}.sumtotalsystems.com/?rowVersionId=1234 .
Sybase database
Article • 02/15/2023

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Excel
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Database


Windows

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
Before you can connect to a Sybase database, you need the SAP SQL Anywhere driver
installed on your computer. Select the driver that matches your Excel installation (32-bit
or 64-bit).

Capabilities Supported
Import
Advanced options
Command timeout in minutes
SQL statement
Include relationship columns
Navigate using full hierarchy
Connect to a Sybase database from Power
Query Desktop
To make the connection, take the following steps:

1. Select the Sybase database option from Get Data. More information: Where to get
data

2. Specify the Sybase server to connect to in Server and the database where your
data is stored in Database.

3. Select OK.

4. If this is the first time you're connecting to this Sybase server and database, select
the authentication type you want to use, enter your credentials, and then select
Connect. For more information about using and managing authentication, go to
Authentication with a data source.
5. In Navigator, select the data you require, then either select Load to load the data
or Transform Data to transform the data.

Connect to a Sybase database from Power


Query Online
To make the connection, take the following steps:

1. Select the Sybase database option in the Choose data source page. More
information: Where to get data

2. Specify the Sybase server to connect to in Server and the database where your
data is stored in Database.

3. Select the name of your on-premises data gateway.

7 Note

You must select an on-premises data gateway for this connector, whether the
Sybase database is on your local network or online.

4. If this is the first time you're connecting to this Sybase server and database, select
the type of credentials for the connection in Authentication kind. Choose Basic if
you plan to use an account that's created in the Sybase database instead of
Windows authentication. For more information about using and managing
authentication, go to Authentication with a data source.

5. Enter your credentials.


6. Select Use Encrypted Connection if you want to use an encrypted connection, or
clear the option if you want to use an unencrypted connection.

7. Select Next to continue.

8. In Navigator, select the data you require, then select Transform data to transform
the data in the Power Query editor.

Connect using advanced options


Power Query provides a set of advanced options that you can add to your query if
needed.

The following table lists all of the advanced options you can set in Power Query.

Advanced Description
option

Command If your connection lasts longer than 10 minutes (the default timeout), you can enter
timeout in another value in minutes to keep the connection open longer.
minutes

SQL For information, go to Import data from a database using native database query.
statement
Advanced Description
option

Include If checked, includes columns that might have relationships to other tables. If this
relationship box is cleared, you won’t see those columns.
columns

Navigate If checked, the navigator displays the complete hierarchy of tables in the database
using full you're connecting to. If cleared, the navigator displays only the tables whose
hierarchy columns and rows contain data.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to your Sybase database.
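
As an illustration only, here's a hedged sketch of what such a call can look like with the connector's Sybase.Database function; the server and database are placeholders, and the option field names mirror other database connectors, so treat them as assumptions.

Power Query M

let
    Source = Sybase.Database(
        "sybase01.contoso.com",  // server (placeholder)
        "salesdb",               // database (placeholder)
        [HierarchicalNavigation = true, CommandTimeout = #duration(0, 0, 15, 0)]  // assumed field names
    )
in
    Source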
Teradata database
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Authentication Types Supported Database


Windows

Function Reference Documentation Teradata.Database

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
Before you can connect to a Teradata database, you need the .NET Data Provider for
Teradata installed on your computer.

Capabilities Supported
Import
DirectQuery (Power BI Datasets)
Advanced options
Command timeout in minutes
SQL statement
Include relationship columns
Navigate using full hierarchy

Connect to a Teradata database from Power


Query Desktop
To make the connection, take the following steps:

1. Select the Teradata database option from Get Data. More information: Where to
get data

2. Specify the Teradata server to connect to in Server.

3. If you're connecting from Power BI Desktop, select either the Import or


DirectQuery data connectivity mode. The rest of these example steps use the
Import data connectivity mode. To learn more about DirectQuery, go to Use
DirectQuery in Power BI Desktop.

4. Select OK.

5. If this is the first time you're connecting to this Teradata database, select the
authentication type you want to use, enter your credentials, and then select
Connect. For more information about using and managing authentication, go to
Authentication with a data source.
6. In Navigator, select the data you require, then either select Load to load the data
or Transform Data to transform the data.

Connect to a Teradata database from Power


Query Online
To make the connection, take the following steps:

1. Select the Teradata database option in the Choose data source page. More
information: Where to get data

2. Specify the Teradata server to connect to in Server.


3. Select the name of your on-premises data gateway.

7 Note

You must select an on-premises data gateway for this connector, whether the
Teradata database is on your local network or online.

4. If this is the first time you're connecting to this Teradata database, select the type
of credentials for the connection in Authentication kind. Choose Basic if you plan
to use an account that's created in the Teradata database instead of Windows
authentication. For more information about using and managing authentication,
go to Authentication with a data source.

5. Enter your credentials.

6. Select Use Encrypted Connection if you want to use an encrypted connection, or


clear the option if you want to use an unencrypted connection.

7. Select Next to continue.


8. In Navigator, select the data you require, then select Transform data to transform
the data in the Power Query editor.

Connect using advanced options


Power Query provides a set of advanced options that you can add to your query if
needed.

The following table lists all of the advanced options you can set in Power Query.

Advanced option Description

Command If your connection lasts longer than 10 minutes (the default timeout), you
timeout in can enter another value in minutes to keep the connection open longer.
minutes

SQL statement For information, go to Import data from a database using native database
query.

Include If checked, includes columns that might have relationships to other tables. If
relationship this box is cleared, you won’t see those columns.
columns

Navigate using If checked, the navigator displays the complete hierarchy of tables in the
full hierarchy database you're connecting to. If cleared, the navigator displays only the
tables whose columns and rows contain data.

Once you've selected the advanced options you require, select OK in Power Query
Desktop or Next in Power Query Online to connect to your Teradata database.
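
For reference, here's a hedged sketch of these options on the Teradata.Database function listed in the summary; the server and query text are placeholders, and the option field names are assumptions based on the table above.

Power Query M

let
    Source = Teradata.Database(
        "teradata01.contoso.com",  // server (placeholder)
        [
            Query = "SELECT * FROM finance.ledger SAMPLE 100",  // native SQL statement (placeholder)
            HierarchicalNavigation = true                       // corresponds to "Navigate using full hierarchy"
        ]
    )
in
    Source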
Text/CSV
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Function Reference Documentation File.Contents


Lines.FromBinary
Csv.Document

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities supported
Import

Connect to local text/CSV file from Power


Query Desktop
To load a local text or CSV file:

1. Select the Text/CSV option in Get Data. This action launches a local file browser
where you can select your text file.
Select Open to open the file.

2. From the Navigator, you can either transform the data in the Power Query Editor
by selecting Transform Data, or load the data by selecting Load.

Connect to text/CSV file from Power Query


Online
To load a local text or CSV file:

1. From the Data sources page, select Text/CSV.

2. In Connection settings, enter a file path to the local text or CSV file you want.
3. Select an on-premises data gateway from Data gateway.

4. Enter a username and password.

5. Select Next.

6. From the Navigator, select Transform Data to begin transforming the data in the
Power Query Editor.

Load from the web


To load a text or CSV file from the web, select the Web connector, enter the web address
of the file, and follow any credential prompts.

Text/CSV delimiters
Power Query will treat CSVs as structured files with a comma as a delimiter—a special
case of a text file. If you choose a text file, Power Query will automatically attempt to
determine if it has delimiter separated values, and what that delimiter is. If it can infer a
delimiter, it will automatically treat it as a structured data source.

Unstructured Text
If your text file doesn't have structure, you'll get a single column with a new row per line
encoded in the source text. As a sample for unstructured text, you can consider a
notepad file with the following contents:

Hello world.
This is sample data.

When you load it, you're presented with a navigation screen that loads each of these
lines into their own row.

The only thing you can configure in this dialog is the File Origin
dropdown. This dropdown lets you select which character set was used to
generate the file. Currently, the character set isn't inferred, and UTF-8 will only be
inferred if the file starts with a UTF-8 BOM.
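
For unstructured text, the generated query typically wraps the file in Lines.FromBinary and turns the lines into a single-column table. A minimal sketch, with a placeholder path and an assumed UTF-8 code page:

Power Query M

let
    Source = Table.FromColumns(
        {Lines.FromBinary(File.Contents("C:\samples\notes.txt"), null, null, 65001)},  // 65001 = UTF-8
        {"Column1"}
    )
in
    Source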
CSV
You can find a sample CSV file here .

In addition to file origin, CSV also supports specifying the delimiter and how data type
detection will be handled.
Delimiters available include colon, comma, equals sign, semicolon, space, tab, a custom
delimiter (which can be any string), and a fixed width (splitting up text by some standard
number of characters).

The final dropdown lets you select how you want to handle data type detection. It
can be based on the first 200 rows or on the entire data set, or you can choose to
skip automatic data type detection and let all columns default to 'Text'.
Be aware that basing detection on the entire data set can make the initial load of the
data in the editor slower.
Since inference can be incorrect, it's worth double-checking settings before loading.

Structured Text
When Power Query can detect structure to your text file, it will treat the text file as a
delimiter separated value file, and give you the same options available when opening a
CSV—which is essentially just a file with an extension indicating the delimiter type.

For example, if you save the following example as a text file, it will be read as having a
tab delimiter rather than unstructured text.

Column 1 Column 2 Column 3


This is a string. 1 ABC123
This is also a string. 2 DEF456

This can be used for any kind of other delimiter-based file.
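
For a tab-delimited text file like the example above, the resulting query is essentially a Csv.Document call with the inferred delimiter. A hedged sketch with a placeholder path:

Power Query M

let
    Source = Csv.Document(
        File.Contents("C:\samples\data.txt"),
        [Delimiter = "#(tab)", Columns = 3, QuoteStyle = QuoteStyle.None]  // #(tab) is the tab character
    ),
    #"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars = true])
in
    #"Promoted Headers"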

Editing Source
When editing the source step, you'll be presented with a slightly different dialog than
when initially loading. Depending on what you are currently treating the file as (that is,
text or csv) you'll be presented with a screen with a variety of dropdowns.

The Line breaks dropdown lets you select whether to apply line breaks that
are inside quotes or not.

For example, if you edit the 'structured' sample provided above, you can add a line
break.

Column 1 Column 2 Column 3


This is a string. 1 "ABC
123"
This is also a string. 2 "DEF456"

If Line breaks is set to Ignore quoted line breaks, it will load as if there was no line
break (with an extra space).
If Line breaks is set to Apply all line breaks, it will load an extra row, with the content
after the line breaks being the only content in that row (exact output may depend on
structure of the file contents).

The Open file as dropdown lets you edit what you want to load the file as, which is
important for troubleshooting. For structured files that aren't technically CSVs (such as a
tab separated value file saved as a text file), you should still have Open file as set to
CSV. This setting also determines which dropdowns are available in the rest of the
dialog.

Text/CSV by Example
Text/CSV By Example in Power Query is a generally available feature in Power BI Desktop
and Power Query Online. When you use the Text/CSV connector, you'll see an option to
Extract Table Using Examples on the bottom-left corner of the navigator.
When you select that button, you’ll be taken into the Extract Table Using Examples
page. On this page, you specify sample output values for the data you’d like to extract
from your Text/CSV file. After you enter the first cell of the column, other cells in the
column are filled out. For the data to be extracted correctly, you may need to enter
more than one cell in the column. If some cells in the column are incorrect, you can fix
the first incorrect cell and the data will be extracted again. Check the data in the first few
cells to ensure that the data has been extracted successfully.

7 Note

We recommend that you enter the examples in column order. Once the column has
successfully been filled out, create a new column and begin entering examples in
the new column.
Once you’re done constructing that table, you can either select to load or transform the
data. Notice how the resulting queries contain a detailed breakdown of all the steps that
were inferred for the data extraction. These steps are just regular query steps that you
can customize as needed.

Troubleshooting
Loading Files from the Web
If you're requesting text/csv files from the web and also promoting headers, and you’re
retrieving enough files that you need to be concerned with potential throttling, you
should consider wrapping your Web.Contents call with Binary.Buffer(). In this case,
buffering the file before promoting headers causes the file to be requested only
once.
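
Here's a hedged sketch of that workaround, with a placeholder URL:

Power Query M

let
    Source = Csv.Document(Binary.Buffer(Web.Contents("https://contoso.com/MyFile.csv"))),  // buffer before any other operation
    #"Promoted Headers" = Table.PromoteHeaders(Source, [PromoteAllScalars = true])
in
    #"Promoted Headers"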

Working with large CSV files


If you're dealing with large CSV files in the Power Query Online editor, you might receive
an Internal Error. We suggest you work with a smaller sized CSV file first, apply the steps
in the editor, and once you're done, change the path to the bigger CSV file. This method
lets you work more efficiently and reduces your chances of encountering a timeout in
the online editor. We don't expect you to encounter this error during refresh time, as we
allow for a longer timeout duration.

Unstructured text being interpreted as structured


In rare cases, a document that has a similar number of commas across paragraphs might be
interpreted as a CSV. If this issue happens, edit the Source step in the Power Query
editor, and select Text instead of CSV in the Open File As dropdown.

Columns in Power BI Desktop


When you import a CSV file, Power BI Desktop generates a Columns=x value (where x is the
number of columns in the CSV file during the initial import) as part of the Source step in
Power Query Editor. If you subsequently add more columns and the data source is set to
refresh, any columns beyond the initial count of x columns aren't refreshed.
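
As an illustration, the generated Source step for a CSV with, say, 10 columns typically looks like the following sketch (the path and option values are placeholders); the hard-coded Columns=10 is what pins the column count on refresh.

Power Query M

let
    Source = Csv.Document(
        File.Contents("C:\samples\sales.csv"),
        [Delimiter = ",", Columns = 10, Encoding = 65001, QuoteStyle = QuoteStyle.None]
    )
in
    Source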

Error: Connection closed by host


When loading Text/CSV files from a web source and also promoting headers, you might
sometimes encounter the following errors: "An existing connection was forcibly
closed by the remote host" or "Received an unexpected EOF or 0 bytes from the

transport stream." These errors might be caused by the host employing protective

measures and closing a connection which might be temporarily paused, for example,
when waiting on another data source connection for a join or append operation. To
work around these errors, try adding a Binary.Buffer (recommended) or Table.Buffer call,
which will download the file, load it into memory, and immediately close the connection.
This should prevent any pause during download and keep the host from forcibly closing
the connection before the content is retrieved.

The following example illustrates this workaround. This buffering needs to be done
before the resulting table is passed to Table.PromoteHeaders.

Original:

Power Query M

Csv.Document(Web.Contents("https://.../MyFile.csv"))

With Binary.Buffer :

Power Query M

Csv.Document(Binary.Buffer(Web.Contents("https://.../MyFile.csv")))

With Table.Buffer :

Power Query M

Table.Buffer(Csv.Document(Web.Contents("https://.../MyFile.csv")))
TIBCO(R) Data Virtualization
Article • 07/14/2023

7 Note

The following connector article is provided by TIBCO, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If
you have questions regarding the content of this article or have changes you would
like to see made to this article, visit the TIBCO website and use the support
channels there.

Summary
Item Description

Release State General Availability

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Database (Username/Password)


Windows Authentication (NTLM/Kerberos)

Function Reference Documentation —

Prerequisites
To access the TIBCO eDelivery site, you must have purchased TIBCO software. There's no
TIBCO license required for the TIBCO(R) Data Virtualization (TDV) software—a TIBCO
customer only needs to have a valid contract in place. If you don't have access, then
you'll need to contact the TIBCO admin in your organization.

The Power BI Connector for TIBCO(R) Data Virtualization must first be downloaded from
https://edelivery.tibco.com and installed on the machine running Power BI Desktop.
The eDelivery site downloads a ZIP file (for example,
TIB_tdv_drivers_<VERSION>_all.zip, where <VERSION> is the TDV version) that contains
an installer program that installs all TDV client drivers, including the Power BI Connector.
Once the connector is installed, configure a data source name (DSN) to specify the
connection properties needed to connect to the TIBCO(R) Data Virtualization server.

7 Note

The DSN architecture (32-bit or 64-bit) needs to match the architecture of the
product where you intend to use the connector.
7 Note

Power BI Connector for TIBCO(R) Data Virtualization is the driver used by the
TIBCO(R) Data Virtualization connector to connect Power BI Desktop to TDV.

Capabilities Supported
Import
DirectQuery (Power BI Datasets)
Advanced Connection Properties
Advanced
Native SQL statement

Connect to a TIBCO(R) Data Virtualization server from Power Query Desktop
Once the matching connector is installed and a DSN is configured, you can connect to a
TIBCO(R) Data Virtualization server. To make the connection, take the following steps:

1. Select the TIBCO(R) Data Virtualization option in the connector selection.

2. In the Power BI Connector for TIBCO(R) Data Virtualization dialog that appears,
provide the Data Source Name.

3. Select either the Import or DirectQuery data connectivity mode.

4. If this is the first time you're connecting to this database, select the authentication
type. If applicable, enter the needed credentials. Then select Connect.

Anonymous: Uses the credentials stored in the DSN.


Basic: Username/password are submitted when creating the connection.
Windows: Authenticates using the current Windows user. This authentication
type requires the SSO connection property to be set. When using Kerberos,
the Use Platform Kerberos API must be set to true to use the current
Windows user.
5. In Navigator, select the database information you want, then either select Load to
load the data or Transform Data to continue transforming the data in Power Query
Editor.

Connect using advanced options


Power Query Desktop provides a set of advanced options that you can add to your
query if needed.

The following table lists all of the advanced options you can set in Power Query
Desktop.
Advanced option                  Description

Advanced Connection Properties   Takes a semi-colon separated list of connection properties. This option is
                                 used to specify other connection properties not configured in the DSN.

SQL statement                    For information, go to Import data from a database using native database
                                 query.

Once you've selected the advanced options you require, select OK in Power Query
Desktop to connect to your TIBCO(R) Data Virtualization Server.

Kerberos-based single sign-on (SSO) for TIBCO(R) Data Virtualization
The TIBCO(R) Data Virtualization connector now supports Kerberos-based single sign-on
(SSO).

To use this feature:

1. Sign in to your Power BI account, and navigate to the Gateway management page.

2. Add a new data source under the gateway cluster you want to use.

3. Select the connector in the Data Source Type list.

4. Expand the Advanced Settings section.

5. Select the option to Use SSO via Kerberos for DirectQuery queries or Use SSO via
Kerberos for DirectQuery and Import queries.
More information: Configure Kerberos-based SSO from Power BI service to on-premises
data sources
Twilio (Deprecated) (Beta)
Article • 01/24/2023

Summary
Item Description

Release State Deprecated

Products -

Authentication Types Supported -

Function Reference Documentation -

Deprecation

7 Note

This connector is deprecated. We recommend that you transition off existing
connections using this connector, and don't use this connector for new connections.
Usercube (Beta)
Article • 07/18/2023

7 Note

The following connector article is provided by Usercube, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If you
have questions regarding the content of this article or have changes you would like to
see made to this article, visit the Usercube website and use the support channels there.

Summary
Item Description

Release State Beta

Products Power BI (Datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Client Credentials

Function Reference Documentation —

Prerequisites
You must have a Usercube instance with the PowerBI option.

Capabilities supported
Import

Connect to Usercube from Power Query Desktop
To connect to a Usercube server:

1. Launch Power BI Desktop and enter the Get Data experience.


2. From the Other category, select Usercube, and then select Connect.

3. You'll be prompted to specify your Usercube server's URL.

4. Enter the client credentials. The Client Id must be built from the Identifier of an
OpenIdClient element. This element is defined in the configuration of your
Usercube instance. To this identifier, you must concatenate the @ character and the
domain name of the Usercube instance.

5. In Navigator, select the data you require. Then, either select Transform data to
transform the data in the Power Query Editor, or choose Load to load the data in
Power BI.
Vessel Insight
Article • 07/14/2023

7 Note

The following connector article is provided by Kongsberg, the owner of this connector
and a member of the Microsoft Power Query Connector Certification Program. If you
have questions regarding the content of this article or have changes you would like to
see made to this article, visit the Kongsberg website and use the support channels there.

Summary
Item Description

Release State General Availability

Products Power BI (datasets)


Power BI (Dataflows)
Fabric (Dataflow Gen2)

Authentication Types Supported Organizational account

Prerequisites
Before you can sign in to Vessel Insight, you must have an organization account
(username/password) connected to a tenant.

Capabilities Supported
Import

Connect to Vessel Insight


To connect to Vessel Insight:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Other from the
categories on the left, select Vessel Insight, and then select Connect.
2. If this is the first time you're getting data through the Vessel Insight connector, a
third-party notice will be displayed. Select Don't warn me again with this
connector if you don't want this message to be displayed again, and then select
Continue.

3. To sign in to your Vessel Insight account, select Sign in.


4. In the window that appears, provide your Vessel Insight tenant URL in the format
<companyname>.kognif.ai . Then select Validate.

5. In the window that appears, provide your credentials to sign in to your Vessel
Insight account.
If you entered an email address and password, select Continue.

6. Once you've successfully signed in, select Save.

Once the connection is established, you can preview and select data within the
Navigator dialog box to create a single tabular output.
You can select the following options in the navigator:

Advanced: Write custom Time series Query Language (TQL) queries (native). For
advanced Kongsberg users.
Vessel Insight Data (deprecated): Time series data for your fleets in the old asset
hierarchy.
Vessel Insight Data 2.0: Time series data for your fleets in the new asset hierarchy.
Only tags with data will be shown.
Voyage: Voyage history and location data from Automatic Identification System
(AIS).

You can provide any optional input parameters required for the selected items. For more
information about these parameters, go to Optional input parameters.

If you don't input parameters for Vessel Insight Data 2.0, you'll get the latest value by
default.
For Voyage, you need to input IMOs that you want to fetch data for.
You can Load the selected time series data, which brings the one table for each selected
time series tag into Power BI Desktop, or you can select Transform Data to edit the
query, which opens the Power Query editor. You can then filter and refine the set of data
you want to use, and then load that refined set of data into Power BI Desktop.

Optional input parameters

Vessel Insight Data 2.0


When you import time series data through the Vessel Insight Data 2.0 node and you've
selected the tags you want to load or transform in the Power Query Navigator, you can
also limit the amount of data by selecting a set of optional input parameters.

These input parameters are:

Interval (optional): How you want the data to be aggregated when displayed (1s,
5s, >=30s, 1m, 1h, 1d).
Time (optional): Set the time filter type if you want to filter on time.
Latest: Get latest value only. Returns one value.
Period: Filter on the time range. Requires setting the Start and End date
described below.
Custom: Custom query to filter on the number of values to return.
Start (Time: Period), e.g. 2019-10-08T00:00:00Z (optional): Filter on range by
inserting the start date and time here. Possible to set yesterday and today.
Requires setting Time: Period.
End (Time: Period), e.g. 2019-10-08T01:00:00Z (optional): Filter on range by
inserting the end date and time here. Possible to set today and now. Requires
setting Time: Period.
Custom (Time: Custom), e.g. |> takebefore now 5 (optional): Add a custom query
to filter on the number of values. |> takebefore now 5 means take five values
before the time now. Requires Time: Custom.

When importing aggregated timeseries, the connector will return avg, min, max, and
count by default.

If you're importing multiple tags, it can be cumbersome to input the parameters
manually for each tag. In this case, we recommend you use Power Query parameters for
Start and End date in the Power Query Editor. More information: Power Query parameters

Voyage
When you import voyage data through the Voyage node, you can limit the amount of
data for the History and Location History table by setting a set of optional input
parameters.

These input parameters are:

Comma Separated IMOs: Input one or multiple IMO numbers you want voyage
data for.
Start (Time: Period), e.g. 2019-10-08T00:00:00Z (optional): Filter on range by
inserting the start date and time here. Possible to set yesterday and today.
Requires setting Time: Period.
End (Time: Period), e.g. 2019-10-08T01:00:00Z (optional): Filter on range by
inserting the end date and time here. Possible to set today and now. Requires
setting Time: Period.

Limitations and issues


You should be aware of the following limitations and issues associated with accessing
Vessel Insight data.

There's a general limit of 1-GB data that's imported into Power BI, unless the
workspace is in a Power BI Premium capacity. We recommend that you aggregate
and choose a short date range when importing time series data, as it can become
heavy.

Each time series tag with associated values is outputted in a separate table in
Power BI. If it's necessary to combine tags and values into one table, the tags and
their values need to be merged in the Power Query editor or with TQL queries.

The time series data is currently stored in Couchbase, which might have
weaknesses that impact the Power BI connector.

The API request timeout is, by default, one minute.

For more guidelines on accessing Vessel Insight data, go to The Getting started guide .

Recommended content
You might also find the following Vessel Insight information useful:

About Vessel Insight Power BI connector


About Vessel Insight
Vessel Insight API
Web
Article • 07/14/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights

Authentication Types Supported Web.Contents


Anonymous
Windows
Basic
Web API
Organizational Account

Web.BrowserContents
Anonymous
Windows (preview feature)
Basic (preview feature)
Web API (preview feature)

Web.Page
Anonymous
Windows (current user's credentials only)
Web API

Function Reference Documentation Web.Contents


Web.BrowserContents
Web.Page

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.
Prerequisites
Web.Page requires Internet Explorer 10.

Capabilities supported
Connecting to a URL
Advanced
Using a combination of text constants and parameters to construct the URL
Specifying a command timeout
Defining HTTP request header parameters (Web.Contents only)

Load Web data using Power Query Desktop


To load data from a web site with Power Query Desktop:

1. Select Get Data > Web in Power BI or From Web in the Data ribbon in Excel.

2. Choose the Basic button and enter a URL address in the text box. For example,
enter
https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States . Then select OK.

If the URL address you enter is invalid, a warning icon will appear next to the
URL textbox.

If you need to construct a more advanced URL before you connect to the website,
go to Load Web data using an advanced URL.

3. Select the authentication method to use for this web site. In this example, select
Anonymous. Then select the level you want to apply these settings to (in this
case, https://en.wikipedia.org/ ). Then select Connect.
The available authentication methods for this connector are:

Anonymous: Select this authentication method if the web page doesn't require any credentials.

Windows: Select this authentication method if the web page requires your
Windows credentials.

Basic: Select this authentication method if the web page requires a basic user
name and password.

Web API: Select this method if the web resource that you’re connecting to
uses an API Key for authentication purposes.

Organizational account: Select this authentication method if the web page requires organizational account credentials.

7 Note

When uploading the report to the Power BI service, only the anonymous,
Windows and basic authentication methods are available.

The level you select for the authentication method determines what part of a URL
will have the authentication method applied to it. If you select the top-level web
address, the authentication method you select here will be used for that URL
address or any subaddress within that address. However, you might not want to
set the top URL address to a specific authentication method because different
subaddresses could require different authentication methods. For example, if you
were accessing two separate folders of a single SharePoint site and wanted to use
different Microsoft Accounts to access each one.
Once you've set the authentication method for a specific web site address, you
won't need to select the authentication method for that URL address or any
subaddress again. For example, if you select the https://en.wikipedia.org/
address in this dialog, any web page that begins with this address won't require
that you select the authentication method again.

7 Note

If you need to change the authentication method later, go to Changing the authentication method.

4. From the Navigator dialog, you can select a table, then either transform the data
in the Power Query editor by selecting Transform Data, or load the data by
selecting Load.

The right side of the Navigator dialog displays the contents of the table you select
to transform or load. If you're uncertain which table contains the data you're
interested in, you can select the Web View tab. The web view lets you see the
entire contents of the web page, and highlights each of the tables that have been
detected on that site. You can select the check box above the highlighted table to
obtain the data from that table.
On the lower left side of the Navigator dialog, you can also select the Add table
using examples button. This selection presents an interactive window where you
can preview the content of the web page and enter sample values of the data you
want to extract. For more information on using this feature, go to Get webpage
data by providing examples.

Load Web data using Power Query Online


To load data from a web site with Power Query Online:

1. From the Get Data dialog box, select either Web page or Web API.

In most cases, you'll want to select the Web page connector. For security reasons,
you'll need to use an on-premises data gateway with this connector. The Web Page
connector requires a gateway because HTML pages are retrieved using a browser
control, which involves potential security concerns. This concern isn't an issue with
Web API connector, as it doesn't use a browser control.

In some cases, you might want to use a URL that points at either an API or a file
stored on the web. In those scenarios, the Web API connector (or file-specific
connectors) would allow you to move forward without using an on-premises data
gateway.

Also note that if your URL points to a file, you should use the specific file connector
instead of the Web page connector.

2. Enter a URL address in the text box. For this example, enter
https://en.wikipedia.org/wiki/List_of_states_and_territories_of_the_United_States .
3. Select the name of your on-premises data gateway.

4. Select the authentication method you'll use to connect to the web page.

The available authentication methods for this connector are:

Anonymous: Select this authentication method if the web page doesn't require any credentials.
Windows: Select this authentication method if the web page requires your
Windows credentials.

Basic: Select this authentication method if the web page requires a basic user
name and password.

Organizational account: Select this authentication method if the web page requires organizational account credentials.

Once you've chosen the authentication method, select Next.

5. From the Navigator dialog, you can select a table, then transform the data in the
Power Query Editor by selecting Transform Data.

Load Web data using an advanced URL


When you select Get Data > From Web in Power Query Desktop, in most instances
you'll enter URLs in the Basic setting. However, in some cases you may want to assemble
a URL from its separate parts, set a timeout for the connection, or provide individualized
URL header data. In this case, select the Advanced option in the From Web dialog box.
Use the URL parts section of the dialog to assemble the URL you want to use to get
data. The first part of the URL in the URL parts section most likely would consist of the
scheme, authority, and path of the URI (for example, http://contoso.com/products/ ).
The second text box could include any queries or fragments that you would use to filter
the information provided to the web site. If you need to add more than one part, select
Add part to add another URL fragment text box. As you enter each part of the URL, the
complete URL that will be used when you select OK is displayed in the URL preview box.

Depending on how long the POST request takes to process data, you may need to
prolong the time the request continues to stay connected to the web site. The default
timeout for both POST and GET is 100 seconds. If this timeout is too short, you can use
the optional Command timeout in minutes to extend the number of minutes you stay
connected.

You can also add specific request headers to the POST you send to the web site using
the optional HTTP request header parameters drop-down box. The following table
describes the request headers you can select.

Request Header        Description

Accept                Specifies the response media types that are acceptable.

Accept-Charset        Indicates which character sets are acceptable in the textual response content.

Accept-Encoding       Indicates what response content encodings are acceptable in the response.

Accept-Language       Indicates the set of natural languages that are preferred in the response.

Cache-Control         Indicates the caching policies, specified by directives, in client requests and server
                      responses.

Content-Type          Indicates the media type of the content.

If-Modified-Since     Conditionally determines if the web content has been changed since the date
                      specified in this field. If the content hasn't changed, the server responds with only
                      the headers that have a 304 status code. If the content has changed, the server will
                      return the requested resource along with a status code of 200.

Prefer                Indicates that particular server behaviors are preferred by the client, but aren't
                      required for successful completion of the request.

Range                 Specifies one or more subranges of the selected representation data.

Referer               Specifies a URI reference for the resource from which the target URI was obtained.
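
The Advanced dialog ultimately builds a Web.Contents call. As a rough, hand-written equivalent (the URL, query values, and header value below are placeholders), the same pieces (URL parts, a command timeout, and request headers) can also be expressed directly in M:

Power Query M

let
    Source = Json.Document(
        Web.Contents(
            "https://contoso.com/products",                  // scheme, authority, and path
            [
                RelativePath = "catalog",                    // additional URL part
                Query = [category = "games", page = "1"],    // query string values
                Headers = [#"Accept-Language" = "en-US"],    // optional HTTP request header
                Timeout = #duration(0, 0, 5, 0)              // extend the default 100-second timeout to 5 minutes
            ]
        )
    )
in
    Source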

Import files from the web


Normally when you import a local on-premises file in Power Query Desktop, you'll use
the specific file-type connector to import that file, for example, the JSON connector to
import a JSON file or the CSV connector to import a CSV file. However, if you're using
Power Query Desktop and the file you want to import is located on the web, you must
use the Web connector to import that file. As in the local case, you'll then be presented
with the table that the connector loads by default, which you can then either Load or
Transform.

The following file types are supported by the Web Connector:

Access database
CSV document
Excel workbook
JSON
Text file
HTML page
XML tables
PDF

For example, you could use the following steps to import a JSON file on the
https://contoso.com/products web site:

1. From the Get Data dialog box, select the Web connector.

2. Choose the Basic button and enter the address in the URL box, for example:

http://contoso.com/products/Example_JSON.json

3. Select OK.

4. If this is the first time you're visiting this URL, select Anonymous as the
authentication type, and then select Connect.

5. Power Query Editor will now open with the data imported from the JSON file.
Select the View tab in the Power Query Editor, then select Formula Bar to turn on
the formula bar in the editor.

As you can see, the Web connector returns the web contents from the URL you
supplied, and then automatically wraps the web contents in the appropriate
document type specified by the URL ( Json.Document in this example).
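
In this example, the step that the connector generates is roughly equivalent to the following M expression:

Power Query M

Json.Document(Web.Contents("http://contoso.com/products/Example_JSON.json"))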

See also
Extract data from a Web page by example
Troubleshooting the Power Query Web connector
Get webpage data by providing
examples
Article • 01/24/2023

Getting data from a web page lets users easily extract data from web pages. Often,
however, data on web pages isn't in tidy tables that are easy to extract. Getting data
from such pages can be challenging, even if the data is structured and consistent.

There's a solution. With the Get Data from Web by example feature, you can essentially
show Power Query data you want to extract by providing one or more examples within
the connector dialog. Power Query gathers other data on the page that match your
examples. With this solution you can extract all sorts of data from Web pages, including
data found in tables and other non-table data.

7 Note

Prices listed in the images are for example purposes only.

Using Get Data from Web by example


Select the Web option in the connector selection, and then select Connect to continue.
In From Web, enter the URL of the Web page from which you'd like to extract data. In
this article, we'll use the Microsoft Store Web page, and show how this connector works.

If you want to follow along, you can use the Microsoft Store URL that we use in this
article:

https://www.microsoft.com/store/top-paid/games/xbox?category=classics

When you select OK, you're taken to the Navigator dialog box where any autodetected
tables from the Web page are presented. In the case shown in the image below, no
tables were found. Select Add table using examples to provide examples.
Add table using examples presents an interactive window where you can preview the
content of the Web page. Enter sample values of the data you want to extract.

In this example, you'll extract the Name and Price for each of the games on the page.
You can do that by specifying a couple of examples from the page for each column. As
you enter examples, Power Query extracts data that fits the pattern of example entries
using smart data extraction algorithms.

7 Note

Value suggestions only include values less than or equal to 128 characters in length.

Once you're happy with the data extracted from the Web page, select OK to go to
Power Query Editor. You can then apply more transformations or shape the data, such as
combining this data with other data sources.
See also
Add a column from examples
Shape and combine data
Getting data
Troubleshooting the Power Query Web connector
Troubleshooting the Web connector
Article • 04/11/2023

What's the difference between Web.Contents, Web.BrowserContents, and Web.Page?
Web.Contents

Web.Contents is used for retrieving web content that doesn't need to be accessed
through a browser, such as CSV files, JSON API results, and so on.
It supports the widest variety of authentication options.
It can be used in cloud environments, such as Power Query Online, without a
gateway.

Web.Page

Web.Page is a legacy function for retrieving web content that needs to be accessed
through a browser, such as HTML pages.
It's built on Internet Explorer. Because of this requirement, it's being replaced in
the UI with Web.BrowserContents . However, Web.Page will continue to be available
at the engine level for backward compatibility.
A gateway is required to use it in cloud environments, such as Power Query Online.

Web.BrowserContents

Web.BrowserContents is a new function for retrieving web content that needs to be
accessed through a browser, such as HTML pages.


In the UI, Web.BrowserContents is replacing Web.Page , because Web.Page is based
on Internet Explorer.
Web.BrowserContents was initially built on Chromium, but it's being migrated to
Microsoft Edge's WebView2 control.


A gateway is required to use it in cloud environments, such as Power Query Online.

The following summary describes the differences.

Web.Contents: handles non-browser content (.txt/.csv files, JSON, and so on). Supported
authentication types: Anonymous, Windows, Basic, Web API, and Organizational Account.
Doesn't require a gateway in cloud hosts. Currently generated by all hosts. Built on .NET.

Web.Page: handles browser content (HTML). Supported authentication types: Anonymous,
Windows (current user's credentials only), and Web API. Requires a gateway in cloud hosts.
Currently generated by Excel and Power Query Online. Built on Internet Explorer.

Web.BrowserContents: handles browser content (HTML). Supported authentication types:
Anonymous, Windows (preview feature), Basic (preview feature), and Web API (preview
feature). Requires a gateway in cloud hosts. Currently generated by Power BI Desktop.
Built on Chromium, but moving to Microsoft Edge's WebView2 control.

"This browser is no longer supported"


warnings in the Web View
When importing a web page, you can view a preview of the page using the Web View
tab in Power Query's navigator dialog. This preview may sometimes display a warning
such as "This browser is no longer supported". When this happens, it's usually because
the Web View currently renders the Web View preview using Internet Explorer, which is
no longer supported by some websites. However, this only affects the Web View and
doesn't affect the web connector itself. As long as you're using the connectors based on
Web.Contents or Web.BrowserContents (and not Web.Page), you can safely ignore such
warnings. Refer to What's the difference between Web.Contents, Web.BrowserContents,
and Web.Page? for more information on the different web connectors and their
underlying technologies.

Handling dynamic web pages


Web pages that load their content dynamically might require special handling. If you
notice sporadic errors in your web queries, it's possible that you're trying to access a
dynamic web page. One common example of this type of error is:

1. You refresh a query that connects to the site.


2. You see an error (for example, "the column 'Foo' of the table wasn't found").
3. You refresh the query again.
4. No error occurs.

These kinds of issues are usually due to timing. Pages that load their content
dynamically can sometimes be inconsistent since the content can change after the
browser considers loading complete. Sometimes the web connector downloads the
HTML after all the dynamic content has loaded. Other times the changes are still in
progress when it downloads the HTML, leading to sporadic errors.

The solution is to use the WaitFor option of Web.BrowserContents, which indicates


either a selector or a length of time that should be waited for before downloading the
HTML.

How can you tell if a page is dynamic? Usually it's pretty simple. Open the page in a
browser and watch it load. If the content shows up right away, it's a regular HTML page.
If it appears dynamically or changes over time, it's a dynamic page.
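
For example, here's a minimal sketch that waits for a specific element before the HTML is captured; the URL and CSS selector are placeholders, and the wait is capped at 30 seconds:

Power Query M

let
    Source = Web.BrowserContents(
        "https://contoso.com/dynamic-page",
        [WaitFor = [Selector = "table.results", Timeout = #duration(0, 0, 0, 30)]]
    )
in
    Source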

Using a gateway with the Web connector


Both Web.BrowserContents and Web.Page require the use of an on-premises data
gateway when published to a cloud service, such as Power BI datasets or dataflows, or
Power Apps dataflows. (Currently, Dynamics 365 Customer Insights doesn't support the
use of a gateway.)

If you're using Web.Page and receive a Please specify how to connect error, ensure
that you have Internet Explorer 10 or later installed on the machine that hosts your on-
premises data gateway.

Enabling the Edge WebView2 version of Web.BrowserContents
Web.BrowserContents was initially built on Chromium, but it's being migrated to Edge's
WebView2 control.

To enable the updated Edge-based version of Web.BrowserContents in Power BI Desktop,
enable the "Web page connector infrastructure update" preview feature.

To enable the updated Edge-based version of Web.BrowserContents on a gateway
machine, set the following environment variable:

set PQ_WebView2Connector=true
Using Web.Page instead of
Web.BrowserContents
In cases where you need to use Web.Page instead of Web.BrowserContents , you can still
manually use Web.Page .

In Power BI Desktop, you can use the older Web.Page function by clearing the Enable
web table inference option:

1. Under the File tab, select Options and settings > Options.

2. In the Global section, select Power Query Editor.

3. Clear the Enable web table inference option, and then select OK.

4. Restart Power BI Desktop.

7 Note

Currently, you can't turn off the use of Web.BrowserContents in Power BI Desktop
optimized for Power BI Report Server.

You can also get a copy of a Web.Page query from Excel. To copy the code from Excel:

1. Select From Web from the Data tab.


2. Enter the address in the From Web dialog box, and then select OK.
3. In Navigator, choose the data you want to load, and then select Transform Data.
4. In the Home tab of Power Query, select Advanced Editor.
5. In the Advanced Editor, copy the M formula.
6. In the app that uses Web.BrowserContents , select the Blank Query connector.
7. If you're copying to Power BI Desktop:
a. In the Home tab, select Advanced Editor.
b. Paste the copied Web.Page query in the editor, and then select Done.
8. If you're copying to Power Query Online:
a. In the Blank Query, paste the copied Web.Page query in the blank query.
b. Select an on-premises data gateway to use.
c. Select Next.

You can also manually enter the following code into a blank query. Ensure that you enter
the address of the web page you want to load.

Power Query M
let
Source = Web.Page(Web.Contents("<your address here>")),
Navigation = Source{0}[Data]
in
Navigation

Capturing web requests and certificate revocation
We've strengthened the security of web connections to protect your data. However, this
means that certain scenarios, like capturing web requests with Fiddler, will no longer
work by default. To enable those scenarios:

1. Open Power BI Desktop.

2. Under the File tab, select Options and settings > Options.

3. In Options, under Global > Security, uncheck Enable certificate revocation check.

4. Select OK.

5. Restart Power BI Desktop.

) Important

Be aware that unchecking Enable certificate revocation check will make web
connections less secure.

To set this scenario in Group Policy, use the "DisableCertificateRevocationCheck" key


under the registry path
"Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft Power BI Desktop".
Setting "DisableCertificateRevocationCheck" to 0 will always enable the check (stopping
Fiddler and similar software from working) and setting
"DisableCertificateRevocationCheck" to 1 will always disable the check (enabling Fiddler
and similar software).

Changing the authentication method


In some cases, you may need to change the authentication method you use to access a
particular site. If this change is necessary, go to Change the authentication method.

Authenticating to arbitrary services


Some services support the ability for the Web connector to authenticate with
OAuth/AAD authentication out of the box. However, this won't work in most cases.

When attempting to authenticate, if you see the following error:

“We were unable to connect because this credential type isn’t supported for this
resource. Please choose another credential type.”

Contact the service owner. They'll either need to change the authentication
configuration or build a custom connector.

Web connector uses HTTP 1.1 to communicate


The Power Query Web connector communicates with a data source using HTTP 1.1. If
your data source is expecting to communicate using HTTP 1.0, you might receive an
error, such as 500 Internal Server Error .

It's not possible to switch Power Query to use HTTP 1.0. Power Query always sends an
Expect:100-continue when there's a body to avoid passing a possibly large payload
when the initial call itself might fail (for example, due to a lack of permissions). Currently,
this behavior can't be changed.

Connecting to Microsoft Graph


Power Query currently doesn't support connecting to Microsoft Graph REST APIs .
More information: Lack of Support for Microsoft Graph in Power Query

See also
Power Query Web connector
Get webpage data by providing examples
Workforce Dimensions (Beta)
(Deprecated)
Article • 01/24/2023

Summary
Item Description

Release State Deprecated

Products -

Authentication Types Supported -

Function Reference Documentation -

Deprecation

7 Note

This connector is deprecated because of end of support for the connector. We
recommend that users transition off existing connections using this connector, and
don't use this connector for new connections.
XML
Article • 07/18/2023

Summary
Item Description

Release State General Availability

Products Excel
Power BI (Datasets)
Power BI (Dataflows)
Fabric (Dataflow Gen2)
Power Apps (Dataflows)
Dynamics 365 Customer Insights
Analysis Services

Function Reference Documentation Xml.Tables


Xml.Document

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Capabilities supported
Import

Load a local XML file from Power Query Desktop
To load a local XML file:

1. Select the XML option in the Get Data selection. This action will launch a local file
browser and allow you to select your XML file.
2. Browse to the directory containing the local XML file you want to load, and then
select Open.

3. In Navigator, select the data you want, and then either select Load to load the
data, or Transform Data to continue transforming the data in Power Query Editor.

Load a local XML file from Power Query Online


To load a local XML file:

1. From the Data sources page, select XML.

2. Enter the path to the local XML file.


3. Select an on-premises data gateway from Data gateway.

4. If authentication is required, enter your credentials.

5. Select Next.

Loading the XML file will automatically launch the Power Query Editor. From the editor,
you can then transform the data if you want, or you can just save and close to load the
data.

Load an XML file from the web


If you want to load an XML file from the web, instead of selecting the XML connector
you can select the Web connector. Paste in the address of the desired file and you'll be
prompted with an authentication selection, since you're accessing a website instead of a
static file. If there's no authentication, you can just select Anonymous. As in the local
case, you'll then be presented with the table that the connector loads by default, which
you can Load or Transform.

Troubleshooting

Data Structure
Because many XML documents contain ragged or nested data, you may have to do extra
data shaping to get the data into a form that's convenient for analytics. This holds true
whether you use the UI-accessible Xml.Tables function or the Xml.Document function.
Depending on your needs, you may find you have to do more or less data shaping.

Text versus nodes


If your document contains a mixture of text and non-text sibling nodes, you may
encounter issues.

For example if you have a node like this:

XML

<abc>
Hello <i>world</i>
</abc>

Xml.Tables will return the "world" portion but ignore "Hello". Only the element(s) are
returned, not the text. However, Xml.Document will return "Hello <i>world</i>". The
entire inner node is turned to text, and structure isn't preserved.
Zendesk (Beta)
Article • 01/24/2023

Summary
Item Description

Release State Beta

Products Power BI (Datasets)

Authentication Types Supported Zendesk account

Prerequisites
Before you can sign in to Zendesk, you must have a Zendesk account
(username/password).

Capabilities Supported
Import

Connect to Zendesk data


To connect to Zendesk data:

1. Select Get Data from the Home ribbon in Power BI Desktop. Select Online Services
from the categories on the left, select Zendesk (Beta), and then select Connect.
2. If this is the first time you're getting data through the Zendesk connector, a
preview connector notice will be displayed. Select Don't warn me again with this
connector if you don't want this message to be displayed again, and then select
Continue.

3. Enter the Zendesk URL location that you want to access, and then select OK.
4. To sign in to your Zendesk account, select Sign in.

5. In the Zendesk window that appears, provide your credentials to sign in to your
Zendesk account.
6. Select Sign in.

7. Once you've successfully signed in, select Connect.


8. In Navigator, select the information you want, then either select Load to load the
data or Transform Data to continue transforming the data in Power Query Editor.

Limitations and issues


You should be aware of the following limitations and issues associated with accessing
Zendesk data.

Zendesk returns a 422 error status if the instance returns more than 1000 rows.
Power Query Online limits
Article • 08/30/2023

Power Query Online is integrated into a variety of Microsoft products. Since these
products target different scenarios, they may set different limits for Power Query Online
usage.

Limits are enforced at the beginning of query evaluations. Once an evaluation is
underway, only timeout limits are imposed.

Limit types
Hourly Evaluation Count: The maximum number of evaluation requests a user can issue
during any 60 minute period

Daily Evaluation Time: The net time a user can spend evaluating queries during any 24
hour period

Concurrent Evaluations: The maximum number of evaluations a user can have running at
any given time

Authoring limits
Authoring limits are the same across all products. During authoring, query evaluations
return previews that may be subsets of the data. Data is not persisted.

Hourly Evaluation Count: 1000

Daily Evaluation Time: Currently unrestricted

Per Query Timeout: 10 minutes

Refresh limits
During refresh (either scheduled or on-demand), query evaluations return complete
results. Data is typically persisted in storage.
Product Integration                                        Hourly Evaluation   Daily Evaluation   Concurrent Query
                                                           Count (#)           Time (Hours)       Evaluations (#)

Microsoft Flow (SQL Connector—Transform Data               500                 2                  5
Using Power Query)

Dataflows in PowerApps.com (Trial)                         500                 2                  8

Dataflows in PowerApps.com (Production)                    1000                8                  20

Data Integration in PowerApps.com Admin Portal             1000                24                 20

Dataflows in PowerBI.com                                   1000                100                20

Dataflows in Dynamics 365 Customer Insights                1000                100                20

Dataflow limits
Dataflow is a workload that leverages Power Query Online. Dataflow is integrated into
Power BI, PowerApps, Microsoft Fabric, and Dynamics 365 Customer Insights. A single
dataflow has a limit of 50 tables. If you need more than 50 tables, you can create
multiple dataflows. If you exceed the limit, an error message occurs during publishing
and refreshing.
Common Issues
Article • 12/21/2022

Power Query

Preserving sort
You might assume that if you sort your data, any downstream operations will preserve
the sort order.

For example, if you sort a sales table so that each store's largest sale is shown first, you
might expect that doing a "Remove duplicates" operation will return only the top sale
for each store. And this operation might, in fact, appear to work. However, this behavior
isn't guaranteed.

Because of the way Power Query optimizes certain operations, including skipping them
or offloading them to data sources (which can have their own unique ordering
behavior), sort order isn't guaranteed to be preserved through aggregations (such as
Table.Group ), merges (such as Table.NestedJoin ), or duplicate removal (such as

Table.Distinct ).

There are a number of ways to work around this. Here are a few suggestions:

Perform a sort after applying the downstream operation. For example, when
grouping rows, sort the nested table in each group before applying further steps.
Here's some sample M code that demonstrates this approach:
Table.Group(Sales_SalesPerson, {"TerritoryID"}, {{"SortedRows", each Table.Sort(_, {"SalesYTD", Order.Descending})}})


Buffer the data (using Table.Buffer ) before applying the downstream operation. In
some cases, this operation will cause the downstream operation to preserve the
buffered sort order.
Use ranking. For example, instead of using Table.Distinct , you could order by the
column(s) containing the duplicate values, rank based on a tie-breaker column
(such as modified_date ), and then filter to keep just the rank 1 rows, as shown in
the sketch after this list.
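
The following is a minimal sketch of the ranking approach, assuming a hypothetical Sales table with StoreID, SaleAmount, and modified_date columns. It keeps the most recently modified row per store without relying on sort order being preserved:

Power Query M

let
    // Group by the column that would otherwise contain duplicates
    Grouped = Table.Group(
        Sales,
        {"StoreID"},
        {{"Rows", each Table.AddIndexColumn(
            Table.Sort(_, {{"modified_date", Order.Descending}}),  // tie-breaker: newest first
            "Rank", 1, 1), type table}}
    ),
    // Flatten the nested tables, then keep only the rank 1 row in each group
    Expanded = Table.ExpandTableColumn(Grouped, "Rows", {"SaleAmount", "modified_date", "Rank"}),
    Deduplicated = Table.SelectRows(Expanded, each [Rank] = 1)
in
    Deduplicated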

Data type inference


Sometimes Power Query may incorrectly detect a column's data type. This is because
Power Query infers data types using only the first 200 rows of data. If the data
in the first 200 rows is somehow different than the data after row 200, Power Query can
end up picking the wrong type. (Be aware that an incorrect type won't always produce
errors. Sometimes the resulting values will simply be incorrect, making the issue harder
to detect.)

For example, imagine a column that contains integers in the first 200 rows (such as all
zeroes), but contains decimal numbers after row 200. In this case, Power Query will infer
the data type of the column to be Whole Number (Int64.Type). This inference will result
in the decimal portions of any non-integer numbers being truncated.

Or imagine a column that contains textual date values in the first 200 rows, and other
kinds of text values after row 200. In this case, Power Query will infer the data type of
the column to be Date. This inference will result in the non-date text values being
treated as type conversion errors.
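
If you hit this issue, one common remedy is to set the column type explicitly instead of relying on the detected type. A minimal sketch, where the Source step and the Amount column are placeholders:

Power Query M

// Override the inferred Whole Number type with Decimal Number so values after row 200 aren't truncated
Table.TransformColumnTypes(Source, {{"Amount", type number}})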

Because type detection works on the first 200 rows, but Data Profiling can operate over
the entire dataset, you can consider using the Data Profiling functionality to get an early
indication in the Query Editor about Errors (from type detection or any number of other
reasons) beyond the top N rows.

Connections forcibly closed by the remote host


When connecting to various APIs, you might get the following warning:

Data source error: Unable to read data from the transport connection: An existing

connection was forcibly closed by the remote host

If you run into this error, it's most likely a networking issue. Generally, the first people to
check with are the owners of the data source you're attempting to connect to. If they
don’t think they’re the ones closing the connection, then it’s possible something along
the way is (for example, a proxy server, intermediate routers/gateways, and so on).

Whether this only reproduces with any data or only larger data sizes, it's likely that
there's a network timeout somewhere on the route. If it's only with larger data,
customers should consult with the data source owner to see if their APIs support
paging, so that they can split their requests into smaller chunks. Failing that, alternative
ways to extract data from the API (following data source best practices) should be
followed.

TLS RSA cipher suites are deprecated


Effective October 30, 2020, the following cipher suites are being deprecated from our
servers.

"TLS_RSA_WITH_AES_256_GCM_SHA384”
"TLS_RSA_WITH_AES_128_GCM_SHA256”
"TLS_RSA_WITH_AES_256_CBC_SHA256”
"TLS_RSA_WITH_AES_128_CBC_SHA256”

The following list are the supported cipher suites:

"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"
"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384"
"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256"
"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"

Cipher suites are used to encrypt messages to secure a network connection between
clients/servers and other servers. We're removing the above list of cipher suites to
comply with our current security protocols. Beginning March 1, 2021, customers can
only use our standard cipher suites.

These are the cipher suites the server you connect to must support to connect from
Power Query Online or Power BI.

In Power Query Desktop (Power BI, Excel), we don’t control your cipher suites. If you're
trying to connect to Power Platform (for example Power Platform Dataflows) or the
Power BI Service, you'll need one of those cipher suites enabled on your OS. You may
either upgrade the Windows version or update the Windows TLS registry to make sure
that your server endpoint supports one of these ciphers.

To verify that your server complies with the security protocol, you can perform a test
using a TLS cipher and scanner tool. One example might be SSLLABS .

Customers must upgrade their servers before March 1, 2021. For more information
about configuring TLS Cipher Suite order, see Manage Transport Layer Security (TLS).

Certificate revocation
An upcoming version of Power BI Desktop will cause SSL connection failures from
Desktop when any certificates in the SSL chain are missing certificate revocation status.
This is a change from the current state, in which revocation only caused connection
failure when the certificate was explicitly revoked. Other certificate issues might
include invalid signatures and certificate expiration.

As there are configurations in which revocation status may be stripped, such as with
corporate proxy servers, we'll be providing another option to ignore certificates that
don't have revocation information. This option will allow situations where revocation
information is stripped in certain cases, but you don't want to lower security entirely, to
continue working.

It isn't recommended, but users will continue to be able to turn off revocation checks
entirely.

Error: Evaluation was canceled


Power Query will return the message "Evaluation was canceled" when background
analysis is disabled and the user switches between queries or closes the Query Editor
while a query is in the process of refreshing.

Error: The key didn't match any rows in the table


There are many reasons why Power Query may return an error that the key didn't match
any rows in the table. When this error happens, the Mashup Engine is unable to find the
table name it's searching for. Reasons why this error may happen include:

The table name has been changed, for example in the data source itself.
The account used to access the table doesn't have sufficient privileges to read the
table.
There may be multiple credentials for a single data source, which isn't supported in
Power BI Service. This error may happen, for example, when the data source is a
cloud data source and multiple accounts are being used to access the data source
at the same time with different credentials. If the data source is on-premises, you'll
need to use the on-premises data gateway.

Limitation: Domain-joined requirement for gateway machines when using Windows authentication
Using Windows authentication with an on-premises gateway requires the gateway
machine to be domain joined. This applies to any connections that are set up with
“Windows authentication through the gateway”. Windows accounts that will be used to
access a data source might require read access to the shared components in the
Windows directory and the gateway installation.

Limitation: Cross tenant OAuth2 refresh isn't supported in Power BI service
If you want to connect to a data source from Power BI service using OAuth2, the data
source must be in the same tenant as Power BI service. Currently, multi-tenant
connection scenarios aren’t supported with OAuth2.

Limitation: Custom AD FS authentication endpoint isn't supported in Power BI service
The ability to use a custom Active Directory Federation Services (AD FS) authentication
endpoint isn't supported in Power BI service. Users might encounter the following error:
The token service reported by the resource is not trusted.

Limitation: Guest accounts aren't supported


Using guest accounts of a tenant to connect to data using Power Query connectors isn't
currently supported.

Expression.Error: Evaluation resulted in a stack overflow and cannot continue
Stack overflow errors can be caused by a bug in your M code. For example, the
following function produces a stack overflow because it repeatedly calls back into itself
without any kind of end condition. A function that calls itself like this is known as a
"recursive" function.

let f = (x) => @f(x + 1) in f(0)

Here are some common ways to resolve a stack overflow in your M code.

Ensure that your recursive functions actually terminate when the expected end
condition is reached.
Replace recursion with iteration (for example, by using functions such as
List.Transform, List.Generate, or List.Accumulate), as shown in the sketch after this list.
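
As a minimal illustration of both suggestions, here's how the runaway function shown earlier could be corrected, first by adding an end condition and then by replacing the recursion with List.Generate. The counting limit of 10 is arbitrary:

Power Query M

let
    // Recursion rewritten with an explicit end condition so it terminates
    f = (x) => if x >= 10 then x else @f(x + 1),
    // The same counting logic expressed as iteration instead of recursion
    g = List.Generate(() => 0, each _ < 10, each _ + 1)
in
    [RecursiveResult = f(0), IterativeResult = g]
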
Expression.Error: Evaluation ran out of memory and can't
continue
"Out of memory" errors (or OOMs) can be caused by doing too many memory intensive
operations against very large tables. For example, the following M code produces an
OOM because it attempts to load a billion rows into memory at once.

Table.Buffer(Table.FromList({1..1000000000}, Splitter.SplitByNothing()))

To resolve out of memory errors, optimize memory intensive operations like sorts, joins,
grouping, and distincts by ensuring they fold to the source, or by removing them
altogether where possible. Sorts, for example, are often unnecessary.
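
As a minimal sketch, the buffered expression above can be rewritten so the evaluation stays lazy and only the rows you actually need are materialized:

Power Query M

// Don't buffer the entire table; keep the evaluation lazy and take only the rows you need
Table.FirstN(Table.FromList({1..1000000000}, Splitter.SplitByNothing()), 1000)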

Dataflows

Cancel dataflow refresh


Sometimes you start a dataflow refresh, but after starting it you realize you wanted to
change one more thing before refreshing your data. In that case, you have to wait until
the refresh is finished. Stopping a refresh midway as the process is already working on
getting the data and updating the tables in your workspace or environment isn't
currently supported.

We do plan to add support for canceling a dataflow refresh in the future.


Review script changes in Power Query
Online
Article • 02/17/2023

Background
Due to the way that queries are stored in Power Query Online, there are cases where
manually entered M script (generally comments) is lost. The Review Script Changes
pane provides a "diff" experience highlighting the changes, which allows users to
understand what changes are being made. Users can then accept the changes or
rearrange their script to fix it.

There are three notable cases that may cause this experience.

Script for ribbon transforms


Ribbon transforms always generate the same M script, which may be different than the
way they are manually entered. This should always be equivalent script. Contact support
if this is not the case.

Comments
Comments always have to be inside the let .. in expression, and above a step. In the
user interface, such a comment is shown as a 'Step property'. All other comments are lost.
Comments that are written on the same line as one step, but above another step (for
example, after the comma that trails every step), will be moved down.

Removing script errors


In certain cases, if your script would result in a syntax error, it's updated by escaping your
script (for example, when using the formula bar).

Experience
When you commit a query, Power Query Online will evaluate it to see if the 'stored'
version of the script differs at all from what you have submitted. If it does, it will present
you with a Review script changes dialog box that will allow you to accept or cancel.
If you accept, the changes will be made to your query.
If you cancel, you might rewrite your query to make sure that you move your
comments properly, or rearrange however else you want.
Power Query connector feedback
Article • 01/24/2023

This article describes how to submit feedback for Power Query connectors. It's important
to distinguish between Microsoft-owned connectors and non-Microsoft-owned
connectors, as the support and feedback channels are different.

To confirm whether a connector is Microsoft-owned, visit the connector reference. Only
connectors marked as "By Microsoft" are Microsoft-owned connectors.

Microsoft-owned connectors
This section outlines instructions to receive support or submit feedback on Microsoft-
owned connectors.

Support and troubleshooting


If you're finding an issue with a Power Query connector, use the dedicated support
channels for the product you're using Power Query connectors in. For example, for
Power BI, visit the Power BI support page .

If you're seeking help with using Microsoft-owned Power Query connectors, visit one of
the following resources.

Community forums for the product you're using Power Query in. For example, for
Power BI, this forum would be the Power BI Community and for PowerPlatform
dataflows, the forum would be Power Apps Community .
Power Query website resources .

Submitting feedback
To submit feedback about a Microsoft-owned connector, provide the feedback to the
"ideas" forum for the product you're using Power Query connectors in. For example, for
Power BI, visit the Power BI ideas forum . If you have one, you can also provide
feedback directly to your Microsoft account contact.

Non-Microsoft-owned connectors
This section outlines instructions to receive support or submit feedback on non-
Microsoft-owned connectors.

Support and troubleshooting


For non-Microsoft-owned connectors, support and troubleshooting questions should go
to the connector owner through their support channels. For example, for a Contoso-
owned connector, you should submit a request through the Contoso support channels.

You can also engage the Power Query community resources indicated above for
Microsoft-owned connectors, in case a member of the community can assist.

Submitting feedback
As non-Microsoft-owned connectors are managed and updated by the respective
connector owner, feedback should be sent directly to the connector owner. For example,
to submit feedback about a Contoso-owned connector, you should directly submit
feedback to Contoso.
Capture web requests with Fiddler
Article • 02/16/2022

When diagnosing issues that might occur when Power Query communicates with your
data, you might be asked to supply a Fiddler trace. The information provided by Fiddler
can be of significant use when troubleshooting connectivity issues.

7 Note

This article assumes that you are already familiar with how Fiddler works in general.

Set up Fiddler to capture secure HTTP addresses
Before you can begin capturing web requests issued by Power Query, you must first
enable Fiddler to capture secure HTTP addresses (https://) on your device.

If you don't already have Fiddler installed, download and install Fiddler now. Be sure
to install Fiddler on the system where the issue is occurring.

To set up Fiddler to capture secure HTTP addresses:

1. Open Fiddler.

2. Under File, clear the check mark next to Capture Traffic.

3. Select Tools > Options.

4. Open the HTTPS tab.

5. Select Capture HTTPS CONNECTs.

6. Select Decrypt HTTPS traffic.

7. In the root certificate dialog box, select Yes.


8. When asked to confirm that you want to add the certificate to your PCs Trusted
Root List, select Yes.

9. In the HTTPS tab, select OK.

10. In the Fiddler traffic pane, select one of the current traces, and then press Ctrl + X.
This action clears all of the current traces from the traffic pane.

Guidelines for capturing web requests


Because Fiddler captures all network traffic during the recorded session, be sure to close
all other apps and web pages before capturing a Fiddler trace. Closing all other apps and
web pages clears away most extra web traffic that's not associated with the issue you're
trying to capture.
Once you've closed all other apps and web pages not associated with the issue, clear
the Fiddler traffic pane as described in step 10 in the previous procedure. Then select
File > Capture traffic just before starting the operation in question. After the issue
occurs, immediately clear the check mark next to File > Capture traffic to stop the
capture.

These actions minimize the number of messages we have to dig through, which also helps
focus the investigation. They also avoid capturing other potentially sensitive information
that you don't want to share.

If you're only running Power Query and Fiddler, this minimum setup should yield a
sequence of HTTP requests and responses from whatever backend you're
communicating with, for example Power BI service, SharePoint, or Azure. The requests,
responses, headers, response codes, and sometimes the payload will all provide clues
we can use to troubleshoot your issue.

To save the capture session to a log file, select File > Save > All Sessions. You might also
be asked to compress the log file (.zip) before sending it.

Return your system to its original configuration


Once you've finished providing Fiddler log files and troubleshooting has completed,
you'll want to return your system to its original configuration and remove Fiddler as the
middleman.

To return your system to its original configuration:

1. In Fiddler, select Tools > Options.

2. In Options, open the HTTPS tab.

3. Clear the check mark next to Capture HTTPS CONNECTs.

4. Select Actions.

5. Select Reset All Certificates.


6. In Reset All Certificates, select OK.

7. In Do you want to allow this app to make changes to your device?, select Yes.

8. In TrustCert Success, select OK.

9. In Root Certificate Store, select Yes.

10. If the root certificate dialog box appears, close the dialog box without selecting Yes
or No.

11. In Success, select OK.

12. In Options, select OK.

See also
Query diagnostics
Power Query feedback
Getting started with Fiddler Classic
Power Query SDK overview
Article • 02/17/2023

The Power Query SDK is a set of tools designed to help you create Power Query
connectors. These connectors are often referred to as custom connectors or Power
Query extensions.

Custom connectors let you create new data sources or customize and extend an existing
source. Common use cases include:

Creating a business analyst-friendly view for a REST API.


Providing branding for a source that Power Query supports with an existing
connector (such as an OData service or ODBC driver).
Implementing OAuth v2 authentication flow for a SaaS offering.
Exposing a limited or filtered view over your data source to improve usability.
Enabling DirectQuery for a data source using an ODBC driver.

Installing the Power Query SDK


There are currently two versions of the Power Query SDK available:

Visual Studio Power Query SDK: Released in 2017 as an extension for Visual
Studio 2017 and 2019.
Visual Studio Code Power Query SDK (Preview): Released in 2022 as the new and
recommended way to create Power Query connectors.

We encourage all developers to install and use the newly released Visual Studio Code
Power Query SDK (Preview) as this version will eventually be the default SDK going
forward.

Visual Studio Power Query SDK


Follow the steps below to install the Visual Studio Power Query SDK:

1. Install the Power Query SDK from the Visual Studio Marketplace.
2. Create a new data connector project.
3. Define your connector logic.
4. Build the project to produce an extension file.
Visual Studio Code Power Query SDK (Preview)

7 Note

The new Visual Studio Code Power Query SDK is currently in public preview as of
September of 2022.

Install the new Visual Studio Code Power Query SDK from the Visual Studio Code
section of the Visual Studio Marketplace . Select Install to install the SDK.

Connector development process


While you can use either the Visual Studio Code or the Visual Studio Power Query SDK,
both use a similar process for the development of a connector at a high level.

The following sections describe, at a high level, the most common process to create a
Power Query connector using the SDK.

Creating a new extension


When creating a new project (Visual Studio Power Query SDK) or a new workspace
(Visual Studio Code Power Query SDK), you start with the following core files:

Connector definition file (<connectorName>.pq)


A query file (<connectorName>.query.pq)
A string resource file (resources.resx)
PNG files of various sizes used to create icons

Your connector definition file will start with an empty data source description. You can
learn more about a data source in the context of the Power Query SDK from the article
on handling data access.

Testing
The Power Query SDK provides basic query execution capabilities, allowing you to test
your extension without having to switch over to Power BI Desktop.

Query test file


In addition to the extension file, Data Connector projects can have a query file
(name.query.pq). This file can be used to run test queries within Visual Studio. The query
evaluation will automatically include your extension code, without having to register
your .pqx file, allowing you to call or test any shared functions in your extension code.

The query file can contain a single expression (for example, HelloWorld.Contents() ), a
let expression (such as what Power Query would generate), or a section document.
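
For example, a test query that uses a let expression might look like the following sketch; HelloWorld.Contents stands in here for whatever shared function your extension exports:

Power Query M

// Illustrative <connectorName>.query.pq contents.
// HelloWorld.Contents represents a shared function exported by the extension under test.
let
    result = HelloWorld.Contents("test message")
in
    result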

Build and deploy


Building your project produces your .mez file or extension file.

Power Query extensions are bundled in a ZIP file and given a .mez file extension. At
runtime, Power BI Desktop loads extensions from [Documents]\Microsoft Power BI
Desktop\Custom Connectors.

7 Note

Data Connector projects don't support custom post build steps to copy the
extension file to your [Documents]\Microsoft Power BI Desktop\Custom
Connectors directory. If this is something you want to do, you might want to use a
third-party extension.

Extensions are defined within an M section document. A section document has a slightly
different format from the query document(s) generated in Power Query. Code you
import from Power Query typically requires modification to fit into a section document,
but the changes are minor. Section document differences you should be aware of
include:

They begin with a section declaration (for example, section HelloWorld; ).


Each expression ends with a semi-colon (for example, a = 1; or b = let c = 1 + 2
in c; ).

All functions and variables are local to the section document, unless they're
marked as shared. Shared functions become visible to other queries/functions, and
can be thought of as the exports for your extension (that is, they become callable
from Power Query).
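
Taken together, a minimal section document might look like the following sketch (the names are illustrative):

Power Query M

section HelloExtension;

// Shared members are visible outside the section, that is, callable from Power Query.
shared HelloExtension.Contents = () => GetGreeting();

// Members without the shared keyword stay private to this section document.
GetGreeting = () => "Hello from a section document";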

More information about M section documents can be found in the M Language
specification.

Distribution of data connectors


Power Query connectors can be used in Power BI Desktop or in Power Query Online
experiences, such as Power BI dataflows, Power BI datamarts, and others.

Power BI Desktop
Power BI Desktop users can follow the steps below to consume a Power Query custom
connector:

1. Copy the extension file (.mez or .pqx) into [Documents]/Power BI Desktop/Custom
Connectors.
2. In Power BI Desktop, select the (Not Recommended) Allow any extension to load
without validation or warning option under File > Options and settings >
Options > Security > Data Extensions.
3. Restart Power BI Desktop.

Power BI Desktop will automatically load the extensions on restart.

Alternatively, as the owner of the data source and connector, you can submit your
connector to the Power Query Connector Certification program so it ships with Power BI
Desktop on every release.

Power Query Online

7 Note
The Power Query team is working hard towards enabling all Power Query certified
connectors in the Power Query Online experience.

Only Power Query certified connectors are shown in the Power Query Online experience.
To learn more about the Power Query connector certification program, go to Power
Query Connector Certification.

More links and resources


M Library Functions
M Language Specification
Power BI Developer Center
Data Connector Tutorial
Using the Power Query SDK
Article • 03/14/2023

This article focuses on the experience available for the Power Query SDK found in Visual
Studio Code. You can learn more on how to install the Power Query SDK for Visual
Studio from the article on Installing the SDK.

Create a new project

 Tip

Before creating an extension project, we recommend that you create a new folder
where you'll store your extension project. During the creation of a new project, if no
folder is selected, the Power Query SDK will help you locate or create a new folder
before creating your extension project.

Once in Visual Studio Code, in the main Explorer pane of Visual Studio Code there's a
section with the name Power Query SDK. This section has only one button that reads
Create an extension project. Select this button.
This button opens an input field at the top of the Visual Studio Code interface. Enter the
name of your new extension project, and then select Enter.

After a few seconds, your Visual Studio Code window opens the main *.pq file for your
extension project that contains your connector logic. The Power Query SDK
automatically runs some necessary tasks to complete the setup of your workspace. You
can check these tasks in the output console in Visual Studio Code.
The Power Query SDK automatically creates the following set of files:

A settings.json file that dictates specific settings to work with at your workspace
level.
The built extension as a .mez file, stored in a new bin\AnyCPU\Debug
folder.
A set of connector icons as .png files.
A resources.resx file that serves as the main storage for strings that are used in the
extension.
A .pq file that holds the main logic of your extension or connector.
A .query.pq file whose main purpose is to be used as a way to create test queries
that you can later evaluate.
A .proj file that holds information about the extension project.
Once an extension project is recognized by Visual Studio Code, the section for the
Power Query SDK changes its appearance, and now displays a list of tasks you can run
against your new extension project.

Credentials
) Important

Before you can evaluate any of your data connector's queries, a set of credentials
must first be created for the extension project.

The Power Query SDK offers multiple tasks through its user interface to allow you to set,
list, and delete credentials from your extension project.

Set credential
The Power Query SDK is primarily driven by tasks that can be triggered through multiple
entry points. Setting a credential can be done in two ways (the other credential tasks can
be done in the same way).

Through the entry in the Power Query SDK section in the explorer pane.

Through the Terminal by selecting the Run Task option and then selecting the Set
credential task.

When you run this task, Visual Studio Code will guide you through a series of prompts
to allow you to set the credential. These series of prompts are predictable and always
consist of the same stages:

1. Choose the data source kind.


2. Choose the connector file.
3. Choose the authentication method.

For the existing extension project, the authentication method available is anonymous.
Once the authentication is set, a message that confirms a credential has been generated
successfully is displayed at the bottom right corner of the window.

List credentials
Similar to setting a credential, the task to list credentials has two entry points in the
same places: the Power Query SDK section in the Explorer pane and inside the Terminal
menu.

When this task is executed, it showcases the available credentials inside the output
terminal.
Clear ALL credentials
Similar to the previous two tasks, the task to clear all credentials has two entry points in
the same places: the Power Query SDK section in the Explorer pane and inside the
Terminal menu.

This task serves as a way to clear all credentials from your current session when you
need to set a new credential to evaluate your queries.

The informational messages for this task are also shown in the output console.

Evaluate a query and the results panel


Before you can evaluate any test queries, a credential must be set. Using the connector
that was created in the previous section, you can open the *.query.pq file that serves as
your test query file.

For this specific connector where the project name was MyConnector, the code looks as
follows:

Power Query M

// Use this file to write queries to test your data connector
let
    result = MyConnector.Contents()
in
    result

Before evaluating the query, let's change the code as follows:

Power Query M

// Use this file to write queries to test your data connector
let
    result = MyConnector.Contents("Hello World")
in
    result

Make sure to save the file after changing its code.

To evaluate this query, you have three options:

Right-click the file that's in use and select the Evaluate current power query file
option.
Go through the Terminal menu and select the Evaluate current file task.

Use the native Run and Debug option from Visual Studio Code, select the
hyperlink to create a launch.json file, and then evaluate the file.
After evaluating the query, the results are displayed in the console at the bottom of the
window and in a new panel called the result panel on the right.

The result panel contains three tabs:

Output tab: Displays a data preview of the query evaluated. If the data is a table,
it's displayed as a grid.
Summary: Displays a summary of the activity that ran the evaluations, along with
the statistics associated with that activity.

DataSource: Displays general information about the data source used for the
evaluation.
To evaluate a different query, you just modify the *.query.pq file, save it, and then run
the evaluation task again with any of the three methods.

7 Note

The Power Query SDK doesn't manage any kind of caching mechanism for the
evaluations.

Bring a legacy extension project to the new SDK
This section helps connector developers who have extension projects that were created
with the legacy Visual Studio based Power Query SDK .

To follow along, we recommend downloading the connector projects available from our
DataConnectors repository for the TripPin sample , specifically the sample 9-
TestConnection.

To bring the legacy extension project to the new SDK, follow these steps:

1. In Visual Studio Code, select File > Open folder, then navigate to the folder where
your extension project is located.
2. Set up a workspace using the existing folder and its contents using one of the
following two methods:

The Power Query SDK has a mechanism to recognize the contents of your
folder and suggests that you enable the conversion to a new Power Query
SDK workspace.

Run the Setup workspace and the Build Task from the terminal menu. These
will effectively create the .mez file and the settings.json files needed for the
workspace.

The addition of two new folders and files is what transforms the current workspace into
a new Power Query SDK workspace.

Setup workspace
The Setup workspace task creates a settings.json file for your workspace that dictates
the variables used by your workspace for evaluations and general settings.
Build an extension file
The build task allows you to create the .mez file for your extension on demand.

Run TestConnection function


TestConnection is a function that enables your connector to be refreshed in the
Microsoft Cloud through services such as Power BI. It's a handler implemented inside
your connector's data source record. You can learn more about the implementation of the
TestConnection handler from the sample available for Test connection.

The task to run TestConnection enables you to test the handler inside the Power Query
SDK without having to manually try this handler in the Microsoft Cloud.
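
As a rough sketch, a TestConnection handler lives in the data source kind record and returns the name of the data source function to invoke, followed by any arguments it needs. The MyConnector names below are illustrative:

Power Query M

// Illustrative data source kind record with a TestConnection handler.
MyConnector = [
    // Returns the function (and any arguments) used to validate the connection.
    TestConnection = (dataSourcePath) => { "MyConnector.Contents" },
    Authentication = [
        Anonymous = []
    ],
    Label = "MyConnector"
];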

To run this task, first set a credential for your connector and then run the task either
from the Power Query SDK section in the Explorer or through the list of tasks inside the
terminal menu.

The result of this task is displayed in the output terminal at the bottom of the window.

Feedback and bugs


Do you have any feedback, suggestions, or bugs that you'd like to report? Visit our
official public repository on GitHub .

You can use the Power BI Community forum to post general questions around Power
Query, the M language, and custom connector development.
Creating your first connector: Hello
World
Article • 12/21/2022

Hello World sample


This sample provides a simple data source extension that can be run in Visual Studio,
and loaded in Power BI Desktop. As an overview, this sample shows the following:

Exporting function ( HelloWorld.Contents ), which takes an optional text parameter.


Defining a data source kind that:
Declares that it uses Implicit (anonymous) authentication.
Uses string resources that allow for localization.
Declaring UI metadata so the extension can show up in the Power BI Desktop
Get Data dialog.

Following the instructions in Installing the PowerQuery SDK, create a new project called
"HelloWorld", copy in the following M code, and then follow the rest of the
instructions to be able to open it in Power BI.

In the following connector definition you'll find:

A section statement.
A data source function with metadata establishing it as a data source definition
with the Kind HelloWorld and Publish HelloWorld.Publish .
An Authentication record declaring that implicit (anonymous) is the only
authentication type for this source.
A publish record declaring that this connection is in Beta, what text to load from
the resx file, the source image, and the source type image.
A record associating icon sizes with specific PNGs in the build folder.

section HelloWorld;

[DataSource.Kind="HelloWorld", Publish="HelloWorld.Publish"]
shared HelloWorld.Contents = (optional message as text) =>
    let
        message = if (message <> null) then message else "Hello world"
    in
        message;

HelloWorld = [
    Authentication = [
        Implicit = []
    ],
    Label = Extension.LoadString("DataSourceLabel")
];

HelloWorld.Publish = [
    Beta = true,
    ButtonText = { Extension.LoadString("FormulaTitle"), Extension.LoadString("FormulaHelp") },
    SourceImage = HelloWorld.Icons,
    SourceTypeImage = HelloWorld.Icons
];

HelloWorld.Icons = [
    Icon16 = { Extension.Contents("HelloWorld16.png"), Extension.Contents("HelloWorld20.png"), Extension.Contents("HelloWorld24.png"), Extension.Contents("HelloWorld32.png") },
    Icon32 = { Extension.Contents("HelloWorld32.png"), Extension.Contents("HelloWorld40.png"), Extension.Contents("HelloWorld48.png"), Extension.Contents("HelloWorld64.png") }
];

Once you've built the file and copied it to the correct directory, following the
instructions in the Installing the PowerQuery SDK tutorial, open Power BI. You can search for
"hello" to find your connector in the Get Data dialog.

This step will bring up an authentication dialog. Since there are no authentication options
and the function takes no parameters, there are no further steps in these dialogs.

Press Connect and the dialog will tell you that it's a "Preview connector", since Beta is
set to true in the query. Since there's no authentication, the authentication screen will
present a tab for Anonymous authentication with no fields. Press Connect again to
finish.

Finally, the query editor will come up showing what you expect—a function that returns
the text "Hello world".

For the fully implemented sample, see the Hello World Sample in the Data Connectors
sample repo.
TripPin Tutorial
Article • 03/14/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

This tutorial uses a public OData service (TripPin ) as a reference source. Although
this lesson requires the use of the M engine's OData functions, subsequent lessons will
use Web.Contents, making it applicable to (most) REST APIs.

Prerequisites
The following applications will be used throughout this tutorial:

Power BI Desktop , May 2017 release or later


Power Query SDK for Visual Studio
Fiddler —Optional, but recommended for viewing and debugging requests to
your REST service

It's strongly suggested that you review:

Installing the PowerQuery SDK


Start developing custom connectors
Creating your first connector: Hello World
Handling Data Access
Handling Authentication

7 Note

You can also start trace logging of your work at any time by enabling diagnostics,
which is described later on in this tutorial. More information: Enabling diagnostics

Parts
Part Lesson               Details

1    OData                Create a simple Data Connector over an OData service
2    Rest                 Connect to a REST API that returns a JSON response
3    Nav Tables           Provide a navigation experience for your source
4    Data Source Paths    How credentials are identified for your data source
5    Paging               Read with a paged response from a web service
6    Enforcing Schema     Enforce table structure and column data types
7    Advanced Schema      Dynamically enforce table structure using M types and external metadata
8    Diagnostics          Add detailed tracing to the connector
9    Test Connection      Implement a TestConnection handler to enable refresh through the gateway
10   Basic Query Folding  Implement basic query folding handlers
TripPin part 1 - Data connector for an
OData service
Article • 02/17/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

In this lesson, you will:

" Create a new Data Connector project using the Visual Studio SDK
" Author a base function to pull data from a source
" Test your connector in Visual Studio
" Register your connector in Power BI Desktop

Creating a basic OData connector


In this section, you will create a new Data Connector project, provide some basic
information, and test it in Visual Studio.

Open Visual Studio, and create a new Project. Under the Power Query folder, select the
Data Connector project. For this sample, set the project name to TripPin .

Open the TripPin.pq file and paste in the following connector definition.
Power Query M

section TripPin;

[DataSource.Kind="TripPin", Publish="TripPin.Publish"]
shared TripPin.Feed = Value.ReplaceType(TripPinImpl, type function (url as Uri.Type) as any);

TripPinImpl = (url as text) =>
    let
        source = OData.Feed(url)
    in
        source;

// Data Source Kind description
TripPin = [
    Authentication = [
        Anonymous = []
    ],
    Label = "TripPin Part 1 - OData"
];

// Data Source UI publishing description
TripPin.Publish = [
    Beta = true,
    Category = "Other",
    ButtonText = { "TripPin OData", "TripPin OData" }
];

This connector definition contains:

A Data Source definition record for the TripPin connector


A declaration that Implicit (Anonymous) is the only authentication type for this
source
A function ( TripPinImpl ) with an implementation that calls OData.Feed
A shared function ( TripPin.Feed ) that sets the parameter type to Uri.Type
A Data Source publishing record that will allow the connector to appear in the
Power BI Get Data dialog

Open the TripPin.query.pq file. Replace the current contents with a call to your exported
function.

Power Query M

TripPin.Feed("https://services.odata.org/v4/TripPinService/")

Select the Start button to launch the M Query utility.


The <project>.query.pq file is used to test out your extension without having to deploy
it to your Power BI Desktop's bin folder. Selecting the Start button (or pressing F5)
automatically compiles your extension and launches the M Query utility.

Running your query for the first time results in a credential error. In Power Query, the
hosting application would convert this error into a credential prompt. In Visual Studio,
you'll receive a similar prompt that calls out which data source is missing credentials and
its data source path. Select the shortest of the data source paths
( https://services.odata.org/ )—this will apply your credential to all URLs under this
path.

Select the Anonymous credential type, and then select Set Credential.

Select OK to close the dialog, and then select the Start button once again. You see a
query execution status dialog, and finally a Query Result table showing the data
returned from your query.

You can try out a few different OData URLs in the test file to see how different
results are returned. For example:

https://services.odata.org/v4/TripPinService/Me
https://services.odata.org/v4/TripPinService/GetPersonWithMostFriends()

https://services.odata.org/v4/TripPinService/People

The TripPin.query.pq file can contain single statements, let statements, or full section
documents.

Power Query M

let
Source = TripPin.Feed("https://services.odata.org/v4/TripPinService/"),
People = Source{[Name="People"]}[Data],
SelectColumns = Table.SelectColumns(People, {"UserName", "FirstName",
"LastName"})
in
SelectColumns

Open Fiddler to capture HTTP traffic, and run the query. You should see a few
different requests to services.odata.org, generated by the mashup container process.
You can see that accessing the root URL of the service results in a 302 status and a
redirect to the longer version of the URL. Following redirects is another behavior you get
“for free” from the base library functions.

One thing to note if you look at the URLs is that you can see the query folding that
happened with the SelectColumns statement.
https://services.odata.org/v4/TripPinService/People?

$select=UserName%2CFirstName%2CLastName

If you add more transformations to your query, you can see how they impact the
generated URL.
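
For example, building on the earlier query, adding a row filter is also pushed down into the OData request. The following is a sketch; the exact URL encoding you observe in Fiddler might differ:

Power Query M

let
    Source = TripPin.Feed("https://services.odata.org/v4/TripPinService/"),
    People = Source{[Name="People"]}[Data],
    SelectColumns = Table.SelectColumns(People, {"UserName", "FirstName", "LastName"}),
    // Typically folds to ...People?$filter=FirstName eq 'Scott'&$select=UserName,FirstName,LastName
    FilteredRows = Table.SelectRows(SelectColumns, each [FirstName] = "Scott")
in
    FilteredRows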

This behavior is important to note. Even though you did not implement explicit folding
logic, your connector inherits these capabilities from the OData.Feed function. M
statements are compose-able—filter contexts will flow from one function to another,
whenever possible. This is similar in concept to the way data source functions used
within your connector inherit their authentication context and credentials. In later
lessons, you'll replace the use of OData.Feed, which has native folding capabilities, with
Web.Contents, which does not. To get the same level of capabilities, you'll need to use
the Table.View interface and implement your own explicit folding logic.

Loading your extension in Power BI Desktop


To use your extension in Power BI Desktop, you'll need to copy your connector project's
output file (TripPin.mez) to your Custom Connectors directory.
1. In Visual Studio, select Build | Build Solution (F6) from the menu bar. This will
generate the .mez file for your project. By default, this will go in your project's
bin\Debug folder.
2. Create a [My Documents]\Power BI Desktop\Custom Connectors directory.
3. Copy the extension file into this directory.
4. Check the option (Not Recommended) Allow any extension to load without
validation or warning in Power BI Desktop (under File > Options and settings >
Options > Security > Data Extensions).
5. Restart Power BI Desktop.
6. Select Get Data > More to bring up the Get Data dialog.

You can locate your extension by typing its name into the search box.

Select the function name, and select Connect. A third-party message appears—select
Continue to continue. The function invocation dialog now appears. Enter the root URL
of the service ( https://services.odata.org/v4/TripPinService/ ), and select OK.

Since this is the first time you are accessing this data source, you'll receive a prompt for
credentials. Check that the shortest URL is selected, and then select Connect.
Notice that instead of getting a simple table of data, the navigator appears. This is
because the OData.Feed function returns a table with special metadata on top of it that
the Power Query experience knows to display as a navigation table. This walkthrough
will cover how you can create and customize your own navigation table in a future
lesson.

Select the Me table, and then select Transform Data. Notice that the columns already
have types assigned (well, most of them). This is another feature of the underlying
OData.Feed function. If you watch the requests in Fiddler , you'll see that you've
fetched the service's $metadata document. The engine's OData implementation does
this automatically to determine the service's schema, data types, and relationships.
Conclusion
This lesson walked you through the creation of a simple connector based on the
OData.Feed library function. As you saw, very little logic is needed to enable a fully
functional connector over the OData base function. Other extensibility enabled
functions, such as ODBC.DataSource, provide similar capabilities.

In the next lesson, you'll replace the use of OData.Feed with a less capable function—
Web.Contents. Each lesson will implement more connector features, including paging,
metadata/schema detection, and query folding to the OData query syntax, until your
custom connector supports the same range of capabilities as OData.Feed.

Next steps
TripPin Part 2 - Data Connector for a REST Service
TripPin part 2 - Data connector for a
REST service
Article • 02/17/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

In this lesson, you will:

" Create a base function that calls out to a REST API using Web.Contents
" Learn how to set request headers and process a JSON response
" Use Power BI Desktop to wrangle the response into a user friendly format

This lesson converts the OData based connector for the TripPin service (created in the
previous lesson) to a connector that resembles something you'd create for any RESTful
API. OData is a RESTful API, but one with a fixed set of conventions. The advantage of
OData is that it provides a schema, data retrieval protocol, and standard query
language. Taking away the use of OData.Feed will require us to build these capabilities
into the connector ourselves.

Recap of the OData connector


Before you remove the OData functions from your connector, let's do a quick review of
what it currently does (mostly behind the scenes) to retrieve data from the service.

Open the TripPin connector project from Part 1 in Visual Studio. Open the Query file and
paste in the following query:

TripPin.Feed("https://services.odata.org/v4/TripPinService/Me")

Open Fiddler and then select the Start button in Visual Studio.

In Fiddler, you'll see three requests to the server:


/Me —the actual URL you are requesting.

/$metadata —a call automatically made by the OData.Feed function to determine
schema and type information about the response.
/Me/BestFriend —one of the fields that was (eagerly) pulled when you listed the
/Me singleton. In this case the call resulted in a 204 No Content status.

M evaluation is mostly lazy. In most cases, data values are only retrieved/pulled when
they are needed. There are scenarios (like the /Me/BestFriend case) where a value is
pulled eagerly. This tends to occur when type information is needed for a member, and
the engine has no other way to determine the type than to retrieve the value and
inspect it. Making things lazy (that is, avoiding eager pulls) is one of the key aspects to
making an M connector performant.

Note the request headers that were sent along with the requests and the JSON format
of the response of the /Me request.

JSON

{
"@odata.context":
"https://services.odata.org/v4/TripPinService/$metadata#Me",
"UserName": "aprilcline",
"FirstName": "April",
"LastName": "Cline",
"MiddleName": null,
"Gender": "Female",
"Age": null,
"Emails": [ "[email protected]", "[email protected]" ],
"FavoriteFeature": "Feature1",
"Features": [ ],
"AddressInfo": [
{
"Address": "P.O. Box 555",
"City": {
"Name": "Lander",
"CountryRegion": "United States",
"Region": "WY"
}
}
],
"HomeAddress": null
}

When the query finishes evaluating, the M Query Output window should show the
Record value for the Me singleton.
If you compare the fields in the output window with the fields returned in the raw JSON
response, you'll notice a mismatch. The query result has additional fields ( Friends ,
Trips , GetFriendsTrips ) that don't appear anywhere in the JSON response. The

OData.Feed function automatically appended these fields to the record based on the
schema returned by $metadata. This is a good example of how a connector might
augment and/or reformat the response from the service to provide a better user
experience.

Creating a basic REST connector


You'll now be adding a new exported function to your connector that calls
Web.Contents.

To be able to make successful web requests to the OData service, however, you'll have
to set some standard OData headers . You'll do this by defining a common set of
headers as a new variable in your connector:

Power Query M

DefaultRequestHeaders = [
    #"Accept" = "application/json;odata.metadata=minimal",  // column name and values only
    #"OData-MaxVersion" = "4.0"                             // we only support v4
];

You'll change your implementation of your TripPin.Feed function so that rather than
using OData.Feed , it uses Web.Contents to make a web request, and parses the result as
a JSON document.

Power Query M

TripPinImpl = (url as text) =>
    let
        source = Web.Contents(url, [ Headers = DefaultRequestHeaders ]),
        json = Json.Document(source)
    in
        json;

You can now test this out in Visual Studio using the query file. The result of the /Me
record now resembles the raw JSON that you saw in the Fiddler request.

If you watch Fiddler when running the new function, you'll also notice that the
evaluation now makes a single web request, rather than three. Congratulations—you've
achieved a 300% performance increase! Of course, you've now lost all the type and
schema information, but there's no need to focus on that part just yet.

Update your query to access some of the TripPin Entities/Tables, such as:

https://services.odata.org/v4/TripPinService/Airlines
https://services.odata.org/v4/TripPinService/Airports

https://services.odata.org/v4/TripPinService/Me/Trips

You'll notice that the paths that used to return nicely formatted tables now return a top
level "value" field with an embedded [List]. You'll need to do some transformations on
the result to make it usable for Power BI scenarios.
Authoring transformations in Power Query
While it is certainly possible to author your M transformations by hand, most people
prefer to use Power Query to shape their data. You'll open your extension in Power BI
Desktop and use it to design queries to turn the output into a more user friendly format.
Rebuild your solution, copy the new extension file to your Custom Data Connectors
directory, and relaunch Power BI Desktop.

Start a new Blank Query, and paste the following into the formula bar:

= TripPin.Feed("https://services.odata.org/v4/TripPinService/Airlines")

Be sure to include the = sign.

Manipulate the output until it looks like the original OData feed—a table with two
columns: AirlineCode and Name.

The resulting query should look something like this:

Power Query M

let
Source =
TripPin.Feed("https://services.odata.org/v4/TripPinService/Airlines"),
value = Source[value],
toTable = Table.FromList(value, Splitter.SplitByNothing(), null, null,
ExtraValues.Error),
expand = Table.ExpandRecordColumn(toTable, "Column1", {"AirlineCode",
"Name"}, {"AirlineCode", "Name"})
in
expand

Give the query a name ("Airlines").

Create a new Blank Query. This time, use the TripPin.Feed function to access the
/Airports entity. Apply transforms until you get something similar to the shape shown
below. The matching query can also be found below—give this query a name
("Airports") as well.
Power Query M

let
Source =
TripPin.Feed("https://services.odata.org/v4/TripPinService/Airports"),
value = Source[value],
#"Converted to Table" = Table.FromList(value, Splitter.SplitByNothing(),
null, null, ExtraValues.Error),
#"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table",
"Column1", {"Name", "IcaoCode", "IataCode", "Location"}, {"Name",
"IcaoCode", "IataCode", "Location"}),
#"Expanded Location" = Table.ExpandRecordColumn(#"Expanded Column1",
"Location", {"Address", "Loc", "City"}, {"Address", "Loc", "City"}),
#"Expanded City" = Table.ExpandRecordColumn(#"Expanded Location",
"City", {"Name", "CountryRegion", "Region"}, {"Name.1", "CountryRegion",
"Region"}),
#"Renamed Columns" = Table.RenameColumns(#"Expanded City",{{"Name.1",
"City"}}),
#"Expanded Loc" = Table.ExpandRecordColumn(#"Renamed Columns", "Loc",
{"coordinates"}, {"coordinates"}),
#"Added Custom" = Table.AddColumn(#"Expanded Loc", "Latitude", each
[coordinates]{1}),
#"Added Custom1" = Table.AddColumn(#"Added Custom", "Longitude", each
[coordinates]{0}),
#"Removed Columns" = Table.RemoveColumns(#"Added Custom1",
{"coordinates"}),
#"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",
{{"Name", type text}, {"IcaoCode", type text}, {"IataCode", type text},
{"Address", type text}, {"City", type text}, {"CountryRegion", type text},
{"Region", type text}, {"Latitude", type number}, {"Longitude", type
number}})
in
#"Changed Type"

You can repeat this process for additional paths under the service. Once you're ready,
move onto the next step of creating a (mock) navigation table.

Simulating a navigation table


Now you are going to build a table (using M code) that presents your nicely formatted
TripPin entities.

Start a new Blank Query and bring up the Advanced Editor.

Paste in the following query:


Power Query M

let
source = #table({"Name", "Data"}, {
{ "Airlines", Airlines },
{ "Airports", Airports }
})
in
source

If you have not set your Privacy Levels setting to "Always ignore Privacy level settings"
(also known as "Fast Combine") you'll see a privacy prompt.

Privacy prompts appear when you're combining data from multiple sources and have
not yet specified a privacy level for the source(s). Select the Continue button and set the
privacy level of the top source to Public.

Select Save and your table will appear. While this isn't a navigation table yet, it provides
the basic functionality you need to turn it into one in a subsequent lesson.
Data combination checks do not occur when accessing multiple data sources from
within an extension. Since all data source calls made from within the extension inherit
the same authorization context, it is assumed they are "safe" to combine. Your extension
will always be treated as a single data source when it comes to data combination rules.
Users would still receive the regular privacy prompts when combining your source with
other M sources.

If you run Fiddler and click the Refresh Preview button in the Query Editor, you'll notice
separate web requests for each item in your navigation table. This indicates that an
eager evaluation is occurring, which isn't ideal when building navigation tables with a lot
of elements. Subsequent lessons will show how to build a proper navigation table that
supports lazy evaluation.

Conclusion
This lesson showed you how to build a simple connector for a REST service. In this case,
you turned an existing OData extension into a standard REST extension (using
Web.Contents), but the same concepts apply if you were creating a new extension from
scratch.

In the next lesson, you'll take the queries created in this lesson using Power BI Desktop
and turn them into a true navigation table within the extension.

Next steps
TripPin Part 3 - Navigation Tables
TripPin part 3 - Navigation tables
Article • 04/28/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

In this lesson, you will:

" Create a navigation table for a fixed set of queries


" Test the navigation table in Power BI Desktop

This lesson adds a navigation table to the TripPin connector created in the previous
lesson. When your connector used the OData.Feed function (Part 1), you received the
navigation table “for free”, as derived from the OData service’s $metadata document.
When you moved to the Web.Contents function (Part 2), you lost the built-in navigation
table. In this lesson, you'll take a set of fixed queries you created in Power BI Desktop
and add the appropriate metadata for Power Query to pop up the Navigator dialog for
your data source function.

See the Navigation Table documentation for more information about using navigation
tables.

Defining fixed queries in the connector


A simple connector for a REST API can be thought of as a fixed set of queries, each
returning a table. These tables are made discoverable through the connector’s
navigation table. Essentially, each item in the navigator is associated with a specific URL
and set of transformations.

You'll start by copying the queries you wrote in Power BI Desktop (in the previous
lesson) into your connector file. Open the TripPin Visual Studio project, and paste the
Airlines and Airports queries into the TripPin.pq file. You can then turn those queries into
functions that take a single text parameter:

Power Query M

GetAirlinesTable = (url as text) as table =>


let
source = TripPin.Feed(url & "Airlines"),
value = source[value],
toTable = Table.FromList(value, Splitter.SplitByNothing(), null,
null, ExtraValues.Error),
expand = Table.ExpandRecordColumn(toTable, "Column1",
{"AirlineCode", "Name"}, {"AirlineCode", "Name"})
in
expand;

GetAirportsTable = (url as text) as table =>


let
source = TripPin.Feed(url & "Airports"),
value = source[value],
#"Converted to Table" = Table.FromList(value,
Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to
Table", "Column1", {"Name", "IcaoCode", "IataCode", "Location"}, {"Name",
"IcaoCode", "IataCode", "Location"}),
#"Expanded Location" = Table.ExpandRecordColumn(#"Expanded Column1",
"Location", {"Address", "Loc", "City"}, {"Address", "Loc", "City"}),
#"Expanded City" = Table.ExpandRecordColumn(#"Expanded Location",
"City", {"Name", "CountryRegion", "Region"}, {"Name.1", "CountryRegion",
"Region"}),
#"Renamed Columns" = Table.RenameColumns(#"Expanded City",
{{"Name.1", "City"}}),
#"Expanded Loc" = Table.ExpandRecordColumn(#"Renamed Columns",
"Loc", {"coordinates"}, {"coordinates"}),
#"Added Custom" = Table.AddColumn(#"Expanded Loc", "Latitude", each
[coordinates]{1}),
#"Added Custom1" = Table.AddColumn(#"Added Custom", "Longitude",
each [coordinates]{0}),
#"Removed Columns" = Table.RemoveColumns(#"Added Custom1",
{"coordinates"}),
#"Changed Type" = Table.TransformColumnTypes(#"Removed Columns",
{{"Name", type text}, {"IcaoCode", type text}, {"IataCode", type text},
{"Address", type text}, {"City", type text}, {"CountryRegion", type text},
{"Region", type text}, {"Latitude", type number}, {"Longitude", type
number}})
in
#"Changed Type";

Next you'll import the mock navigation table query you wrote that creates a fixed table
linking to these data set queries. Call it TripPinNavTable :

Power Query M

TripPinNavTable = (url as text) as table =>


let
source = #table({"Name", "Data"}, {
{ "Airlines", GetAirlinesTable(url) },
{ "Airports", GetAirportsTable(url) }
})
in
source;
Finally you'll declare a new shared function, TripPin.Contents , that will be used as your
main data source function. You'll also remove the Publish value from TripPin.Feed so
that it no longer shows up in the Get Data dialog.

Power Query M

[DataSource.Kind="TripPin"]
shared TripPin.Feed = Value.ReplaceType(TripPinImpl, type function (url as
Uri.Type) as any);

[DataSource.Kind="TripPin", Publish="TripPin.Publish"]
shared TripPin.Contents = Value.ReplaceType(TripPinNavTable, type function
(url as Uri.Type) as any);

7 Note

Your extension can mark multiple functions as shared , with or without associating
them with a DataSource.Kind . However, when you associate a function with a
specific DataSource.Kind , each function must have the same set of required
parameters, with the same name and type. This is because the data source function
parameters are combined to make a 'key' used for looking up cached credentials.

You can test your TripPin.Contents function using your TripPin.query.pq file. Running
the following test query will give you a credential prompt, and a simple table output.

Power Query M

TripPin.Contents("https://services.odata.org/v4/TripPinService/")
Creating a navigation table
You'll use the handy Table.ToNavigationTable function to format your static table into
something that Power Query will recognize as a navigation table. Since this function is
not part of Power Query's standard library, you'll need to copy its source code into your
.pq file.
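
For reference, the implementation commonly shown in the Power Query samples looks like the following sketch; if it differs from the version in the navigation table documentation, prefer the documented version:

Power Query M

Table.ToNavigationTable = (
    table as table,
    keyColumns as list,
    nameColumn as text,
    dataColumn as text,
    itemKindColumn as text,
    itemNameColumn as text,
    isLeafColumn as text
) as table =>
    let
        tableType = Value.Type(table),
        newTableType = Type.AddTableKey(tableType, keyColumns, true) meta
        [
            NavigationTable.NameColumn = nameColumn,
            NavigationTable.DataColumn = dataColumn,
            NavigationTable.ItemKindColumn = itemKindColumn,
            Preview.DelayColumn = itemNameColumn,
            NavigationTable.IsLeafColumn = isLeafColumn
        ],
        navigationTable = Value.ReplaceType(table, newTableType)
    in
        navigationTable;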

With this helper function in place, next update your TripPinNavTable function to add
the navigation table fields.

Power Query M

TripPinNavTable = (url as text) as table =>


let
source = #table({"Name", "Data", "ItemKind", "ItemName", "IsLeaf"},
{
{ "Airlines", GetAirlinesTable(url), "Table", "Table", true },
{ "Airports", GetAirportsTable(url), "Table", "Table", true }
}),
navTable = Table.ToNavigationTable(source, {"Name"}, "Name", "Data",
"ItemKind", "ItemName", "IsLeaf")
in
navTable;

Running your test query again will give you a similar result as last time—with a few
more columns added.

7 Note

You will not see the Navigator window appear in Visual Studio. The M Query
Output window always displays the underlying table.
If you copy your extension over to your Power BI Desktop custom connector and invoke
the new function from the Get Data dialog, you'll see your navigator appear.

If you right click on the root of the navigation tree and select Edit, you'll see the same
table as you did within Visual Studio.

Conclusion
In this tutorial, you added a Navigation Table to your extension. Navigation Tables are a
key feature that make connectors easier to use. In this example your navigation table
only has a single level, but the Power Query UI supports displaying navigation tables
that have multiple dimensions (even when they are ragged).
Next steps
TripPin Part 4 - Data Source Paths
TripPin part 4 - Data source paths
Article • 02/17/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

In this lesson, you will:

" Simplify the connection logic for your connector


" Improve the navigation table experience

This lesson simplifies the connector built in the previous lesson by removing its required
function parameters, and improving the user experience by moving to a dynamically
generated navigation table.

For an in-depth explanation of how credentials are identified, see the Data Source Paths
section of Handling Authentication.

Data source paths


When invoking a data source function, the M engine identifies which credentials to use
during an evaluation by doing a lookup based on the Data Source Kind and Data Source
Path values.

In the previous lesson you shared two data source functions, both with a single Uri.Type
parameter.

Power Query M

[DataSource.Kind="TripPin"]
shared TripPin.Feed = Value.ReplaceType(TripPinImpl, type function (url as
Uri.Type) as any);

[DataSource.Kind="TripPin", Publish="TripPin.Publish"]
shared TripPin.Contents = Value.ReplaceType(TripPinNavTable, type function
(url as Uri.Type) as any);

The first time you run a query that uses one of the functions, you'll receive a credential
prompt with drop-downs that let you select a path and an authentication type.
If you run the same query again, with the same parameters, the M engine is able to
locate the cached credentials, and no credential prompt is shown. If you modify the url
argument to your function so that the base path no longer matches, a new credential
prompt is displayed for the new path.

You can see any cached credentials on the Credentials table in the M Query Output
window.

Depending on the type of change, modifying the parameters of your function will likely
result in a credential error.

Simplifying the connector


You'll now simplify your connector by removing the parameters for your data source
function ( TripPin.Contents ). You'll also remove the shared qualifier for TripPin.Feed ,
and leave it as an internal-only function.

One of the design philosophies of Power Query is to keep the initial data source dialog
as simple as possible. If at all possible, you should provide the user with choices at the
Navigator level, rather than on the connection dialog. If a user-provided value can be
determined programmatically, consider adding it as the top level of your navigation
table rather than a function parameter.

For example, when connecting to a relational database, you might need server,
database, and table names. Once you know the server to connect to, and credentials
have been provided, you could use the database's API to fetch a list of databases, and a
list of tables contained within each database. In this case, to keep your initial connect
dialog as simple as possible, only the server name should be a required parameter—
Database and Table would be levels of your navigation table.
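
A sketch of that principle, using hypothetical MyDatabase names; only the server is a function parameter, and BuildNavTableForServer stands in for logic (not shown) that discovers databases and tables and returns them as a navigation table:

Power Query M

[DataSource.Kind="MyDatabase", Publish="MyDatabase.Publish"]
shared MyDatabase.Contents = (server as text) as table =>
    // BuildNavTableForServer is a hypothetical helper; databases and tables are
    // discovered after connecting and surfaced as levels of the navigation table.
    BuildNavTableForServer(server);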

Since the TripPin service has a fixed URL endpoint, you don't need to prompt the user
for any values. You'll remove the url parameter from your function, and define a BaseUrl
variable in your connector.

Power Query M

BaseUrl = "https://services.odata.org/v4/TripPinService/";

[DataSource.Kind="TripPin", Publish="TripPin.Publish"]
shared TripPin.Contents = () => TripPinNavTable(BaseUrl) as table;

You'll keep the TripPin.Feed function, but no longer make it shared, no longer associate
it with a Data Source Kind, and simplify its declaration. From this point on, you'll only
use it internally within this section document.

Power Query M

TripPin.Feed = (url as text) =>
    let
        source = Web.Contents(url, [ Headers = DefaultRequestHeaders ]),
        json = Json.Document(source)
    in
        json;

If you update the TripPin.Contents() call in your TripPin.query.pq file and run it in
Visual Studio, you'll see a new credential prompt. Note that there is now a single Data
Source Path value—TripPin.
Improving the navigation table
In the first tutorial you used the built-in OData functions to connect to the TripPin
service. This gave you a really nice looking navigation table, based on the TripPin service
document, with no additional code on your side. The OData.Feed function automatically
did the hard work for you. Since you're "roughing it" by using Web.Contents rather than
OData.Feed, you'll need to recreate this navigation table yourself.
You're going to make the following changes:

1. Define a list of items to show in your navigation table


2. Do away with the entity specific functions ( GetAirlineTables and
GetAirportsTable )

Generating a navigation table from a list


You'll list the entities you want to expose in the navigation table, and build the
appropriate URL to access them. Since all of the entities are under the same root path,
you'll be able to build these URLs dynamically.

To simplify the example, you'll only expose the three entity sets (Airlines, Airports,
People), which would be exposed as Tables in M, and skip the singleton (Me) which
would be exposed as a Record. You'll skip adding the functions until a later lesson.

Power Query M

RootEntities = {
"Airlines",
"Airports",
"People"
};

You then update your TripPinNavTable function to build the table a column at a time.
The [Data] column for each entity is retrieved by calling TripPin.Feed with the full URL
to the entity.

Power Query M

TripPinNavTable = (url as text) as table =>


let
entitiesAsTable = Table.FromList(RootEntities,
Splitter.SplitByNothing()),
rename = Table.RenameColumns(entitiesAsTable, {{"Column1",
"Name"}}),
// Add Data as a calculated column
withData = Table.AddColumn(rename, "Data", each
TripPin.Feed(Uri.Combine(url, [Name])), Uri.Type),
// Add ItemKind and ItemName as fixed text values
withItemKind = Table.AddColumn(withData, "ItemKind", each "Table",
type text),
withItemName = Table.AddColumn(withItemKind, "ItemName", each
"Table", type text),
// Indicate that the node should not be expandable
withIsLeaf = Table.AddColumn(withItemName, "IsLeaf", each true, type
logical),
// Generate the nav table
navTable = Table.ToNavigationTable(withIsLeaf, {"Name"}, "Name",
"Data", "ItemKind", "ItemName", "IsLeaf")
in
navTable;

When dynamically building URL paths, make sure you're clear where your forward
slashes (/) are! Note that Uri.Combine uses the following rules when combining paths:

When the relativeUri parameter starts with a /, it will replace the entire path of
the baseUri parameter
If the relativeUri parameter does not start with a / and baseUri ends with a /, the
path is appended
If the relativeUri parameter does not start with a / and baseUri does not end with
a /, the last segment of the path is replaced

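For example, assuming an illustrative base URL, the rules above produce the following results (shown as comments):

Power Query M

let
    appended     = Uri.Combine("https://contoso.com/odata/v4/", "Airlines"),   // https://contoso.com/odata/v4/Airlines
    lastReplaced = Uri.Combine("https://contoso.com/odata/v4", "Airlines"),    // https://contoso.com/odata/Airlines
    pathReplaced = Uri.Combine("https://contoso.com/odata/v4/", "/Airlines")   // https://contoso.com/Airlines
in
    [Appended = appended, LastSegmentReplaced = lastReplaced, PathReplaced = pathReplaced]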

Remove the entity specific functions


To make your connector easier to maintain, you'll remove the entity specific formatting
functions you used in the previous lesson— GetAirlineTables and GetAirportsTable .
Instead, you'll update TripPin.Feed to process the JSON response in a way that will
work for all of your entities. Specifically, you take the value field of the returned OData
JSON payload, and convert it from a list of records to a table.

Power Query M

TripPin.Feed = (url as text) =>
    let
        source = Web.Contents(url, [ Headers = DefaultRequestHeaders ]),
        json = Json.Document(source),
        // The response is a JSON record - the data we want is a list of records in the "value" field
        value = json[value],
        asTable = Table.FromList(value, Splitter.SplitByNothing()),
        // expand all columns from the record
        fields = Record.FieldNames(Table.FirstValue(asTable, [Empty = null])),
        expandAll = Table.ExpandRecordColumn(asTable, "Column1", fields)
    in
        expandAll;

7 Note

A disadvantage of using a generic approach to process your entities is that you lose
the nice formatting and type information for your entities. A later section in this
tutorial shows how to enforce schema on REST API calls.

Conclusion
In this tutorial, you cleaned up and simplified your connector by fixing your Data Source
Path value, and moving to a more flexible format for your navigation table. After
completing these steps (or using the sample code in this directory), the
TripPin.Contents function returns a navigation table in Power BI Desktop.
Next steps
TripPin Part 5 - Paging
TripPin part 5 - Paging
Article • 02/17/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

In this lesson, you will:

" Add paging support to the connector

Many REST APIs will return data in "pages", requiring clients to make multiple requests to
stitch the results together. Although there are some common conventions for
pagination (such as RFC 5988), it generally varies from API to API. Thankfully, TripPin is
an OData service, and the OData standard defines a way of doing pagination using
odata.nextLink values returned in the body of the response.

To keep previous iterations of the connector simple, the TripPin.Feed function was not page
aware. It simply parsed whatever JSON was returned from the request and formatted it
as a table. Those familiar with the OData protocol might have noticed that a number of
incorrect assumptions were made about the format of the response (such as assuming
there is a value field containing an array of records).

In this lesson you'll improve your response handling logic by making it page aware.
Future tutorials will make the page handling logic more robust and able to handle
multiple response formats (including errors from the service).

7 Note

You do not need to implement your own paging logic with connectors based on
OData.Feed, as it handles it all for you automatically.

Paging checklist
When implementing paging support, you'll need to know the following things about
your API:

How do you request the next page of data?


Does the paging mechanism involve calculating values, or do you extract the URL
for the next page from the response?
How do you know when to stop paging?
Are there parameters related to paging that you should be aware of? (such as
"page size")

The answers to these questions will impact the way you implement your paging logic.
While there is some amount of code reuse across paging implementations (such as the
use of Table.GenerateByPage), most connectors will end up requiring custom logic.

7 Note

This lesson contains paging logic for an OData service, which follows a specific
format. Check the documentation for your API to determine the changes you'll
need to make in your connector to support its paging format.

Overview of OData Paging


OData paging is driven by nextLink annotations contained within the response
payload. The nextLink value contains the URL to the next page of data. You'll know
whether there is another page of data by looking for an odata.nextLink field in the
outermost object in the response. If there's no odata.nextLink field, you've read all of your data.

JSON

{
"odata.context": "...",
"odata.count": 37,
"value": [
{ },
{ },
{ }
],
"odata.nextLink": "...?$skiptoken=342r89"
}

Some OData services allow clients to supply a max page size preference , but it is up to
the service whether or not to honor it. Power Query should be able to handle responses
of any size, so you don't need to worry about specifying a page size preference—you
can support whatever the service throws at you.

More information about Server-Driven Paging can be found in the OData specification.
Testing TripPin
Before fixing your paging implementation, confirm the current behavior of the extension
from the previous tutorial. The following test query will retrieve the People table and
add an index column to show your current row count.

Power Query M

let
source = TripPin.Contents(),
data = source{[Name="People"]}[Data],
withRowCount = Table.AddIndexColumn(data, "Index")
in
withRowCount

Turn on Fiddler, and run the query in Visual Studio. You'll notice that the query returns a
table with 8 rows (index 0 to 7).

If you look at the body of the response in Fiddler, you'll see that it does in fact
contain an @odata.nextLink field, indicating that there are more pages of data available.

JSON

{
"@odata.context":
"https://services.odata.org/V4/TripPinService/$metadata#People",
"@odata.nextLink": "https://services.odata.org/v4/TripPinService/People?
%24skiptoken=8",
"value": [
{ },
{ },
{ }
]
}

Implementing paging for TripPin


You're now going to make the following changes to your extension:

1. Import the common Table.GenerateByPage function


2. Add a GetAllPagesByNextLink function that uses Table.GenerateByPage to glue all
pages together
3. Add a GetPage function that can read a single page of data
4. Add a GetNextLink function to extract the next URL from the response
5. Update TripPin.Feed to use the new page reader functions

7 Note

As stated earlier in this tutorial, paging logic will vary between data sources. The
implementation here tries to break up the logic into functions that should be
reusable for sources that use next links returned in the response.

Table.GenerateByPage
To combine the (potentially) multiple pages returned by the source into a single table,
we'll use Table.GenerateByPage. This function takes as its argument a getNextPage
function which should do just what its name suggests: fetch the next page of data.
Table.GenerateByPage will repeatedly call the getNextPage function, each time passing it

the results produced the last time it was called, until it returns null to signal back that
no more pages are available.

Since this function is not part of Power Query's standard library, you'll need to copy its
source code into your .pq file.
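
For convenience, a commonly used implementation of this helper is sketched below. Treat it as a reference sketch and confirm it against the copy you actually include in your project:

Power Query M

// getNextPage takes a single argument (the previous page, or null on the first call)
// and is expected to return a nullable table
Table.GenerateByPage = (getNextPage as function) as table =>
    let
        listOfPages = List.Generate(
            () => getNextPage(null),            // get the first page of data
            (lastPage) => lastPage <> null,     // stop when the function returns null
            (lastPage) => getNextPage(lastPage) // pass the previous page to the next call
        ),
        // concatenate the pages together into a single-column table of page tables
        tableOfPages = Table.FromList(listOfPages, Splitter.SplitByNothing(), {"Column1"}),
        firstRow = tableOfPages{0}?
    in
        // if we didn't get back any pages of data, return an empty table;
        // otherwise combine the pages and set the table type based on the first page
        if (firstRow = null) then
            Table.FromRows({})
        else
            Value.ReplaceType(
                Table.Combine(tableOfPages[Column1]),
                Value.Type(firstRow[Column1])
            );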

Implementing GetAllPagesByNextLink
The body of your GetAllPagesByNextLink function implements the getNextPage function
argument for Table.GenerateByPage. It will call the GetPage function, and retrieve the
URL for the next page of data from the NextLink field of the meta record from the
previous call.

Power Query M

// Read all pages of data.
// After every page, we check the "NextLink" field on the metadata of the previous request.
// Table.GenerateByPage will keep asking for more pages until we return null.
GetAllPagesByNextLink = (url as text) as table =>
    Table.GenerateByPage((previous) =>
        let
            // if previous is null, then this is our first page of data
            nextLink = if (previous = null) then url else Value.Metadata(previous)[NextLink]?,
            // if NextLink was set to null by the previous call, we know we have no more data
            page = if (nextLink <> null) then GetPage(nextLink) else null
        in
            page
    );

Implementing GetPage
Your GetPage function will use Web.Contents to retrieve a single page of data from the
TripPin service, and convert the response into a table. It passes the response from
Web.Contents to the GetNextLink function to extract the URL of the next page, and sets
it on the meta record of the returned table (page of data).

This implementation is a slightly modified version of the TripPin.Feed call from the
previous tutorials.

Power Query M

GetPage = (url as text) as table =>
    let
        response = Web.Contents(url, [ Headers = DefaultRequestHeaders ]),
        body = Json.Document(response),
        nextLink = GetNextLink(body),
        data = Table.FromRecords(body[value])
    in
        data meta [NextLink = nextLink];

Implementing GetNextLink
Your GetNextLink function simply checks the body of the response for an
@odata.nextLink field, and returns its value.

Power Query M

// In this implementation, 'response' will be the parsed body of the response after the call to Json.Document.
// Look for the '@odata.nextLink' field and simply return null if it doesn't exist.
GetNextLink = (response) as nullable text => Record.FieldOrDefault(response, "@odata.nextLink");
Putting it all together
The final step to implement your paging logic is to update TripPin.Feed to use the new
functions. For now, you're simply calling through to GetAllPagesByNextLink , but in
subsequent tutorials, you'll be adding new capabilities (such as enforcing a schema, and
query parameter logic).

Power Query M

TripPin.Feed = (url as text) as table => GetAllPagesByNextLink(url);

If you re-run the same test query from earlier in the tutorial, you should now see the
page reader in action. You should also see that you have 20 rows in the response rather
than 8.

If you look at the requests in Fiddler, you should now see separate requests for each
page of data.
7 Note

You'll notice duplicate requests for the first page of data from the service, which is
not ideal. The extra request is a result of the M engine's schema checking behavior.
Ignore this issue for now and resolve it in the next tutorial, where you'll apply an
explicit schema.

Conclusion
This lesson showed you how to implement pagination support for a Rest API. While the
logic will likely vary between APIs, the pattern established here should be reusable with
minor modifications.

In the next lesson, you'll look at how to apply an explicit schema to your data, going
beyond the simple text and number data types you get from Json.Document .

Next steps
TripPin Part 6 - Schema
TripPin part 6 - Schema
Article • 02/17/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

In this lesson, you will:

" Define a fixed schema for a REST API


" Dynamically set data types for columns
" Enforce a table structure to avoid transformation errors due to missing columns
" Hide columns from the result set

One of the big advantages of an OData service over a standard REST API is its
$metadata definition. The $metadata document describes the data found on this
service, including the schema for all of its Entities (Tables) and Fields (Columns). The
OData.Feed function uses this schema definition to automatically set data type
information—so instead of getting all text and number fields (like you would from
Json.Document), end users will get dates, whole numbers, times, and so on, providing a
better overall user experience.

Many REST APIs don't have a way to programmatically determine their schema. In these
cases, you'll need to include schema definitions within your connector. In this lesson
you'll define a simple, hardcoded schema for each of your tables, and enforce the
schema on the data you read from the service.

7 Note

The approach described here should work for many REST services. Future lessons
will build upon this approach by recursively enforcing schemas on structured
columns (record, list, table), and provide sample implementations that can
programmatically generate a schema table from CSDL or JSON Schema
documents.

Overall, enforcing a schema on the data returned by your connector has multiple
benefits, such as:

Setting the correct data types


Removing columns that don't need to be shown to end users (such as internal IDs
or state information)
Ensuring that each page of data has the same shape by adding any columns that
might be missing from a response (a common way for REST APIs to indicate a field
should be null)

Viewing the existing schema with Table.Schema


The connector created in the previous lesson displays three tables from the TripPin
service— Airlines , Airports , and People . Run the following query to view the Airlines
table:

Power Query M

let
source = TripPin.Contents(),
data = source{[Name="Airlines"]}[Data]
in
data

In the results you'll see four columns returned:

@odata.id
@odata.editLink
AirlineCode
Name
The "@odata.*" columns are part of OData protocol, and not something you'd want or
need to show to the end users of your connector. AirlineCode and Name are the two
columns you'll want to keep. If you look at the schema of the table (using the handy
Table.Schema function), you can see that all of the columns in the table have a data type
of Any.Type .

Power Query M

let
source = TripPin.Contents(),
data = source{[Name="Airlines"]}[Data]
in
Table.Schema(data)

Table.Schema returns a lot of metadata about the columns in a table, including names,
positions, type information, and many advanced properties, such as Precision, Scale, and
MaxLength. Future lessons will provide design patterns for setting these advanced
properties, but for now you need only concern yourself with the ascribed type
( TypeName ), primitive type ( Kind ), and whether the column value might be null
( IsNullable ).
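
For example, a quick way to focus on just those columns is to project the Table.Schema result down to them. A minimal sketch using the connector built so far:

Power Query M

let
    source = TripPin.Contents(),
    data = source{[Name="Airlines"]}[Data],
    schema = Table.Schema(data)
in
    Table.SelectColumns(schema, {"Name", "TypeName", "Kind", "IsNullable"})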

Defining a simple schema table


Your schema table will be composed of two columns:

Column Details

Name The name of the column. This must match the name in the results returned by the
service.

Type The M data type you're going to set. This can be a primitive type ( text , number ,
datetime , and so on), or an ascribed type ( Int64.Type , Currency.Type , and so on).

The hardcoded schema table for the Airlines table will set its AirlineCode and Name
columns to text , and looks like this:

Power Query M
Airlines = #table({"Name", "Type"}, {
{"AirlineCode", type text},
{"Name", type text}
});

The Airports table has four fields you'll want to keep (including one of type record ):

Power Query M

Airports = #table({"Name", "Type"}, {


{"IcaoCode", type text},
{"Name", type text},
{"IataCode", type text},
{"Location", type record}
});

Finally, the People table has seven fields, including lists ( Emails , AddressInfo ), a nullable
column ( Gender ), and a column with an ascribed type ( Concurrency ).

Power Query M

People = #table({"Name", "Type"}, {


{"UserName", type text},
{"FirstName", type text},
{"LastName", type text},
{"Emails", type list},
{"AddressInfo", type list},
{"Gender", type nullable text},
{"Concurrency", Int64.Type}
})

The SchemaTransformTable helper function


The SchemaTransformTable helper function described below will be used to enforce
schemas on your data. It takes the following parameters:

Parameter Type Description

table table The table of data you'll want to enforce your schema on.

schema table The schema table to read column information from, with the
following type: type table [Name = text, Type = type] .

enforceSchema number (optional) An enum that controls behavior of the function.


The default value ( EnforceSchema.Strict = 1 ) ensures that the output
table will match the schema table that was provided by adding any
missing columns, and removing extra columns.
The EnforceSchema.IgnoreExtraColumns = 2 option can be used to
preserve extra columns in the result.
When EnforceSchema.IgnoreMissingColumns = 3 is used, both missing
columns and extra columns will be ignored.

The logic for this function looks something like this:

1. Determine if there are any missing columns from the source table.
2. Determine if there are any extra columns.
3. Ignore structured columns (of type list , record , and table ), and columns set to
type any .
4. Use Table.TransformColumnTypes to set each column type.
5. Reorder columns based on the order they appear in the schema table.
6. Set the type on the table itself using Value.ReplaceType.

7 Note

The last step to set the table type will remove the need for the Power Query UI to
infer type information when viewing the results in the query editor. This removes
the double request issue you saw at the end of the previous tutorial.

The following helper code can be copy and pasted into your extension:

Power Query M

EnforceSchema.Strict = 1;               // Add any missing columns, remove extra columns, set table type
EnforceSchema.IgnoreExtraColumns = 2;   // Add missing columns, do not remove extra columns
EnforceSchema.IgnoreMissingColumns = 3; // Do not add or remove columns

SchemaTransformTable = (table as table, schema as table, optional enforceSchema as number) as table =>
    let
        // Default to EnforceSchema.Strict
        _enforceSchema = if (enforceSchema <> null) then enforceSchema else EnforceSchema.Strict,

        // Applies type transforms to a given table
        EnforceTypes = (table as table, schema as table) as table =>
            let
                map = (t) => if Type.Is(t, type list) or Type.Is(t, type record) or t = type any then null else t,
                mapped = Table.TransformColumns(schema, {"Type", map}),
                omitted = Table.SelectRows(mapped, each [Type] <> null),
                existingColumns = Table.ColumnNames(table),
                removeMissing = Table.SelectRows(omitted, each List.Contains(existingColumns, [Name])),
                primativeTransforms = Table.ToRows(removeMissing),
                changedPrimatives = Table.TransformColumnTypes(table, primativeTransforms)
            in
                changedPrimatives,

        // Returns the table type for a given schema
        SchemaToTableType = (schema as table) as type =>
            let
                toList = List.Transform(schema[Type], (t) => [Type=t, Optional=false]),
                toRecord = Record.FromList(toList, schema[Name]),
                toType = Type.ForRecord(toRecord, false)
            in
                type table (toType),

        // Determine if we have extra/missing columns.
        // The enforceSchema parameter determines what we do about them.
        schemaNames = schema[Name],
        foundNames = Table.ColumnNames(table),
        addNames = List.RemoveItems(schemaNames, foundNames),
        extraNames = List.RemoveItems(foundNames, schemaNames),
        tmp = Text.NewGuid(),
        added = Table.AddColumn(table, tmp, each []),
        expanded = Table.ExpandRecordColumn(added, tmp, addNames),
        result = if List.IsEmpty(addNames) then table else expanded,
        fullList =
            if (_enforceSchema = EnforceSchema.Strict) then
                schemaNames
            else if (_enforceSchema = EnforceSchema.IgnoreMissingColumns) then
                foundNames
            else
                schemaNames & extraNames,

        // Select the final list of columns.
        // These will be ordered according to the schema table.
        reordered = Table.SelectColumns(result, fullList, MissingField.Ignore),
        enforcedTypes = EnforceTypes(reordered, schema),
        withType = if (_enforceSchema = EnforceSchema.Strict) then Value.ReplaceType(enforcedTypes, SchemaToTableType(schema)) else enforcedTypes
    in
        withType;
Updating the TripPin connector
You'll now make the following changes to your connector to make use of the new
schema enforcement code.

1. Define a master schema table ( SchemaTable ) that holds all of your schema
definitions.
2. Update the TripPin.Feed , GetPage , and GetAllPagesByNextLink to accept a schema
parameter.
3. Enforce your schema in GetPage .
4. Update your navigation table code to wrap each table with a call to a new function
( GetEntity )—this will give you more flexibility to manipulate the table definitions
in the future.

Master schema table


You'll now consolidate your schema definitions into a single table, and add a helper
function ( GetSchemaForEntity ) that lets you look up the definition based on an entity
name (for example, GetSchemaForEntity("Airlines") )

Power Query M

SchemaTable = #table({"Entity", "SchemaTable"}, {


{"Airlines", #table({"Name", "Type"}, {
{"AirlineCode", type text},
{"Name", type text}
})},

{"Airports", #table({"Name", "Type"}, {


{"IcaoCode", type text},
{"Name", type text},
{"IataCode", type text},
{"Location", type record}
})},

{"People", #table({"Name", "Type"}, {


{"UserName", type text},
{"FirstName", type text},
{"LastName", type text},
{"Emails", type list},
{"AddressInfo", type list},
{"Gender", type nullable text},
{"Concurrency", Int64.Type}
})}
});

GetSchemaForEntity = (entity as text) as table =>
    try SchemaTable{[Entity=entity]}[SchemaTable] otherwise error "Couldn't find entity: '" & entity & "'";

Adding schema support to data functions


You'll now add an optional schema parameter to the TripPin.Feed , GetPage , and
GetAllPagesByNextLink functions. This will allow you to pass down the schema (when

you want to) to the paging functions, where it will be applied to the results you get back
from the service.

Power Query M

TripPin.Feed = (url as text, optional schema as table) as table => ...
GetPage = (url as text, optional schema as table) as table => ...
GetAllPagesByNextLink = (url as text, optional schema as table) as table => ...

You'll also update all of the calls to these functions to make sure that you pass the
schema through correctly.

Enforcing the schema


The actual schema enforcement will be done in your GetPage function.

Power Query M

GetPage = (url as text, optional schema as table) as table =>
    let
        response = Web.Contents(url, [ Headers = DefaultRequestHeaders ]),
        body = Json.Document(response),
        nextLink = GetNextLink(body),
        data = Table.FromRecords(body[value]),
        // enforce the schema
        withSchema = if (schema <> null) then SchemaTransformTable(data, schema) else data
    in
        withSchema meta [NextLink = nextLink];

[Note] This GetPage implementation uses Table.FromRecords to convert the list of
records in the JSON response to a table. A major downside to using
Table.FromRecords is that it assumes all records in the list have the same set of
fields. This works for the TripPin service, since the OData records are guaranteed to
contain the same fields, but this might not be the case for all REST APIs. A more
robust implementation would use a combination of Table.FromList and
Table.ExpandRecordColumn. Later tutorials will change the implementation to get
the column list from the schema table, ensuring that no columns are lost or missing
during the JSON to M translation.

Adding the GetEntity function


The GetEntity function will wrap your call to TripPin.Feed. It will look up a schema
definition based on the entity name, and build the full request URL.

Power Query M

GetEntity = (url as text, entity as text) as table =>


let
fullUrl = Uri.Combine(url, entity),
schemaTable = GetSchemaForEntity(entity),
result = TripPin.Feed(fullUrl, schemaTable)
in
result;

You'll then update your TripPinNavTable function to call GetEntity , rather than making
all of the calls inline. The main advantage to this is that it will let you continue modifying
your entity building code, without having to touch your nav table logic.

Power Query M

TripPinNavTable = (url as text) as table =>
    let
        entitiesAsTable = Table.FromList(RootEntities, Splitter.SplitByNothing()),
        rename = Table.RenameColumns(entitiesAsTable, {{"Column1", "Name"}}),
        // Add Data as a calculated column
        withData = Table.AddColumn(rename, "Data", each GetEntity(url, [Name]), type table),
        // Add ItemKind and ItemName as fixed text values
        withItemKind = Table.AddColumn(withData, "ItemKind", each "Table", type text),
        withItemName = Table.AddColumn(withItemKind, "ItemName", each "Table", type text),
        // Indicate that the node should not be expandable
        withIsLeaf = Table.AddColumn(withItemName, "IsLeaf", each true, type logical),
        // Generate the nav table
        navTable = Table.ToNavigationTable(withIsLeaf, {"Name"}, "Name", "Data", "ItemKind", "ItemName", "IsLeaf")
    in
        navTable;
Putting it all together
Once all of the code changes are made, compile and re-run the test query that calls
Table.Schema for the Airlines table.

Power Query M

let
source = TripPin.Contents(),
data = source{[Name="Airlines"]}[Data]
in
Table.Schema(data)

You now see that your Airlines table only has the two columns you defined in its
schema:

If you run the same code against the People table...

Power Query M

let
source = TripPin.Contents(),
data = source{[Name="People"]}[Data]
in
Table.Schema(data)

You'll see that the ascribed type you used ( Int64.Type ) was also set correctly.

An important thing to note is that this implementation of SchemaTransformTable doesn't
modify the types of list and record columns; the Emails and AddressInfo
columns are still typed as list . This is because Json.Document will correctly map JSON
arrays to M lists, and JSON objects to M records. If you were to expand the list or record
column in Power Query, you'd see that all of the expanded columns will be of type any.
Future tutorials will improve the implementation to recursively set type information for
nested complex types.

Conclusion
This tutorial provided a sample implementation for enforcing a schema on JSON data
returned from a REST service. While this sample uses a simple hardcoded schema table
format, the approach could be expanded upon by dynamically building a schema table
definition from another source, such as a JSON schema file, or metadata
service/endpoint exposed by the data source.

In addition to modifying column types (and values), your code is also setting the correct
type information on the table itself. Setting this type information benefits performance
when running inside of Power Query, as the user experience always attempts to infer
type information to display the right UI cues to the end user, and the inference calls
can end up triggering additional calls to the underlying data APIs.

If you view the People table using the TripPin connector from the previous lesson, you'll
see that all of the columns have a 'type any' icon (even the columns that contain lists):

Running the same query with the TripPin connector from this lesson, you'll now see that
the type information is displayed correctly.
Next steps
TripPin Part 7 - Advanced Schema with M Types
TripPin part 7 - Advanced schema with
M types
Article • 02/17/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

In this lesson, you will:

" Enforce a table schema using M Types


" Set types for nested records and lists
" Refactor code for reuse and unit testing

In the previous lesson you defined your table schemas using a simple "Schema Table"
system. This schema table approach works for many REST APIs/Data Connectors, but
services that return complex or deeply nested data sets might benefit from the
approach in this tutorial, which leverages the M type system.

This lesson will guide you through the following steps:

1. Adding unit tests.


2. Defining custom M types.
3. Enforcing a schema using types.
4. Refactoring common code into separate files.

Adding unit tests


Before you start making use of the advanced schema logic, you'll add a set of unit tests
to your connector to reduce the chance of inadvertently breaking something. Unit
testing works like this:

1. Copy the common code from the UnitTest sample into your TripPin.query.pq file.
2. Add a section declaration to the top of your TripPin.query.pq file.
3. Create a shared record (called TripPin.UnitTest ).
4. Define a Fact for each test.
5. Call Facts.Summarize() to run all of the tests.
6. Reference the previous call as the shared value to ensure that it gets evaluated
when the project is run in Visual Studio.
Power Query M

section TripPinUnitTests;

shared TripPin.UnitTest =
[
// Put any common variables here if you only want them to be evaluated
once
RootTable = TripPin.Contents(),
Airlines = RootTable{[Name="Airlines"]}[Data],
Airports = RootTable{[Name="Airports"]}[Data],
People = RootTable{[Name="People"]}[Data],

// Fact(<Name of the Test>, <Expected Value>, <Actual Value>)


// <Expected Value> and <Actual Value> can be a literal or let statement
facts =
{
Fact("Check that we have three entries in our nav table", 3,
Table.RowCount(RootTable)),
Fact("We have Airline data?", true, not Table.IsEmpty(Airlines)),
Fact("We have People data?", true, not Table.IsEmpty(People)),
Fact("We have Airport data?", true, not Table.IsEmpty(Airports)),
Fact("Airlines only has 2 columns", 2,
List.Count(Table.ColumnNames(Airlines))),
Fact("Airline table has the right fields",
{"AirlineCode","Name"},

Record.FieldNames(Type.RecordFields(Type.TableRow(Value.Type(Airlines))))
)
},

report = Facts.Summarize(facts)
][report];

Selecting run on the project will evaluate all of the Facts, and give you a report output
that looks like this:
Using some principles from test-driven development , you'll now add a test that
currently fails, but will soon be reimplemented and fixed (by the end of this tutorial).
Specifically, you'll add a test that checks one of the nested records (Emails) you get back
in the People entity.

Power Query M

Fact("Emails is properly typed", type text,


Type.ListItem(Value.Type(People{0}[Emails])))

If you run the code again, you should now see that you have a failing test.

Now you just need to implement the functionality to make this work.

Defining custom M types


The schema enforcement approach in the previous lesson used "schema tables" defined
as Name/Type pairs. It works well when working with flattened/relational data, but doesn't
support setting types on nested records/tables/lists, or allow you to reuse type
definitions across tables/entities.

In the TripPin case, the data in the People and Airports entities contain structured
columns, and even share a type ( Location ) for representing address information. Rather
than defining Name/Type pairs in a schema table, you'll define each of these entities
using custom M type declarations.

Here is a quick refresher about types in the M language from the Language
Specification:

A type value is a value that classifies other values. A value that is classified by a type
is said to conform to that type. The M type system consists of the following kinds of
types:
Primitive types, which classify primitive values ( binary , date , datetime ,
datetimezone , duration , list , logical , null , number , record , text , time ,
type ) and also include a number of abstract types ( function , table , any , and

none )
Record types, which classify record values based on field names and value
types
List types, which classify lists using a single item base type
Function types, which classify function values based on the types of their
parameters and return values
Table types, which classify table values based on column names, column types,
and keys
Nullable types, which classifies the value null in addition to all the values
classified by a base type
Type types, which classify values that are types

Using the raw JSON output you get (and/or looking up the definitions in the service's
$metadata ), you can define the following record types to represent OData complex
types:

Power Query M

LocationType = type [
Address = text,
City = CityType,
Loc = LocType
];

CityType = type [
CountryRegion = text,
Name = text,
Region = text
];

LocType = type [
#"type" = text,
coordinates = {number},
crs = CrsType
];

CrsType = type [
#"type" = text,
properties = record
];

Note how the LocationType references the CityType and LocType to represent its
structured columns.
For the top level entities (that you want represented as Tables), you define table types:

Power Query M

AirlinesType = type table [


AirlineCode = text,
Name = text
];

AirportsType = type table [


Name = text,
IataCode = text,
Location = LocationType
];

PeopleType = type table [


UserName = text,
FirstName = text,
LastName = text,
Emails = {text},
AddressInfo = {nullable LocationType},
Gender = nullable text,
Concurrency = Int64.Type
];

You then update your SchemaTable variable (which you use as a "lookup table" for entity
to type mappings) to use these new type definitions:

Power Query M

SchemaTable = #table({"Entity", "Type"}, {


{"Airlines", AirlinesType },
{"Airports", AirportsType },
{"People", PeopleType}
});

Enforcing a schema using types


You'll rely on a common function ( Table.ChangeType ) to enforce a schema on your data,
much like you used SchemaTransformTable in the previous lesson. Unlike
SchemaTransformTable , Table.ChangeType takes in an actual M table type as an
argument, and will apply your schema recursively for all nested types. Its signature looks
like this:

Power Query M

Table.ChangeType = (table, tableType as type) as nullable table => ...


The full code listing for the Table.ChangeType function can be found in the
Table.ChangeType.pqm file.

7 Note

For flexibility, the function can be used on tables, as well as lists of records (which is
how tables would be represented in a JSON document).
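
As a hedged illustration of that flexibility, once Table.ChangeType has been loaded from its .pqm file, it can be pointed at a plain list of records as well as at a table. The sample values here are purely illustrative:

Power Query M

let
    // a list of records, similar to the "value" array of an OData JSON response
    records = {
        [AirlineCode = "AA", Name = "American Airlines"],
        [AirlineCode = "FM", Name = "Shanghai Airline"]
    },
    // applying a table type converts the list into a typed table
    typed = Table.ChangeType(records, type table [AirlineCode = text, Name = text])
in
    typed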

You then need to update the connector code to change the schema parameter from a
table to a type , and add a call to Table.ChangeType in GetEntity .

Power Query M

GetEntity = (url as text, entity as text) as table =>


let
fullUrl = Uri.Combine(url, entity),
schema = GetSchemaForEntity(entity),
result = TripPin.Feed(fullUrl, schema),
appliedSchema = Table.ChangeType(result, schema)
in
appliedSchema;

GetPage is updated to use the list of fields from the schema (to know the names of what
to expand when you get the results), but leaves the actual schema enforcement to
GetEntity .

Power Query M

GetPage = (url as text, optional schema as type) as table =>
    let
        response = Web.Contents(url, [ Headers = DefaultRequestHeaders ]),
        body = Json.Document(response),
        nextLink = GetNextLink(body),

        // If we have no schema, use Table.FromRecords() instead
        // (and hope that our results all have the same fields).
        // If we have a schema, expand the record using its field names
        data =
            if (schema = null) then
                Table.FromRecords(body[value])
            else
                let
                    // convert the list of records into a table (single column of records)
                    asTable = Table.FromList(body[value], Splitter.SplitByNothing(), {"Column1"}),
                    fields = Record.FieldNames(Type.RecordFields(Type.TableRow(schema))),
                    expanded = Table.ExpandRecordColumn(asTable, "Column1", fields)
                in
                    expanded
    in
        data meta [NextLink = nextLink];

Confirming that nested types are being set


The definition for your PeopleType now sets the Emails field to a list of text ( {text} ). If
you're applying the types correctly, the call to Type.ListItem in your unit test should now
be returning type text rather than type any .

Running your unit tests again shows that they are now all passing.

Refactoring common code into separate files

7 Note

The M engine will have improved support for referencing external


modules/common code in the future, but this approach should carry you through
until then.

At this point, your extension almost has as much "common" code as TripPin connector
code. In the future these common functions will either be part of the built-in standard
function library, or you'll be able to reference them from another extension. For now,
you refactor your code in the following way:

1. Move the reusable functions to separate files (.pqm).


2. Set the Build Action property on the file to Compile to make sure it gets included
in your extension file during the build.
3. Define a function to load the code using Expression.Evaluate.
4. Load each of the common functions you want to use.

The code to do this is included in the snippet below:

Power Query M

Extension.LoadFunction = (fileName as text) =>


let
binary = Extension.Contents(fileName),
asText = Text.FromBinary(binary)
in
try
Expression.Evaluate(asText, #shared)
catch (e) =>
error [
Reason = "Extension.LoadFunction Failure",
Message.Format = "Loading '#{0}' failed - '#{1}': '#{2}'",
Message.Parameters = {fileName, e[Reason], e[Message]},
Detail = [File = fileName, Error = e]
];

Table.ChangeType = Extension.LoadFunction("Table.ChangeType.pqm");
Table.GenerateByPage = Extension.LoadFunction("Table.GenerateByPage.pqm");
Table.ToNavigationTable =
Extension.LoadFunction("Table.ToNavigationTable.pqm");

Conclusion
This tutorial made a number of improvements to the way you enforce a schema on the
data you get from a REST API. The connector is currently hard coding its schema
information, which has a performance benefit at runtime, but is unable to adapt to
changes in the service's metadata over time. Future tutorials will move to a purely
dynamic approach that will infer the schema from the service's $metadata document.

In addition to the schema changes, this tutorial added Unit Tests for your code, and
refactored the common helper functions into separate files to improve overall
readability.

Next steps
TripPin Part 8 - Adding Diagnostics
TripPin part 8 - Adding diagnostics
Article • 02/17/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

In this lesson, you will:

" Learn about the Diagnostics.Trace function


" Use the Diagnostics helper functions to add trace information to help debug your
connector

Enabling diagnostics
Power Query users can enable trace logging by selecting the checkbox under Options |
Diagnostics.
Once enabled, any subsequent queries will cause the M engine to emit trace information
to log files located in a fixed user directory.

When running M queries from within the Power Query SDK, tracing is enabled at the
project level. On the project properties page, there are three settings related to tracing:

Clear Log—when this is set to true , the log will be reset/cleared when you run
your queries. We recommend you keep this set to true .
Show Engine Traces—this setting controls the output of built-in traces from the M
engine. These traces are generally only useful to members of the Power Query
team, so you'll typically want to keep this set to false .
Show User Traces—this setting controls trace information output by your
connector. You'll want to set this to true .
Once enabled, you'll start seeing log entries in the M Query Output window, under the
Log tab.

Diagnostics.Trace
The Diagnostics.Trace function is used to write messages into the M engine's trace log.

Power Query M

Diagnostics.Trace = (traceLevel as number, message as text, value as any, optional delayed as nullable logical) as any => ...

) Important

M is a functional language with lazy evaluation. When using Diagnostics.Trace ,
keep in mind that the function will only be called if the expression it's a part of is
actually evaluated. Examples of this can be found later in this tutorial.

The traceLevel parameter can be one of the following values (in descending order):

TraceLevel.Critical

TraceLevel.Error

TraceLevel.Warning
TraceLevel.Information

TraceLevel.Verbose
When tracing is enabled, the user can select the maximum level of messages they would
like to see. All trace messages of this level and under will be output to the log. For
example, if the user selects the "Warning" level, trace messages of TraceLevel.Warning ,
TraceLevel.Error , and TraceLevel.Critical would appear in the logs.

The message parameter is the actual text that will be output to the trace file. Note that
the text will not contain the value parameter unless you explicitly include it in the text.

The value parameter is what the function will return. When the delayed parameter is
set to true , value will be a zero parameter function that returns the actual value you're
evaluating. When delayed is set to false , value will be the actual value. An example of
how this works can be found below.
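
Before the TripPin-specific example, here's a minimal standalone sketch of the difference the delayed parameter makes. The message text and values are illustrative only:

Power Query M

let
    // delayed = true: pass a zero-argument function; it's invoked (and the message traced)
    // only when this expression is actually evaluated
    tracedLazy = Diagnostics.Trace(TraceLevel.Information, "Computing the sum", () => 1 + 1, true),
    // delayed = false (or omitted): pass the value itself
    tracedEager = Diagnostics.Trace(TraceLevel.Information, "Sum already computed", 1 + 1, false)
in
    {tracedLazy, tracedEager} // both evaluate to 2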

Using Diagnostics.Trace in the TripPin connector


For a practical example of using Diagnostics.Trace and the impact of the delayed
parameter, update the TripPin connector's GetSchemaForEntity function to wrap the
error exception:

Power Query M

GetSchemaForEntity = (entity as text) as type =>
    try
        SchemaTable{[Entity=entity]}[Type]
    otherwise
        let
            message = Text.Format("Couldn't find entity: '#{0}'", {entity})
        in
            Diagnostics.Trace(TraceLevel.Error, message, () => error message, true);

You can force an error during evaluation (for test purposes!) by passing an invalid entity
name to the GetEntity function. Here you change the withData line in the
TripPinNavTable function, replacing [Name] with "DoesNotExist" .

Power Query M

TripPinNavTable = (url as text) as table =>
    let
        // Use our schema table as the source of top level items in the navigation tree
        entities = Table.SelectColumns(SchemaTable, {"Entity"}),
        rename = Table.RenameColumns(entities, {{"Entity", "Name"}}),
        // Add Data as a calculated column
        withData = Table.AddColumn(rename, "Data", each GetEntity(url, "DoesNotExist"), type table),
        // Add ItemKind and ItemName as fixed text values
        withItemKind = Table.AddColumn(withData, "ItemKind", each "Table", type text),
        withItemName = Table.AddColumn(withItemKind, "ItemName", each "Table", type text),
        // Indicate that the node should not be expandable
        withIsLeaf = Table.AddColumn(withItemName, "IsLeaf", each true, type logical),
        // Generate the nav table
        navTable = Table.ToNavigationTable(withIsLeaf, {"Name"}, "Name", "Data", "ItemKind", "ItemName", "IsLeaf")
    in
        navTable;

Enable tracing for your project, and run your test queries. On the Errors tab you should
see the text of the error you raised:

Also, on the Log tab, you should see the same message. Note that if you use different
values for the message and value parameters, these would be different.
Also note that the Action field of the log message contains the name (Data Source
Kind) of your extension (in this case, Engine/Extension/TripPin ). This makes it easier to
find the messages related to your extension when there are multiple queries involved
and/or system (mashup engine) tracing is enabled.

Delayed evaluation
As an example of how the delayed parameter works, you'll make some modifications
and run the queries again.

First, set the delayed value to false , but leave the value parameter as-is:

Power Query M

Diagnostics.Trace(TraceLevel.Error, message, () => error message, false);

When you run the query, you'll receive an error that "We cannot convert a value of type
Function to type Type", and not the actual error you raised. This is because the call is
now returning a function value, rather than the value itself.

Next, remove the function from the value parameter:

Power Query M

Diagnostics.Trace(TraceLevel.Error, message, error message, false);


When you run the query, you'll receive the correct error, but if you check the Log tab,
there will be no messages. This is because the error ends up being raised/evaluated
during the call to Diagnostics.Trace , so the message is never actually output.

Now that you understand the impact of the delayed parameter, be sure to reset
your connector back to a working state before proceeding.

Diagnostic helper functions in Diagnostics.pqm


The Diagnostics.pqm file included in this project contains a number of helper
functions that make tracing easier. As shown in the previous tutorial, you can include
this file in your project (remembering to set the Build Action to Compile), and then load
it in your connector file. The bottom of your connector file should now look something
like the code snippet below. Feel free to explore the various functions this module
provides, but in this sample, you'll only be using the Diagnostics.LogValue and
Diagnostics.LogFailure functions.

Power Query M

// Diagnostics module contains multiple functions. We can take the ones we


need.
Diagnostics = Extension.LoadFunction("Diagnostics.pqm");
Diagnostics.LogValue = Diagnostics[LogValue];
Diagnostics.LogFailure = Diagnostics[LogFailure];

Diagnostics.LogValue
The Diagnostics.LogValue function is a lot like Diagnostics.Trace , and can be used to
output the value of what you're evaluating.

Power Query M

Diagnostics.LogValue = (prefix as text, value as any) as any => ...

The prefix parameter is prepended to the log message. You'd use this to figure out
which call output the message. The value parameter is what the function will return,
and will also be written to the trace as a text representation of the M value. For example,
if value is equal to a table with columns A and B, the log will contain the equivalent
#table representation: #table({"A", "B"}, {{"row1 A", "row1 B"}, {"row2 A", "row2 B"}})
7 Note

Serializing M values to text can be an expensive operation. Be aware of the


potential size of the values you are outputting to the trace.

7 Note

Most Power Query environments will truncate trace messages to a maximum


length.

As an example, you'll update the TripPin.Feed function to trace the url and schema
arguments passed into the function.

Power Query M

TripPin.Feed = (url as text, optional schema as type) as table =>


let
_url = Diagnostics.LogValue("Accessing url", url),
_schema = Diagnostics.LogValue("Schema type", schema),
//result = GetAllPagesByNextLink(url, schema)
result = GetAllPagesByNextLink(_url, _schema)
in
result;

Note that you have to use the new _url and _schema values in the call to
GetAllPagesByNextLink . If you used the original function parameters, the

Diagnostics.LogValue calls would never actually be evaluated, resulting in no messages


written to the trace. Functional programming is fun!

When you run your queries, you should now see new messages in the log.

Accessing url:
Schema type:

Note that you see the serialized version of the schema parameter type , rather than what
you'd get when you do a simple Text.FromValue on a type value (which results in
"type").

Diagnostics.LogFailure
The Diagnostics.LogFailure function can be used to wrap function calls, and will only
write to the trace if the function call fails (that is, returns an error ).

Power Query M

Diagnostics.LogFailure = (text as text, function as function) as any => ...

Internally, Diagnostics.LogFailure adds a try operator to the function call. If the call
fails, the text value is written to the trace before returning the original error . If the
function call succeeds, the result is returned without writing anything to the trace. Since

M errors don't contain a full stack trace (that is, you typically only see the message of
the error), this can be useful when you want to pinpoint where the error was actually
raised.

As a (poor) example, modify the withData line of the TripPinNavTable function to force
an error once again:

Power Query M

withData = Table.AddColumn(rename, "Data", each
    Diagnostics.LogFailure("Error in GetEntity", () => GetEntity(url, "DoesNotExist")), type table),

In the trace, you can find the resulting error message containing your text , and the
original error information.
Be sure to reset your function to a working state before proceeding with the next
tutorial.

Conclusion
This brief (but important!) lesson showed you how to make use of the diagnostic helper
functions to log to the Power Query trace files. When used properly, these functions are
extremely useful in debugging issues within your connector.

7 Note

As a connector developer, it is your responsibility to ensure that you do not log


sensitive or personally identifiable information (PII) as part of your diagnostic
logging. You must also be careful to not output too much trace information, as it
can have a negative performance impact.

Next steps
TripPin Part 9 - TestConnection
TripPin part 9 - TestConnection
Article • 02/17/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

In this lesson, you'll:

" Add a TestConnection handler


" Configure the on-premises data gateway (personal mode)
" Test scheduled refresh through the Power BI service

Custom connector support was added to the April 2018 release of the personal on-
premises data gateway. This new (preview) functionality allows for Scheduled Refresh of
reports that make use of your custom connector.

This tutorial will cover the process of enabling your connector for refresh, and provide a
quick walkthrough of the steps to configure the gateway. Specifically you'll:

1. Add a TestConnection handler to your connector.


2. Install the On-Premises Data Gateway in Personal mode.
3. Enable Custom Connector support in the Gateway.
4. Publish a workbook that uses your connector to PowerBI.com.
5. Configure scheduled refresh to test your connector.

Go to Handling Gateway Support for more information on the TestConnection handler.

Background
There are three prerequisites for configuring a data source for scheduled refresh using
PowerBI.com:

The data source is supported: This means that the target gateway environment is
aware of all of the functions contained in the query you want to refresh.
Credentials are provided: To present the right credential entry dialog, Power BI
needs to know the supported authentication mechanism for a given data source.
The credentials are valid: After the user provides credentials, they're validated by
calling the data source's TestConnection handler.
The first two items are handled by registering your connector with the gateway. When
the user attempts to configure scheduled refresh in PowerBI.com, the query information
is sent to your personal gateway to determine if any data sources that aren't recognized
by the Power BI service (that is, custom ones that you created) are available there. The
third item is handled by invoking the TestConnection handler defined for your data
source.

Adding a TestConnection handler


The TestConnection handler is added to the Data Source Kind declaration record (the
same place you declare its supported authentication type(s)). The handler is a function
with a single parameter of type any , which returns a list . The first value in the list is
the function that will be called to actually test the connection. This is generally the same
as your main data source function. In some cases you may need to expose a separate
shared function to provide an efficient connection test; however, this should generally
be avoided.

Since the TripPin data source function has no required arguments, the implementation
for TestConnection is fairly simple:

Power Query M

// Data Source Kind description


TripPin = [
// TestConnection is required to enable the connector through the
Gateway
TestConnection = (dataSourcePath) => { "TripPin.Contents" },
Authentication = [
Anonymous = []
],
Label = "TripPin Part 9 - TestConnection"
];
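
For comparison, a connector whose data source function takes a required argument would echo that argument back in the list so the gateway can invoke the function correctly. A hedged sketch, with a hypothetical connector name and authentication setup:

Power Query M

MyConnector = [
    // dataSourcePath contains the path argument(s) the user supplied (for example, a server name)
    TestConnection = (dataSourcePath) => { "MyConnector.Contents", dataSourcePath },
    Authentication = [
        UsernamePassword = []
    ],
    Label = "My Connector"
];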

7 Note

Future versions of the Power Query SDK will provide a way to validate the
TestConnection handler from Visual Studio. Currently, the only mechanism that uses
TestConnection is the on-premises data gateway.

Enabling custom connectors in the personal


gateway
Download and install the on-premises data gateway. When you run the installer, select
the personal mode.

After installation is complete, launch the gateway and sign into Power BI. The sign-in
process will automatically register your gateway with the Power BI services. Once signed
in, perform the following steps:

1. Select the Connectors tab.


2. Select the switch to enable support for Custom data connectors.
3. Select the directory you want to load custom connectors from. This will usually be
the same directory that you'd use for Power BI Desktop, but the value is
configurable.
4. The page should now list all extension files in your target directory.

Go to the online documentation for more information about the gateway.


Testing scheduled refresh
Open Power BI Desktop and create a report that imports data using the TripPin
connector.

Add one or more visuals to your report page (optional), and then publish the report to
PowerBI.com.

After publishing, go to PowerBI.com and find the dataset for the report you just
published. Select the ellipses, and then select Schedule Refresh. Expand the Gateway
connection and Data source credentials sections.
7 Note

If the dataset configuration page says that the report contains unknown data
sources, your gateway/custom connector might not be configured properly. Go to
the personal gateway configuration UI and make sure that there are no errors next
to the TripPin connector. You may need to restart the gateway (on the Service
Settings tab) to pick up the latest configuration.

Select the Edit credentials link to bring up the authentication dialog, and then select
sign-in.

7 Note

If you receive an error similar to the one below ("Failed to update data source
credentials"), you most likely have an issue with your TestConnection handler.
After a successful call to TestConnection, the credentials will be accepted. You can now
schedule refresh, or select the dataset ellipse and then select Refresh Now. You can
select the Refresh history link to view the status of the refresh (which generally takes a
few minutes to get kicked off).
Conclusion
Congratulations! You now have a production ready custom connector that supports
automated refresh through the Power BI service.

Next steps
TripPin Part 10 - Query Folding
TripPin part 10 - Basic query folding
Article • 02/17/2023

This multi-part tutorial covers the creation of a new data source extension for Power
Query. The tutorial is meant to be done sequentially—each lesson builds on the
connector created in previous lessons, incrementally adding new capabilities to your
connector.

In this lesson, you will:

" Learn the basics of query folding


" Learn about the Table.View function
" Replicate OData query folding handlers for:

$top

$skip

$count
$select

$orderby

One of the powerful features of the M language is its ability to push transformation
work to underlying data source(s). This capability is referred to as Query Folding (other
tools/technologies also refer to similar functionality as Predicate Pushdown, or Query
Delegation).

When creating a custom connector that uses an M function with built-in query folding
capabilities, such as OData.Feed or Odbc.DataSource, your connector will automatically
inherit this capability for free.

This tutorial will replicate the built-in query folding behavior for OData by implementing
function handlers for the Table.View function. This part of the tutorial will implement
some of the easier handlers to implement (that is, ones that don't require expression
parsing and state tracking).

To understand more about the query capabilities that an OData service might offer, go
to OData v4 URL Conventions .

7 Note

As stated above, the OData.Feed function will automatically provide query folding
capabilities. Since the TripPin series is treating the OData service as a regular REST
API, using Web.Contents rather than OData.Feed, you'll need to implement the
query folding handlers yourself. For real world usage, we recommend that you use
OData.Feed whenever possible.

Go to Power Query query folding for more information about query folding.

Using Table.View
The Table.View function allows a custom connector to override default transformation
handlers for your data source. An implementation of Table.View will provide a function
for one or more of the supported handlers. If a handler is unimplemented, or returns an
error during evaluation, the M engine will fall back to its default handler.

When a custom connector uses a function that doesn't support implicit query folding,
such as Web.Contents, default transformation handlers will always be performed locally.
If the REST API you're connecting to supports query parameters as part of the query,
Table.View lets you add optimizations that allow transformation work to be pushed to
the service.

The Table.View function has the following signature:

Power Query M

Table.View(table as nullable table, handlers as record) as table

Your implementation will wrap your main data source function. There are two required
handlers for Table.View:

GetType —returns the expected table type of the query result

GetRows —returns the actual table result of your data source function

The simplest implementation would be similar to the following:

Power Query M

TripPin.SuperSimpleView = (url as text, entity as text) as table =>


Table.View(null, [
GetType = () => Value.Type(GetRows()),
GetRows = () => GetEntity(url, entity)
]);

Update the TripPinNavTable function to call TripPin.SuperSimpleView rather than


GetEntity :
Power Query M

withData = Table.AddColumn(rename, "Data", each TripPin.SuperSimpleView(url,


[Name]), type table),

If you re-run the unit tests, you'll see that the behavior of your function hasn't changed.
In this case your Table.View implementation is simply passing through the call to
GetEntity . Since you haven't implemented any transformation handlers (yet), the
original url parameter remains untouched.

Initial implementation of Table.View


The above implementation of Table.View is simple, but not very useful. The following
implementation will be used as your baseline—it doesn't implement any folding
functionality, but has the scaffolding you'll need to do it.

Power Query M

TripPin.View = (baseUrl as text, entity as text) as table =>
    let
        // Implementation of Table.View handlers.
        //
        // We wrap the record with Diagnostics.WrapHandlers() to get some automatic
        // tracing if a handler returns an error.
        //
        View = (state as record) => Table.View(null, Diagnostics.WrapHandlers([
            // Returns the table type returned by GetRows()
            GetType = () => CalculateSchema(state),

            // Called last - retrieves the data from the calculated URL
            GetRows = () =>
                let
                    finalSchema = CalculateSchema(state),
                    finalUrl = CalculateUrl(state),

                    result = TripPin.Feed(finalUrl, finalSchema),
                    appliedType = Table.ChangeType(result, finalSchema)
                in
                    appliedType,

            //
            // Helper functions
            //
            // Retrieves the cached schema. If this is the first call
            // to CalculateSchema, the table type is calculated based on
            // the entity name that was passed into the function.
            CalculateSchema = (state) as type =>
                if (state[Schema]? = null) then
                    GetSchemaForEntity(entity)
                else
                    state[Schema],

            // Calculates the final URL based on the current state.
            CalculateUrl = (state) as text =>
                let
                    urlWithEntity = Uri.Combine(state[Url], state[Entity])
                in
                    urlWithEntity
        ]))
    in
        View([Url = baseUrl, Entity = entity]);

If you look at the call to Table.View, you'll see an additional wrapper function around
the handlers record— Diagnostics.WrapHandlers . This helper function is found in the
Diagnostics module (that was introduced in the adding diagnostics lesson), and
provides you with a useful way to automatically trace any errors raised by individual
handlers.

The GetType and GetRows functions have been updated to make use of two new helper
functions— CalculateSchema and CalculateUrl . Right now, the implementations of
those functions are fairly straightforward—you'll notice they contain parts of what was
previously done by the GetEntity function.

Finally, you'll notice that you're defining an internal function ( View ) that accepts a state
parameter. As you implement more handlers, they will recursively call the internal View
function, updating and passing along state as they go.

Update the TripPinNavTable function once again, replacing the call to
TripPin.SuperSimpleView with a call to the new TripPin.View function, and re-run the
unit tests. You won't see any new functionality yet, but you now have a solid baseline for
testing.

Implementing query folding


Since the M engine will automatically fall back to local processing when a query can't be
folded, you must take some additional steps to validate that your Table.View handlers
are working correctly.

The manual way to validate folding behavior is to watch the URL requests your unit tests
make using a tool like Fiddler. Alternatively, the diagnostic logging you added to
TripPin.Feed will emit the full URL being run, which should include the OData query
string parameters your handlers will add.

An automated way to validate query folding is to force your unit test execution to fail if
a query doesn't fully fold. You can do this by opening the project properties, and setting
Error on Folding Failure to True. With this setting enabled, any query that requires local
processing results in the following error:

We couldn't fold the expression to the data source. Please try a simpler expression.

You can test this out by adding a new Fact to your unit test file that contains one or
more table transformations.

Power Query M

// Query folding tests


Fact("Fold $top 1 on Airlines",
    #table( type table [AirlineCode = text, Name = text] , {{"AA", "American Airlines"}} ),
Table.FirstN(Airlines, 1)
)

7 Note

The Error on Folding Failure setting is an "all or nothing" approach. If you want to
test queries that aren't designed to fold as part of your unit tests, you'll need to
add some conditional logic to enable/disable tests accordingly.

The remaining sections of this tutorial will each add a new Table.View handler. You'll be
taking a Test Driven Development (TDD) approach, where you first add failing unit
tests, and then implement the M code to resolve them.

Each handler section below will describe the functionality provided by the handler, the
OData equivalent query syntax, the unit tests, and the implementation. Using the
scaffolding code described above, each handler implementation requires two changes:

Adding the handler to Table.View that will update the state record (the general pattern is sketched after this list).
Modifying CalculateUrl to retrieve the values from the state and add them to the URL
and/or query string parameters.
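
A minimal, illustrative sketch of that pattern follows (OnSomething is a placeholder, not a real Table.View handler name); the OnTake handler in the next section is a concrete instance of it.

Power Query M

// Illustrative pattern only: record what the handler was asked to do in the
// state record, then recursively call the internal View function with the
// updated state so later handlers and CalculateUrl can see it.
OnSomething = (arg) =>
    let
        newState = state & [ Something = arg ]
    in
        @View(newState),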

Handling Table.FirstN with OnTake


The OnTake handler receives a count parameter, which is the maximum number of rows
to take from GetRows . In OData terms, you can translate this to the $top query
parameter.

You'll use the following unit tests:

Power Query M

// Query folding tests


Fact("Fold $top 1 on Airlines",
#table( type table [AirlineCode = text, Name = text] , {{"AA", "American
Airlines"}} ),
Table.FirstN(Airlines, 1)
),
Fact("Fold $top 0 on Airports",
#table( type table [Name = text, IataCode = text, Location = record] ,
{} ),
Table.FirstN(Airports, 0)
),

These tests both use Table.FirstN to filter the result set down to the first X rows. If you
run these tests with Error on Folding Failure set to False (the default), the tests should
succeed, but if you run Fiddler (or check the trace logs), you'll notice that the request
you send doesn't contain any OData query parameters.

If you set Error on Folding Failure to True , the tests will fail with the Please try a
simpler expression. error. To fix this, you'll define your first Table.View handler for
OnTake .

The OnTake handler looks like this:


Power Query M

OnTake = (count as number) =>
    let
        // Add a record with Top defined to our state
        newState = state & [ Top = count ]
    in
        @View(newState),

The CalculateUrl function is updated to extract the Top value from the state record,
and set the right parameter in the query string.

Power Query M

// Calculates the final URL based on the current state.
CalculateUrl = (state) as text =>
    let
        urlWithEntity = Uri.Combine(state[Url], state[Entity]),

        // Uri.BuildQueryString requires that all field values
        // are text literals.
        defaultQueryString = [],

        // Check for Top defined in our state
        qsWithTop =
            if (state[Top]? <> null) then
                // add a $top field to the query string record
                defaultQueryString & [ #"$top" = Number.ToText(state[Top]) ]
            else
                defaultQueryString,

        encodedQueryString = Uri.BuildQueryString(qsWithTop),
        finalUrl = urlWithEntity & "?" & encodedQueryString
    in
        finalUrl

Rerunning the unit tests, you'll notice that the URL you're accessing now contains the
$top parameter. (Note that due to URL encoding, $top appears as %24top , but the
OData service is smart enough to convert it automatically.)
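
For example, a quick check of Uri.BuildQueryString shows where that encoding comes from (a minimal sketch; the commented output is what this expression evaluates to):

Power Query M

// Uri.BuildQueryString percent-encodes reserved characters such as "$",
// so a [#"$top" = "1"] field becomes "%24top=1" in the final URL.
let
    encoded = Uri.BuildQueryString([ #"$top" = "1" ])
in
    encoded  // "%24top=1"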


Handling Table.Skip with OnSkip
The OnSkip handler is a lot like OnTake . It receives a count parameter, which is the
number of rows to skip from the result set. This translates nicely to the OData $skip
query parameter.

Unit tests:

Power Query M

// OnSkip
Fact("Fold $skip 14 on Airlines",
    #table( type table [AirlineCode = text, Name = text] , {{"EK", "Emirates"}} ),
    Table.Skip(Airlines, 14)
),
Fact("Fold $skip 0 and $top 1",
    #table( type table [AirlineCode = text, Name = text] , {{"AA", "American Airlines"}} ),
    Table.FirstN(Table.Skip(Airlines, 0), 1)
),

Implementation:

Power Query M

// OnSkip - handles the Table.Skip transform.
// The count value should be >= 0.
OnSkip = (count as number) =>
    let
        newState = state & [ Skip = count ]
    in
        @View(newState),

Matching updates to CalculateUrl :

Power Query M

qsWithSkip =
if (state[Skip]? <> null) then
qsWithTop & [ #"$skip" = Number.ToText(state[Skip]) ]
else
qsWithTop,

More information: Table.Skip

Handling Table.SelectColumns with OnSelectColumns


The OnSelectColumns handler is called when the user selects or removes columns from
the result set. The handler receives a list of text values, representing the column(s) to
be selected.

In OData terms, this operation will map to the $select query option.

The advantage of folding column selection becomes apparent when you are dealing
with tables with many columns. The $select operator will remove unselected columns
from the result set, resulting in more efficient queries.
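
For illustration, once this handler is wired into CalculateUrl later in this section, selecting a single column together with a $top of 1 produces a query string like the following (a sketch; the encoding mirrors what Uri.BuildQueryString produces):

Power Query M

// Illustrative only: the query string record and its encoded form when one
// column is selected and only the first row is requested.
let
    qs = [ #"$top" = "1", #"$select" = "AirlineCode" ],
    encoded = Uri.BuildQueryString(qs)
in
    encoded  // "%24top=1&%24select=AirlineCode"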

Unit tests:

Power Query M

// OnSelectColumns
Fact("Fold $select single column",
    #table( type table [AirlineCode = text] , {{"AA"}} ),
    Table.FirstN(Table.SelectColumns(Airlines, {"AirlineCode"}), 1)
),
Fact("Fold $select multiple column",
    #table( type table [UserName = text, FirstName = text, LastName = text], {{"russellwhyte", "Russell", "Whyte"}}),
    Table.FirstN(Table.SelectColumns(People, {"UserName", "FirstName", "LastName"}), 1)
),
Fact("Fold $select with ignore column",
    #table( type table [AirlineCode = text] , {{"AA"}} ),
    Table.FirstN(Table.SelectColumns(Airlines, {"AirlineCode", "DoesNotExist"}, MissingField.Ignore), 1)
),
The first two tests select different numbers of columns with Table.SelectColumns, and
include a Table.FirstN call to simplify the test case.

7 Note

If the test were to simply return the column names (using Table.ColumnNames) and
not any data, the request to the OData service would never actually be sent. This is
because the call to GetType will return the schema, which contains all of the
information the M engine needs to calculate the result.

The third test uses the MissingField.Ignore option, which tells the M engine to ignore
any selected columns that don't exist in the result set. The OnSelectColumns handler
doesn't need to worry about this option—the M engine will handle it automatically (that
is, missing columns won't be included in the columns list).

7 Note

The other option for Table.SelectColumns, MissingField.UseNull, requires a
connector to implement the OnAddColumn handler. This will be done in a subsequent
lesson.

The implementation for OnSelectColumns does two things:

Adds the list of selected columns to the state .


Re-calculates the Schema value so you can set the right table type.

Power Query M

OnSelectColumns = (columns as list) =>
    let
        // get the current schema
        currentSchema = CalculateSchema(state),
        // get the columns from the current schema (which is an M Type value)
        rowRecordType = Type.RecordFields(Type.TableRow(currentSchema)),
        existingColumns = Record.FieldNames(rowRecordType),
        // calculate the new schema
        columnsToRemove = List.Difference(existingColumns, columns),
        updatedColumns = Record.RemoveFields(rowRecordType, columnsToRemove),
        newSchema = type table (Type.ForRecord(updatedColumns, false))
    in
        @View(state &
            [
                SelectColumns = columns,
                Schema = newSchema
            ]
        ),

CalculateUrl is updated to retrieve the list of columns from the state, and combine
them (with a separator) for the $select parameter.

Power Query M

// Check for explicitly selected columns
qsWithSelect =
    if (state[SelectColumns]? <> null) then
        qsWithSkip & [ #"$select" = Text.Combine(state[SelectColumns], ",") ]
    else
        qsWithSkip,

Handling Table.Sort with OnSort


The OnSort handler receives a list of records of type:

Power Query M

type [ Name = text, Order = Int16.Type ]

Each record contains a Name field, indicating the name of the column, and an Order field
which is equal to Order.Ascending or Order.Descending.

In OData terms, this operation will map to the $orderby query option. The $orderby
syntax has the column name followed by asc or desc to indicate ascending or
descending order. When sorting on multiple columns, the values are separated with a
comma. Note that if the columns parameter contains more than one item, it's important
to maintain the order in which they appear.
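
For example, here's a minimal sketch (illustrative values only) of how a list of sort records maps to the $orderby text, mirroring the implementation shown later in this section:

Power Query M

// Converts two sort records into "LastName asc, UserName desc".
let
    order = {
        [ Name = "LastName", Order = Order.Ascending ],
        [ Name = "UserName", Order = Order.Descending ]
    },
    sorting = List.Transform(order, (o) => o[Name] & " " & (if o[Order] = Order.Ascending then "asc" else "desc")),
    orderBy = Text.Combine(sorting, ", ")
in
    orderBy  // "LastName asc, UserName desc"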

Unit tests:

Power Query M

// OnSort
Fact("Fold $orderby single column",
    #table( type table [AirlineCode = text, Name = text], {{"TK", "Turkish Airlines"}}),
    Table.FirstN(Table.Sort(Airlines, {{"AirlineCode", Order.Descending}}), 1)
),
Fact("Fold $orderby multiple column",
    #table( type table [UserName = text], {{"javieralfred"}}),
    Table.SelectColumns(Table.FirstN(Table.Sort(People, {{"LastName", Order.Ascending}, {"UserName", Order.Descending}}), 1), {"UserName"})
)

Implementation:

Power Query M

// OnSort - receives a list of records containing two fields:
//    [Name] - the name of the column to sort on
//    [Order] - equal to Order.Ascending or Order.Descending
// If there are multiple records, the sort order must be maintained.
//
// OData allows you to sort on columns that do not appear in the result
// set, so we do not have to validate that the sorted columns are in our
// existing schema.
OnSort = (order as list) =>
    let
        // This will convert the list of records to a list of text,
        // where each entry is "<columnName> <asc|desc>"
        sorting = List.Transform(order, (o) =>
            let
                column = o[Name],
                order = o[Order],
                orderText = if (order = Order.Ascending) then "asc" else "desc"
            in
                column & " " & orderText
        ),
        orderBy = Text.Combine(sorting, ", ")
    in
        @View(state & [ OrderBy = orderBy ]),

Updates to CalculateUrl :

Power Query M

qsWithOrderBy =
if (state[OrderBy]? <> null) then
qsWithSelect & [ #"$orderby" = state[OrderBy] ]
else
qsWithSelect,

Handling Table.RowCount with GetRowCount


Unlike the other query handlers you've implemented, the GetRowCount handler will
return a single value—the number of rows expected in the result set. In an M query, this
would typically be the result of the Table.RowCount transform.

You have a few different options on how to handle this as part of an OData query:

The $count query parameter , which returns the count as a separate field in the
result set.
The /$count path segment , which will return only the total count, as a scalar
value.

The downside to the query parameter approach is that you still need to send the entire
query to the OData service. Since the count comes back inline as part of the result set,
you'll have to process the first page of data from the result set. While this is still more
efficient than reading the entire result set and counting the rows, it's probably still more
work than you want to do.

The advantage of the path segment approach is that you'll only receive a single scalar
value in the result. This makes the entire operation a lot more efficient. However, as
described in the OData specification, the /$count path segment will return an error if
you include other query parameters, such as $top or $skip , which limits its usefulness.

In this tutorial, you'll implement the GetRowCount handler using the path segment
approach. To avoid the errors you'd get if other query parameters are included, you'll
check for other state values, and return an "unimplemented error" ( ... ) if you find any.
Returning any error from a Table.View handler tells the M engine that the operation
can't be folded, and it should fall back to the default handler instead (which in this case
would be counting the total number of rows).
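
As a reminder, the ( ... ) token is M's "not implemented" shorthand; evaluating it raises an error, which is exactly the signal needed here. A minimal sketch:

Power Query M

// Illustrative only: a handler that always refuses to fold.
// Per the M language specification, "..." is equivalent to raising
// error Error.Record("Expression.Error", "Not Implemented"),
// which causes the engine to fall back to its default (local) handler.
GetRowCount = () as number => ...,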

First, add a simple unit test:

Power Query M

// GetRowCount
Fact("Fold $count", 15, Table.RowCount(Airlines)),

Since the /$count path segment returns a single value (in text/plain format) rather than
a JSON result set, you'll also have to add a new internal function ( TripPin.Scalar ) for
making the request and handling the result.

Power Query M

// Similar to TripPin.Feed, but is expecting back a scalar value.
// This function returns the value from the service as plain text.
TripPin.Scalar = (url as text) as text =>
    let
        _url = Diagnostics.LogValue("TripPin.Scalar url", url),

        headers = DefaultRequestHeaders & [
            #"Accept" = "text/plain"
        ],

        response = Web.Contents(_url, [ Headers = headers ]),
        toText = Text.FromBinary(response)
    in
        toText;

The implementation will then use this function (if no other query parameters are found
in the state ):

Power Query M

GetRowCount = () as number =>
    if (Record.FieldCount(Record.RemoveFields(state, {"Url", "Entity", "Schema"}, MissingField.Ignore)) > 0) then
        ...
    else
        let
            newState = state & [ RowCountOnly = true ],
            finalUrl = CalculateUrl(newState),
            value = TripPin.Scalar(finalUrl),
            converted = Number.FromText(value)
        in
            converted,

The CalculateUrl function is updated to append /$count to the URL if the
RowCountOnly field is set in the state .

Power Query M

// Check for $count. If all we want is a row count,
// then we add /$count to the path value (following the entity name).
urlWithRowCount =
    if (state[RowCountOnly]? = true) then
        urlWithEntity & "/$count"
    else
        urlWithEntity,

The new Table.RowCount unit test should now pass.

To test the fallback case, you'll add another test that forces the error.

First, add a helper method that checks the result of a try operation for a folding error.

Power Query M
// Returns true if there is a folding error, or the original record (for logging purposes) if not.
Test.IsFoldingError = (tryResult as record) =>
    if ( tryResult[HasError]? = true and tryResult[Error][Message] = "We couldn't fold the expression to the data source. Please try a simpler expression.") then
        true
    else
        tryResult;

Then add a test that uses both Table.RowCount and Table.FirstN to force the error.

Power Query M

// test will fail if "Error on Folding Failure" is set to false
Fact("Fold $count + $top *error*", true, Test.IsFoldingError(try Table.RowCount(Table.FirstN(Airlines, 3)))),

An important note here is that this test will now return an error if Error on Folding Failure
is set to false , because the Table.RowCount operation will fall back to the local (default)
handler. Running the tests with Error on Folding Failure set to true will cause
Table.RowCount to fail, and allows the test to succeed.

Conclusion
Implementing Table.View for your connector adds a significant amount of complexity to
your code. Since the M engine can process all transformations locally, adding
Table.View handlers doesn't enable new scenarios for your users, but will result in more
efficient processing (and potentially, happier users). One of the main advantages of the
Table.View handlers being optional is that it allows you to incrementally add new
functionality without impacting backwards compatibility for your connector.

For most connectors, an important (and basic) handler to implement is OnTake (which
translates to $top in OData), as it limits the amount of rows returned. The Power Query
experience will always perform an OnTake of 1000 rows when displaying previews in the
navigator and query editor, so your users might see significant performance
improvements when working with larger data sets.
GitHub Connector Sample
Article • 02/17/2023

The GitHub M extension shows how to add support for an OAuth 2.0 protocol
authentication flow. You can learn more about the specifics of GitHub's authentication
flow on the GitHub Developer site .

Before you get started creating an M extension, you need to register a new app on
GitHub, and replace the client_id and client_secret files with the appropriate values
for your app.

Note about compatibility issues in Visual Studio: The Power Query SDK uses an Internet
Explorer based control to popup OAuth dialogs. GitHub has deprecated its support for the
version of IE used by this control, which will prevent you from completing the permission
grant for your app if run from within Visual Studio. An alternative is to load the extension
with Power BI Desktop and complete the first OAuth flow there. After your application has
been granted access to your account, subsequent logins will work fine from Visual Studio.

OAuth and Power BI


OAuth is a form of credentials delegation. By logging in to GitHub and authorizing the
"application" you create for GitHub, the user is allowing your "application" to login on
their behalf to retrieve data into Power BI. The "application" must be granted rights to
retrieve data (get an access_token) and to refresh the data on a schedule (get and use a
refresh_token). Your "application" in this context is your Data Connector used to run
queries within Power BI. Power BI stores and manages the access_token and
refresh_token on your behalf.

7 Note

To allow Power BI to obtain and use the access_token, you must specify the redirect
url as https://oauth.powerbi.com/views/oauthredirect.html .

When you specify this URL and GitHub successfully authenticates and grants
permissions, GitHub will redirect to PowerBI's oauthredirect endpoint so that Power BI
can retrieve the access_token and refresh_token.

How to register a GitHub app


Your Power BI extension needs to login to GitHub. To enable this, you register a new
OAuth application with GitHub at https://github.com/settings/applications/new .

1. Application name : Enter a name for the application for your M extension.
2. Authorization callback URL : Enter
https://oauth.powerbi.com/views/oauthredirect.html .
3. Scope : In GitHub, set scope to user, repo .

7 Note

A registered OAuth application is assigned a unique Client ID and Client Secret. The
Client Secret should not be shared. You get the Client ID and Client Secret from the
GitHub application page. Update the files in your Data Connector project with the
Client ID ( client_id file) and Client Secret ( client_secret file).

How to implement GitHub OAuth


This sample will walk you through the following steps:

1. Create a Data Source Kind definition that declares it supports OAuth.


2. Provide details so the M engine can start the OAuth flow ( StartLogin ).
3. Convert the code received from GitHub into an access_token ( FinishLogin and
TokenMethod ).

4. Define functions that access the GitHub API ( GithubSample.Contents ).

Step 1 - Create a Data Source definition


A Data Connector starts with a record that describes the extension, including its unique
name (which is the name of the record), supported authentication type(s), and a friendly
display name (label) for the data source. When supporting OAuth, the definition
contains the functions that implement the OAuth contract—in this case, StartLogin and
FinishLogin .

//
// Data Source definition
//
GithubSample = [
Authentication = [
OAuth = [
StartLogin = StartLogin,
FinishLogin = FinishLogin
]
],
Label = Extension.LoadString("DataSourceLabel")
];

Step 2 - Provide details so the M engine can start the OAuth flow

The GitHub OAuth flow starts when you direct users to the
https://github.com/login/oauth/authorize page. For the user to login, you need to
specify a number of query parameters:

Name Type Description

client_id string Required. The client ID you received from GitHub when you registered.

redirect_uri string The URL in your app where users will be sent after authorization. See details
below about redirect urls. For M extensions, the redirect_uri must be
"https://oauth.powerbi.com/views/oauthredirect.html".

scope string A comma separated list of scopes. If not provided, scope defaults to an
empty list of scopes for users that don't have a valid token for the app. For
users who do already have a valid token for the app, the user won't be
shown the OAuth authorization page with the list of scopes. Instead, this
step of the flow will automatically complete with the same scopes that were
used last time the user completed the flow.

state string An un-guessable random string. It's used to protect against cross-site
request forgery attacks.

The following code snippet describes how to implement a StartLogin function to start
the login flow. A StartLogin function takes a resourceUrl , state , and display value. In
the function, create an AuthorizeUrl that concatenates the GitHub authorize URL with
the following parameters:

client_id : You get the client ID after you register your extension with GitHub from
the GitHub application page.
scope : Set scope to " user, repo ". This sets the authorization scope (that is, what

your app wants to access) for the user.


state : An internal value that the M engine passes in.

redirect_uri : Set to https://oauth.powerbi.com/views/oauthredirect.html .


StartLogin = (resourceUrl, state, display) =>
let
AuthorizeUrl = "https://github.com/login/oauth/authorize?" &
Uri.BuildQueryString([
client_id = client_id,
scope = "user, repo",
state = state,
redirect_uri = redirect_uri])
in
[
LoginUri = AuthorizeUrl,
CallbackUri = redirect_uri,
WindowHeight = windowHeight,
WindowWidth = windowWidth,
Context = null
];

If this is the first time the user is logging in with your app (identified by its client_id
value), they'll see a page that asks them to grant access to your app. Subsequent login
attempts will simply ask for their credentials.

Step 3 - Convert the code received from GitHub into an access_token

If the user completes the authentication flow, GitHub redirects back to the Power BI
redirect URL with a temporary code in a code parameter, as well as the state you
provided in the previous step in a state parameter. Your FinishLogin function will
extract the code from the callbackUri parameter, and then exchange it for an access
token (using the TokenMethod function).

FinishLogin = (context, callbackUri, state) =>
    let
        Parts = Uri.Parts(callbackUri)[Query]
    in
        TokenMethod(Parts[code]);

To get a GitHub access token, you pass the temporary code from the GitHub Authorize
Response. In the TokenMethod function, you formulate a POST request to GitHub's
access_token endpoint ( https://github.com/login/oauth/access_token ). The following
parameters are required for the GitHub endpoint:

Name Type Description

client_id string Required. The client ID you received from GitHub when you registered.

client_secret string Required. The client secret you received from GitHub when you registered.

code string Required. The code you received in FinishLogin .

redirect_uri string The URL in your app where users will be sent after authorization. See
details below about redirect URLs.

Here are the details of the parameters used for the Web.Contents call.

Argument: url
Description: The URL for the web site.
Value: https://github.com/login/oauth/access_token

Argument: options
Description: A record to control the behavior of this function.
Value: Not used in this case.

Argument: Query
Description: Programmatically add query parameters to the URL.
Value:
    Content = Text.ToBinary(
        Uri.BuildQueryString(
            [
                client_id = client_id,
                client_secret = client_secret,
                code = code,
                redirect_uri = redirect_uri
            ]
        ))
    Where:
    client_id : Client ID from GitHub application page.
    client_secret : Client secret from GitHub application page.
    code : Code in GitHub authorization response.
    redirect_uri : The URL in your app where users will be sent after authorization.

Argument: Headers
Description: A record with additional headers for the HTTP request.
Value:
    Headers = [
        #"Content-type" = "application/x-www-form-urlencoded",
        #"Accept" = "application/json"
    ]

This code snippet describes how to implement a TokenMethod function to exchange an
auth code for an access token.

TokenMethod = (code) =>
    let
        Response = Web.Contents("https://Github.com/login/oauth/access_token", [
            Content = Text.ToBinary(Uri.BuildQueryString([
                client_id = client_id,
                client_secret = client_secret,
                code = code,
                redirect_uri = redirect_uri])),
            Headers = [#"Content-type" = "application/x-www-form-urlencoded", #"Accept" = "application/json"]]),
        Parts = Json.Document(Response)
    in
        Parts;

The JSON response from the service will contain an access_token field. The TokenMethod
method converts the JSON response into an M record using Json.Document, and returns
it to the engine.

Sample response:

JSON

{
"access_token":"e72e16c7e42f292c6912e7710c838347ae178b4a",
"scope":"user,repo",
"token_type":"bearer"
}

Step 4 - Define functions that access the GitHub API


The following code snippet exports two functions ( GithubSample.Contents and
GithubSample.PagedTable ) by marking them as shared , and associates them with the
GithubSample Data Source Kind.

[DataSource.Kind="GithubSample", Publish="GithubSample.UI"]
shared GithubSample.Contents = Value.ReplaceType(Github.Contents, type
function (url as Uri.Type) as any);

[DataSource.Kind="GithubSample"]
shared GithubSample.PagedTable = Value.ReplaceType(Github.PagedTable, type
function (url as Uri.Type) as nullable table);
The GithubSample.Contents function is also published to the UI (allowing it to appear in
the Get Data dialog). The Value.ReplaceType function is used to set the function
parameter to the Uri.Type ascribed type.

By associating these functions with the GithubSample data source kind, they'll
automatically use the credentials that the user provided. Any M library functions that
have been enabled for extensibility (such as Web.Contents) will automatically inherit
these credentials as well.
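
For illustration, here's a simplified sketch of what the underlying Github.Contents helper might look like (the shipped sample's implementation may differ); no explicit credential handling is needed because the shared function is bound to the GithubSample kind:

Power Query M

// Simplified sketch: Web.Contents automatically attaches the stored OAuth
// access_token because this code runs under the GithubSample data source kind.
Github.Contents = (url as text) =>
    let
        content = Web.Contents(url),
        json = Json.Document(content)
    in
        json;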

For more details on how credentials and authentication work, see Handling
Authentication.

Sample URL
This connector is able to retrieve formatted data from any of the GitHub v3 REST API
endpoints. For example, the query to pull all commits to the Data Connectors repo
would look like this:

GithubSample.Contents("https://api.github.com/repos/microsoft/dataconnectors/commits")
List of Samples
Article • 02/17/2023

We maintain a list of samples on the DataConnectors repo on GitHub. Each of the links
below links to a folder in the sample repository. Generally these folders include a
readme, one or more .pq / .query.pq files, a project file for Visual Studio, and in some
cases icons. To open these files in Visual Studio, make sure you've set up the SDK
properly, and run the .mproj file from the cloned or downloaded folder.

Functionality
Sample Description Link

Hello This simple sample shows the basic structure of a connector. GitHub
World Link

Hello Similar to the Hello World sample, this sample shows how to add GitHub
World with documentation to a shared function. Link
Docs

Navigation This sample provides two examples of how to create a navigation table for GitHub
Tables your data connector using the Table.ToNavigationTable function. Link

Unit This sample shows how you can add simple unit testing to your GitHub
Testing <extension>.query.pq file. Link

OAuth
Sample Description Link

GitHub This sample corresponds to the GitHub connector tutorial. GitHub Link

ODBC
Sample Description Link

SQL This connector sample serves as a template for ODBC connectors. GitHub
Link

Redshift This connector sample uses the Redshift ODBC driver, and is based on GitHub
the connector template. Link

Hive LLAP This connector sample uses the Hive ODBC driver, and is based on the GitHub
connector template. Link

Snowflake This connector sample uses the Snowflake ODBC driver, and is based GitHub
on the connector template. Link

Impala This connector sample uses the Cloudera Impala ODBC driver, and is GitHub
based on the connector template. Link

Direct Query This sample creates an ODBC-based custom connector that enables GitHub
for SQL Direct Query for SQL Server. Link

TripPin
Sample Description Link

Part 1 This sample corresponds to TripPin Tutorial Part 1 - OData. GitHub


Link

Part 2 This sample corresponds to TripPin Tutorial Part 2 - REST. GitHub


Link

Part 3 This sample corresponds to TripPin Tutorial Part 3 - Navigation Tables. GitHub
Link

Part 4 This sample corresponds to TripPin Tutorial Part 4 - Data Source Paths. GitHub
Link

Part 5 This sample corresponds to TripPin Tutorial Part 5 - Paging. GitHub


Link

Part 6 This sample corresponds to TripPin Tutorial Part 6 - Enforcing Schema. GitHub
Link

Part 7 This sample corresponds to TripPin Tutorial Part 7 - Advanced Schema GitHub
with M Types. Link

Part 8 This sample corresponds to TripPin Tutorial Part 8 - Adding Diagnostics. GitHub
Link

Part 9 This sample corresponds to TripPin Tutorial Part 9 - Test Connection. GitHub
Link

Part 10 This sample corresponds to TripPin Tutorial Part 10 - Basic Query GitHub
Folding. Link
Additional connector functionality
Article • 02/15/2023

This article provides information about different types of additional connector
functionality that connector developers might want to invest in. For each type, this
article outlines availability and instructions to enable the functionality.

If there's connector-specific or platform functionality that requires direct Microsoft
engagement or investment, end users and developer partners can express their need
through the standard feedback process.

Authentication
While implementing authentication is covered in the authentication article, there are
other methods that connector owners might be interested in offering.

Windows authentication
Windows authentication is supported. To enable Windows-based authentication in your
connector, add the following line in the Authentication section of your connector.

Power Query M

Windows = [ SupportsAlternateCredentials = true ]

This change will expose Windows authentication as an option in the Power BI Desktop
authentication experience. The SupportsAlternateCredentials flag will expose the
option to "Connect using alternative credentials". After this flag is enabled, you can
specify explicit Windows account credentials (username and password). You can use this
feature to test impersonation by providing your own account credentials.
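
For context, here's a minimal sketch of where this line sits in a connector's Data Source Kind definition (the connector name and label are placeholders):

Power Query M

// Hypothetical Data Source Kind record enabling Windows authentication with
// support for alternate (explicit) credentials.
MyConnector = [
    Authentication = [
        Windows = [ SupportsAlternateCredentials = true ]
    ],
    Label = "My Connector"
];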

Single sign-on authentication


This section outlines options available for implementing single sign-on (SSO)
functionality into your certified connector. Currently, there is no support for "plug and
play" extensibility for SSO. Enabling SSO would require changes and collaboration both
on the Microsoft and data source or connector sides, so reach out to your Microsoft
contact prior to starting work.
Azure Active Directory SSO
Azure Active Directory (Azure AD)-based SSO is supported in cloud scenarios. The data
source must accept Azure AD access tokens, as the Power BI Azure AD user token will be
exchanged with a data source token from Azure AD. If you have a certified connector,
reach out to your Microsoft contact to learn more.

Kerberos SSO
Kerberos-based single sign-on is supported in gateway scenarios. The data source must
support Windows authentication. Generally, these scenarios involve Direct Query-based
reports, and a connector based on an ODBC driver. The primary requirements for the
driver are that it can determine Kerberos configuration settings from the current thread
context, and that it supports thread-based user impersonation. The gateway must be
configured to support Kerberos Constrained Delegation (KCD). An example can be
found in the Impala sample connector.

Power BI will send the current user information to the gateway. The gateway will use
Kerberos Constrained Delegation to invoke the query process as the impersonated user.

After making the above changes, the connector owner can test the following scenarios
to validate functionality.

In Power BI Desktop: Windows impersonation (current user)


In Power BI Desktop: Windows impersonation using alternate credentials
In the gateway: Windows impersonation using alternate credentials, by pre-
configuring the data source with Windows account credentials in the Gateway
Power BI Admin portal.

Connector developers can also use this procedure to test their implementation of
Kerberos-based SSO.

1. Set up an on-premises data gateway with single sign-on enabled using instructions
in the Power BI Kerberos SSO documentation article.

2. Validate the setup by testing with SQL Server and Windows accounts. Set up the
SQL Server Kerberos configuration manager. If you can use Kerberos SSO with SQL
Server then your Power BI data gateway is properly set up to enable Kerberos SSO
for other data sources as well.

3. Create an application (for example, a command-line tool) that connects to your


server through your ODBC driver. Ensure that your application can use Windows
authentication for the connection.
4. Modify your test application so that it can take a username (UPN) as an argument
and use the WindowsIdentity constructor with it. Once complete, with the
privileges granted to the gateway account set up in Step 1, you should be able to
obtain the user's AccessToken property and impersonate this token.

5. Once you've made the changes to your application, ensure that you can use
impersonation to load and connect to your service through the ODBC driver.
Ensure that data can be retrieved. If you want to use native C or C++ code instead,
you'll need to use LsaLogonUser to retrieve a token with just the username and use
the KERB_S4U_LOGON option.

After this functionality is validated, Microsoft will make a change to thread the UPN
from the Power BI Service down through the gateway. Once at the gateway, it will
essentially act the same way as your test application to retrieve data.

Reach out to your Microsoft contact prior to starting work to learn more on how to
request this change.

SAML SSO

SAML-based SSO is often not supported by end data sources and isn't a recommended
approach. If your scenario requires the use of SAML-based SSO, reach out to your
Microsoft contact or visit our documentation to learn more.

Native database query support


Some Power Query connectors offer end users the ability to specify native database
queries under Advanced options in the connection experience. Custom connector
developers may be interested in offering native database query support in their
connector.

For information on how to implement native database query support in your connector,
follow the walk through in the handling native query support article.
Handling authentication
Article • 05/23/2023

Authentication kinds
An extension can support one or more kinds of Authentication. Each authentication kind
is a different type of credential. The authentication UI displayed to end users in Power
Query is driven by the type of credential(s) that an extension supports.

The list of supported authentication types is defined as part of an extension's Data


Source Kind definition. Each Authentication value is a record with specific fields. The
following table lists the expected fields for each kind. All fields are required unless
marked otherwise.

Authentication Field Description


Kind

Anonymous The Anonymous (also called Implicit ) authentication kind


doesn't have any fields.

OAuth StartLogin Function that provides the URL and state information for
starting an OAuth flow.

Go to the Implementing an OAuth Flow section.

FinishLogin Function that extracts the access_token and other properties


related to the OAuth flow.

Refresh (optional) Function that retrieves a new access token from a


refresh token.

Logout (optional) Function that invalidates the user's current access


token.

Label (optional) A text value that allows you to override the default
label for this AuthenticationKind.

Aad AuthorizationUri text value or function that returns the Azure AD authorization
endpoint (example:
"https://login.microsoftonline.com/common/oauth2/authorize" ).

Go to the Azure Active Directory authentication section.

Resource text value or function that returns the Azure AD resource value
for your service.
Authentication Field Description
Kind

UsernamePassword UsernameLabel (optional) A text value to replace the default label for the
Username text box on the credentials UI.

PasswordLabel (optional) A text value to replace the default label for the
Password text box on the credentials UI.

Label (optional) A text value that allows you to override the default
label for this AuthenticationKind.

Windows UsernameLabel (optional) A text value to replace the default label for the
Username text box on the credentials UI.

PasswordLabel (optional) A text value to replace the default label for the
Password text box on the credentials UI.

Label (optional) A text value that allows you to override the default
label for this AuthenticationKind.

Key KeyLabel (optional) A text value to replace the default label for the API
Key text box on the credentials UI.

Label (optional) A text value that allows you to override the default
label for this AuthenticationKind.

The following sample shows the Authentication record for a connector that supports
OAuth, Key, Windows, Basic (Username and Password), and Anonymous credentials.

Example:

Power Query M

Authentication = [
OAuth = [
StartLogin = StartLogin,
FinishLogin = FinishLogin,
Refresh = Refresh,
Logout = Logout
],
Key = [],
UsernamePassword = [],
Windows = [],
Anonymous = []
]

Accessing the current credentials


The current credentials can be retrieved using the Extension.CurrentCredential
function.

M data source functions that have been enabled for extensibility automatically inherit
your extension's credential scope. In most cases, you don't need to explicitly access the
current credentials; however, there are exceptions, such as:

Passing in the credential in a custom header or query string parameter (such as
when you're using the API Key auth type).
Setting connection string properties for ODBC or ADO.NET extensions (a sketch appears after the header example below).
Checking custom properties on an OAuth token.
Using the credentials as part of an OAuth v1 flow.

The Extension.CurrentCredential function returns a record object. The fields it contains
are authentication type specific. The following table contains details.

Field Description Used By

AuthenticationKind Contains the name of the authentication kind All


assigned to this credential (UsernamePassword,
OAuth, and so on).

Username Username value UsernamePassword,


Windows

Password Password value. Typically used with UsernamePassword, Key, UsernamePassword,
but it's also set for Key. Windows

access_token OAuth access token value. OAuth

Properties A record containing other custom properties for a OAuth


given credential. Typically used with OAuth to store
other properties (such as the refresh_token) returned
with the access_token during the authentication flow.

Key The API key value. Note, the key value is also Key
available in the Password field as well. By default, the
mashup engine inserts this key in an Authorization
header as if this value were a basic auth password
(with no username). If this type of behavior isn't what
you want, you must specify the ManualCredentials =
true option in the options record.
Field Description Used By

EncryptConnection A logical value that determines whether to require an All


encrypted connection to the data source. This value
is available for all Authentication Kinds, but is only set
if EncryptConnection is specified in the Data Source
definition.

The following code sample accesses the current credential for an API key and uses it to
populate a custom header ( x-APIKey ).

Example:

Power Query M

MyConnector.Raw = (_url as text) as binary =>
    let
        apiKey = Extension.CurrentCredential()[Key],
        headers = [
            #"x-APIKey" = apiKey,
            Accept = "application/vnd.api+json",
            #"Content-Type" = "application/json"
        ],
        request = Web.Contents(_url, [ Headers = headers, ManualCredentials = true ])
    in
        request
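
As a sketch of the second scenario from the list above (setting connection string properties for an ODBC extension), the current credential can be mapped onto driver keywords; the driver name and the UID/PWD keyword names below are assumptions about a hypothetical driver:

Power Query M

// Hypothetical ODBC connector: pass the stored username/password to the
// driver through the CredentialConnectionString option of Odbc.DataSource.
MyOdbc.Contents = (server as text) as table =>
    let
        Credential = Extension.CurrentCredential(),
        ConnectionString = [
            Driver = "My ODBC Driver",   // hypothetical driver name
            Server = server
        ],
        CredentialConnectionString = [
            UID = Credential[Username],
            PWD = Credential[Password]
        ],
        Source = Odbc.DataSource(ConnectionString, [
            CredentialConnectionString = CredentialConnectionString
        ])
    in
        Source;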

Implementing an OAuth flow


The OAuth authentication type allows an extension to implement custom logic for their
service. To do this, an extension provides functions for StartLogin (returning the
authorization URI to initiate the OAuth flow) and FinishLogin (exchanging the
authorization code for an access token). Extensions can optionally implement Refresh
(exchanging a refresh token for a new access token) and Logout (expiring the current
refresh and access tokens) functions as well.

7 Note

Power Query extensions are evaluated in applications running on client machines.


Data Connectors should not use confidential secrets in their OAuth flows, as users
may inspect the extension or network traffic to learn the secret. Go to Proof Key for
Code Exchange by OAuth Public Clients RFC (also known as PKCE) for further
details on providing flows that don't rely on shared secrets. A sample
implementation of this flow can be found on our GitHub site.

There are two sets of OAuth function signatures: the original signature that contains a
minimal number of parameters, and an advanced signature that accepts more
parameters. Most OAuth flows can be implemented using the original signatures. You
can also mix and match signature types in your implementation. The function calls are
matched based on the number of parameters (and their types). The parameter names
aren't taken into consideration.

Go to the GitHub sample for more details.

Original OAuth signatures


Power Query M

StartLogin = (dataSourcePath, state, display) => ...;

FinishLogin = (context, callbackUri, state) => ...;

Refresh = (dataSourcePath, refreshToken) => ...;

Logout = (accessToken) => ...;

Advanced OAuth signatures


Notes about the advanced signatures:

All signatures accept a clientApplication record value, which is reserved for future
use.
All signatures accept a dataSourcePath (also referred to as resourceUrl in most
samples).
The Refresh function accepts an oldCredential parameter, which is the previous
record returned by your FinishLogin function (or previous call to Refresh ).

Power Query M

StartLogin = (clientApplication, dataSourcePath, state, display) => ...;

FinishLogin = (clientApplication, dataSourcePath, context, callbackUri, state) => ...;

Refresh = (clientApplication, dataSourcePath, oldCredential) => ...;

Logout = (clientApplication, dataSourcePath, accessToken) => ...;

Azure Active Directory authentication


The Aad authentication kind is a specialized version of OAuth for Azure Active Directory.
It uses the same Azure AD client as the built-in Power Query connectors that support
Organization Account authentication.

7 Note

If your data source requires scopes other than user_impersonation , or is


incompatible with the use of user_impersonation , then you should use the OAuth
authentication kind.

7 Note

If you implement your own OAuth flow for Azure AD, users who have enabled
Conditional Access for their tenant might encounter issues when refreshing using
the Power BI service. This won't impact gateway-based refresh, but would impact a
certified connector that supports refresh from the Power BI service. Users might run
into a problem stemming from the connector using a public client application
when configuring web-based credentials through the Power BI service. The access
token generated by this flow will ultimately be used on a different computer (that
is, the Power BI service in an Azure data center, not on the company's network)
than the one used to originally authenticate (that is, the computer of the user who
configures the data source credentials on the company's network). The built-in Aad
type works around this problem by using a different Azure AD client when
configuring credentials in the Power BI service. This option won't be available to
connectors that use the OAuth authentication kind.

Most connectors need to provide values for the AuthorizationUri and Resource fields.
Both fields can be text values, or a single-argument function that returns a text value.

Power Query M

AuthorizationUri =
"https://login.microsoftonline.com/common/oauth2/authorize"
Power Query M

AuthorizationUri = (dataSourcePath) =>
    FunctionThatDeterminesAadEndpointFromDataSourcePath(dataSourcePath)

Power Query M

Resource = "77256ee0-fe79-11ea-adc1-0242ac120002" // Azure AD resource


value for your service - Guid or URL

Power Query M

Resource = (dataSourcePath) =>
    FunctionThatDeterminesResourceFromDataSourcePath(dataSourcePath)

Connectors that use a Uri based identifier don't need to provide a Resource value. By
default, the value is equal to the root path of the connector's Uri parameter. If the data
source's Azure AD resource is different from the domain value (for example, it uses a
GUID), then a Resource value needs to be provided.

Aad authentication kind samples


In the following case, the data source supports global cloud Azure AD using the
common tenant (no Azure B2B support).

Power Query M

Authentication = [
Aad = [
AuthorizationUri =
"https://login.microsoftonline.com/common/oauth2/authorize",
Resource = "77256ee0-fe79-11ea-adc1-0242ac120002" // Azure AD
resource value for your service - Guid or URL
]
]

In the following case, the data source supports tenant discovery based on OpenID
Connect (OIDC) or similar protocol. This ability allows the connector to determine the
correct Azure AD endpoint to use based on one or more parameters in the data source
path. This dynamic discovery approach allows the connector to support Azure B2B.

Power Query M
// Implement this function to retrieve or calculate the service URL based on
the data source path parameters
GetServiceRootFromDataSourcePath = (dataSourcePath) as text => ...;

GetAuthorizationUrlFromWwwAuthenticate = (url as text) as text =>


let
// Sending an unauthenticated request to the service returns
// a 302 status with WWW-Authenticate header in the response. The
value will
// contain the correct authorization_uri.
//
// Example:
// Bearer
authorization_uri="https://login.microsoftonline.com/{tenant_guid}/oauth2/au
thorize"
responseCodes = {302, 401},
endpointResponse = Web.Contents(url, [
ManualCredentials = true,
ManualStatusHandling = responseCodes
])
in
if (List.Contains(responseCodes, Value.Metadata(endpointResponse)
[Response.Status]?)) then
let
headers =
Record.FieldOrDefault(Value.Metadata(endpointResponse), "Headers", []),
wwwAuthenticate = Record.FieldOrDefault(headers, "WWW-
Authenticate", ""),
split = Text.Split(Text.Trim(wwwAuthenticate), " "),
authorizationUri = List.First(List.Select(split, each
Text.Contains(_, "authorization_uri=")), null)
in
if (authorizationUri <> null) then
// Trim and replace the double quotes inserted before
the url

Text.Replace(Text.Trim(Text.Trim(Text.AfterDelimiter(authorizationUri,
"=")), ","), """", "")
else
error Error.Record("DataSource.Error", "Unexpected WWW-
Authenticate header format or value during authentication.", [
#"WWW-Authenticate" = wwwAuthenticate
])
else
error Error.Unexpected("Unexpected response from server during
authentication.");

<... snip ...>

Authentication = [
Aad = [
AuthorizationUri = (dataSourcePath) =>
GetAuthorizationUrlFromWwwAuthenticate(
GetServiceRootFromDataSourcePath(dataSourcePath)
),
Resource = "https://myAadResourceValue.com", // Azure AD resource
value for your service - Guid or URL
]
]

Other types of authentication


For information on other types of authentication not covered in this article, such as
Kerberos-based single sign-on, visit the additional connector functionality article to
learn more.
Handling Data Access
Article • 12/21/2022

Data Source Functions


A Data Connector wraps and customizes the behavior of a data source function in the M
Library. For example, an extension for a REST API would make use of the Web.Contents
function to make HTTP requests. Currently, a limited set of data source functions have
been enabled to support extensibility.

Web.Contents
OData.Feed
Odbc.DataSource
AdoDotNet.DataSource
OleDb.DataSource

Example:

[DataSource.Kind="HelloWorld", Publish="HelloWorld.Publish"]
shared HelloWorld.Contents = (optional message as text) =>
let
message = if (message <> null) then message else "Hello world"
in
message;

Data Source Kind


Functions marked as shared in your extension can be associated with a specific data
source by including a DataSource.Kind literal attribute on the function with the name of
a Data Source definition record. The Data Source record defines the authentication
types supported by your data source, and basic branding information (like the display
name / label). The name of the record becomes its unique identifier.

Each function associated with the same data source must have the same required
function parameters, including name, type, and order. (For purposes of Data Source
Kind, a parameter is not considered required if it is marked optional or if its metadata
contains DataSource.Path = false .)
Functions for a specific Data Source Kind can only use credentials associated with that
Kind. Credentials are identified at runtime by performing a lookup based on the
combination of the function's required parameters. For more information about how
credentials are identified, see Data Source Paths.

Example:

HelloWorld = [
Authentication = [
Implicit = []
],
Label = Extension.LoadString("DataSourceLabel")
];
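
Building on this example, the following hypothetical sketch (the names are illustrative, and MyApi would need its own Data Source Kind record like the one above) shows how the credential lookup works in practice: both shared functions declare the same required url parameter and the same kind, so at runtime they resolve to the same stored credential for a given url value.

Power Query M

[DataSource.Kind="MyApi"]
shared MyApi.Contents = (url as text) as any =>
    Json.Document(Web.Contents(url));

[DataSource.Kind="MyApi"]
shared MyApi.Raw = (url as text, optional options as record) as binary =>
    Web.Contents(url);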

Properties
The following table lists the fields for your Data Source definition record.

Field Type Details

Authentication record Specifies one or more types of authentication supported by your


data source. At least one kind is required. Each kind will be
displayed as an option in the Power Query credential prompt. For
more information, see Authentication Kinds.

Label text (optional) Friendly display name for this extension in credential
dialogs.

SupportsEncryption logical (optional) When true, the UI will present the option to connect to
the data source using an encrypted connection. This is typically
used for data sources with a non-encrypted fallback mechanism
(generally ODBC or ADO.NET based sources).

Publish to UI
Similar to the Data Source definition record, the Publish record provides the Power
Query UI the information it needs to expose this extension in the Get Data dialog.

Example:
HelloWorld.Publish = [
Beta = true,
ButtonText = { Extension.LoadString("FormulaTitle"),
Extension.LoadString("FormulaHelp") },
SourceImage = HelloWorld.Icons,
SourceTypeImage = HelloWorld.Icons
];

HelloWorld.Icons = [
Icon16 = { Extension.Contents("HelloWorld16.png"),
Extension.Contents("HelloWorld20.png"),
Extension.Contents("HelloWorld24.png"),
Extension.Contents("HelloWorld32.png") },
Icon32 = { Extension.Contents("HelloWorld32.png"),
Extension.Contents("HelloWorld40.png"),
Extension.Contents("HelloWorld48.png"),
Extension.Contents("HelloWorld64.png") }
];

Properties
The following table lists the fields for your Publish record.

| Field | Type | Details |
| --- | --- | --- |
| ButtonText | list | List of text items. The first item defines the name displayed next to the data source's icon in the Power BI Get Data dialog. The second item (optional) defines the tooltip that is displayed when the preceding name is moused over. |
| Category | text | Where the extension should be displayed in the Get Data dialog. Currently the only category values with special handling are Azure and Database. All other values will end up under the Other category. |
| Beta | logical | (optional) When set to true, the UI will display a Preview/Beta identifier next to your connector name and a warning dialog that the implementation of the connector is subject to breaking changes. |
| LearnMoreUrl | text | (optional) Url to a website containing more information about this data source or connector. |
| SupportsDirectQuery | logical | (optional) Enables DirectQuery for your extension. |
| SourceImage | record | (optional) A record containing a list of binary images (sourced from the extension file using the Extension.Contents method). The record contains two fields (Icon16, Icon32), each with its own list. Each icon should be a different size. |
| SourceTypeImage | record | (optional) Similar to SourceImage, except the convention for many out-of-the-box connectors is to display a sheet icon with the source-specific icon in the bottom right corner. Having a different set of icons for SourceTypeImage is optional; many extensions simply reuse the same set of icons for both fields. |
Enabling DirectQuery for an ODBC-
based connector
Article • 09/01/2022

Using M's built-in Odbc.DataSource function is the recommended way to create custom
connectors for data sources that have an existing ODBC driver and/or support a SQL
query syntax. Wrapping the Odbc.DataSource function allows your connector to inherit
default query folding behavior based on the capabilities reported by your driver. This
will enable the M engine to generate SQL statements based on filters and other
transformations defined by the user within the Power Query experience, without having
to provide this logic within the connector itself.

ODBC extensions can optionally enable DirectQuery mode, allowing Power BI to
dynamically generate queries at runtime without pre-caching the user's data model.

7 Note

Enabling DirectQuery support raises the difficulty and complexity level of your
connector. When DirectQuery is enabled, Power BI prevents the M engine from
compensating for operations that can't be fully pushed to the underlying data
source.

This article assumes familiarity with the creation of a basic custom connector.

Refer to the SqlODBC sample for most of the code examples in the following sections.
Other samples can be found in the ODBC samples directory.

ODBC extensibility functions


The M engine provides two ODBC-related data source functions: Odbc.DataSource, and
Odbc.Query.

The Odbc.DataSource function provides a default navigation table with all databases,
tables, and views from your system. This function also supports query folding, and
allows for a range of customization options. Most ODBC-based extensions use this
function as their primary extensibility function. The function accepts two arguments—a
connection string, and an options record to provide behavior overrides.

The Odbc.Query function allows you to execute SQL statements through an ODBC
driver. It acts as a passthrough for query execution. Unlike the Odbc.DataSource
function, it doesn't provide query folding functionality, and requires that SQL queries be
provided by the connector (or end user). When building a custom connector, this
function is typically used internally to run queries to retrieve metadata that might not be
exposed through regular ODBC channels. The function accepts two arguments—a
connection string, and a SQL query.
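
As a rough sketch of the second function, a connector might call Odbc.Query internally to retrieve metadata that isn't exposed through the navigation table. The driver name and SQL text below are placeholders to adapt to your own source:

// Sketch only: run a passthrough SQL statement through the ODBC driver.
GetServerVersion = (server as text) =>
    let
        ConnectionString = [Driver = "My ODBC Driver", Server = server],
        // The query text is driver specific; "SELECT version()" is a placeholder.
        result = Odbc.Query(ConnectionString, "SELECT version()")
    in
        result;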

Data source function guidelines


Custom connectors can accept any number of function arguments. But to remain
consistent with the built-in data source functions shipped with Power Query, the
following guidelines are recommended:

Require the minimal set of parameters used to establish a connection to your
server. The fewer parameters end users need to provide, the easier your connector
will be to use.

Although you can define parameters with a fixed number of values (that is, a
dropdown list in the UI), parameters are entered before the user is authenticated.
Any values that can be discovered programmatically after the user is authenticated
(such as catalog or database name) should be selectable through the Navigator.
The default behavior for the Odbc.DataSource function is to return a hierarchical
navigation table consisting of Catalog (Database), Schema, and Table names.
However, this behavior can be overridden within your connector.

If you feel your users will typically know what values to enter for items they would
select from the Navigator (such as the database name), make these parameters
optional. Parameters that can be discovered programmatically shouldn't be made
required.

The last parameter for your function should be an optional record called "options".
This parameter typically allows advanced users to set common ODBC-related
properties (such as CommandTimeout ), set behavior overrides specific to your
connector, and allows for future extensibility without impacting backwards
compatibility for your function.

Security/credential related arguments MUST never be part of your data source
function parameters, as values entered in the connect dialog will be persisted to
the user's query. Credential-related parameters should be specified as part of the
connector's supported Authentication methods.

By default, all required parameters for your data source function are factored into the
Data Source Path value used to identify user credentials.
While the UI for the built-in Odbc.DataSource function provides a dropdown that allows
the user to select a DSN, this functionality isn't available through extensibility. If your
data source configuration is complex enough to require a fully customizable
configuration dialog, we recommended that you require your end users to pre-
configure a system DSN, and have your function take in the DSN name as a text field.
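
For example, a minimal sketch of a DSN-based data source function might look like the following. The MyDsnConnector kind and the navigation option shown are illustrative:

[DataSource.Kind = "MyDsnConnector", Publish = "MyDsnConnector.Publish"]
shared MyDsnConnector.Contents = (dsn as text) =>
    let
        // The DSN field tells the ODBC driver manager which pre-configured
        // system data source to use.
        ConnectionString = [DSN = dsn],
        Source = Odbc.DataSource(ConnectionString, [HierarchicalNavigation = true])
    in
        Source;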

Next steps
Parameters for Odbc.DataSource
Parameters for Odbc.DataSource
Article • 09/01/2022

The Odbc.DataSource function takes two parameters: a connectionString for your
driver, and an options record that lets you override various driver behaviors. Through
the options record you can override capabilities and other information reported by the
driver, control the navigator behavior, and affect the SQL queries generated by the M
engine.

The supported options record fields fall into two categories: those that are public and
always available, and those that are only available in an extensibility context.

The following table describes the public fields in the options record.

| Field | Description |
| --- | --- |
| CommandTimeout | A duration value that controls how long the server-side query is allowed to run before it's canceled. Default: 10 minutes. |
| ConnectionTimeout | A duration value that controls how long to wait before abandoning an attempt to make a connection to the server. Default: 15 seconds. |
| CreateNavigationProperties | A logical value that sets whether to generate navigation properties on the returned tables. Navigation properties are based on foreign key relationships reported by the driver. These properties show up as "virtual" columns that can be expanded in the query editor, creating the appropriate join. If calculating foreign key dependencies is an expensive operation for your driver, you may want to set this value to false. Default: true. |
| HierarchicalNavigation | A logical value that sets whether to view the tables grouped by their schema names. When set to false, tables are displayed in a flat list under each database. Default: false. |
| SqlCompatibleWindowsAuth | A logical value that determines whether to produce a SQL Server compatible connection string when using Windows Authentication (Trusted_Connection=Yes). If your driver supports Windows Authentication, but requires extra or alternative settings in your connection string, you should set this value to false and use the CredentialConnectionString options record field described in the next table. Default: true. |
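
As a minimal sketch, these public fields are passed in the options record of the Odbc.DataSource call; the driver and server values below are placeholders:

OdbcSource = Odbc.DataSource(
    [Driver = "My ODBC Driver", Server = "myServer"],
    [
        // Cancel server-side queries that run longer than five minutes.
        CommandTimeout = #duration(0, 0, 5, 0),
        // Group tables by schema in the navigator.
        HierarchicalNavigation = true,
        // Skip foreign key discovery if it's expensive for the driver.
        CreateNavigationProperties = false
    ]
)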

The following table describes the options record fields that are only available through
extensibility. Fields that aren't simple literal values are described in later sections.

| Field | Description |
| --- | --- |
| AstVisitor | A record containing one or more overrides to control SQL query generation. The most common usage of this field is to provide logic to generate a LIMIT/OFFSET clause for drivers that don't support TOP. Fields include Constant and LimitClause. More information: Overriding AstVisitor |
| CancelQueryExplicitly | A logical value that instructs the M engine to explicitly cancel any running calls through the ODBC driver before terminating the connection to the ODBC server. This field is useful in situations where query execution is managed independently of the network connections to the server, for example in some Spark deployments. In most cases, this value doesn't need to be set because the query in the server is canceled when the network connection to the server is terminated. Default: false |
| ClientConnectionPooling | A logical value that enables client-side connection pooling for the ODBC driver. Most drivers will want to set this value to true. Default: false |
| CredentialConnectionString | A text or record value used to specify credential-related connection string properties. |
| HideNativeQuery | A logical value that controls whether or not the connector shows generated SQL statements in the Power Query user experience. This should only be set to true if the back end data source natively supports SQL-92. Default: false |
| ImplicitTypeConversions | A table value containing implicit type conversions supported by your driver or backend server. Values in this table are additive to the conversions reported by the driver itself. This field is typically used with the SQLGetTypeInfo field when overriding data type information reported by the driver. |
| OnError | An error handling function that receives an errorRecord parameter of type record. Common uses of this function include handling SSL connection failures, providing a download link if your driver isn't found on the system, and reporting authentication errors. |
| SoftNumbers | Allows the M engine to select a compatible data type when conversion between two specific numeric types isn't declared as supported in the SQL_CONVERT_* capabilities. Default: false |
| SqlCapabilities | A record providing various overrides of driver capabilities, and a way to specify capabilities that aren't expressed through ODBC 3.8. More information: Overriding SqlCapabilities |
| SQLColumns | A function that allows you to modify column metadata returned by the SQLColumns function. More information: Overriding SQLColumns |
| SQLGetFunctions | A record that allows you to override values returned by calls to SQLGetFunctions. A common use of this field is to disable the use of parameter binding, or to specify that generated queries should use CAST rather than CONVERT. More information: Overriding SQLGetFunctions |
| SQLGetInfo | A record that allows you to override values returned by calls to SQLGetInfo. More information: Overriding SQLGetInfo |
| SQLGetTypeInfo | A table or function that returns a table that overrides the type information returned by SQLGetTypeInfo. When the value is set to a table, the value completely replaces the type information reported by the driver, and SQLGetTypeInfo won't be called. When the value is set to a function, your function will receive the result of the original call to SQLGetTypeInfo, allowing you to modify the table. This field is typically used when there's a mismatch between data types reported by SQLGetTypeInfo and SQLColumns. More information: Overriding SQLGetTypeInfo |
| SQLTables | A function that allows you to modify the table metadata returned by a call to SQLTables. |
| TolerateConcatOverflow | Allows concatenation of text values to occur even if the result might be truncated to fit within the range of an available type. For example, when concatenating a VARCHAR(4000) field with a VARCHAR(4000) field on a system that supports a maximum VARCHAR size of 4000 and no CLOB type, the concatenation is folded even though the result might get truncated. Default: false |
| UseEmbeddedDriver | (internal use) A logical value that controls whether the ODBC driver should be loaded from a local directory (using new functionality defined in the ODBC 4.0 specification). This value is generally only set by connectors created by Microsoft that ship with Power Query. When set to false, the system ODBC driver manager is used to locate and load the driver. Most connectors shouldn't need to set this field. Default: false |
Overriding AstVisitor
The AstVisitor field is set through the Odbc.DataSource options record. It's used to
modify SQL statements generated for specific query scenarios.

7 Note

Drivers that support LIMIT and OFFSET clauses (rather than TOP) will want to
provide a LimitClause override for AstVisitor .

Constant
Providing an override for this value has been deprecated and may be removed from
future implementations.

LimitClause
This field is a function that receives two Int64.Type arguments ( skip , take ), and returns
a record with two text fields ( Text , Location ).

LimitClause = (skip as nullable number, take as number) as record => ...

The skip parameter is the number of rows to skip (that is, the argument to OFFSET). If
an offset isn't specified, the skip value will be null. If your driver supports LIMIT, but
doesn't support OFFSET, the LimitClause function should return an unimplemented
error (...) when skip is greater than 0.

The take parameter is the number of rows to take (that is, the argument to LIMIT).

The Text field of the result contains the SQL text to add to the generated query.

The Location field specifies where to insert the clause. The following table describes
supported values.

| Value | Description | Example |
| --- | --- | --- |
| AfterQuerySpecification | LIMIT clause is put at the end of the generated SQL. This is the most commonly supported LIMIT syntax. | SELECT a, b, c FROM table WHERE a > 10 LIMIT 5 |
| BeforeQuerySpecification | LIMIT clause is put before the generated SQL statement. | LIMIT 5 ROWS SELECT a, b, c FROM table WHERE a > 10 |
| AfterSelect | LIMIT goes after the SELECT statement, and after any modifiers (such as DISTINCT). | SELECT DISTINCT LIMIT 5 a, b, c FROM table WHERE a > 10 |
| AfterSelectBeforeModifiers | LIMIT goes after the SELECT statement, but before any modifiers (such as DISTINCT). | SELECT LIMIT 5 DISTINCT a, b, c FROM table WHERE a > 10 |

The following code snippet provides a LimitClause implementation for a driver that
expects a LIMIT clause, with an optional OFFSET, in the following format: [OFFSET
<offset> ROWS] LIMIT <row_count>

LimitClause = (skip, take) =>
    let
        offset = if (skip > 0) then Text.Format("OFFSET #{0} ROWS", {skip}) else "",
        limit = if (take <> null) then Text.Format("LIMIT #{0}", {take}) else ""
    in
        [
            Text = Text.Format("#{0} #{1}", {offset, limit}),
            Location = "AfterQuerySpecification"
        ]
The following code snippet provides a LimitClause implementation for a driver that
supports LIMIT, but not OFFSET. Format: LIMIT <row_count> .

LimitClause = (skip, take) =>
    if (skip > 0) then error "Skip/Offset not supported"
    else
        [
            Text = Text.Format("LIMIT #{0}", {take}),
            Location = "AfterQuerySpecification"
        ]

Overriding SqlCapabilities
| Field | Details |
| --- | --- |
| FractionalSecondsScale | A number value ranging from 1 to 7 that indicates the number of decimal places supported for millisecond values. This value should be set by connectors that want to enable query folding over datetime values. Default: null |
| PrepareStatements | A logical value that indicates that statements should be prepared using SQLPrepare. Default: false |
| SupportsTop | A logical value that indicates the driver supports the TOP clause to limit the number of returned rows. Default: false |
| StringLiteralEscapeCharacters | A list of text values that specify the character(s) to use when escaping string literals and LIKE expressions. Example: {""}. Default: null |
| SupportsDerivedTable | A logical value that indicates the driver supports derived tables (sub-selects). This value is assumed to be true for drivers that set their conformance level to SQL_SC_SQL92_FULL (reported by the driver or overridden with the Sql92Conformance setting). For all other conformance levels, this value defaults to false. If your driver doesn't report the SQL_SC_SQL92_FULL compliance level, but does support derived tables, set this value to true. Supporting derived tables is required for many DirectQuery scenarios. |
| SupportsNumericLiterals | A logical value that indicates whether the generated SQL should include numeric literal values. When set to false, numeric values are always specified using parameter binding. Default: false |
| SupportsStringLiterals | A logical value that indicates whether the generated SQL should include string literal values. When set to false, string values are always specified using parameter binding. Default: false |
| SupportsOdbcDateLiterals | A logical value that indicates whether the generated SQL should include date literal values. When set to false, date values are always specified using parameter binding. Default: false |
| SupportsOdbcTimeLiterals | A logical value that indicates whether the generated SQL should include time literal values. When set to false, time values are always specified using parameter binding. Default: false |
| SupportsOdbcTimestampLiterals | A logical value that indicates whether the generated SQL should include timestamp literal values. When set to false, timestamp values are always specified using parameter binding. Default: false |

Overriding SQLColumns
SQLColumns is a function handler that receives the results of an ODBC call to
SQLColumns. The source parameter contains a table with the data type information. This
override is typically used to fix up data type mismatches between calls to
SQLGetTypeInfo and SQLColumns .

For details of the format of the source table parameter, go to SQLColumns Function.
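
A minimal sketch of an override might look like the following. The parameter list follows the pattern used in the ODBC samples, and the specific fix-up (remapping DATA_TYPE -1 to -9) is purely illustrative; use whatever mapping your own driver needs:

SQLColumns = (catalogName, schemaName, tableName, columnName, source) =>
    let
        // Illustrative fix-up: report long text columns with a DATA_TYPE that
        // matches an entry returned by SQLGetTypeInfo.
        fixedUp = Table.TransformColumns(source, {{"DATA_TYPE", each if _ = -1 then -9 else _}})
    in
        fixedUp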

Overriding SQLGetFunctions
This field is used to override SQLFunctions values returned by an ODBC driver. It
contains a record whose field names are equal to the FunctionId constants defined for
the ODBC SQLGetFunctions function. Numeric constants for each of these fields can be
found in the ODBC specification .

| Field | Details |
| --- | --- |
| SQL_CONVERT_FUNCTIONS | Indicates which function(s) are supported when doing type conversions. By default, the M Engine attempts to use the CONVERT function. Drivers that prefer the use of CAST can override this value to report that only SQL_FN_CVT_CAST (numeric value of 0x2) is supported. |
| SQL_API_SQLBINDCOL | A logical (true/false) value that indicates whether the mashup engine should use the SQLBindCol API when retrieving data. When set to false, SQLGetData is used instead. Default: false |

The following code snippet provides an example explicitly telling the M engine to use
CAST rather than CONVERT.

SQLGetFunctions = [
SQL_CONVERT_FUNCTIONS = 0x2 /* SQL_FN_CVT_CAST */
]

Overriding SQLGetInfo
This field is used to override SQLGetInfo values returned by an ODBC driver. It contains
a record whose field names are equal to the InfoType constants defined for the ODBC
SQLGetInfo function. Numeric constants for each of these fields can be found in the
ODBC specification. The full list of InfoTypes that are checked can be found in the
mashup engine trace files.

The following table contains commonly overridden SQLGetInfo properties:

| Field | Details |
| --- | --- |
| SQL_SQL_CONFORMANCE | An integer value that indicates the level of SQL-92 supported by the driver: (1) SQL_SC_SQL92_ENTRY: Entry level SQL-92 compliant. (2) SQL_SC_FIPS127_2_TRANSITIONAL: FIPS 127-2 transitional level compliant. (4) SQL_SC_SQL92_INTERMEDIATE: Intermediate level SQL-92 compliant. (8) SQL_SC_SQL92_FULL: Full level SQL-92 compliant. In Power Query scenarios, the connector is used in a Read Only mode. Most drivers will want to report a SQL_SC_SQL92_FULL compliance level, and override specific SQL generation behavior using the SQLGetInfo and SQLGetFunctions properties. |
| SQL_SQL92_PREDICATES | A bitmask enumerating the predicates supported in a SELECT statement, as defined in SQL-92. Go to the SQL_SP_* constants in the ODBC specification. |
| SQL_AGGREGATE_FUNCTIONS | A bitmask enumerating support for aggregation functions: SQL_AF_ALL, SQL_AF_AVG, SQL_AF_COUNT, SQL_AF_DISTINCT, SQL_AF_MAX, SQL_AF_MIN, SQL_AF_SUM. Go to the SQL_AF_* constants in the ODBC specification. |
| SQL_GROUP_BY | An integer value that specifies the relationship between the columns in the GROUP BY clause and the non-aggregated columns in the select list: SQL_GB_COLLATE: A COLLATE clause can be specified at the end of each grouping column. SQL_GB_NOT_SUPPORTED: GROUP BY clauses aren't supported. SQL_GB_GROUP_BY_EQUALS_SELECT: The GROUP BY clause must contain all non-aggregated columns in the select list. It can't contain any other columns. For example, SELECT DEPT, MAX(SALARY) FROM EMPLOYEE GROUP BY DEPT. SQL_GB_GROUP_BY_CONTAINS_SELECT: The GROUP BY clause must contain all non-aggregated columns in the select list. It can contain columns that aren't in the select list. For example, SELECT DEPT, MAX(SALARY) FROM EMPLOYEE GROUP BY DEPT, AGE. SQL_GB_NO_RELATION: The columns in the GROUP BY clause and the select list aren't related. The meaning of non-grouped, non-aggregated columns in the select list is data source dependent. For example, SELECT DEPT, SALARY FROM EMPLOYEE GROUP BY DEPT, AGE. Go to the SQL_GB_* constants in the ODBC specification. |

The following helper function can be used to create bitmask values from a list of integer
values:

Flags = (flags as list) =>
    let
        Loop = List.Generate(
            () => [i = 0, Combined = 0],
            each [i] < List.Count(flags),
            each [i = [i] + 1, Combined = Number.BitwiseOr([Combined], flags{i})],
            each [Combined]),
        Result = List.Last(Loop, 0)
    in
        Result;
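
A sketch of an SQLGetInfo override that uses this helper might look like the following. The conformance value comes from the table above; the aggregate function bitmask values are intended to be the SQL_AF_* constants from the ODBC headers, so verify them against the specification for your driver:

SQLGetInfo = [
    // 8 = SQL_SC_SQL92_FULL (see the SQL_SQL_CONFORMANCE row above).
    SQL_SQL_CONFORMANCE = 8,
    // SQL_AF_AVG, SQL_AF_COUNT, SQL_AF_MAX, SQL_AF_MIN, SQL_AF_SUM
    // (verify the numeric values against the ODBC specification).
    SQL_AGGREGATE_FUNCTIONS = Flags({0x1, 0x2, 0x4, 0x8, 0x10})
]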
Overriding SQLGetTypeInfo
SQLGetTypeInfo can be specified in two ways:

A fixed table value that contains the same type information as an ODBC call to
SQLGetTypeInfo .
A function that accepts a table argument, and returns a table. The argument
contains the original results of the ODBC call to SQLGetTypeInfo . Your function
implementation can modify or add to this table.

The first approach is used to completely override the values returned by the ODBC
driver. The second approach is used if you want to add to or modify these values.

For details of the format of the types table parameter and expected return value, go to
SQLGetTypeInfo function reference.

SQLGetTypeInfo using a static table


The following code snippet provides a static implementation for SQLGetTypeInfo .

SQLGetTypeInfo = #table(
{ "TYPE_NAME", "DATA_TYPE", "COLUMN_SIZE", "LITERAL_PREF",
"LITERAL_SUFFIX", "CREATE_PARAS", "NULLABLE", "CASE_SENSITIVE",
"SEARCHABLE", "UNSIGNED_ATTRIBUTE", "FIXED_PREC_SCALE", "AUTO_UNIQUE_VALUE",
"LOCAL_TYPE_NAME", "MINIMUM_SCALE", "MAXIMUM_SCALE", "SQL_DATA_TYPE",
"SQL_DATETIME_SUB", "NUM_PREC_RADIX", "INTERNAL_PRECISION", "USER_DATA_TYPE"
}, {

{ "char", 1, 65535, "'", "'",


"max. length", 1, 1, 3, null,
0, null, "char", null,
null, -8, null, null, 0,
0 },
{ "int8", -5, 19, "'", "'",
null, 1, 0, 2, 0,
10, 0, "int8", 0,
0, -5, null, 2, 0,
0 },
{ "bit", -7, 1, "'", "'",
null, 1, 1, 3, null,
0, null, "bit", null,
null, -7, null, null, 0,
0 },
{ "bool", -7, 1, "'", "'",
null, 1, 1, 3, null,
0, null, "bit", null,
null, -7, null, null, 0,
0 },
{ "date", 9, 10, "'", "'",
null, 1, 0, 2, null,
0, null, "date", null,
null, 9, 1, null, 0,
0 },
{ "numeric", 3, 28, null, null,
null, 1, 0, 2, 0,
0, 0, "numeric", 0,
0, 2, null, 10, 0,
0 },
{ "float8", 8, 15, null, null,
null, 1, 0, 2, 0,
0, 0, "float8", null,
null, 6, null, 2, 0,
0 },
{ "float8", 6, 17, null, null,
null, 1, 0, 2, 0,
0, 0, "float8", null,
null, 6, null, 2, 0,
0 },
{ "uuid", -11, 37, null, null,
null, 1, 0, 2, null,
0, null, "uuid", null,
null, -11, null, null, 0,
0 },
{ "int4", 4, 10, null, null,
null, 1, 0, 2, 0,
0, 0, "int4", 0,
0, 4, null, 2, 0,
0 },
{ "text", -1, 65535, "'", "'",
null, 1, 1, 3, null,
0, null, "text", null,
null, -10, null, null, 0,
0 },
{ "lo", -4, 255, "'", "'",
null, 1, 0, 2, null,
0, null, "lo", null,
null, -4, null, null, 0,
0 },
{ "numeric", 2, 28, null, null,
"precision, scale", 1, 0, 2, 0,
10, 0, "numeric", 0,
6, 2, null, 10, 0,
0 },
{ "float4", 7, 9, null, null,
null, 1, 0, 2, 0,
10, 0, "float4", null,
null, 7, null, 2, 0,
0 },
{ "int2", 5, 19, null, null,
null, 1, 0, 2, 0,
10, 0, "int2", 0,
0, 5, null, 2, 0,
0 },
{ "int2", -6, 5, null, null,
null, 1, 0, 2, 0,
10, 0, "int2", 0,
0, 5, null, 2, 0,
0 },
{ "timestamp", 11, 26, "'", "'",
null, 1, 0, 2, null,
0, null, "timestamp", 0,
38, 9, 3, null, 0,
0 },
{ "date", 91, 10, "'", "'",
null, 1, 0, 2, null,
0, null, "date", null,
null, 9, 1, null, 0,
0 },
{ "timestamp", 93, 26, "'", "'",
null, 1, 0, 2, null,
0, null, "timestamp", 0,
38, 9, 3, null, 0,
0 },
{ "bytea", -3, 255, "'", "'",
null, 1, 0, 2, null,
0, null, "bytea", null,
null, -3, null, null, 0,
0 },
{ "varchar", 12, 65535, "'", "'",
"max. length", 1, 0, 2, null,
0, null, "varchar", null,
null, -9, null, null, 0,
0 },
{ "char", -8, 65535, "'", "'",
"max. length", 1, 1, 3, null,
0, null, "char", null,
null, -8, null, null, 0,
0 },
{ "text", -10, 65535, "'", "'",
"max. length", 1, 1, 3, null,
0, null, "text", null,
null, -10, null, null, 0,
0 },
{ "varchar", -9, 65535, "'", "'",
"max. length", 1, 1, 3, null,
0, null, "varchar", null,
null, -9, null, null, 0,
0 },
{ "bpchar", -8, 65535, "'", "'",
"max. length", 1, 1, 3, null,
0, null, "bpchar", null,
null, -9, null, null, 0,
0 } }
);
SQLGetTypeInfo using a function
The following code snippet appends the bpchar type to the existing types returned by
the driver.

SQLGetTypeInfo = (types as table) as table =>
let
newTypes = #table(
{
"TYPE_NAME",
"DATA_TYPE",
"COLUMN_SIZE",
"LITERAL_PREF",
"LITERAL_SUFFIX",
"CREATE_PARAS",
"NULLABLE",
"CASE_SENSITIVE",
"SEARCHABLE",
"UNSIGNED_ATTRIBUTE",
"FIXED_PREC_SCALE",
"AUTO_UNIQUE_VALUE",
"LOCAL_TYPE_NAME",
"MINIMUM_SCALE",
"MAXIMUM_SCALE",
"SQL_DATA_TYPE",
"SQL_DATETIME_SUB",
"NUM_PREC_RADIX",
"INTERNAL_PRECISION",
"USER_DATA_TYPE"
},
// we add a new entry for each type we want to add
{
{
"bpchar",
-8,
65535,
"'",
"'",
"max. length",
1,
1,
3,
null,
0,
null,
"bpchar",
null,
null,
-9,
null,
null,
0,
0
}
}),
append = Table.Combine({types, newTypes})
in
append;

Setting the connection string


The connection string for your ODBC driver is set using the first argument to the
Odbc.DataSource and Odbc.Query functions. The value can be text, or an M record.
When using the record, each field in the record will become a property in the
connection string. All connection strings require a Driver field (or DSN field if you
require users to pre-configure a system level DSN). Credential-related properties are set
separately. Other properties are driver specific.

The code snippet below shows the definition of a new data source function, creation of
the ConnectionString record, and invocation of the Odbc.DataSource function.

[DataSource.Kind="SqlODBC", Publish="SqlODBC.Publish"]
shared SqlODBC.Contents = (server as text) =>
let
ConnectionString = [
Driver = "SQL Server Native Client 11.0",
Server = server,
MultiSubnetFailover = "Yes",
ApplicationIntent = "ReadOnly",
APP = "PowerBICustomConnector"
],
OdbcDatasource = Odbc.DataSource(ConnectionString)
in
OdbcDatasource;

Next steps
Test and troubleshoot an ODBC-based connector
Test and troubleshoot an ODBC-based
connector
Article • 09/01/2022

While you're building your ODBC-based connector, it's a good idea to occasionally test
and troubleshoot the connector. This section describes how to set up and use some test
and troubleshooting tools.

Use the tracing tool in Power BI Desktop


One of the basic tools for testing and troubleshooting an ODBC-based connector is the
ability to use tracing in Power BI Desktop to determine why errors may be occurring.

To enable tracing in Power BI Desktop:

1. Go to File > Options and settings > Options.


2. Select the Diagnostics tab.
3. Select the Enable tracing option.
4. Select the Open traces folder link (should be %LOCALAPPDATA%/Microsoft/Power BI
Desktop/Traces ).

5. Delete existing trace files.


6. Do your tests.
7. Close Power BI Desktop to ensure all log files are flushed to disk.

Here are steps you can take for initial testing in Power BI Desktop:

1. Close Power BI Desktop.


2. Clear your trace directory.
3. Open Power BI Desktop, and enable tracing.
4. Connect to your data source, and select Direct Query mode.
5. Select a table in the navigator, and select Edit.
6. Manipulate the query in various ways, including:

Take the First N rows (for example, 10).


Set equality filters on different data types (int, string, bool, and so on).
Set other range filters (greater than, less than).
Filter on NULL / NOT NULL.
Select a subset of columns.
Aggregate / Group By different column combinations.
Add a column calculated from other columns ([C] = [A] + [B]).
Sort on one column, multiple columns.

7. Expressions that fail to fold will result in a warning bar. Note the failure, remove
the step, and move to the next test case. Details about the cause of the failure
should be emitted to the trace logs.
8. Close Power BI Desktop.
9. Copy the trace files to a new directory.
10. Open the trace files in your text editor of choice.
11. Search for OdbcQuery/FoldingWarning entries in the trace files. These entries should
contain more information as to why the query engine believes that query folding
isn't possible for this operation.

Once you have simple queries working, you can then try DirectQuery scenarios (for
example, building reports in the Report Views). The queries generated in DirectQuery
mode are significantly more complex (that is, use of sub-selects, COALESCE statements,
and aggregations).

Concatenation of strings in DirectQuery mode


The M engine does basic type size limit validation as part of its query folding logic. If
you're receiving a folding error when trying to concatenate two strings that potentially
overflow the maximum size of the underlying database type:

1. Ensure that your database can support up-conversion to CLOB types when string
concatenation overflow occurs.
2. Set the TolerateConcatOverflow option for Odbc.DataSource to true .
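
For example, a minimal sketch of the option being set in the Odbc.DataSource call (the connection string fields are placeholders):

OdbcSource = Odbc.DataSource(
    [Driver = "My ODBC Driver", Server = "myServer"],
    [
        // Allow folded string concatenation even if the result might be truncated.
        TolerateConcatOverflow = true
    ]
)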

7 Note

The DAX CONCATENATE function isn't currently supported by Power Query/ODBC
extensions. Extension authors should ensure string concatenation works through
the query editor by adding calculated columns ( [stringCol1] & [stringCol2] ).
When the capability to fold the CONCATENATE operation is added in the future, it
should work seamlessly with existing extensions.
Handling resource path
Article • 05/23/2023

The M engine identifies a data source using a combination of its Kind and Path. When a
data source is encountered during a query evaluation, the M engine tries to find
matching credentials. If no credentials are found, the engine returns a special error that
results in a credential prompt in Power Query.

The Kind value comes from the Data Source Kind definition.

The Path value is derived from the required parameters of your data source function.
Optional parameters aren't factored into the data source path identifier. As a result, all
data source functions associated with a data source kind must have the same
parameters. There's special handling for functions that have a single parameter of type
Uri.Type . Go to Functions with a Uri parameter for details.

You can see an example of how credentials are stored in the Data source settings dialog
in Power BI Desktop. In this dialog, the Kind is represented by an icon, and the Path
value is displayed as text.

7 Note
If you change your data source function's required parameters during
development, previously stored credentials no longer work (because the path
values no longer match). You should delete any stored credentials any time you
change your data source function parameters. If incompatible credentials are
found, you might receive an error at runtime.

Data source path format


The Path value for a data source is derived from the data source function's required
parameters. Required parameters can be excluded from the path by adding
DataSource.Path = false to the function's metadata. For more information, go to
Excluding required parameters from your data source path.

By default, you can see the actual string value in the Data source settings dialog in
Power BI Desktop, and in the credential prompt. If the Data Source Kind definition has
included a Label value, you'll see the label value instead.

For example, the data source function in the HelloWorldWithDocs sample has the
following signature:

Power Query M

HelloWorldWithDocs.Contents = (message as text, optional count as number) as table => ...

The function has a single required parameter ( message ) of type text , and is used to
calculate the data source path. The optional parameter ( count ) is ignored. The path
would be displayed as follows:

Credential prompt
Data source settings UI

When a Label value is defined, the data source path value isn't shown:
7 Note

We currently recommend that you do not include a Label for your data source if
your function has required parameters, as users won't be able to distinguish
between the different credentials they've entered. We are hoping to improve this in
the future (that is, allowing data connectors to display their own custom data
source paths).

Excluding required parameters from your data source path
If you want a function parameter to be required, but not to be included as part of your
data source path, you can add DataSource.Path = false to the function documentation
metadata. This property can be added to one or more parameters for your function. This
field removes the value from your data source path (meaning that it's no longer passed
to your TestConnection function), so it should only be used for parameters that aren't
required to identify your data source, or distinguish between user credentials.

For example, the connector in the HelloWorldWithDocs sample would require
different credentials for different message values. Adding DataSource.Path = false to
the message parameter removes it from the data source path calculation, effectively
making the connector a "singleton". All calls to HelloWorldWithDocs.Contents are treated
as the same data source, and the user only provides credentials once.

Power Query M

HelloWorldType = type function (


message as (type text meta [
DataSource.Path = false,
Documentation.FieldCaption = "Message",
Documentation.FieldDescription = "Text to display",
Documentation.SampleValues = {"Hello world", "Hola mundo"}
]),
optional count as (type number meta [
Documentation.FieldCaption = "Count",
Documentation.FieldDescription = "Number of times to repeat the
message",
Documentation.AllowedValues = { 1, 2, 3 }
]))
as table meta [
Documentation.Name = "Hello - Name",
Documentation.LongDescription = "Hello - Long Description",
Documentation.Examples = {[
Description = "Returns a table with 'Hello world' repeated 2
times",
Code = "HelloWorldWithDocs.Contents(""Hello world"", 2)",
Result = "#table({""Column1""}, {{""Hello world""}, {""Hello
world""}})"
],[
Description = "Another example, new message, new count!",
Code = "HelloWorldWithDocs.Contents(""Goodbye"", 1)",
Result = "#table({""Column1""}, {{""Goodbye""}})"
]}
];

Functions with a Uri parameter


Because data sources with a Uri based identifier are so common, there's special handling
in the Power Query UI when dealing with Uri-based data source paths. When a Uri-
based data source is encountered, the credential dialog provides a dropdown, which
allows the user to select the base path rather than the full path (and all paths in-
between).

As Uri.Type is an ascribed type rather than a primitive type in the M language, you'll
need to use the Value.ReplaceType function to indicate that your text parameter should
be treated as a Uri.

Power Query M

shared GithubSample.Contents = Value.ReplaceType(Github.Contents, type function (url as Uri.Type) as any);
Paging
Article • 02/17/2023

REST APIs typically have some mechanism to transmit large volumes of records broken
up into pages of results. Power Query has the flexibility to support many different
paging mechanisms. However, since each paging mechanism is different, some amount
of modification of the paging examples is likely to be necessary to fit your situation.

Typical Patterns
The heavy lifting of compiling all page results into a single table is performed by the
Table.GenerateByPage() helper function, which can generally be used with no
modification. The code snippets presented in the Table.GenerateByPage() helper
function section describe how to implement some common paging patterns. Regardless
of pattern, you'll need to understand:

1. How do you request the next page of data?


2. Does the paging mechanism involve calculating values, or do you extract the URL
for the next page from the response?
3. How do you know when to stop paging?
4. Are there parameters related to paging (such as "page size") that you should be
aware of?
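
As a hedged sketch, a connector for an API that returns the URL of the next page in each response might answer these questions as follows. The GetPage helper (which would request a single page and attach the next link as metadata) and the NextLink metadata field are assumptions to adapt; Table.GenerateByPage is the helper function described in the documentation samples.

// Sketch of a next-link driven pager.
GetAllPagesByNextLink = (url as text) as table =>
    Table.GenerateByPage((previous) =>
        let
            // On the first call there is no previous page, so start from the original URL.
            nextLink = if (previous = null) then url else Value.Metadata(previous)[NextLink]?,
            // Returning null tells Table.GenerateByPage to stop paging.
            page = if (nextLink <> null) then GetPage(nextLink) else null
        in
            page
    );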
Handling transformations
Article • 02/17/2023

For situations where the data source response isn't presented in a format that Power BI
can consume directly, Power Query can be used to perform a series of transformations.

Static transformations
In most cases, the data is presented in a consistent way by the data source: column
names, data types, and hierarchical structure are consistent for a given endpoint. In this
situation it's appropriate to always apply the same set of transformations to get the data
in a format acceptable to Power BI.

An example of static transformation can be found in the TripPin Part 2 - Data Connector
for a REST Service tutorial when the data source is treated as a standard REST service:

Power Query M

let
Source =
TripPin.Feed("https://services.odata.org/v4/TripPinService/Airlines"),
value = Source[value],
toTable = Table.FromList(value, Splitter.SplitByNothing(), null, null,
ExtraValues.Error),
expand = Table.ExpandRecordColumn(toTable, "Column1", {"AirlineCode",
"Name"}, {"AirlineCode", "Name"})
in
expand

The transformations in this example are:

1. Source is a Record returned from a call to TripPin.Feed(...) .


2. You pull the value from one of Source 's key-value pairs. The name of the key is
value , and you store the result in a variable called value .
3. value is a list, which you convert to a table. Each element in value becomes a row
in the table, which you can call toTable .
4. Each element in value is itself a Record. toTable has all of these in a single
column: "Column1" . This step pulls all data with key "AirlineCode" into a column
called "AirlineCode" and all data with key "Name" into a column called "Name" , for
each row in toTable . "Column1" is replaced by these two new columns.
At the end of the day you're left with data in a simple tabular format that Power BI can
consume and easily render:

It's important to note that a sequence of static transformations of this specificity is
only applicable to a single endpoint. In the example above, this sequence of
transformations will only work if "AirlineCode" and "Name" exist in the REST endpoint
response, since they are hard-coded into the M code. Thus, this sequence of
transformations may not work if you try to hit the /Event endpoint.

This high level of specificity may be necessary for pushing data to a navigation table, but
for more general data access functions it's recommended that you only perform
transformations that are appropriate for all endpoints.

7 Note

Be sure to test transformations under a variety of data circumstances. If the user
doesn't have any data at the /airlines endpoint, do your transformations result in
an empty table with the correct schema? Or is an error encountered during
evaluation? See TripPin Part 7: Advanced Schema with M Types for a discussion on
unit testing.

Dynamic Transformations
More complex logic is sometimes needed to convert API responses into stable and
consistent forms appropriate for Power BI data models.

Inconsistent API Responses


Basic M control flow (if statements, HTTP status codes, try...catch blocks, and so on) is
typically sufficient to handle situations where there are a handful of ways in which the
API responds.

Determining Schema On-The-Fly


Some APIs are designed such that multiple pieces of information must be combined to
get the correct tabular format. Consider Smartsheet's /sheets endpoint response ,
which contains an array of column names and an array of data rows. The Smartsheet
Connector is able to parse this response in the following way:

Power Query M

raw = Web.Contents(...),
columns = raw[columns],
columnTitles = List.Transform(columns, each [title]),
columnTitlesWithRowNumber = List.InsertRange(columnTitles, 0,
{"RowNumber"}),

RowAsList = (row) =>


let
listOfCells = row[cells],
cellValuesList = List.Transform(listOfCells, each if
Record.HasFields(_, "value") then [value]
else null),
rowNumberFirst = List.InsertRange(cellValuesList, 0,
{row[rowNumber]})
in
rowNumberFirst,

listOfRows = List.Transform(raw[rows], each RowAsList(_)),


result = Table.FromRows(listOfRows, columnTitlesWithRowNumber)

1. First deal with column header information. You can pull the title record of each
column into a List, prepending with a RowNumber column that you know will always
be represented as this first column.
2. Next you can define a function that allows you to parse a row into a List of cell
value s. You can again prepend rowNumber information.

3. Apply your RowAsList() function to each of the row s returned in the API response.
4. Convert the List to a table, specifying the column headers.
Handling schema
Article • 02/17/2023

Depending on your data source, information about data types and column names may
or may not be provided explicitly. OData REST APIs typically handle this using the
$metadata definition , and the Power Query OData.Feed method automatically handles
parsing this information and applying it to the data returned from an OData source.

Many REST APIs don't have a way to programmatically determine their schema. In these
cases you'll need to include a schema definition in your connector.

Simple hardcoded approach


The simplest approach is to hardcode a schema definition into your connector. This is
sufficient for most use cases.

Overall, enforcing a schema on the data returned by your connector has multiple
benefits, such as:

Setting the correct data types.


Removing columns that don't need to be shown to end users (such as internal IDs
or state information).
Ensuring that each page of data has the same shape by adding any columns that
might be missing from a response (REST APIs commonly indicate that fields should
be null by omitting them entirely).

Viewing the existing schema with Table.Schema


Consider the following code that returns a simple table from the TripPin OData sample
service :

Power Query M

let
url = "https://services.odata.org/TripPinWebApiService/Airlines",
source = Json.Document(Web.Contents(url))[value],
asTable = Table.FromRecords(source)
in
asTable

7 Note
TripPin is an OData source, so realistically it would make more sense to simply use
the OData.Feed function's automatic schema handling. In this example you'll be
treating the source as a typical REST API and using Web.Contents to demonstrate
the technique of hardcoding a schema by hand.

This table is the result:

You can use the handy Table.Schema function to check the data type of the columns:

Power Query M

let
url = "https://services.odata.org/TripPinWebApiService/Airlines",
source = Json.Document(Web.Contents(url))[value],
asTable = Table.FromRecords(source)
in
Table.Schema(asTable)

Both AirlineCode and Name are of any type. Table.Schema returns a lot of metadata
about the columns in a table, including names, positions, type information, and many
advanced properties such as Precision, Scale, and MaxLength. For now you should only
concern yourself with the ascribed type ( TypeName ), primitive type ( Kind ), and whether
the column value might be null ( IsNullable ).

Defining a simple schema table


Your schema table will be composed of two columns:

| Column | Details |
| --- | --- |
| Name | The name of the column. This must match the name in the results returned by the service. |
| Type | The M data type you're going to set. This can be a primitive type (text, number, datetime, and so on), or an ascribed type (Int64.Type, Currency.Type, and so on). |
The hardcoded schema table for the Airlines table will set its AirlineCode and Name
columns to text and looks like this:

Power Query M

Airlines = #table({"Name", "Type"}, {


{"AirlineCode", type text},
{"Name", type text}
})

As you look to some of the other endpoints, consider the following schema tables:

The Airports table has four fields you'll want to keep (including one of type record ):

Power Query M

Airports = #table({"Name", "Type"}, {


{"IcaoCode", type text},
{"Name", type text},
{"IataCode", type text},
{"Location", type record}
})

The People table has seven fields, including list s ( Emails , AddressInfo ), a nullable
column ( Gender ), and a column with an ascribed type ( Concurrency ):

Power Query M

People = #table({"Name", "Type"}, {


{"UserName", type text},
{"FirstName", type text},
{"LastName", type text},
{"Emails", type list},
{"AddressInfo", type list},
{"Gender", type nullable text},
{"Concurrency", Int64.Type}
})

You can put all of these tables into a single master schema table SchemaTable :

Power Query M

SchemaTable = #table({"Entity", "SchemaTable"}, {


{"Airlines", Airlines},
{"Airports", Airports},
{"People", People}
})
The SchemaTransformTable helper function
The SchemaTransformTable helper function described below will be used to enforce
schemas on your data. It takes the following parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| table | table | The table of data you'll want to enforce your schema on. |
| schema | table | The schema table to read column info from, with the following type: type table [Name = text, Type = type]. |
| enforceSchema | number | (optional) An enum that controls behavior of the function. The default value (EnforceSchema.Strict = 1) ensures that the output table will match the schema table that was provided by adding any missing columns, and removing extra columns. The EnforceSchema.IgnoreExtraColumns = 2 option can be used to preserve extra columns in the result. When EnforceSchema.IgnoreMissingColumns = 3 is used, both missing columns and extra columns will be ignored. |

The logic for this function looks something like this:

1. Determine if there are any missing columns from the source table.
2. Determine if there are any extra columns.
3. Ignore structured columns (of type list , record , and table ), and columns set to
type any .
4. Use Table.TransformColumnTypes to set each column type.
5. Reorder columns based on the order they appear in the schema table.
6. Set the type on the table itself using Value.ReplaceType.

7 Note

The last step to set the table type will remove the need for the Power Query UI to
infer type information when viewing the results in the query editor, which can
sometimes result in a double-call to the API.
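
As a hedged illustration of where this lands in a connector, the sketch below looks up the schema by entity name and applies it to a page of results. The GetEntity name, the URL handling, and the SchemaTransformTable helper itself are taken from the TripPin-style samples and may need adapting:

Power Query M

GetEntity = (baseUrl as text, entity as text) as table =>
    let
        // Look up the schema for this entity in the master SchemaTable defined earlier.
        schema = Table.SelectRows(SchemaTable, each [Entity] = entity){0}[SchemaTable],
        url = Uri.Combine(baseUrl, entity),
        source = Json.Document(Web.Contents(url))[value],
        asTable = Table.FromList(source, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
        // Expand the record column using the column names from the schema table.
        expanded = Table.ExpandRecordColumn(asTable, "Column1", Table.Column(schema, "Name")),
        withSchema = SchemaTransformTable(expanded, schema)
    in
        withSchema;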

Putting it all together


In the greater context of a complete extension, the schema handling will take place
when a table is returned from the API. Typically this functionality takes place at the
lowest level of the paging function (if one exists), with entity information passed
through from a navigation table.

Because so much of the implementation of paging and navigation tables is
context-specific, the complete example of implementing a hardcoded schema-handling
mechanism won't be shown here. This TripPin example demonstrates how an end-to-end
solution might look.

Sophisticated approach
The hardcoded implementation discussed above does a good job of making sure that
schemas remain consistent for simple JSON responses, but it's limited to parsing the
first level of the response. Deeply nested data sets would benefit from the following
approach, which takes advantage of M Types.

Here is a quick refresh about types in the M language from the Language Specification:

A type value is a value that classifies other values. A value that is classified by a type
is said to conform to that type. The M type system consists of the following kinds of
types:

Primitive types, which classify primitive values (binary, date, datetime,
datetimezone, duration, list, logical, null, number, record, text, time,
type) and also include a number of abstract types (function, table, any, and
none).

Record types, which classify record values based on field names and value
types.
List types, which classify lists using a single item base type.
Function types, which classify function values based on the types of their
parameters and return values.
Table types, which classify table values based on column names, column types,
and keys.
Nullable types, which classify the value null in addition to all the values
classified by a base type.
Type types, which classify values that are types.

Using the raw JSON output you get (and/or by looking up the definitions in the service's
$metadata ), you can define the following record types to represent OData complex
types:

Power Query M

LocationType = type [
Address = text,
City = CityType,
Loc = LocType
];

CityType = type [
CountryRegion = text,
Name = text,
Region = text
];

LocType = type [
#"type" = text,
coordinates = {number},
crs = CrsType
];

CrsType = type [
#"type" = text,
properties = record
];

Notice how LocationType references the CityType and LocType to represent its
structured columns.

For the top-level entities that you'll want represented as Tables, you can define table
types:

Power Query M

AirlinesType = type table [
    AirlineCode = text,
    Name = text
];

AirportsType = type table [
    Name = text,
    IataCode = text,
    Location = LocationType
];

PeopleType = type table [
    UserName = text,
    FirstName = text,
    LastName = text,
    Emails = {text},
    AddressInfo = {nullable LocationType},
    Gender = nullable text,
    Concurrency = Int64.Type
];

You can then update your SchemaTable variable (which you can use as a lookup table for
entity-to-type mappings) to use these new type definitions:

Power Query M

SchemaTable = #table({"Entity", "Type"}, {


{"Airlines", AirlinesType},
{"Airports", AirportsType},
{"People", PeopleType}
});

You can rely on a common function (Table.ChangeType) to enforce a schema on your
data, much like you used SchemaTransformTable in the earlier exercise. Unlike
SchemaTransformTable , Table.ChangeType takes an actual M table type as an argument,
and will apply your schema recursively for all nested types. Its signature is:

Power Query M

Table.ChangeType = (table, tableType as type) as nullable table => ...

7 Note

For flexibility, the function can be used on tables as well as lists of records (which is
how tables are represented in a JSON document).

You'll then need to update the connector code to change the schema parameter from a
table to a type , and add a call to Table.ChangeType . Again, the details for doing so are
very implementation-specific and thus not worth going into in detail here. This extended
TripPin connector example demonstrates an end-to-end solution implementing this
more sophisticated approach to handling schema.
Status Code Handling with Web.Contents
Article • 12/21/2022

The Web.Contents function has some built-in functionality for dealing with certain HTTP
status codes. The default behavior can be overridden in your extension using the
ManualStatusHandling field in the options record.

Automatic retry
Web.Contents will automatically retry requests that fail with one of the following status
codes:

Code Status

408 Request Timeout

429 Too Many Requests

503 Service Unavailable

504 Gateway Timeout

509 Bandwidth Limit Exceeded

Requests will be retried up to three times before failing. The engine uses an exponential
back-off algorithm to determine how long to wait until the next retry, unless the
response contains a Retry-after header. When the header is found, the engine will wait
the specified number of seconds before the next retry. The minimum supported wait
time is 0.5 seconds, and the maximum value is 120 seconds.

7 Note

The Retry-after value must be in the delta-seconds format. The HTTP-date format
is currently not supported.

Authentication exceptions
The following status codes will result in a credentials exception, causing an
authentication prompt asking the user to provide credentials (or sign in again in the
case of an expired OAuth token).
Code Status

401 Unauthorized

403 Forbidden

7 Note

Extensions are able to use the ManualStatusHandling option with status codes 401
and 403, which is not something that can be done in Web.Contents calls made
outside of a custom data connector (that is, directly from Power Query).

Redirection
The following status codes will result in an automatic redirect to the URI specified in the
Location header. A missing Location header will result in an error.

Code Status

300 Multiple Choices

301 Moved Permanently

302 Found

303 See Other

307 Temporary Redirect

7 Note

Only status code 307 will keep a POST request method. All other redirect status
codes will result in a switch to GET .
Wait-Retry Pattern
Article • 02/17/2023

In some situations, a data source's behavior doesn't match that expected by Power
Query's default HTTP code handling. The examples below show how to work around this
situation.

In this scenario you'll be working with a REST API that occasionally returns a 500 status
code, indicating an internal server error. In these instances, you could wait a few seconds
and retry, potentially a few times before you give up.

ManualStatusHandling
If Web.Contents gets a 500 status code response, it throws a DataSource.Error by
default. You can override this behavior by providing a list of codes as an optional
argument to Web.Contents :

response = Web.Contents(url, [ManualStatusHandling={404, 500}])

By specifying the status codes in this way, Power Query will continue to process the web
response as normal. However, normal response processing is often not appropriate in
these cases. You'll need to understand that an abnormal response code has been
received and perform special logic to handle it. To determine the response code that
was returned from the web service, you can access it from the meta Record that
accompanies the response:

responseCode = Value.Metadata(response)[Response.Status]

Based on whether responseCode is 200 or 500, you can either process the result as
normal, or follow your wait-retry logic that you'll flesh out in the next section.

IsRetry
Power Query has a local cache that stores the results of previous calls to Web.Contents.
When polling the same URL for a new response, or when retrying after an error status,
you'll need to ensure that the query ignores any cached results. You can do this by
including the IsRetry option in the call to the Web.Contents function. In this sample,
we'll set IsRetry to true after the first iteration of the Value.WaitFor loop.

Value.WaitFor
Value.WaitFor() is a standard helper function that can usually be used with no
modification. It works by building a List of retry attempts.

producer Argument

This contains the task to be (possibly) retried. It's represented as a function so that the
iteration number can be used in the producer logic. The expected behavior is that
producer will return null if a retry is determined to be necessary. If anything other than

null is returned by producer , that value is in turn returned by Value.WaitFor .

delay Argument

This contains the logic to execute between retries. It's represented as a function so that
the iteration number can be used in the delay logic. The expected behavior is that
delay returns a Duration.

count Argument (optional)

A maximum number of retries can be set by providing a number to the count argument.

Putting It All Together


The following example shows how ManualStatusHandling and Value.WaitFor can be
used to implement a delayed retry in the event of a 500 response. Wait time between
retries doubles with each try, with a maximum of five retries.

let
    waitForResult = Value.WaitFor(
        (iteration) =>
            let
                result = Web.Contents(url, [ManualStatusHandling = {500}, IsRetry = iteration > 0]),
                status = Value.Metadata(result)[Response.Status],
                actualResult = if status = 500 then null else result
            in
                actualResult,
        (iteration) => #duration(0, 0, 0, Number.Power(2, iteration)),
        5)
in
    if waitForResult = null then
        error "Value.WaitFor() Failed after multiple retry attempts"
    else
        waitForResult
Handling Unit Testing
Article • 02/17/2023

For both simple and complex connectors, adding unit tests is a best practice and highly
recommended.

Unit testing is accomplished in the context of Visual Studio's Power Query SDK . Each
test is defined as a Fact that has a name, an expected value, and an actual value. In
most cases, the "actual value" will be an M expression that tests part of your expression.

Consider a very simple extension that exports three functions:

Power Query M

section Unittesting;

shared UnitTesting.ReturnsABC = () => "ABC";
shared UnitTesting.Returns123 = () => "123";
shared UnitTesting.ReturnTableWithFiveRows = () =>
    Table.Repeat(#table({"a"},{{1}}),5);

This unit test code is made up of a number of Facts, and a bunch of common code for
the unit test framework ( ValueToText , Fact , Facts , Facts.Summarize ). The following
code provides an example set of Facts (go to UnitTesting.query.pq for the common
code):

Power Query M

section UnitTestingTests;

shared MyExtension.UnitTest =
[
    // Put any common variables here if you only want them to be evaluated once

    // Fact(<Name of the Test>, <Expected Value>, <Actual Value>)
    facts =
    {
        Fact("Check that this function returns 'ABC'",  // name of the test
            "ABC",                                      // expected value
            UnitTesting.ReturnsABC()                    // expression to evaluate (let or single statement)
        ),
        Fact("Check that this function returns '123'",
            "123",
            UnitTesting.Returns123()
        ),
        Fact("Result should contain 5 rows",
            5,
            Table.RowCount(UnitTesting.ReturnTableWithFiveRows())
        ),
        Fact("Values should be equal (using a let statement)",
            "Hello World",
            let
                a = "Hello World"
            in
                a
        )
    },
    report = Facts.Summarize(facts)
][report];
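
If you want to experiment before copying in UnitTesting.query.pq, the following minimal sketch of the helpers is enough to run the Facts above. It is not the actual common code, which also defines a Facts helper and richer value formatting:

Power Query M

// Minimal sketch of the unit test helpers (hypothetical; see UnitTesting.query.pq for the real versions)
ValueToText = (value) => try Text.FromBinary(Json.FromValue(value)) otherwise "<unserializable value>";

Fact = (_subject as text, _expected, _actual) =>
    [
        Name = _subject,
        Expected = ValueToText(_expected),
        Actual = ValueToText(_actual),
        Success = try (_expected = _actual) otherwise false
    ];

Facts.Summarize = (facts as list) as table => Table.FromRecords(facts);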

Running the sample in Visual Studio will evaluate all of the Facts and give you a visual
summary of the pass rates:

Implementing unit testing early in the connector development process enables you to
follow the principles of test-driven development. Imagine that you need to write a
function called Uri.GetHost that returns only the host data from a URI. You might start
by writing a test case to verify that the function appropriately performs the expected
function:

Power Query M

Fact("Returns host from URI",


"https://bing.com",
Uri.GetHost("https://bing.com/subpath/query?param=1&param2=hello")
),
Fact("Handles port number appropriately",
"https://bing.com:8080",
Uri.GetHost("https://bing.com:8080/subpath/query?param=1&param2=hello")
)
Additional tests can be written to ensure that the function appropriately handles edge
cases.

An early version of the function might pass some but not all tests:

Power Query M

Uri.GetHost = (url) =>
    let
        parts = Uri.Parts(url)
    in
        parts[Scheme] & "://" & parts[Host]

The final version of the function should pass all unit tests. This also makes it easy to
ensure that future updates to the function do not accidentally remove any of its basic
functionality.
Helper Functions
Article • 02/17/2023

This topic contains a number of helper functions commonly used in M extensions. These
functions may eventually be moved to the official M library, but for now can be copied
into your extension file code. You shouldn't mark any of these functions as shared
within your extension code.

Navigation Tables

Table.ToNavigationTable
This function adds the table type metadata needed for your extension to return a table
value that Power Query can recognize as a Navigation Tree. See Navigation Tables for
more information.

Power Query M

Table.ToNavigationTable = (
table as table,
keyColumns as list,
nameColumn as text,
dataColumn as text,
itemKindColumn as text,
itemNameColumn as text,
isLeafColumn as text
) as table =>
let
tableType = Value.Type(table),
newTableType = Type.AddTableKey(tableType, keyColumns, true) meta
[
NavigationTable.NameColumn = nameColumn,
NavigationTable.DataColumn = dataColumn,
NavigationTable.ItemKindColumn = itemKindColumn,
Preview.DelayColumn = itemNameColumn,
NavigationTable.IsLeafColumn = isLeafColumn
],
navigationTable = Value.ReplaceType(table, newTableType)
in
navigationTable;

Parameter Details

table Your navigation table.


keyColumns List of column names that act as the primary key for your navigation table.

nameColumn The name of the column that should be used as the display name in the
navigator.

dataColumn The name of the column that contains the Table or Function to display.

itemKindColumn The name of the column to use to determine the type of icon to display.
Valid values for the column are listed in the Handling Navigation article.

itemNameColumn The name of the column to use to determine the type of tooltip to display.
Valid values for the column are Table and Function .

isLeafColumn The name of the column used to determine if this is a leaf node, or if the
node can be expanded to contain another navigation table.

Example usage:

Power Query M

shared MyExtension.Contents = () =>


let
objects = #table(
{"Name", "Key", "Data",
"ItemKind", "ItemName", "IsLeaf"},{
{"Item1", "item1", #table({"Column1"}, {{"Item1"}}),
"Table", "Table", true},
{"Item2", "item2", #table({"Column1"}, {{"Item2"}}),
"Table", "Table", true},
{"Item3", "item3", FunctionCallThatReturnsATable(),
"Table", "Table", true},
{"MyFunction", "myfunction", AnotherFunction.Contents(),
"Function", "Function", true}
}),
NavTable = Table.ToNavigationTable(objects, {"Key"}, "Name", "Data",
"ItemKind", "ItemName", "IsLeaf")
in
NavTable;

URI Manipulation

Uri.FromParts
This function constructs a full URL based on individual fields in the record. It acts as the
reverse of Uri.Parts.
Power Query M

Uri.FromParts = (parts) =>
    let
        port = if (parts[Scheme] = "https" and parts[Port] = 443) or (parts[Scheme] = "http" and parts[Port] = 80) then "" else ":" & Text.From(parts[Port]),
        div1 = if Record.FieldCount(parts[Query]) > 0 then "?" else "",
        div2 = if Text.Length(parts[Fragment]) > 0 then "#" else "",
        uri = Text.Combine({parts[Scheme], "://", parts[Host], port, parts[Path], div1, Uri.BuildQueryString(parts[Query]), div2, parts[Fragment]})
    in
        uri;
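
As a quick usage sketch (hedged; the URL is a placeholder and the Uri.FromParts helper above is assumed to be copied into your extension):

Power Query M

let
    parts = Uri.Parts("https://example.com/api/items?page=1"),
    // Record merge: the right-hand side replaces the Query field
    newParts = parts & [Query = parts[Query] & [page = "2"]],
    rebuilt = Uri.FromParts(newParts)
in
    rebuilt  // expected: "https://example.com/api/items?page=2"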

Uri.GetHost
This function returns the scheme, host, and port for a given URL, omitting the default
port for HTTP (80) and HTTPS (443). For example, https://bing.com/subpath/query?param=1&param2=hello
would become https://bing.com , and https://bing.com:8080/subpath would become https://bing.com:8080 .

This is particularly useful for building ResourcePath .

Power Query M

Uri.GetHost = (url) =>
    let
        parts = Uri.Parts(url),
        port = if (parts[Scheme] = "https" and parts[Port] = 443) or (parts[Scheme] = "http" and parts[Port] = 80) then "" else ":" & Text.From(parts[Port])
    in
        parts[Scheme] & "://" & parts[Host] & port;

ValidateUrlScheme
This function checks whether the user entered an HTTPS URL and raises an error if they didn't.
This is required for user-entered URLs for certified connectors.

Power Query M

ValidateUrlScheme = (url as text) as text =>
    if (Uri.Parts(url)[Scheme] <> "https") then error "Url scheme must be HTTPS" else url;

To apply it, just wrap your url parameter in your data access function.
Power Query M

DataAccessFunction = (url as text) as table =>
    let
        _url = ValidateUrlScheme(url),
        source = Web.Contents(_url)
    in
        source;

Retrieving Data

Value.WaitFor
This function is useful when making an asynchronous HTTP request and you need to
poll the server until the request is complete.

Power Query M

Value.WaitFor = (producer as function, interval as function, optional count as number) as any =>
    let
        list = List.Generate(
            () => {0, null},
            (state) => state{0} <> null and (count = null or state{0} < count),
            (state) => if state{1} <> null then {null, state{1}} else {1 + state{0}, Function.InvokeAfter(() => producer(state{0}), interval(state{0}))},
            (state) => state{1})
    in
        List.Last(list);
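
As a usage sketch, the following polls a hypothetical job-status endpoint every two seconds, up to 30 attempts, until it reports completion. The URL and the status field are placeholders, not part of any sample:

Power Query M

let
    statusUrl = "https://example.com/api/jobs/123/status",  // placeholder endpoint
    result = Value.WaitFor(
        (iteration) =>
            let
                // IsRetry bypasses the request cache on every attempt after the first
                response = Json.Document(Web.Contents(statusUrl, [IsRetry = iteration > 0])),
                done = Record.FieldOrDefault(response, "status", "") = "Completed"
            in
                // Returning null tells Value.WaitFor to wait and try again
                if done then response else null,
        (iteration) => #duration(0, 0, 0, 2),
        30)
in
    result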

Table.GenerateByPage
This function is used when an API returns data in an incremental/paged format, which is
common for many REST APIs. The getNextPage argument is a function that takes in a
single parameter, which will be the result of the previous call to getNextPage , and should
return a nullable table .

Power Query M

getNextPage = (lastPage) as nullable table => ...;


getNextPage is called repeatedly until it returns null . The function will collate all pages

into a single table. When the result of the first call to getNextPage is null, an empty table
is returned.

Power Query M

// The getNextPage function takes a single argument and is expected to return a nullable table
Table.GenerateByPage = (getNextPage as function) as table =>
    let
        listOfPages = List.Generate(
            () => getNextPage(null),            // get the first page of data
            (lastPage) => lastPage <> null,     // stop when the function returns null
            (lastPage) => getNextPage(lastPage) // pass the previous page to the next function call
        ),
        // concatenate the pages together
        tableOfPages = Table.FromList(listOfPages, Splitter.SplitByNothing(), {"Column1"}),
        firstRow = tableOfPages{0}?
    in
        // if we didn't get back any pages of data, return an empty table
        // otherwise set the table type based on the columns of the first page
        if (firstRow = null) then
            Table.FromRows({})
        // check for empty first table
        else if (Table.IsEmpty(firstRow[Column1])) then
            firstRow[Column1]
        else
            Value.ReplaceType(
                Table.ExpandTableColumn(tableOfPages, "Column1", Table.ColumnNames(firstRow[Column1])),
                Value.Type(firstRow[Column1])
            );

Additional notes:

The getNextPage function will need to retrieve the next page URL (or page number,
or whatever other values are used to implement the paging logic). This is generally
done by adding meta values to the page before returning it.
The columns and table type of the combined table (that is, all pages together) are
derived from the first page of data. The getNextPage function should normalize
each page of data.
The first call to getNextPage receives a null parameter.
getNextPage must return null when there are no pages left.
An example of using this function can be found in the Github sample, and the TripPin
paging sample.

Power Query M

Github.PagedTable = (url as text) => Table.GenerateByPage((previous) =>
    let
        // If we have a previous page, get its Next link from metadata on the page.
        next = if (previous <> null) then Value.Metadata(previous)[Next] else null,
        // If we have a next link, use it, otherwise use the original URL that was passed in.
        urlToUse = if (next <> null) then next else url,
        // If we have a previous page, but don't have a next link, then we're done paging.
        // Otherwise retrieve the next page.
        current = if (previous <> null and next = null) then null else Github.Contents(urlToUse),
        // If we got data back from the current page, get the link for the next page
        link = if (current <> null) then Value.Metadata(current)[Next] else null
    in
        current meta [Next=link]);

SchemaTransformTable
Power Query M

EnforceSchema.Strict = 1;               // Add any missing columns, remove extra columns, set table type
EnforceSchema.IgnoreExtraColumns = 2;   // Add missing columns, do not remove extra columns
EnforceSchema.IgnoreMissingColumns = 3; // Do not add or remove columns

SchemaTransformTable = (table as table, schema as table, optional enforceSchema as number) as table =>
    let
        // Default to EnforceSchema.Strict
        _enforceSchema = if (enforceSchema <> null) then enforceSchema else EnforceSchema.Strict,

        // Applies type transforms to a given table
        EnforceTypes = (table as table, schema as table) as table =>
            let
                map = (t) => if Type.Is(t, type list) or Type.Is(t, type record) or t = type any then null else t,
                mapped = Table.TransformColumns(schema, {"Type", map}),
                omitted = Table.SelectRows(mapped, each [Type] <> null),
                existingColumns = Table.ColumnNames(table),
                removeMissing = Table.SelectRows(omitted, each List.Contains(existingColumns, [Name])),
                primativeTransforms = Table.ToRows(removeMissing),
                changedPrimatives = Table.TransformColumnTypes(table, primativeTransforms)
            in
                changedPrimatives,

        // Returns the table type for a given schema
        SchemaToTableType = (schema as table) as type =>
            let
                toList = List.Transform(schema[Type], (t) => [Type=t, Optional=false]),
                toRecord = Record.FromList(toList, schema[Name]),
                toType = Type.ForRecord(toRecord, false)
            in
                type table (toType),

        // Determine if we have extra/missing columns.
        // The enforceSchema parameter determines what we do about them.
        schemaNames = schema[Name],
        foundNames = Table.ColumnNames(table),
        addNames = List.RemoveItems(schemaNames, foundNames),
        extraNames = List.RemoveItems(foundNames, schemaNames),
        tmp = Text.NewGuid(),
        added = Table.AddColumn(table, tmp, each []),
        expanded = Table.ExpandRecordColumn(added, tmp, addNames),
        result = if List.IsEmpty(addNames) then table else expanded,
        fullList =
            if (_enforceSchema = EnforceSchema.Strict) then
                schemaNames
            else if (_enforceSchema = EnforceSchema.IgnoreMissingColumns) then
                foundNames
            else
                schemaNames & extraNames,

        // Select the final list of columns.
        // These will be ordered according to the schema table.
        reordered = Table.SelectColumns(result, fullList, MissingField.Ignore),
        enforcedTypes = EnforceTypes(reordered, schema),
        withType = if (_enforceSchema = EnforceSchema.Strict) then Value.ReplaceType(enforcedTypes, SchemaToTableType(schema)) else enforcedTypes
    in
        withType;
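
As a usage sketch (hedged; the schema table and column names below are illustrative, not taken from a sample):

Power Query M

let
    schema = #table({"Name", "Type"}, {
        {"Id", Int64.Type},
        {"Name", type text},
        {"Price", type number}
    }),
    response = #table({"Id", "Name", "Extra"}, {
        {"1", "Widget", "ignore me"}
    }),
    enforced = SchemaTransformTable(response, schema, EnforceSchema.Strict)
in
    // Under Strict enforcement, "Extra" is dropped, "Price" is added as null, and the column types are applied
    enforced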

Table.ChangeType
Power Query M
let
// table should be an actual Table.Type, or a List.Type of Records
Table.ChangeType = (table, tableType as type) as nullable table =>
// we only operate on table types
if (not Type.Is(tableType, type table)) then error "type argument
should be a table type" else
// if we have a null value, just return it
if (table = null) then table else
let
columnsForType = Type.RecordFields(Type.TableRow(tableType)),
columnsAsTable = Record.ToTable(columnsForType),
schema = Table.ExpandRecordColumn(columnsAsTable, "Value",
{"Type"}, {"Type"}),
previousMeta = Value.Metadata(tableType),

// make sure we have a table


parameterType = Value.Type(table),
_table =
if (Type.Is(parameterType, type table)) then table
else if (Type.Is(parameterType, type list)) then
let
asTable = Table.FromList(table,
Splitter.SplitByNothing(), {"Column1"}),
firstValueType =
Value.Type(Table.FirstValue(asTable, null)),
result =
// if the member is a record (as expected), then
expand it.
if (Type.Is(firstValueType, type record)) then
Table.ExpandRecordColumn(asTable, "Column1",
schema[Name])
else
error Error.Record("Error.Parameter", "table
argument is a list, but not a list of records", [ ValueType = firstValueType
])
in
if (List.IsEmpty(table)) then
#table({"a"}, {})
else result
else
error Error.Record("Error.Parameter", "table argument
should be a table or list of records", [ValueType = parameterType]),

reordered = Table.SelectColumns(_table, schema[Name],


MissingField.UseNull),

// process primitive values - this will call


Table.TransformColumnTypes
map = (t) => if Type.Is(t, type table) or Type.Is(t, type list)
or Type.Is(t, type record) or t = type any then null else t,
mapped = Table.TransformColumns(schema, {"Type", map}),
omitted = Table.SelectRows(mapped, each [Type] <> null),
existingColumns = Table.ColumnNames(reordered),
removeMissing = Table.SelectRows(omitted, each
List.Contains(existingColumns, [Name])),
primativeTransforms = Table.ToRows(removeMissing),
changedPrimatives = Table.TransformColumnTypes(reordered,
primativeTransforms),

// Get the list of transforms we'll use for Record types


recordColumns = Table.SelectRows(schema, each Type.Is([Type],
type record)),
recordTypeTransformations = Table.AddColumn(recordColumns,
"RecordTransformations", each (r) => Record.ChangeType(r, [Type]), type
function),
recordChanges =
Table.ToRows(Table.SelectColumns(recordTypeTransformations, {"Name",
"RecordTransformations"})),

// Get the list of transforms we'll use for List types


listColumns = Table.SelectRows(schema, each Type.Is([Type], type
list)),
listTransforms = Table.AddColumn(listColumns,
"ListTransformations", each (t) => List.ChangeType(t, [Type]),
Function.Type),
listChanges = Table.ToRows(Table.SelectColumns(listTransforms,
{"Name", "ListTransformations"})),

// Get the list of transforms we'll use for Table types


tableColumns = Table.SelectRows(schema, each Type.Is([Type],
type table)),
tableTransforms = Table.AddColumn(tableColumns,
"TableTransformations", each (t) => @Table.ChangeType(t, [Type]),
Function.Type),
tableChanges = Table.ToRows(Table.SelectColumns(tableTransforms,
{"Name", "TableTransformations"})),

// Perform all of our transformations


allColumnTransforms = recordChanges & listChanges &
tableChanges,
changedRecordTypes = if (List.IsEmpty(allColumnTransforms)) then
changedPrimatives else Table.TransformColumns(changedPrimatives,
allColumnTransforms, null, MissingField.Ignore),

// set final type


withType = Value.ReplaceType(changedRecordTypes, tableType)
in
if (List.IsEmpty(Record.FieldNames(columnsForType))) then table
else withType meta previousMeta,

// If given a generic record type (no predefined fields), the original


record is returned
Record.ChangeType = (record as record, recordType as type) =>
let
// record field format is [ fieldName = [ Type = type, Optional
= logical], ... ]
fields = try Type.RecordFields(recordType) otherwise error
"Record.ChangeType: failed to get record fields. Is this a record type?",
fieldNames = Record.FieldNames(fields),
fieldTable = Record.ToTable(fields),
optionalFields = Table.SelectRows(fieldTable, each [Value]
[Optional])[Name],
requiredFields = List.Difference(fieldNames, optionalFields),
// make sure all required fields exist
withRequired = Record.SelectFields(record, requiredFields,
MissingField.UseNull),
// append optional fields
withOptional = withRequired & Record.SelectFields(record,
optionalFields, MissingField.Ignore),
// set types
transforms = GetTransformsForType(recordType),
withTypes = Record.TransformFields(withOptional, transforms,
MissingField.Ignore),
// order the same as the record type
reorder = Record.ReorderFields(withTypes, fieldNames,
MissingField.Ignore)
in
if (List.IsEmpty(fieldNames)) then record else reorder,

List.ChangeType = (list as list, listType as type) =>


if (not Type.Is(listType, type list)) then error "type argument
should be a list type" else
let
listItemType = Type.ListItem(listType),
transform = GetTransformByType(listItemType),
modifiedValues = List.Transform(list, transform),
typed = Value.ReplaceType(modifiedValues, listType)
in
typed,

// Returns a table type for the provided schema table


Schema.ToTableType = (schema as table) as type =>
let
toList = List.Transform(schema[Type], (t) => [Type=t,
Optional=false]),
toRecord = Record.FromList(toList, schema[Name]),
toType = Type.ForRecord(toRecord, false),
previousMeta = Value.Metadata(schema)
in
type table (toType) meta previousMeta,

// Returns a list of transformations that can be passed to


Table.TransformColumns, or Record.TransformFields
// Format: {"Column", (f) => ...) .... ex: {"A", Number.From}
GetTransformsForType = (_type as type) as list =>
let
fieldsOrColumns = if (Type.Is(_type, type record)) then
Type.RecordFields(_type)
else if (Type.Is(_type, type table)) then
Type.RecordFields(Type.TableRow(_type))
else error "GetTransformsForType: record or
table type expected",
toTable = Record.ToTable(fieldsOrColumns),
transformColumn = Table.AddColumn(toTable, "Transform", each
GetTransformByType([Value][Type]), Function.Type),
transformMap = Table.ToRows(Table.SelectColumns(transformColumn,
{"Name", "Transform"}))
in
transformMap,

GetTransformByType = (_type as type) as function =>


if (Type.Is(_type, type number)) then Number.From
else if (Type.Is(_type, type text)) then Text.From
else if (Type.Is(_type, type date)) then Date.From
else if (Type.Is(_type, type datetime)) then DateTime.From
else if (Type.Is(_type, type duration)) then Duration.From
else if (Type.Is(_type, type datetimezone)) then DateTimeZone.From
else if (Type.Is(_type, type logical)) then Logical.From
else if (Type.Is(_type, type time)) then Time.From
else if (Type.Is(_type, type record)) then (t) => if (t <> null)
then @Record.ChangeType(t, _type) else t
else if (Type.Is(_type, type table)) then (t) => if (t <> null) then
@Table.ChangeType(t, _type) else t
else if (Type.Is(_type, type list)) then (t) => if (t <> null) then
@List.ChangeType(t, _type) else t
else (t) => t
in
Table.ChangeType
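
A usage sketch (hedged; it assumes the expression above is bound to a Table.ChangeType member in your extension, and the table and type below are illustrative):

Power Query M

let
    orders = #table({"Id", "Total"}, {
        {"1", "9.99"},
        {"2", "14.50"}
    }),
    orderType = type table [Id = number, Total = number],
    typed = Table.ChangeType(orders, orderType)
in
    // The text columns are converted to numbers and orderType is ascribed to the result
    typed
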
Adding function documentation
Article • 02/17/2023

Power Query will automatically generate an invocation UI for you based on the
arguments for your function. By default, this UI will contain the name of your function,
and an input for each of your parameters.

Similarly, evaluating the name of your function, without specifying parameters, will
display information about it.

You might notice that built-in functions typically provide a better user experience, with
descriptions, tooltips, and even sample values. You can take advantage of this same
mechanism by defining specific meta values on your function type. This topic describes
the meta fields that are used by Power Query, and how you can make use of them in
your extensions.
Function Types
You can provide documentation for your function by defining custom type values. The
process looks like this:

1. Define a type for each parameter.
2. Define a type for your function.
3. Add various Documentation.* fields to your type's metadata record.
4. Call Value.ReplaceType to ascribe the type to your shared function.

You can find more information about types and metadata values in the M Language
Specification.

Using this approach allows you to supply descriptions and display names for your
function, as well as individual parameters. You can also supply sample values for
parameters, as well as defining a preset list of values (turning the default text box
control into a drop down).

The Power Query experience retrieves documentation from meta values on the type of
your function, using a combination of calls to Value.Type, Type.FunctionParameters, and
Value.Metadata.
Function documentation
The following table lists the Documentation fields that can be set in the metadata for
your function. All fields are optional.

Field                           Type   Details

Documentation.Examples          list   List of record objects with example usage of the function.
                                       Only displayed as part of the function info. Each record
                                       should contain the following optional text fields:
                                       Description , Code , and Result .

Documentation.LongDescription   text   Full description of what the function does, displayed in
                                       the function info.

Documentation.Name              text   Text to display across the top of the function invocation
                                       dialog.

Parameter documentation
The following table lists the Documentation fields that can be set in the metadata for
your function parameters. All fields are optional.

Field                           Type    Details

Documentation.AllowedValues     list    List of valid values for this parameter. Providing this
                                        field will change the input from a textbox to a drop-down
                                        list. Note, this doesn't prevent a user from manually
                                        editing the query to supply alternative values.

Documentation.FieldCaption      text    Friendly display name to use for the parameter.

Documentation.FieldDescription  text    Description to show next to the display name.

Documentation.SampleValues      list    List of sample values to be displayed (as faded text)
                                        inside of the text box.

Formatting.IsMultiLine          boolean Allows you to create a multi-line input, for example for
                                        pasting in native queries.

Formatting.IsCode               boolean Formats the input field for code, commonly with multi-line
                                        inputs. Uses a code-like font rather than the standard font.

Basic example
The following code snippet (and resulting dialogs) are from the HelloWorldWithDocs
sample.

Power Query M

[DataSource.Kind="HelloWorldWithDocs", Publish="HelloWorldWithDocs.Publish"]
shared HelloWorldWithDocs.Contents = Value.ReplaceType(HelloWorldImpl,
HelloWorldType);

HelloWorldType = type function (


message as (type text meta [
Documentation.FieldCaption = "Message",
Documentation.FieldDescription = "Text to display",
Documentation.SampleValues = {"Hello world", "Hola mundo"}
]),
optional count as (type number meta [
Documentation.FieldCaption = "Count",
Documentation.FieldDescription = "Number of times to repeat the
message",
Documentation.AllowedValues = { 1, 2, 3 }
]))
as table meta [
Documentation.Name = "Hello - Name",
Documentation.LongDescription = "Hello - Long Description",
Documentation.Examples = {[
Description = "Returns a table with 'Hello world' repeated 2
times",
Code = "HelloWorldWithDocs.Contents(""Hello world"", 2)",
Result = "#table({""Column1""}, {{""Hello world""}, {""Hello
world""}})"
],[
Description = "Another example, new message, new count!",
Code = "HelloWorldWithDocs.Contents(""Goodbye"", 1)",
Result = "#table({""Column1""}, {{""Goodbye""}})"
]}
];

HelloWorldImpl = (message as text, optional count as number) as table =>


let
_count = if (count <> null) then count else 5,
listOfMessages = List.Repeat({message}, _count),
table = Table.FromList(listOfMessages, Splitter.SplitByNothing())
in
table;

This code results in the following dialogs in Power BI.


Function invocation

Function info

Multi-line example
Power Query M

[DataSource.Kind="HelloWorld", Publish="HelloWorld.Publish"]
shared HelloWorld.Contents =
let
HelloWorldType = type function (
message1 as (type text meta [
Documentation.FieldCaption = "Message 1",
Documentation.FieldDescription = "Text to display for
message 1",
Documentation.SampleValues = {"Hello world"},
Formatting.IsMultiLine = true,
Formatting.IsCode = true
]),
message2 as (type text meta [
Documentation.FieldCaption = "Message 2",
Documentation.FieldDescription = "Text to display for
message 2",
Documentation.SampleValues = {"Hola mundo"},
Formatting.IsMultiLine = true,
Formatting.IsCode = false
])) as text,
HelloWorldFunction = (message1 as text, message2 as text) as text =>
message1 & message2
in
Value.ReplaceType(HelloWorldFunction, HelloWorldType);

This code (with associated publish information, and so on) results in the following
dialog in Power BI. New lines will be represented in text with '#(lf)', or 'line feed'.
Handling navigation
Article • 02/17/2023

Navigation Tables (or nav tables) are a core part of providing a user-friendly experience
for your connector. The Power Query experience displays them to the user after they've
entered any required parameters for your data source function, and have authenticated
with the data source.

Behind the scenes, a nav table is just a regular M Table value with specific metadata
fields defined on its Type. When your data source function returns a table with these
fields defined, Power Query will display the navigator dialog. You can actually see the
underlying data as a Table value by right-clicking on the root node and selecting Edit.

Table.ToNavigationTable
You can use the Table.ToNavigationTable function to add the table type metadata
needed to create a nav table.

7 Note
You currently need to copy and paste this function into your M extension. In the
future it will likely be moved into the M standard library.

The following table describes the parameters for this function:

Parameter Details

table Your navigation table.

keyColumns List of column names that act as the primary key for your navigation table.

nameColumn The name of the column that should be used as the display name in the
navigator.

dataColumn The name of the column that contains the Table or Function to display.

itemKindColumn The name of the column to use to determine the type of icon to display. See
below for the list of valid values for the column.

itemNameColumn The name of the column to use to determine the preview behavior. This is
typically set to the same value as itemKind.

isLeafColumn The name of the column used to determine if this is a leaf node, or if the
node can be expanded to contain another navigation table.

The function adds the following metadata to the table type:

Field Parameter

NavigationTable.NameColumn nameColumn

NavigationTable.DataColumn dataColumn

NavigationTable.ItemKindColumn itemKindColumn

NavigationTable.IsLeafColumn isLeafColumn

Preview.DelayColumn itemNameColumn

Values for ItemKind


Each of the following item kind values provides a different icon in the navigation table.

Feed
Cube
CubeDatabase
CubeView
CubeViewFolder
Database
DatabaseServer
Dimension
Table
Folder
Function
View
Sheet
Subcube
DefinedName
Record

The following screenshot shows the icons for item kinds in Power BI Desktop.
Examples

Flat navigation table


The following code sample displays a flat nav table with three tables and a function.

Power Query M

shared NavigationTable.Simple = () =>


let
objects = #table(
{"Name", "Key", "Data",
"ItemKind", "ItemName", "IsLeaf"},{
{"Item1", "item1", #table({"Column1"}, {{"Item1"}}),
"Table", "Table", true},
{"Item2", "item2", #table({"Column1"}, {{"Item2"}}),
"Table", "Table", true},
{"Item3", "item3", FunctionCallThatReturnsATable(),
"Table", "Table", true},
{"MyFunction", "myfunction", AnotherFunction.Contents,
"Function", "Function", true}
}),
NavTable = Table.ToNavigationTable(objects, {"Key"}, "Name", "Data",
"ItemKind", "ItemName", "IsLeaf")
in
NavTable;

shared FunctionCallThatReturnsATable = () =>
    #table({"DynamicColumn"}, {{"Dynamic Value"}});

This code will result in the following Navigator display in Power BI Desktop:

Multi-level navigation table


It is possible to use nested navigation tables to create a hierarchical view over your data
set. You do this by setting the IsLeaf value for that row to false (which marks it as a
node that can be expanded), and formatting the Data column to also be another nav table.

Power Query M

shared NavigationTable.Nested = () as table =>


let
objects = #table(
{"Name", "Key", "Data", "ItemKind",
"ItemName", "IsLeaf"},{
{"Nested A", "n1", CreateNavTable("AAA"), "Table",
"Table", false},
{"Nested B", "n2", CreateNavTable("BBB"), "Table",
"Table", false},
{"Nested C", "n3", CreateNavTable("CCC"), "Table",
"Table", false}
}),
NavTable = Table.ToNavigationTable(objects, {"Key"}, "Name", "Data",
"ItemKind", "ItemName", "IsLeaf")
in
NavTable;

CreateNavTable = (message as text) as table =>


let
objects = #table(
{"Name", "Key", "Data", "ItemKind",
"ItemName", "IsLeaf"},{
{"Item1", "item1", #table({"Column1"}, {{message}}), "Table",
"Table", true},
{"Item2", "item2", #table({"Column1"}, {{message}}), "Table",
"Table", true}
}),
NavTable = Table.ToNavigationTable(objects, {"Key"}, "Name", "Data",
"ItemKind", "ItemName", "IsLeaf")
in
NavTable;

This code would result in the following Navigator display in Power BI Desktop:

Dynamic navigation tables


More complex functionality can be built from these basics. While all of the above
examples show hard-coded entities in the nav table, it's easy to see how a nav table
could be generated dynamically based on entities that are available to a given user. A
few key considerations for dynamic navigation tables include:

Error handling to ensure a good experience for users that don't have access to
certain endpoints.
Node evaluation is lazy by default; leaf nodes are not evaluated until the parent
node is expanded. Certain implementations of multi-level dynamic nav tables may
result in eager evaluation of the entire tree. Be sure to monitor the number of calls
that Power Query is making as it initially renders the navigation table. For example,
Table.InsertRows is 'lazier' than Table.FromRecords, as it doesn't need to evaluate
its arguments.
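
As an illustrative sketch of a dynamically generated nav table (MyService.GetEntityNames and MyService.GetEntityData are hypothetical helpers, not part of any sample; Table.ToNavigationTable is the helper shown earlier):

Power Query M

MyConnector.DynamicNavTable = (url as text) as table =>
    let
        entityNames = MyService.GetEntityNames(url),  // hypothetical: returns a list of entity names
        rows = List.Transform(entityNames, (name) =>
            {name, name, MyService.GetEntityData(url, name), "Table", "Table", true}),
        objects = #table(
            {"Name", "Key", "Data", "ItemKind", "ItemName", "IsLeaf"},
            rows),
        navTable = Table.ToNavigationTable(objects, {"Key"}, "Name", "Data", "ItemKind", "ItemName", "IsLeaf")
    in
        navTable;
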
Handling Gateway Support
Article • 02/17/2023

Test Connection
Custom Connector support is available in both Personal and Standard modes of the
on-premises data gateway . Both gateway modes support Import. Direct Query is
only supported in Standard mode. OAuth for custom connectors via gateways is
currently supported only for gateway admins but not other data source users.

The method for implementing TestConnection functionality is likely to change while
the Power BI Custom Data Connector functionality is in preview.

To support scheduled refresh through the on-premises data gateway, your connector
must implement a TestConnection handler. The function is called when the user is
configuring credentials for your source, and used to ensure they are valid. The
TestConnection handler is set in the Data Source Kind record, and has the following
signature:

(dataSourcePath) as list => ...

Where dataSourcePath is the Data Source Path value for your function, and the return
value is a list composed of:

The name of the function to call (this function must be marked as #shared , and is
usually your primary data source function).
One or more arguments to pass to your function.

If the invocation of the function results in an error, TestConnection is considered to have
failed, and the credential won't be persisted.

7 Note

As stated above, the function name provided by TestConnection must be a shared
member.
Example: Connector with no required arguments
The code snippet below implements TestConnection for a data source with no required
parameters (such as the one found in the TripPin tutorial). Connectors with no required
parameters (referred to as 'Singletons') do not need any user provided input to test a
connection (other than credentials). In this case, the dataSourcePath value would be
equal to the name of the Data Source Kind, and can be ignored. The TripPin.Contents
function is invoked with no additional parameters.

Power Query M

TripPin = [
TestConnection = (dataSourcePath) => { "TripPin.Contents" },
Authentication = [
Anonymous = []
],
Label = "TripPin"
];

Example: Connector with a URL parameter


If your data source function has a single required parameter of the type Uri.Type , its
dataSourcePath will be equal to the URL provided by the user. The snippet below shows

the TestConnection implementation from the Github Sample.

Power Query M

GithubSample = [
TestConnection = (dataSourcePath) => {"GithubSample.Contents",
dataSourcePath},
Authentication = [
OAuth = [
StartLogin = StartLogin,
FinishLogin = FinishLogin,
Label = Extension.LoadString("AuthenticationLabel")
]
]
];

Example: Connector with required parameters


If your data source function has multiple parameters, or a single non-URL parameter,
then the dataSourcePath value will be a JSON string containing the parameters. The
snippet below comes from the DirectQueryForSQL sample.
Power Query M

DirectSQL = [
TestConnection = (dataSourcePath) =>
let
json = Json.Document(dataSourcePath),
server = json[server],
database = json[database]
in
{ "DirectSQL.Database", server, database },
Authentication = [
Windows = [],
UsernamePassword = []
],
Label = "Direct Query for SQL"
];
Handling Power Query Connector
Signing
Article • 12/21/2022

In Power BI, the loading of custom connectors is limited by your choice of security
setting. As a general rule, when the security for loading custom connectors is set to
'Recommended', the custom connectors won't load at all, and you have to lower it to
make them load.

The exception to this is trusted, 'signed connectors'. Signed connectors are a special
format of custom connector, a .pqx instead of .mez file, which has been signed with a
certificate. The signer can provide the user or the user's IT department with a
thumbprint of the signature, which can be put into the registry to securely indicate
trusting a given connector.

The following steps enable you to use a certificate (with an explanation on how to
generate one if you don't have one available) and sign a custom connector with the
'MakePQX' tool.

7 Note

If you need help creating a self-signed certificate to test these instructions, go to


the Microsoft documentation on New-SelfSignedCertificate in PowerShell.

7 Note

If you need help exporting your certificate as a pfx, go to Export-PfxCertificate.

1. Download MakePQX .

2. Extract the MakePQX folder in the included zip to the target you want.

3. To run it, call MakePQX in the command line. It requires the other libraries in the
folder, so you can't copy just the one executable. Running without any parameters
will return the help information.

Usage: MakePQX [options] [command]

Options:
Options Description

-? | -h | --help Show help information

Commands:

Command Description

pack Create a pqx file.

sign Signs an unsigned pqx, or countersigns if pqx is already signed. Use the --replace
option to replace the existing signature.

verify Verify the signature status on a pqx file. Return value will be non-zero if the
signature is invalid.

There are three commands in MakePQX. Use MakePQX [command] --help for more
information about a command.

Pack
The Pack command takes a mez file and packs it into a pqx file, which can be signed.
The pqx file is also able to support some capabilities that will be added in the future.

Usage: MakePQX pack [options]

Options:

Option Description

-? | -h | --help Show help information.

-mz | --mez Input extension file.

-c | --certificate Certificate (.pfx) used to sign the extension file.

-p | --password Password for the certificate file.

-t | --target Output file name. Defaults to the same name as the input file.

Example
C:\Users\cpope\Downloads\MakePQX>MakePQX.exe pack -mz
"C:\Users\cpope\OneDrive\Documents\Power BI Desktop\Custom
Connectors\HelloWorld.mez" -t "C:\Users\cpope\OneDrive\Documents\Power BI
Desktop\Custom Connectors\HelloWorldSigned.pqx"
Sign
The Sign command signs your pqx file with a certificate, giving it a thumbprint that can
be checked for trust by Power BI clients with the higher security setting. This command
takes a pqx file and returns the same pqx file, signed.

Usage: MakePQX sign [arguments] [options]

Arguments:

Argument Description

<pqx file> The path to the pqx file.

Options:

Option Description

-c | --certificate Certificate (.pfx) used to sign the extension file.

-p | --password Password for the certificate file.

-r | --replace Replace existing signature instead of countersigning.

-? | -h | --help Show help information.

Example
C:\Users\cpope\Downloads\MakePQX>MakePQX sign
"C:\Users\cpope\OneDrive\Documents\Power BI Desktop\Custom
Connectors\HelloWorldSigned.pqx" --certificate ContosoTestCertificate.pfx --
password password

Verify
The Verify command verifies that your module has been properly signed and displays
the certificate status.

Usage: MakePQX verify [arguments] [options]

Arguments:

Argument Description

<pqx file> The path to the pqx file.


Options:

Option Description

-q | --quiet Hides signature verification output.

-? | -h | --help Show help information.

Example
C:\Users\cpope\Downloads\MakePQX>MakePQX verify
"C:\Users\cpope\OneDrive\Documents\Power BI Desktop\Custom
Connectors\HelloWorldSigned.pqx"

{
"SignatureStatus": "Success",
"CertificateStatus": [
{
"Issuer": "CN=Colin Popell",
"Thumbprint": "16AF59E4BE5384CD860E230ED4AED474C2A3BC69",
"Subject": "CN=Colin Popell",
"NotBefore": "2019-02-14T22:47:42-08:00",
"NotAfter": "2020-02-14T23:07:42-08:00",
"Valid": false,
"Parent": null,
"Status": "UntrustedRoot"
}
]
}

Trusting signed connectors in Power BI Desktop


Once you’ve verified your signature, you can provide the thumbprint to the end user to
list as trusted. You can read about how to provide the thumbprint in the Power BI
Documentation.
Handling proxy support
Article • 02/17/2023

This article describes how you can enable proxy support in your Power Query custom
connector using the Power Query SDK.

Windows proxy settings


First you'll need to set up the proxy settings in Windows. There are two ways to set up
the proxy, either through Internet Options or the Network & internet setting.

Internet Options
1. From the Windows start menu, search for and open Internet Options.
2. Select the Connections tab.
3. Select LAN settings.
4. In the Proxy server section, configure the proxy server.

Network & internet


1. From the Windows start menu, select the Settings icon.
2. In Settings, select Network & Internet (Windows 10), or select the Open
Navigation icon and then select Network & internet (Windows 11).
3. Select Proxy.
4. Under the Manual proxy setup section, toggle the Use a proxy server button
(Windows 10), or select the Set up button and toggle the Use a proxy server
button (Windows 11).
5. Configure the address and port.

Enabling proxy support


A Web.DefaultProxy function has been added to the extensibility module. Third-party M
connectors can use this function to fetch the web proxy settings to be set in the driver
properties (when the proxy is enabled as described in Windows proxy settings).

The proxy information can be set in the connection string against the connection
parameter, which can differ by connectors. Proxy credentials (username and password)
aren't supported. Web.DefaultProxy takes the serverUri/host as its parameter and
returns a record containing the proxy URI in the ProxyUri field. To get the
constituent parts of the URI (for example: scheme, host, port) inside the connector, use
Uri.Parts .

If the proxy isn't configured, Web.DefaultProxy returns an empty record.

Example usage

Example 1
To use Web.DefaultProxy in the connector code, a boolean type variable can be used to
opt in or out of using this functionality. In this example, Web.DefaultProxy is invoked in
the connector code if the optional boolean parameter UseWebDefaultProxy is set to true
(defaults to false).

Power Query M

UseWebDefaultProxyOption = options[UseWebDefaultProxy]?,
// Treat a missing UseWebDefaultProxy option as false
ProxyUriRecord = if UseWebDefaultProxyOption = true then Web.DefaultProxy(Host) else null,
// Web.DefaultProxy returns an empty record when no proxy is configured
ProxyOptions = if ProxyUriRecord <> null and Record.FieldCount(ProxyUriRecord) > 0 then
    [
        Proxy = ProxyUriRecord[ProxyUri]
    ]
else [],
...

Once the UseWebDefaultProxy is set to true and ProxyUriRecord is fetched, a record can
be created to set the Proxy (configuration parameter supported by the driver, which can
vary) with the ProxyUri field returned by Web.DefaultProxy . It can be named something
like ProxyOptions . This record can then be appended to the base ConnectionString , and
include the proxy details along with it.

Power Query M

Databases = Odbc.DataSource(ConnectionString & ProxyOptions, Options)

Example 2
If there are multiple configuration parameters used by the driver for setting the proxy
details (like host and port details being handled separately), Uri.Parts can be used.

Power Query M
UseWebDefaultProxyOption = options[UseWebDefaultProxy]?,
// Treat a missing UseWebDefaultProxy option as false
ProxyRecord = if UseWebDefaultProxyOption = true then Web.DefaultProxy(Host) else null,
// Web.DefaultProxy returns a record with a ProxyUri field (or an empty record when no proxy is configured)
UriRecord = if ProxyRecord <> null and Record.FieldCount(ProxyRecord) > 0 then Uri.Parts(ProxyRecord[ProxyUri]) else null,
ProxyOptions = if UriRecord <> null then
    [
        ProxyHost = UriRecord[Scheme] & "://" & UriRecord[Host],
        ProxyPort = UriRecord[Port]
    ]
else [],
...
Native query support in Power Query
custom connectors
Article • 10/02/2023

7 Note

This article covers advanced topics around the implementation of native query
support for custom connectors, as well as query folding on top of them. This article
assumes you already have a working knowledge of these concepts.

To learn more about Power Query custom connectors, go to Power Query SDK
Overview.

In Power Query, you're able to execute custom native queries against your data source
to retrieve the data that you're looking for. You can also enable the capability to
maintain query folding throughout this process and subsequent transformation
processes done inside of Power Query.

The goal of this article is to showcase how you can implement such capability for your
custom connector.

Prerequisites
This article uses as a starting point a sample that uses the SQL ODBC driver for its
data source. The implementation of the native query capability is currently only
supported for ODBC connectors that adhere to the SQL-92 standard.

The sample connector uses the SQL Server Native Client 11.0 driver. Make sure that you
have this driver installed to follow along with this tutorial.

You can also view the finished version of the sample connector from the Finish folder
in the GitHub Repository.

Modify the SqlCapabilities of your connector


In the SqlCapabilities record of the sample connector, you can find a record field with
the name Sql92Translation and the value PassThrough for it. This new field is necessary
for the native query to be passed using Power Query without any validation.
Power Query M

SqlCapabilities = Diagnostics.LogValue("SqlCapabilities_Options", defaultConfig[SqlCapabilities] & [
    // Place custom overrides here
    // The values below are required for the SQL Native Client ODBC driver,
    // but might not be required for your data source.
    SupportsTop = false,
    SupportsDerivedTable = true,
    Sql92Conformance = 8 /* SQL_SC_SQL92_FULL */,
    GroupByCapabilities = 4 /* SQL_GB_NO_RELATION */,
    FractionalSecondsScale = 3,
    Sql92Translation = "PassThrough"
]),

Make sure that this field appears in your connector before moving forward. If not, you'll
face warnings and errors later on when it comes down to using a capability that isn't
supported because it isn't declared by the connector.

Build the connector file (as .mez or .pqx) and load it into Power BI Desktop for manual
testing and to define the target for your native query.

Manually test the native query capabilities of


your connector

7 Note

For this article, we'll be using the AdventureWorks2019 sample database. But you
can follow along with any SQL Server database of your choice and make the
necessary changes when it comes down to the specifics of the database chosen.

The way native query support will be implemented in this article is that the user will be
requested to enter three values:

Server name
Database name
Native query at the database level

Now inside Power BI Desktop, go to the Get Data experience and find the connector
with the name SqlODBC Sample.
For the connector dialog, enter the parameters for your server and your database name.
Then select OK.
A new navigator window appears. In Navigator, you can view the native navigation
behavior from the SQL driver that displays the hierarchical view of the server and the
databases within it. Right-click the AdventureWorks2019 database, then select
Transform Data.

This selection brings you to the Power Query editor and a preview of what's effectively
the target of your native query since all native queries should run at the database level.
Inspect the formula bar of the last step to better understand how your connector should
navigate to the target of your native queries before executing them. In this case the
formula bar displays the following information:

= Source{[Name="AdventureWorks2019",Kind="Database"]}[Data]

Source is the name of the previous step that, in this case, is simply the published
function of your connector with the parameters passed. The list and the record inside of
it just help navigate the table to a specific row. The row is defined by the criteria from the
record where the field Name has to be equal to AdventureWorks2019 and the Kind
field has to be equal to Database. Once the row is located, the [Data] outside of the list
{} lets Power Query access the value inside the Data field, which in this case is a table.

You can go back to the previous step (Source) to better understand this navigation.
Test native query
With the target now identified, create a custom step after the navigation step by
selecting the fx icon in the formula bar.

Replace the formula inside the formula bar with the following formula, and then select
Enter.

Power Query M

= Value.NativeQuery( AdventureWorks2019_Database, "SELECT TOP (1000) * FROM [Person].[Address]")

After you apply this change, a warning should appear underneath the formula bar
requesting permission to run the native query against your data source.
Select Edit Permission. A new Native Database Query dialog is displayed that tries to
warn you about the possibilities of running native queries. In this case, we know that this
SQL Statement is safe, so select Run to execute the command.

After you run your query, a preview of your query appears in the Power Query editor.
This preview validates that your connector is capable of running native queries.
Implement native query logic in your connector
With the information gathered from the previous sections, the goal now is to translate
such information into code for your connector.

The way that you can accomplish this translation is by adding a new
NativeQueryProperties record field to your connector's Publish record, which in this
case is the SqlODBC.Publish record. The NativeQueryProperties record plays a crucial
role in defining how the connector will interact with the Value.NativeQuery function.

The new record field consists of two fields:

NavigationSteps: This field defines how the navigation should be performed or
handled by your connector. It contains a list of records that outline the steps for
navigating to the specific data you want to query using the Value.NativeQuery
function. Within each record, it defines what parameters are required or needed in
order for such navigation to reach your desired target.
DefaultOptions: This field helps identify how certain optional parameters should
be included or added to the Value.NativeQuery options record. It provides a set of
default options that can be used when querying the data source.

NavigationSteps
Your navigation steps can be categorized into two groups. The first contains those
values that are entered by the end-user, such as the name of the server or the database,
in this case. The second contains those values that are derived by the specific connector
implementation, such as the name of fields that aren't displayed to the user during the
get data experience. These fields could include Name , Kind , Data , and others depending
on your connector implementation.

For this case, there was only one navigation step that consisted of two fields:
Name: This field is the name of the database that was passed by the end-user. In
this case, it was AdventureWorks2019 , but this field should always be passed as-is
from what the end-user entered during the get data experience.
Kind: This field is information that isn't visible to the end-user and is specific to the
connector or driver implementation. In this case, this value identifies what type of
object should be accessed. For this implementation, this field will be a fixed value
that consists of the string Database .

Such information will be translated to the following code. This code should be added as
a new field to your SqlODBC.Publish record.

Power Query M

NativeQueryProperties = [
NavigationSteps = {
[
Indices = {
[
FieldDisplayName = "database",
IndexName = "Name"
],
[
ConstantValue = "Database",
IndexName = "Kind"
]
},
FieldAccess = "Data"
]
}
]

) Important

The names of the fields are case sensitive and must be used as shown in the sample
above. All information passed to the fields, either ConstantValue , IndexName , or
FieldDisplayName must be derived from the connector's M code.

For values that will be passed from what the user entered, you can use the pair
FieldDisplayName and IndexName . For values that are fixed or predefined and can't be

passed by the end-user, you can use the pair ConstantValue and IndexName . In this
sense, the NavigationSteps record consists of two fields:

Indices: Defines what fields and what values to use to navigate to the record that
contains the target for the Value.NativeQuery function.
FieldAccess: Defines what field holds the target, which is commonly a table.
DefaultOptions
The DefaultOptions field lets you pass optional parameters to the Value.NativeQuery
function when using the native query capability for your connector.

To preserve query folding after a native query, and assuming that your connector has
query folding capabilities, you can use the following sample code for EnableFolding =
true .

Power Query M

NativeQueryProperties = [
NavigationSteps = {
[
Indices = {
[
FieldDisplayName = "database",
IndexName = "Name"
],
[
ConstantValue = "Database",
IndexName = "Kind"
]
},
FieldAccess = "Data"
]
},

DefaultOptions = [
EnableFolding = true
]
]

With these changes in place, build the connector and load it into Power BI Desktop for
testing and validation.

Test and validate the connector


In Power BI Desktop with your new custom connector in place, launch the connector
from the Get Data experience. When launching the connector, you'll notice that the
dialog now has a long text field with the name Native query and, in parenthesis, it has
the required fields for it to work. Enter the same values for the server, database, and the
SQL statement that you previously entered when testing the connector.
After you select OK, a table preview of the executed native query is shown in a new
dialog.
Select OK. A new query will now load inside the Power Query editor where you can do
further testing of your connector as required.

7 Note

If your connector has query folding capabilities and has explicitly defined
EnableFolding=true as part of the optional record for Value.NativeQuery , then you

can further test your connector in the Power Query editor by checking if further
transforms fold back to the source or not.
Versioning
Article • 12/21/2022

Versioning is required for certified connectors.

Version format in Power Query


Versioning in Power Query is a simple metadata record above the section document:

Power Query M

[Version = "1.0.0"]
section MyConnector;

Semantics of Version in Power Query


Even though versioning isn't yet supported in Power Query, you should try
to follow the semantics that will be supported in the future.

The first number is the "major" version, which will indicate breaking changes. This
number should be incremented whenever users will be required to potentially rebuild
reports due to massive connector rearchitecture or removal of features.

The second number is the "minor" version, which indicates addition of functionality.
These will generally not be breaking, but might cause peripheral side effects. This
number should be incremented whenever functionality is added to the connector.

The final number is the "patch" version, which indicates minor tweaks and fixes to
connectors. This is the version that will change the most often, and should be
incremented whenever you release small tweaks of a connector to the public.
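
For example, a connector's version might evolve like this (illustrative only):

Power Query M

// 1.0.0 -> 1.1.0  added an optional parameter to the data source function (minor)
// 1.1.0 -> 1.1.1  fixed a small paging bug (patch)
// 1.1.1 -> 2.0.0  removed a deprecated navigation table entry (major)
[Version = "2.0.0"]
section MyConnector;
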
Power Query Connector Certification
Article • 02/17/2023

7 Note

This article describes the requirements and process to submit a Power Query
custom connector for certification. Read the entire article closely before starting the
certification process.

Introduction
With the Power Query SDK, everyone is empowered to create a custom Power Query
connector to connect to a data source from Power Query. Currently, custom connectors
are only supported in Power BI datasets (Power BI Desktop and Power BI service), and
require the use of an on-premises data gateway to refresh through Power BI service.
Custom connectors need to be individually distributed by the developer.

Data source owners who develop a custom connector for their data source might want
to distribute their custom connector more broadly to Power Query users. Once a custom
connector has been created, used, and validated by end users, the data source owner
can submit it for Microsoft certification.

Certifying a Power Query custom connector makes the connector available publicly, out-
of-box, within Power BI datasets (Power BI Desktop and Power BI service), Power BI
dataflows, and Power BI datamarts. Certified connectors are supported in PowerBI.com
and all versions of Power BI Premium.

Certified connectors are:

Maintained by the partner developer

Supported by the partner developer

Certified by Microsoft

Distributed by Microsoft

We work with partners to try to make sure that they have support in maintenance, but
customer issues with the connector itself will be directed to the partner developer.
Certified connector and custom connector
differences
Certified connectors are bundled out-of-box in Power BI Desktop, and deployed to
Power BI Service, Power BI dataflows, and Power BI datamarts. Custom connectors are
only supported in Power BI datasets and need to be loaded in Power BI Desktop, as
described in Loading your extension in Power BI Desktop. Both certified and custom
connectors can be refreshed through Power BI Desktop, or through the Power BI service
by using an on-premises data gateway, as long as the connector implements a
TestConnection handler. The on-premises data gateway is required for custom connectors.

Certified connectors in Power BI Desktop with a TestConnection implementation also


support end-to-end refresh through the cloud (Power BI service) without the need for an
on-premises data gateway. The Power BI service environment essentially hosts a “cloud
gateway” that runs similarly to the on-premises gateway. After certification, we'll deploy
your connector to this environment so that it's available to all Power BI customers.

Both custom and certified connectors with extra components (for example, ODBC driver)
need the extra component to be installed on the end user machine and require the on-
premises data gateway, unless the extra component is deployed to Power BI cloud.
Currently, we aren't certifying and deploying any new extra components to Power BI
cloud, so the certification of connectors with a dependency on an extra component
won't remove the on-premises data gateway requirement.

Custom connector distribution


Custom connectors can and should be distributed to end users before certification.

As M is a versatile language that, as seen in Handling Authentication, has the capacity to


interact with stored credentials, we need to give users a way to only allow trusted
connectors to run.

From a developer's perspective, developers need to self-sign their custom connector


and provide their users with the information (thumbprint) to securely load it.

From a user's perspective, users need to use the thumbprint from the developer to
securely trust and load the custom connector for use. Alternatively, users can opt to
lower their security settings to allow loading of code not certified by Microsoft or
another developer, but this isn't recommended.

Certification Overview
Prerequisites
To ensure the best experience for our customers, we only consider connectors that meet
a set of prerequisites for certification:

The connector must be for a public product.

The connector must be considered code-complete for an initial release version.


The program allows for frequent iterations and updates. Note that Microsoft
doesn't offer technical assistance or custom connector development consulting.
We recommend leveraging public resources such as our SDK documentation and
samples repository. If you require further assistance, we can share a list of known
3rd-party industry custom connector development consultants that you may want
to engage directly, separate from any Microsoft program or partnership. Note that
Microsoft isn't affiliated with any of these consultants and isn't responsible for your
use of their services. Microsoft provides the list for your convenience and without
any assurances, recommendations, or guarantees. Reach out to your Microsoft
certification contact to learn more.

The developer must provide an estimate for usage. We suggest that developers of
connectors for very boutique products use our connector self-signing capabilities
to provide them directly to the customer.

The connector must already be available to customers directly to fulfill a user
need or business scenario. This can be done using a Private Preview program by
distributing the completed connector directly to end users and organizations
through self-signing. Each user or organization should be able to provide feedback
and validation that there's a business need for the connector and that the
connector is working successfully to fulfill their business requirements.

The connector must be working successfully at an anticipated level of usage by


customers.

There must be a thread in the Power BI Ideas forum driven by customers to


indicate demand to make the connector publicly available in Power BI Desktop.
There's no set threshold of engagement. However, the more engagement, the
stronger the evidence of demand for the connector.

These prerequisites exist to ensure that connectors undergoing certification have
significant customer and business need, and will be used and supported post-certification.

Process and Timelines


Certified connectors are released with monthly Power BI Desktop releases, so the
deadlines for each release work back from each Power BI Desktop release date. The
expected duration of the certification process from registration to release varies
depending on the quality and complexity of the connector submission. Microsoft
doesn't provide any specific timeline guarantees with regard to any connector review
and approval. The hard deadlines for each connector review are outlined in the following
steps, but Microsoft doesn't guarantee adherence to these timelines.

Registration: notification of intent to certify your custom connector. This must


occur by the 15th of the month, two months before the targeted Power BI desktop
release.
For example, for the April Power BI Desktop release, the deadline would be
February 15th.

Submission: submission of connector files for Microsoft review. This submission


must occur by the 1st of the month before the targeted Power BI desktop release.
For example, for the April Power BI Desktop release, the deadline would be
March 1st.

Technical Review: finalization of the connector files, passing Microsoft review and
certification. This review must occur by the 15th of the month before the targeted
Power BI Desktop release.
For example, for the April Power BI Desktop release, the deadline would be
March 15th.

Due to the complexity of the technical reviews and potential delays, rearchitecture, and
testing issues, we highly recommend submitting early with a long lead time for the
initial release and certification. If you only need to deliver your connector to
a few customers with minimal overhead, we recommend self-signing and distributing it
that way.

Certification Requirements
We have a certain set of requirements for certification. We recognize that not every
developer can meet these requirements, and we're hoping to introduce a feature set
that will handle developer needs in short order.

Submission Files (Artifacts)


Ensure the connector files that you submit include all of the following:

Connector (.mez) file


The .mez file should follow style standards and be named similarly to the
product or service name. It shouldn't include words like "Power BI", "Connector"
or "API".
Name the .mez file: ProductName.mez

Power BI Desktop (.pbix) file for testing


We require a sample Power BI report (.pbix) to test your connector with.
The report should include at least one query to test each item in your
navigation table.
If there's no set schema (for example, databases), the report needs to include a
query for each "type" of table that the connector may handle.

Test account to your data source


We'll use the test account to test and troubleshoot your connector.
Provide a test account that is persistent, so we can use the same account to
certify any future updates.

Testing instructions
Provide any documentation on how to use the connector and test its
functionality.

Links to external dependencies (for example, ODBC drivers)

Features and Style


The connector must follow a set of feature and style rules to meet a usability standard
consistent with other certified connectors.

The connector MUST:


Use Section document format.
Contain a version header/adornment above the section document.
Provide function documentation metadata.
Have TestConnection handler.
Follow naming conventions (for example, DataSourceKind.FunctionName ). It
shouldn't include words like "Power BI", "Connector" or "API".
Have the Beta flag set to True on initial release.

The FunctionName should make sense for the domain (for example "Contents",
"Tables", "Document", "Databases", and so on).

The connector SHOULD:


Have icons.
Provide a navigation table.
Place strings in a resources.resx file. URLs and values should be hardcoded in
the connector code and not be placed in the resources.resx file.
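
As a rough sketch of how several of the MUST items above fit together, the skeleton
below shows the version adornment, naming convention, function documentation
metadata, TestConnection handler, and Beta flag in one place. ProductName, its single
url parameter, and all captions are placeholders rather than a prescribed implementation;
icons and string resources are omitted for brevity.

Power Query M

[Version = "1.0.0"]
section ProductName;

[DataSource.Kind = "ProductName", Publish = "ProductName.Publish"]
shared ProductName.Contents = Value.ReplaceType(ProductNameImpl, ProductNameType);

// Function documentation metadata
ProductNameType = type function (
    url as (type text meta [
        Documentation.FieldCaption = "Server URL",
        Documentation.SampleValues = {"https://contoso.example"}
    ]))
    as table meta [
        Documentation.Name = "ProductName",
        Documentation.LongDescription = "Returns a navigation table for ProductName."
    ];

ProductNameImpl = (url as text) as table =>
    // Placeholder body; a real connector would return a navigation table here.
    #table({"Url"}, {{url}});

// Data source kind record, including the required TestConnection handler
ProductName = [
    TestConnection = (dataSourcePath) => {"ProductName.Contents", dataSourcePath},
    Authentication = [Anonymous = []],
    Label = "ProductName"
];

// Publish record with the Beta flag set for the initial release
ProductName.Publish = [
    Beta = true,
    Category = "Other",
    ButtonText = {"ProductName", "Connect to ProductName"}
];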

Security
There are specific security considerations that your connector must handle.

If Extension.CurrentCredentials() is used:
Is the usage required? If so, where do the credentials get sent to?
Are the requests guaranteed to be made through HTTPS?
You can use the HTTPS enforcement helper function.
If the credentials are sent using Web.Contents() via GET:
Can it be turned into a POST?
If GET is required, the connector MUST use the CredentialQueryString record
in the Web.Contents() options record to pass in sensitive credentials.

If Diagnostics.* functions are used:


Validate what is being traced; data must not contain PII or large amounts of
unnecessary data.
If you implemented significant tracing in development, you should implement a
variable or feature flag that determines whether tracing is on (a sketch of this
pattern follows this list). Tracing must be turned off prior to submitting for
certification.

If Expression.Evaluate() is used:
Validate where the expression is coming from and what it is (that is, whether it
could dynamically construct calls to Extension.CurrentCredentials() and so on).
The Expression should not be user provided nor take user input.
The Expression should not be dynamic (that is, retrieved from a web call).
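
A minimal sketch of the tracing flag mentioned above. EnableTracing and TraceValue are
hypothetical names; Diagnostics.Trace and TraceLevel are the library primitives.

Power Query M

// Flip to true only during development; must be false when submitting.
EnableTracing = false;

// Route all connector tracing through one constant so it can be switched off.
TraceValue = (message as text, value) =>
    if EnableTracing then
        Diagnostics.Trace(TraceLevel.Information, message, value)
    else
        value;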

Registering for Certification


If you're interested in pursuing certification of your custom connector, ensure that your
scenario and connector meet the prerequisites and requirements outlined in this article.
Failure to do so will cause delays in certification as our team will require you to fix any
issues or inconsistencies prior to moving forward with certification.

Ensure that your connector is code complete and has been tested both for authoring in
Power BI Desktop and for refresh and consumption in the Power BI service. Ensure you
have tested full end-to-end refresh in the Power BI service through the use of an
on-premises data gateway.
To get started, complete our registration form, and a Microsoft contact will reach out
to begin the process.

Template Apps (Recommended)


Once you've developed a connector to a data source, consider helping customers get
up and running quickly by creating a template app. A template app provides customers
a prebuilt report connected to their data that they can use out-of-the-box or customize
as necessary.

7 Note

Template apps do not support connectors that require a gateway.


Power Query connector submission
Article • 03/14/2023

Introduction
This article provides instructions for how to submit your Power Query custom connector
for certification. Don't submit your connector for certification unless you've been
explicitly directed to by your Microsoft contact.

Prerequisites
After you've been approved for certification, ensure that your connector meets the
certification requirements and follows all feature, style, and security guidelines. Prepare
the submission artifacts for submission.

Submitting to the Connector Certification


Portal
The Connector Certification Portal is an experience within ISV Studio that allows
Microsoft partners and ISVs to submit, view, and manage connectors submitted for
certification. After submission, communication between Microsoft and the partner
connector developer will be through the Connector Certification Portal.

Initial Submission
1. Navigate to ISV Studio and sign in with your work Microsoft account. Personal
accounts aren't supported in this experience.

2. Select the Connector certification tab on the left to launch the Connector
Certification Portal experience.

3. Select Power Query as your connector type.

4. Read and agree to our connector Partner Agreement. If there's a separate


agreement governing the relationship between your organization and Microsoft,
let your Microsoft contact know.

5. Upload your .mez file and complete the form with information on your connector.
Submit the form to finish the connector submission process. Once submitted, you
can use the Activity Control experience on the right to communicate with your
Microsoft contact.

6. Read the guidelines for providing documentation for your custom connector.
Create a Markdown ( .md ) file following the custom connector documentation
guidelines, using examples from existing documentation if needed. This step is
crucial to ensure users know how to use your connector. Once you have the pull
request for the public documentation available, email the pull request link to your
Microsoft contact.

7. In the Activity Control, share a paragraph introducing your connector and


explaining its value proposition to users and customers. This entry will be
submitted as part of the Power BI blog in the upcoming month's post, announcing
the release of your connector.

Note that we need you to complete all the steps in order to move forward with
certification. If you would like to add teammates to manage your connector, let your
Microsoft contact know.

After your connector code review is complete, you'll need to submit a demo video to us
outlining the following scenarios:

Load the connector as a custom connector in Power BI Desktop.


Select the connector from Get Data.
Successfully authenticate to the connector.
Select a table from the Navigator experience.
Load the data into Power BI Desktop and create a simple report.
Publish the report to the Power BI service.
Update the data in your end service backend (any simple data value update).
Successfully refresh the dataset in the Power BI service using the connector
through the on-premises data gateway. Refresh the report visual. The changed
data value should be updated.
Successfully refresh in Power BI Desktop. The changed data value should be
updated.

Updates
Updates to your connector submission can be made at any time, except when your
connector is in the process of production deployment. When you're submitting an
update, ensure that you submit an update to your existing submission, rather than
creating a new submission.

1. Navigate to the Connector certification experience within ISV Studio .


2. From the list of connectors you manage, select the connector submission to view
its connector versions.

3. For an update to a certified connector, select the link to submit a new version in
the panel on the right, on top of the existing connector versions. For an update to
an existing connector version undergoing certification, select the most recent
connector version and on the bottom left, select the Submit an update button.

4. You can upload a new version of artifacts and complete the submission form again.

5. After submitting the connector form, in the Activity Control chat feature on the
right, submit a short changelog explaining the connector update. This information
should be public and written in a customer-facing way, as it will be included
verbatim in the next Power BI Desktop blog update.

6. Also be sure to update your connector-specific documentation in the connector


reference. You can submit a pull request by selecting the pencil icon on the top
right of the screen. Any functionality change should be accompanied by the
corresponding documentation update.
Providing user documentation for your
custom connector
Article • 02/17/2023

Once you've finished designing your Power Query custom connector, you'll need to
submit an article that provides instructions on how to use your connector for
publication on Microsoft Learn. This article discusses the layout of such an article and
how to format the text of your article.

Article layout
This section describes the general layout of the Power Query connector articles. Your
custom connector article should follow this general layout.

Support note
Right after the title of the article, insert the following note.

7 Note

The following connector article is provided by <company name>, the owner of this
connector and a member of the Microsoft Power Query Connector Certification
Program. If you have questions regarding the content of this article or have
changes you would like to see made to this article, visit the <company name>
website and use the support channels there.

Replace <company name> with your company name.

Summary table
After the support note, provide a summary table that contains the following
information:

Release state: Indicates whether the connector is in preview or general availability.


Use either "Preview" or "General Availability".
Products supported: Lists the products that can use your custom connector.
Authentication types supported: Lists the authentication types your custom
connector supports.
Function reference documentation: Lists any M formula language documentation
associated with your custom connector.

If your custom connector is implemented on various products, but has different


capabilities on these products, include the following or similar note in your article after
the summary table.

7 Note

Some capabilities may be present in one product but not others due to
deployment schedules and host-specific capabilities.

Prerequisites
If your custom connector requires that other applications be installed on the system
running your connector or requires that a set-up procedure be done before using your
custom connector, you must include a Prerequisites section that describes these
installation and set-up procedures. This section will also include any information about
setting up various versions of your connector (if applicable).

Capabilities supported
This section should contain a list of the capabilities supported by your custom
connector. These capabilities are usually a bulleted list that indicates if the connector
supports Import and DirectQuery modes, and also any advanced options that are
available in the initial dialog box that appears after the user selects your connector in
Get data.

Connection instructions
This section contains the procedures required to connect to data. If your custom
connector is only used in Power Query Desktop, only one procedure is required.
However, if your custom connector is used on both Power Query Desktop and Power
Query Online, you must supply a separate procedure in separate sections for each
instance. That is, if your custom connector is only used by Power Query Desktop, you'll
have one procedure starting with a second order heading and a single step-by-step
procedure. If your custom connector is used by both Power Query Desktop and Power
Query Online, you'll have two procedures. Each procedure starts with a second order
heading, and contains a separate step-by-step procedure under each heading. For
examples of each of these types of procedures, go to Example connector articles.

The procedure is made up of a numbered list that includes each step required to fill in
the information needed to provide a normal connection (not requiring advanced options)
to the data.

7 Note

All new certified connectors are required to support Power BI dataflows, and must
contain sections for connecting to your data in both Power Query Desktop and
Power Query Online.

Connect using advanced options (optional)


If your custom connector contains advanced options that can be used to connect to the
data, this information should be covered in a separate section of the documentation.
Each of the advanced options should be documented, and the purpose of each
advanced option explained in this section.

Troubleshooting (optional)
If you know of any common errors that may occur with your custom connector, you can
add a troubleshooting section to describe ways to either fix the error, or work around
the error. This section can also include information on any known limitations of your
connector or the retrieval of data. You can also include any known issues with using your
connector to connect to data.

Additional instructions (optional)


Any other instructions or information about your connector that hasn't been covered by
the previous sections can go in this section.

Article location and format


The filename format for your connector article should contain only lowercase letters,
with a hyphen between words. For example, the filename for the Azure Blob Storage
connector is azure-blob-storage.md. If your connector name uses acronyms, the entire
acronym counts as one word. For example, the filename for the Hive LLAP connector is
hive-llap.md.

Your article should be made available on GitHub under the connectors folder in the
Power Query documentation repo: https://github.com/MicrosoftDocs/powerquery-
docs/tree/master/powerquery-docs/connectors . Ensure that you also add a link to
your article in the list of connectors referencing the correct logo image uploaded to the
/connectors/media/index folder. Lastly, ensure that you add a link to your article in the
table of contents file ( TOC.yml ). Certified connectors will only be available under Power
BI (Datasets).

See our contributor guide on how you can contribute to our repo.

The article should be formatted and submitted as a Markdown file. It should use the
Microsoft style for describing procedures and the UI layout.

The following articles include instructions on formatting your document in Markdown,


and the Microsoft style that you should follow when authoring your article:

Docs Markdown reference


Microsoft Writing Style Guide

Example connector articles


Here are a couple of example Power Query connector articles that you can use as a
starting point:

Adobe Analytics (Power Query Desktop only)


Azure SQL database (Power Query Desktop and Power Query Online)
Custom connector development FAQ

Here are some answers to common questions that might occur while developing
custom Power Query connectors.

General
Is it possible to show a warning if...?
Outside of documented warning patterns, we currently don't provide a way to return
out of band warnings, such as a large table or large metadata warning.

Is it possible to present table partition


information by adding another level to the
navigation hierarchy, and let users select one or
more partitions?
It's possible if end users frequently want to retrieve a single partition of data. However,
this functionality can't be added to an already existing connector, because in essence
the change would break the existing connector.

Troubleshooting
The custom connector I've been developing
works fine in Power BI Desktop. But when I try to
run it in Power BI service, I can't set credentials
or configure the data source. What's wrong?
There could be several reasons why you're seeing this behavior. Some common errors
that might occur while running the connector on Power BI service are:

Failed to update data source credentials


An exception occurred while trying to get OAuthProvider from
DataSourceReference

The given data source kind isn't supported

Before you begin troubleshooting this behavior, first collect a copy of the custom
connector (.pq or .mez file). If you have a .mez file, rename the file to .zip and extract the
.pq file.

To troubleshoot the custom connector:

1. Open the custom connector file (.pq) in a text editor of your choice.

2. Find the TestConnection function. The TestConnection function is required for


scheduled refresh in the Power BI service, but isn't used in Power BI Desktop.
Check the .pq file for a TestConnection implementation, and confirm that the
parameters match the connector's data source function (a sketch appears after the
example in step 3). More information: Handling gateway support

3. If the connector uses OAuth, check for the state parameter. A common cause of
service-only failures is a missing state parameter in the connector's StartLogin
implementation. This parameter isn't used in Power BI Desktop, but is required in
the Power BI service. The state parameter must be passed into the call to
Uri.BuildQueryString. The following example demonstrates the correct
implementation of state .

Power Query M

StartLogin = (resourceUrl, state, display) =>
    let
        authorizeUrl = authorize_uri & "?" & Uri.BuildQueryString([
            response_type = "code",
            client_id = client_id,
            state = state, //correct implementation
            redirect_uri = redirect_uri,
            resource = resource
        ])
    in
        [
            LoginUri = authorizeUrl,
            CallbackUri = redirect_uri,
            WindowHeight = 720,
            WindowWidth = 1024,
            Context = null
        ];
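
For step 2, a minimal TestConnection handler looks like the following sketch.
MyConnector.Contents and its single text parameter are placeholders; the handler must
return the name of your data source function followed by arguments that match its
parameters.

Power Query M

MyConnector = [
    // dataSourcePath is the data source path: for a single text or Uri parameter
    // it is that value directly; for multiple parameters it is a JSON document.
    TestConnection = (dataSourcePath) => {"MyConnector.Contents", dataSourcePath},
    Authentication = [UsernamePassword = []]
];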
When a schema or database opens in the Power
Query navigator, it starts immediately fetching
all of the tables under the database instead of
waiting for a table to be selected. What is
causing this behavior?
This behavior might be a side effect of how you're building your navigation table. If
you're creating new records with Table.TransformRows, this usage usually results in
eager evaluation of the data tables. However, values produced by Table.AddColumn are
produced lazily. So, in the following example code, "each GetSchemas(url, [name])"
won't be evaluated unless the user query actually references this data.

Power Query M

GetShares = (server_host as text) as table =>
    let
        url = server_host & "/shares",
        shares = GetItems(url),
        withData = Table.AddColumn(shares, "Data", each GetSchemas(url, [name])),
        withItemKind = Table.AddColumn(withData, "ItemKind", each "Folder"),
        withItemName = Table.AddColumn(withItemKind, "ItemName", each "Folder"),
        withIsLeaf = Table.AddColumn(withItemName, "IsLeaf", each false),
        renamed = Table.RenameColumns(withIsLeaf, {{"name", "Name"}, {"key", "Key"}}),
        navTable = Table.ToNavigationTable(renamed, {"Key"}, "Name", "Data",
            "ItemKind", "ItemName", "IsLeaf")
    in
        navTable;

A single table can consist of multiple partitioned


files. The current implementation downloads all
the files before showing a preview. Is there a way
to avoid downloading all the files, and only
download files until there's enough rows for the
preview?
This behavior is a side effect of using Table.Combine. An alternative method is to build a
"table of tables" and use the Table.ExpandTableColumn function. This method lazily
expands partitions as needed. For example:
Power Query M

GetFiles = (tables_url as text, table_name as text) as table =>
    let
        // parse raw ndjson and get the list of parquet files
        // resp format: line 1: protocol, line 2: schema, line 3..: file info
        resp = Lines.FromBinary(
            SendRequest(
                tables_url & "/" & table_name & "/query",
                [
                    Headers = [#"Content-Type" = "application/json"],
                    Content = Text.ToBinary("{}")
                ]), null, null, 1252),
        protocol = resp{0}, // TODO: Add protocol version check
        schema = Json.Document(Json.Document(resp{1})[metaData][schemaString])[fields],
        columnNames = List.Transform(schema, each [name]),

        fileInfos = List.Range(resp, 2),
        fileUrls = List.Transform(fileInfos, each Json.Document(_)[file][url]),
        numFiles = List.Count(fileUrls),

        toTable = Table.FromList(fileUrls, Splitter.SplitByNothing(), {"FileUrl"}),
        processPartition = Table.AddColumn(toTable, "Data", each
            Parquet.Document(Binary.Buffer(ProtectSensitiveQueryParameters(
                [FileUrl], [ManualCredentials = true])))),
        removeFileUrl = Table.RemoveColumns(processPartition, {"FileUrl"}),
        expanded = Table.ExpandTableColumn(removeFileUrl, "Data", columnNames)
    in
        if numFiles = 0 then #table(columnNames, {}) else expanded;

ProtectSensitiveQueryParameters = (url as text, options as record) =>
    let
        uriParts = Uri.Parts(url),
        uriWithoutQuery = Uri.FromParts(uriParts & [Query = []]),
        modifiedOptions = options & [CredentialQuery = uriParts[Query]]
    in
        Web.Contents(uriWithoutQuery, modifiedOptions);

Additional information

7 Note

Reach out to your Microsoft contact if you are interested in engaging a


recommended 3rd party custom connector developer consultant.
Power BI documentation
Power BI amplifies your insights and the value of your data. With Power BI documentation, you get
expert information and answers to address your needs, no matter how you use Power BI.

GET STARTED: Download Power BI Desktop for free
TUTORIAL: Microsoft Fabric for Power BI users
GET STARTED: What is Microsoft Fabric?
CONCEPT: Export Power BI embedded analytics reports API
TRAINING: Get started with DAX
CONCEPT: Power BI Premium features
HOW-TO GUIDE: Edit data models in the Power BI service
TRAINING: Study Power BI start to finish on Microsoft training

Do more in Power BI
Use Power BI software services, apps, and connectors to turn your unrelated sources of data into
coherent, visually immersive, and interactive insights.

Power BI: Get started
What is Power BI?
Get started with Power BI Desktop
Get started creating in the Power BI service
See all getting started docs

Enterprise
Licensing for users in your organization
Security
Data protection
See all enterprise docs

Administer Power BI
What is Power BI administration?
Admin portal settings
Use the same account for Power BI and Azure
See all admin docs

Develop in Power BI
What is Power BI embedded analytics?
Power BI Embedded
Develop your own Power BI visual
See all developer docs

Connect to data
Connect to data
On-premises data gateways
Discover data on the data hub
See all data connection docs

Datamarts (preview)
Datamarts overview (preview)
Understand datamarts (preview)
Get started with datamarts (preview)
See all datamarts docs

Model data
Add a custom column
Create calculated columns
Create and manage relationships
See all data modeling docs

Create reports and dashboards
Design reports for accessibility
Create a paginated report based on a shared dataset
Customize buttons in reports
See all report creation docs

Collaborate and share
Share dashboards and reports
Create workspaces
Add the Power BI app to Microsoft Teams
See all collaboration docs

Integrate across products
Azure and Power BI
Analyze in Excel
Export and email with Power Automate
See all product integration docs

Power BI for business users
Am I a Power BI business user?
Get around in the Power BI service
What is a dashboard?
See all business user docs

Power BI in the mobile apps
What are the mobile apps?
Explore dashboards and reports
Get data on the go with the mobile apps
See all mobile apps docs

Additional Power BI resources

Best practices guidance
Power BI adoption roadmap
Power BI implementation planning
Optimization guide for Power BI
See all guidance docs

Training catalog
Business decision makers
Data analysts
Solution architects
See all training catalog docs

Troubleshooting
Known issues for Power BI features
Troubleshoot refresh scenarios
Troubleshoot subscribing to reports and dashboards
See all troubleshooting docs

Support
Power BI Pro and Power BI Premium support options
Track Power BI service health in Microsoft 365
Move Power BI to another region
See all support docs

Related tools in the Power BI family

Power BI Report Builder: a standalone tool for authoring paginated reports that you can publish to the Power BI service.
Power BI Report Server: an on-premises report server with a web portal in which you display and manage Power BI reports.
Power BI REST APIs: service endpoints for embedding, administration, and user resources.
Power BI embedded analytics client APIs: a client-side library for embedding Power BI using JavaScript or TypeScript.

Content about related products

Microsoft Fabric: the unified platform to meet your organization's data and analytics needs. Discover the capabilities Fabric has to offer.
Power Platform: make the most of Microsoft Power Platform products with online training courses, docs, and videos.
Power Apps: quickly build low-code apps that modernize processes and solve tough business challenges.
Power Automate: create automated workflows between your favorite apps and services to synchronize files, get notifications, and more.
Power Pages: design, host, and administer secure, modern, and low-code business websites.
Power Virtual Agents: create powerful bots using a guided, no-code graphical interface with no need for data scientists or developers.
Power Query M function reference
Article • 09/21/2022

The Power Query M function reference includes articles for each of the over 700
functions. These reference articles are auto-generated from in-product help. To learn
more about functions and how they work in an expression, go to Understanding Power
Query M functions.
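
As a quick illustration of how library functions combine in a single M expression (the
values are arbitrary and chosen only for the example):

Power Query M

let
    names = {"ada", "grace", "alan"},
    capitalized = List.Transform(names, each Text.Proper(_)),
    result = Text.Combine(capitalized, ", ")
in
    result  // "Ada, Grace, Alan"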

Functions by category
Accessing data functions
Binary functions
Combiner functions
Comparer functions
Date functions
DateTime functions
DateTimeZone functions
Duration functions
Error handling
Expression functions
Function values
List functions
Lines functions
Logical functions
Number functions
Record functions
Replacer functions
Splitter functions
Table functions
Text functions
Time functions
Type functions
Uri functions
Value functions



Power Query M language specification
Article • 08/09/2022

The specification describes the values, expressions, environments and variables,


identifiers, and the evaluation model that form the Power Query M language's basic
concepts.

The specification is contained in the following topics.

Introduction
Lexical Structure
Basic Concepts
Values
Types
Operators
Let
Conditionals
Functions
Error Handling
Sections
Consolidated Grammar



Types in the Power Query M formula
language
Article • 08/09/2022

The Power Query M Formula Language is a useful and expressive data mashup
language. But it does have some limitations. For example, there is no strong
enforcement of the type system. In some cases, a more rigorous validation is needed.
Fortunately, M provides a built-in library with support for types to make stronger
validation feasible.

Developers should have a thorough understanding of the type system in order to do
this with any generality. And, while the Power Query M language specification explains
the type system well, it does leave a few surprises. For example, validation of function
instances requires a way to compare types for compatibility.

By exploring the M type system more carefully, many of these issues can be clarified,
and developers will be empowered to craft the solutions they need.

Knowledge of predicate calculus and naïve set theory should be adequate to understand
the notation used.

PRELIMINARIES
(1) B := { true; false }
B is the typical set of Boolean values

(2) N := { valid M identifiers }


N is the set of all valid names in M. This is defined elsewhere.

(3) P := ⟨B, T⟩
P is the set of function parameters. Each one is possibly optional, and has a type.
Parameter names are irrelevant.

(4) Pn := ⋃0≤i≤n ⟨i, Pi⟩


Pn is the set of all ordered sequences of n function parameters.

(5) P* := ⋃0≤i≤∞ Pi
P* is the set of all possible sequences of function parameters, from length 0 on up.

(6) F := ⟨B, N, T⟩
F is the set of all record fields. Each field is possibly optional, has a name, and a type.
(7) Fn := ∏0≤i≤n F
Fn is the set of all sets of n record fields.

(8) F* := ( ⋃0≤i≤∞ Fi ) ∖ { F | ⟨b1, n1, t1⟩, ⟨b2, n2, t2⟩ ∈ F ⋀ n1 = n2 }


F* is the set of all sets (of any length) of record fields, except for the sets where more than
one field has the same name.

(9) C := ⟨N,T⟩
C is the set of column types, for tables. Each column has a name and a type.

(10) Cn ⊂ ⋃0≤i≤n ⟨i, C⟩


Cn is the set of all ordered sequences of n column types.

(11) C* := ( ⋃0≤i≤∞ Ci ) ∖ { Cm | ⟨a, ⟨n1, t1⟩⟩, ⟨b, ⟨n2, t2⟩⟩ ∈ Cm ⋀ n1 = n2 }


C* is the set of all combinations (of any length) of column types, except for those where
more than one column has the same name.

M TYPES
(12) TF := ⟨P, P*⟩
A Function Type consists of a return type, and an ordered list of zero-or-more function
parameters.

(13) TL :=〖T〗
A List type is indicated by a given type (called the "item type") wrapped in curly braces.
Since curly braces are used in the metalanguage, 〖 〗 brackets are used in this document.

(14) TR := ⟨B, F*⟩


A Record Type has a flag indicating whether it's "open", and zero-or-more unordered
record fields.

(15) TRo := ⟨true, F⟩

(16) TR• := ⟨false, F⟩


TRo and TR• are notational shortcuts for open and closed record types, respectively.

(17) TT := C*
A Table Type is an ordered sequence of zero-or-more column types, where there are no
name collisions.

(18) TP := { any; none; null; logical; number; time; date; datetime; datetimezone; duration;
text; binary; type; list; record; table; function; anynonnull }
A Primitive Type is one from this list of M keywords.
(19) TN := { tn, u ∈ T | tn = u+null } = nullable t
Any type can additionally be marked as being nullable, by using the "nullable" keyword.

(20) T := TF ∪ TL ∪ TR ∪ TT ∪ TP ∪ TN
The set of all M types is the union of these six sets of types:
Function Types, List Types, Record Types, Table Types, Primitive Types, and Nullable Types.

FUNCTIONS
One function needs to be defined: NonNullable : T ← T
This function takes a type and returns a type that is equivalent, except that the null
value does not conform to it.
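
The standard library exposes this operation as Type.NonNullable; a small illustration
(the comment shows the expected behavior):

Power Query M

let
    t = Type.NonNullable(type nullable text)  // behaves like type text
in
    Type.Is(type text, t) and Type.Is(t, type text)  // expected: true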

IDENTITIES
Some identities are needed to define some special cases, and may also help elucidate
the above.

(21) nullable any = any


(22) nullable anynonnull = any
(23) nullable null = null
(24) nullable none = null
(25) nullable nullable t ∈ T = nullable t
(26) NonNullable(nullable t ∈ T) = NonNullable(t)
(27) NonNullable(any) = anynonnull

TYPE COMPATIBILITY
As defined elsewhere, an M type is compatible with another M type if and only if all
values that conform to the first type also conform to the second type.

Here is defined a compatibility relation that does not depend on conforming values, and
is based on the properties of the types themselves. It is anticipated that this relation, as
defined in this document, is completely equivalent to the original semantic definition.

The "is compatible with" relation : ≤ : B ← T × T


In the below section, a lowercase t will always represent an M Type, an element of T.

A Φ will represent a subset of F*, or of C*.

(28) t ≤ t
This relation is reflexive.
(29) ta ≤ tb ∧ tb ≤ tc → ta ≤ tc
This relation is transitive.

(30) none ≤ t ≤ any


M types form a lattice over this relation; none is the bottom, and any is the top.

(31) ta, tb ∈ TN ∧ ta ≤ tb → NonNullable(ta) ≤ NonNullable(tb)


If two types are compatible, then the NonNullable equivalents are also compatible.

(32) null ≤ t ∈ TN
The primitive type null is compatible with all nullable types.

(33) t ∉ TN ≤ anynonnull
All nonnullable types are compatible with anynonnull.

(34) NonNullable(t) ≤ t
A NonNullable type is compatible with its nullable equivalent.

(35) t ∈ TF → t ≤ function
All function types are compatible with function.

(36) t ∈ TL → t ≤ list
All list types are compatible with list.

(37) t ∈ TR → t ≤ record
All record types are compatible with record.

(38) t ∈ TT → t ≤ table
All table types are compatible with table.
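
Several of these rules can be spot-checked in M with the library function Type.Is, which
determines whether values of its first type argument are always compatible with its
second. A small sketch, with expected results shown as comments:

Power Query M

let
    checks = [
        TextIsAny = Type.Is(type text, type any),                    // true, rule (30)
        NullIsNullableText = Type.Is(type null, type nullable text), // true, rule (32)
        ListOfNumberIsList = Type.Is(type {number}, type list),      // true, rule (36)
        AnyIsNotText = Type.Is(type any, type text)                  // false; the relation isn't symmetric
    ]
in
    checks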

(39) ta ≤ tb ↔ 〖ta〗≤〖tb〗
A list type is compatible with another list type if and only if the item types are
compatible.

(40) ta ∈ TF = ⟨ pa, p* ⟩, tb ∈ TF = ⟨ pb, p* ⟩ ∧ pa ≤ pb → ta ≤ tb


A function type is compatible with another function type if the return types are
compatible, and the parameter lists are identical.

(41) ta ∈ TRo, tb ∈ TR• → ta ≰ tb


An open record type is never compatible with a closed record type.

(42) ta ∈ TR• = ⟨false, Φ⟩, tb ∈ TRo = ⟨true, Φ⟩ → ta ≤ tb


A closed record type is compatible with an otherwise identical open record type.

(43) ta ∈ TRo = ⟨true, (Φ, ⟨true, n, any⟩)⟩, tb ∈ TRo = ⟨true, Φ⟩ → ta ≤ tb ∧ tb ≤ ta


An optional field with the type any may be ignored when comparing two open record
types.

(44) ta ∈ TR = ⟨b, (Φ, ⟨β, n, ua⟩)⟩, tb ∈ TR = ⟨b, (Φ, ⟨β, n, ub⟩)⟩ ∧ ua ≤ ub → ta ≤ tb


Two record types that differ only by one field are compatible if the name and optionality of
the field are identical, and the types of said field are compatible.

(45) ta ∈ TR = ⟨b, (Φ, ⟨false, n, u⟩)⟩, tb ∈ TR = ⟨b, (Φ, ⟨true, n, u⟩)⟩ → ta ≤ tb


A record type with a non-optional field is compatible with a record type identical but for
that field being optional.

(46) ta ∈ TRo = ⟨true, (Φ, ⟨b, n, u⟩)⟩, tb ∈ TRo = ⟨true, Φ⟩ → ta ≤ tb


An open record type is compatible with another open record type with one fewer field.

(47) ta ∈ TT = (Φ, ⟨i, ⟨n, ua⟩⟩), tb ∈ TT = (Φ, ⟨i, ⟨n, ub⟩⟩) ∧ ua ≤ ub → ta ≤ tb


A table type is compatible with a second table type, which is identical but for one column
having a differing type, when the types for that column are compatible.

REFERENCES
Microsoft Corporation (2015 August)
Microsoft Power Query for Excel Formula Language Specification [PDF]
Retrieved from https://msdn.microsoft.com/library/mt807488.aspx

Microsoft Corporation (n.d.)


Power Query M function reference [web page]
Retrieved from https://msdn.microsoft.com/library/mt779182.aspx
