
Category: Power BI

Understanding licensing for Microsoft Fabric from a Power BI perspective

Shortly after Premium capacities were announced for Power BI, I published a post titled Understanding the Power BI Capacity Based SKUs to help the Power BI community understand how this new license model worked. With the recent announcement of Microsoft Fabric, that model got a little more complex. Admittedly, it’s not the same radical departure that Power BI Premium was initially – in fact, it’s based on the same model, but there are several nuances that are worth understanding.

Adam Saxton recently addressed Fabric licensing on Guy in a Cube here – Understanding Microsoft Fabric Licensing and Cost (Public Preview) and I highly recommend watching it. My take on the differences is below.

For those familiar with Power BI user and capacity licenses, the easiest way to understand the differences brought by Microsoft Fabric is to think of all of the new Fabric features as new Premium features in Power BI. Fabric also introduces a new type of capacity, the F sku, in addition to (for the moment at least) the P and A skus currently available to Power BI. For those less familiar, and for more detail, a more comprehensive discussion is warranted.

User vs Capacity based licenses

Power BI has two different licensing models: user based and capacity based. Power BI is a component of Fabric, but all other Fabric assets use only capacity-based licensing. Therefore, within the context of Fabric, the Power BI license only pertains to Power BI assets. There are three user-based licenses for Power BI: Free, Pro, and Premium per user. A user’s license determines which workspaces the user can access.

A workspace contains assets created by users and provides a logical security boundary. Workspaces are also bound to a capacity, and that binding determines which capacity is used when these assets are utilized.

A capacity is a collection of resources that are consumed when various assets are utilized. For example, when a user renders a report, or a dataset refresh is run, resources are consumed from the workspace’s underlying capacity. There are several capacity types and sizes, and the type of capacity determines the capabilities that are available to the workspace.

Capacity Types

There are five capacity types commonly in use with workspaces today. These are:

  • Shared (or Pro)
  • Premium per User
  • Premium A sku
  • Premium P sku
  • Fabric F sku

Shared capacities (also referred to as Pro capacities) are the default capacities that back any new Power BI workspace. Shared capacities are for Power BI only and are provided by Microsoft at no extra cost. They do not provide Premium or Fabric features and impose certain operating limits. To access a workspace that is backed by a shared capacity, a user must have a Power BI Pro license.

Premium per User capacities are available to users that have a Premium per User license. PPU capacities are for Power BI only, are provided by Microsoft at no extra cost, and provide most Premium features, but not Fabric features. To access a workspace that is backed by a Premium per User capacity, a user must have a Power BI Premium per User license.

All other capacity types are purchased for the tenant by an organization. There is a wide variety of options to choose from. To utilize Fabric features, the capacity must be either a P or an F sku. P skus have been with us since Premium was initially introduced, and F skus have been introduced with Fabric. Functionally there is no difference between the two, apart from how they are purchased, which is covered below. A complete summary of capacity features and resources can be seen below.

Capacity Features

| Feature | Shared | Premium A | PPU | Premium P | Fabric F |
|---|---|---|---|---|---|
| Dataset size limit | 1 GB | 3-400 GB | 100 GB | 25-400 GB | 1-800 GB |
| Refreshes per day | 8 | 48* | 48* | 48* | 48** |
| XMLA Endpoint | Read | Read/write | Read/write | Read/write | Read/write |
| Dataflows | Basic | Enhanced | Enhanced | Gen 2 | Gen 2 |
| Automatic Page Refresh | 0.5 hr min | 1 s min / change detect | 1 s min / change detect | 1 s min / change detect | 1 s min / change detect |
| Deployment Pipelines | No | Yes | Yes | Yes | Yes |
| Datamarts | No | Yes | Yes | Yes | Yes |
| Hybrid Tables | No | Yes | Yes | Yes | Yes |
| Metrics | Yes | Yes | Yes | Yes | Yes |
| AI Capabilities (Auto ML, CS) | No | Yes | Yes | Yes | Yes |
| Automatic Aggregations | No | Yes | Yes | Yes | Yes |
| Multi-geo support | No | Yes | No | Yes | Yes |
| Other Fabric assets | No | No | No | Yes | Yes |

*  Limited by the UI. Unlimited refreshes available programmatically.
** Limited by the UI. Unlimited refreshes available programmatically. Direct Lake mode datasets will not require refresh.

Capacity Resources by sku

| Premium A | Premium P | Fabric F | Capacity Units (CU) | V-Cores | Max Dataset Size (GB) |
|---|---|---|---|---|---|
| – | – | F2 | 2 | 0.25 | 1 |
| – | – | F4 | 4 | 0.5 | 2 |
| A1 | – | F8 | 8 | 1 | 3 |
| A2 | – | F16 | 16 | 2 | 5 |
| A3 | – | F32 | 32 | 4 | 10 |
| A4 | P1 | F64 | 64 | 8 | 25 |
| A5 | P2 | F128 | 128 | 16 | 50 |
| A6 | P3 | F256 | 256 | 32 | 100 |
| A7 | P4 | F512 | 512 | 64 | 200 |
| A8 | P5 | F1024 | 1024 | 128 | 400 |
| – | – | F2048 | 2048 | 256 | 400 |

As you can see in the above chart, the A4, P1, and F64 skus all have the same resource level. Therefore, an organization could purchase an F64 capacity, disable Fabric features, and have exactly the same experience as with the P1 sku. The only difference between them is the way that they are acquired, which is discussed below.

User Access

In addition to features and resources, the capacity that backs a workspace also determines the user license that is required to consume Power BI resources contained within it. It is worth noting here that no matter what capacity backs a workspace, a Power BI Pro license is required to publish reports to it.

The license requirements for the various capacity types can be seen in the table below. Again, this pertains to Power BI artifacts only – all other Fabric artifacts require a P or F capacity and can be accessed by users without a Power BI license.

Workspace access by Power BI License

| Client license | Personal | Shared | Premium A SKU | Premium per User | Premium P SKU | Fabric F |
|---|---|---|---|---|---|---|
| Free | ✔ | X | X | X | ✔ | ✔* |
| Power BI Pro | ✔ | ✔ | ✔ | X | ✔ | ✔ |
| Premium per user | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |

* Fabric capacity must be F64 or above for access with a free license. Below F64 requires Power BI Pro.

License Acquisition

User licenses are relatively straightforward. Free licenses are available to any Azure Active Directory account. Power BI Pro licenses are part of Microsoft 365 E5 licenses, and can also be purchased separately from the Microsoft 365 store. Premium per User licenses include Power BI Pro, and can also be purchased separately, or as an add-on to an existing Pro license from the Microsoft 365 store.

Capacity acquisition is significantly more complicated. Indeed, the way that they are acquired can often play a role in selecting the appropriate capacity type for a given scenario.

The Premium P sku is purchased from the Microsoft 365 store, and it requires an annual commitment. That is enough to make many smaller organizations pause before trying it out. The Premium A sku is purchased from Azure, can be turned on and off, and is billed hourly. It also has a significantly lower entry point – the A1 sku has only 1/8 of the resources of a P1 sku and is significantly cheaper to get started with. If Power BI features are the only thing of interest, then the A sku presents a compelling choice, but it does not support Fabric features.

The new Fabric skus appear to bridge the gap between the P and A skus. Fabric skus are available in a much wider variety of resource levels (F2 to F2048) which makes them approachable for organizations that want to get started or have less demanding requirements. They can be scaled up past the maximum currently available in P5. Finally, they are purchased from Microsoft Azure and do not require an annual commitment (using pay-as-you-go). Pricing was recently announced and can be found at the Microsoft Fabric blog here – Announcing Microsoft Fabric capacities are available for purchase.

As mentioned above, a P1 is functionally equivalent to an F64. However, the price of an F64 in the blog post is approximately 40% higher than that of a P1. This is because the P1 requires an annual commitment and is effectively discounted by that very same 40%. The Fabric skus will also have an option to opt in for an annual commitment, and to then enjoy a significant discount, presumably making the cost of the P1 and the F64 equivalent. The details of this have not yet been announced, so your mileage may vary.

Trial Licenses

In order to “try before you buy”, a 60-day trial license is available. Details of the trial can be found here – Fabric (preview) trial. The trial grants the requesting user a Premium per User license along with an F64 capacity. The license provides a preview of all Power BI capabilities, and the capacity allows for the testing of all Fabric capabilities.

One word of warning with trials: if a user with a free license accesses Power BI assets that they have rights to but are unlicensed for, a trial will automatically be granted, and the clock will start ticking on the 60-day period. This grants the user not only a Premium per User license, but also a Fabric capacity. It provides a seamless experience for the user, but it may come as a shock when the 60-day period is up.

Conclusion

While the existing Power BI Premium skus will continue to exist for those that want them, the Fabric skus are clearly the way forward. They provide all the same features as the legacy capacities, with increased sizing options at both the lower and higher ends of resource requirements. The option to pay as you go or to take an annual commitment means that they can be both approachable and cost-effective. Finally, if all you need is Power BI features, you can turn off Fabric and still enjoy these flexibility benefits.


Connect Power BI to Azure Monitor data using Direct Query with Azure Data Explorer

Man – that’s a boring title. But it’s accurate.

A few months ago, I posted an article outlining how to connect Power BI to Azure Application Insights and Azure Log Analytics (jointly referred to as Azure Monitor) with Direct Query. That article describes an approach that allows you to use a native Kusto connector to connect to the Azure Monitor instance as if it were an ADX cluster, which allows Direct Query to be used, among other things. The option for connecting Power BI that is available through the Azure Monitor UI uses an HTML connector to query the respective APIs, and that connector doesn’t support Direct Query.

The problem with using this connector is that it’s a bit of a hack. At the time it was written, you needed to use the old Power BI driver for Kusto to make it work, and that approach isn’t simple. Over time, it stopped working altogether for Application Insights. The ADX connector has since been updated to support connection to Azure Log Analytics (but not Application Insights) and is therefore still valid.

There is, however, another way to achieve this by using your own ADX cluster. ADX clusters allow for “cross-cluster queries” that permit tables in a database in one cluster to be joined or unioned with tables in a completely different cluster. The same proxy addresses mentioned above can be used in one of these cross-cluster queries; in this way, the ADX cluster simply acts as an intermediary.

Everything that you need to know about this approach can be found in the support article “Query data in Azure Monitor using Azure Data Explorer”.

To create a Power BI report that queries Azure Monitor data using Direct Query, first create a new report, and connect to data using the “Azure Data Explorer (Kusto) connector”. Enter the address of the cluster, and the name of a database within that cluster. The database itself doesn’t matter; it simply provides a scope for the query. Finally, you need to specify the query, and this is where the cross-cluster query comes into the picture. The query takes the following form:

cluster('ProxyURL').database('ResourceName').TableName

The Proxy URLs differ between Log Analytics and Application Insights. The two take the following forms:

Log Analytics:

https://ade.loganalytics.io/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.operationalinsights/workspaces/<workspace-name>

Application Insights:

https://ade.applicationinsights.io/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/components/<ai-app-name>

The cross-cluster query for the table named “pageViews” in an Application Insights instance named “WhitePagesLogs”, in a resource group named “MyResourceGroup”, in the subscription “71a90792-474e-5e49-ab4e-da54baa26d5d” is therefore:

cluster('https://ade.applicationinsights.io/subscriptions/71a90792-474e-5e49-ab4e-da54baa26d5d/resourcegroups/MyResourceGroup/providers/microsoft.insights/components/WhitePagesLogs').database('WhitePagesLogs').pageViews

It is worth explicitly noting that the resource name appears twice in the query – once in the cluster address, and once as the database name.

When ready, the Get data dialog box will contain the cluster address, the database name, and the cross-cluster query. If you want to use Direct Query, don’t forget to open the Advanced Options section and select it there.
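For reference, the M that Power BI Desktop generates for such a connection looks roughly like the following sketch, built on the documented AzureDataExplorer.Contents function. The intermediary cluster address and database name here are hypothetical, and the query is the Application Insights example from above:

let
    // Your own ADX cluster acts as the intermediary (hypothetical address)
    Source = AzureDataExplorer.Contents(
        "https://mycluster.westus.kusto.windows.net",
        "MyDatabase", // any database in that cluster – it only scopes the query
        "cluster('https://ade.applicationinsights.io/subscriptions/71a90792-474e-5e49-ab4e-da54baa26d5d/resourcegroups/MyResourceGroup/providers/microsoft.insights/components/WhitePagesLogs').database('WhitePagesLogs').pageViews",
        []
    )
in
    Source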

At this point, the report can be built, and it will behave as if it were a normal ADX cluster. You can of course build more complex queries, but you cannot create things like functions or materialized views, since you do not have administrative access to the engine behind Azure Monitor.

Compared to using the Power BI ADX connector directly, this approach has the advantage of being explicitly supported, and it works with both Application Insights and Log Analytics. On the downside, there is a cost to running your own ADX cluster, although it is minimal. The cluster is simply acting as a gateway in this case, and therefore a bare minimum of resources will suffice.


Using Power BI with Microsoft Lists

Microsoft Lists debuted in 2020, and they are (yet another) great way to organize list-based data. Now, when someone says data, I think Power BI. Obviously, we’ll want to report on this data, but how do we do that? There is no Power BI connector for Microsoft Lists. The answer is quite simple, if not completely obvious: you need to use the SharePoint Online List connector in Power BI.

Microsoft Lists are the same thing as SharePoint lists. In fact, they ARE SharePoint lists. The Microsoft Lists service is just a new user interface for interacting with them. You can use Lists all you want and never see SharePoint at all, unless you need to change list settings, but I digress. Given this fact, as far as Power BI is concerned, everything that applies to SharePoint lists applies to Microsoft Lists lists (the grammar here gets awfully awkward).

For reference, I wrote a series of articles some time ago about the idiosyncrasies of working with SharePoint data in Power BI, and these articles are still valid today (2021). The most recent of these articles can be found here and includes links to the others.

There is one thing that is worth mentioning about Microsoft Lists. When a new list is created using the Lists interface, the user can save it to any of their SharePoint sites, but another option is to save it to “My lists”.

When you use the SharePoint Online list connector in Power BI, it prompts you to enter the URL of the SharePoint site that contains the list that you want to report on. That is straightforward when your list is stored in a SharePoint site, but what if your list is stored in “My lists”? Where are “My lists” stored?

They are stored in a “personal” SharePoint site. We SharePoint old timers would know it as the MySite, and while usage of MySite has been de-emphasized in Microsoft 365, it is very much still there. Each user has one. In fact, this is where the “personal” OneDrive for Business content is stored – in the Documents library of the very same MySite. By storing personal lists in the MySite, Microsoft Lists is just following the same pattern used by OneDrive for Business, which makes perfect sense.

Given this, what URL should you use in Power BI to connect to your lists stored in “My lists”? You’ll find it in the URL bar of the Microsoft Lists web interface. It’s the portion of the URL up to “/Lists/”.

In most cases it will take the following form:

https://TenantName-my.sharepoint.com/personal/LoginID/Lists/ListName

Where:

  • TenantName = the name of your Microsoft 365 tenant, i.e. Contoso
  • LoginID = the email address used to log in to Microsoft 365, with the “@” and the “.” replaced with underscores, i.e. jpw_contoso_com
  • ListName = the name of your list

Once you enter this URL, you’ll have access to any of the lists stored in “My lists”. At this point, your personal lists will behave like any list in any other SharePoint site.
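For those who prefer to work in Power Query’s Advanced Editor, a minimal M sketch of such a connection might look like the following, using the SharePoint.Tables function behind the SharePoint Online List connector. The tenant, login ID, and list name are the hypothetical values from the pattern above:

let
    // Site URL only – the portion of the URL before "/Lists/"
    Source = SharePoint.Tables("https://contoso-my.sharepoint.com/personal/jpw_contoso_com"),
    // Select the list by its title from the navigation table
    MyList = Source{[Title = "ListName"]}[Items]
in
    MyList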


It’s time to stop using Power Pivot

Excel is an excellent tool for analyzing data. An analyst can easily connect to and import data, perform analyses, and achieve results quickly. Export to Excel is still one of the most used features of any Business Intelligence tool on the market. The demand for “self-service BI” resulted in a lot of imported data being stored in overly large Excel files. This posed several problems. IT administrators had to deal with storage requirements. Analysts were restricted by the amount of data they could work with, and the proliferation of these “spreadmarts” storing potentially sensitive data created a governance nightmare.

A little history

Power Pivot was created to provide a self-service BI tool that solved these problems. Initially released as an add-in for Excel 2010, it contained a new analytical engine that would soon be introduced to SQL Server Analysis Services as well. Its columnar compression meant that millions of rows of data could be analyzed in Excel without requiring massive amounts of space to store. Data in Power Pivot is read-only and refreshable – ensuring integrity. It allowed analysts to set up their own analytical data sets and analyze them using a familiar-looking language (DAX) and a visual reporting canvas (Power View), all from within Excel.

The original version of Power BI brought Power Pivot to Office 365 through Excel, before Power BI’s relaunch gave it its own consumption interface (the service) and design client (Power BI Desktop). Both the Power Pivot engine and Power Query were incorporated into the service and Power BI Desktop, while the Silverlight-based Power View was replaced with a more web-friendly reporting canvas.

Excel support

Throughout all these changes, Excel has continued to be well supported in the Power BI service. Analyze in Excel allows an analyst to connect to a deployed Power BI dataset (built with Power BI Desktop) and analyze it using pivot tables, charts, etc. Recent “connect to dataset” features have made this even simpler. Organizational Data Types allow Excel data to be decorated with related data in Power BI.

Excel workbooks containing Power Pivot models have always been supported by the service. These models can even be refreshed on a regular basis, and if the source data resides on premises, they can be refreshed through the on-premises data gateway. This is all because the data engine in Power BI is essentially Power Pivot.

It’s that word “essentially” that causes a problem.

Datasets that are created and stored within Excel workbooks are functional but can only be accessed by that workbook. Contrast this with a dataset created by Power BI Desktop, which can be accessed by other interactive (pbix) reports, by paginated reports, and, as mentioned above, by Excel itself. The XMLA endpoint also allows these datasets to be accessed by a myriad of third-party products. None of this is true for datasets created and stored in Excel.

So why would anyone continue to create models in Excel? Until now, the reason has been that although Excel can connect to Power BI datasets to perform analysis, those connected workbooks would not be updated when the source dataset changed. This meant that analysts who really care about Excel needed to work with Excel-created models. This changed recently with an announcement at Microsoft Ignite Spring 2021. In the session Drive a data Culture with Power BI: Vision, Strategy and Roadmap, it was announced that very soon, Excel files connected to Power BI datasets will be automatically updated. This removes the last technical reason to continue to use Power Pivot in Excel.

Tooling

Building a dataset with Power BI Desktop is fundamentally the same as building one with Excel. The two core languages and engines (M with Power Query, and DAX with Power Pivot) are equivalent between the two products. The only difference is that the engine versions found in Excel tend to lag behind those found in Power BI Desktop and the Power BI service itself. I’d argue that the interfaces for performing transforms and building models are far superior in Power BI Desktop, not to mention its third-party add-in capability.

In this “new world” of Excel data analysis, datasets will be created using Power BI Desktop and deployed to the service, and Excel will connect to them to provide deep analysis. These workbooks can then be published to the Power BI service alongside other interactive or paginated reports for use by analysts. With this new capability, Excel truly resumes its place as a full-fledged, first-class citizen in the Power BI space.

What to use when

With this change, the decision of what tool to use can be based completely on its suitability to task, and not on technical limitations. There are distinct types of reports, and different sorts of users. The choice of what to use when can now be based completely on these factors. The common element among them all is the dataset.

With respect to report usage, typical usage can be seen below.

| Tool | Used by | Purpose |
|---|---|---|
| Power BI Service | Report consumers | Consuming all types of reports: interactive, paginated, and Excel |
| Excel Online | Report consumers | Consuming Excel reports from SharePoint, Teams, or the Power BI service |
| Power BI Desktop | Model builders, interactive report designers | Building Power BI datasets, building interactive reports |
| Power BI Report Builder | Paginated report designers | Building paginated reports |
| Excel | Analysts | Building Excel reports, analyzing Power BI datasets |

Making the move

Moving away from Power Pivot won’t require any new services or infrastructure, and existing reports and models don’t need to be converted. They will continue to work and be supported for the foreseeable future. Microsoft has neither said nor indicated that Power Pivot in Excel is going anywhere. However, by building your new datasets in Power BI Desktop, you will be better positioned moving forward.

If you do want to migrate some or all of your existing Excel-based Power Pivot datasets, it’s a simple matter of importing the Excel file into Power BI Desktop. This is completely different from connecting to an Excel file as a data source. From the File menu in Power BI Desktop, select Import, then Power Query, Power Pivot, Power View. You will then select the Excel file that contains your dataset.

Power BI will then import all of your Power Query queries and your Power Pivot dataset, and, if you have any, it will convert your Power View reports to Power BI reports. The new file can then replace your existing Excel file. Once deployed to the Power BI service, other Excel files can connect to it if so desired.

Building your datasets with Power BI Desktop allows you to take advantage of a rich set of services, across a broad range of products, including Excel. Building them in Excel locks you into an Excel only scenario. If you already use Power BI, then there’s really no reason to continue to build Power Pivot datasets in Excel.


Using Power BI Incremental Refresh with Azure Data Explorer

Azure Data Explorer (ADX) is a great platform for storing large amounts of transactional data. The Incremental Refresh feature (now available for Pro users!) in Power BI makes it much faster to keep data models based on that data current. Unfortunately, if you follow the standard guidance from Microsoft for configuring Incremental Refresh, you’ll quickly bump into a roadblock. Luckily, it’s not that difficult to get around.

Incremental Refresh works by setting up data partitions in the dataset in the service. These partitions are based on time slices. Once data has been loaded into the dataset, only the data in the most recent partition is refreshed.

To set this up in Power BI Desktop, you need to configure two parameters, RangeStart and RangeEnd. These two parameters must be of the Date/Time type. Once set, the parameters are used to filter the Date/Time columns in your tables accordingly and, once published to the service, to define the partitions that the data is loaded into.
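Under the covers, these parameters are just M queries with a special metadata record. A minimal sketch of roughly what Power BI Desktop generates for each of them (the dates are placeholders):

// RangeStart – must be of type Date/Time, not Date/Time/Timezone
#datetime(2021, 1, 1, 0, 0, 0) meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]

// RangeEnd
#datetime(2021, 12, 31, 0, 0, 0) meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]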

When Power Query connects to ADX, all Date/Time fields come in as the Date/Time/Timezone type. This is a bit of a problem: when you use the column filters to filter your dates, the two range parameters won’t show up, because they are of a different type (Date/Time). Well, that’s not a big problem, right? Power Query lets us change a column’s data type simply by selecting the type picker on the column header.

Indeed, doing this does in fact allow you to use your range parameters in the column filters. Unfortunately, data type conversions don’t get folded back to the source ADX query. You can see this by right-clicking on a subsequent step in the Power Query editor. The “View Native Query” option is greyed out, which indicates that the query cannot be folded.

Query folding is critical to Incremental Refresh. Without it, the entirety of the data is brought down locally so that it can be filtered, instead of having the filter occur at the data source. This would completely defeat the purpose of implementing Incremental Refresh in the first place.

The good news is that you can in fact filter a Date/Time/Timezone column with a Date/Time parameter, but the Power Query user interface doesn’t know that. The solution is to simply remove the type conversion Power Query step AFTER performing the filter in the Power Query UI.

Alternatively, if you’re comfortable with the M language, you can simply insert something like the following line using the Advanced Editor in Power Query (where CreatedLocal is the name of the column being filtered).

#"Filtered Rows" = Table.SelectRows(Source, each [CreatedLocal] >= RangeStart and [CreatedLocal] < RangeEnd),

If the filter step can be folded back to the source, Incremental Refresh will work properly, and you can continue setting it up in the usual way. You will likely see some warning messages indicating that folding can’t be detected, but these can safely be ignored.
