The White Pages – Random musings and thoughts on SharePoint, Business Intelligence, Microsoft technology, and occasionally even diving, from John White
https://whitepages.unlimitedviz.com

A low cost approach for paginated report subscriptions in Power BI
https://whitepages.unlimitedviz.com/2019/11/a-low-cost-approach-for-paginated-report-subscriptions-in-power-bi/ Tue, 12 Nov 2019

Paginated reports in Power BI offer a rich set of capabilities for printing, and the generation of report documents. Paginated reports can be exported to multiple document formats, and those exports can be scheduled and delivered via email. Unfortunately, for the moment at least, paginated reports require the use of a dedicated capacity, which can be cost prohibitive for some. This article describes a pattern that will help minimize the cost of using paginated reports for subscriptions.

Dedicated capacities

The term “Premium” is often used to describe the resource (as opposed to user) based licensing option in Power BI. More accurately, this option is called “dedicated capacity”, and there are a number of different ways to use dedicated capacity. Premium is certainly one of those ways, but the Azure (A) SKU and the Embedding (EM) SKU are two others.

The different SKUs have different features and capabilities, but for the purposes of this article, they are functionally equivalent. For a detailed discussion of the different dedicated capacity SKUs, see my older post Understanding the Power BI capacity based SKUs.

The “A” SKU is of particular interest because, unlike the other two SKU types, it is billed in hourly increments as opposed to monthly. If the capacity can be started, used to perform a task, and then stopped, you will only be billed for the time used to perform the task in question. Matt Allington has a post that outlines this concept in great detail: Affordable Power BI Premium for Small Business.

The process

In our scenario, we have a paginated report that uses a published Power BI dataset as a data source. As of this writing, there is no API call available to render a paginated report on demand, so we will rely on the scheduled subscription capability. In order to minimize the cost of the solution, we want the dedicated capacity to run as little as possible.

The solution consists of an Azure Logic app and Power BI paginated report scheduling. An Azure Logic app uses the same set of actions that a flow in Power Automate does, but is a little more flexible in its permissions model.

The overall process flow is as follows:

  1. The Logic app runs on a schedule, and starts the dedicated capacity by calling the Power BI API from an HTTP action
  2. The Logic app uses the “Refresh a dataset” Power BI action to initiate a dataset refresh. Note that this is only necessary if the report’s data source is a Power BI dataset.
  3. On a schedule that allows for the dataset above to be refreshed, the report schedule runs and delivers the report to the destination addresses.
  4. Another Logic app, running on a schedule that allows time for both the data refresh in step 2 and the report rendering in step 3, pauses the capacity. Ideally this should be less than an hour after the capacity was started in step 1, given the hourly billing increments.

Optionally, further automation can run on the destination inbox to deliver the report to alternate locations, such as a SharePoint library, etc.

Create the capacity

Prior to doing any of this, you will need to create a Power BI dedicated capacity in Azure. In Azure, the service is called “Power BI Embedded” and detailed instructions for creating it can be found in this document.

Although there are 6 possible sizes to choose from, paginated reports in the Power BI service require an A4 capacity at a minimum. It is therefore important to select the “Change size” link and choose a size that is A4 or greater.

Once created, any relevant workspaces can be assigned to this capacity. It should be noted that the capacity must be created in the same tenant as the Power BI service itself.

Starting and Refreshing

Creating an Azure Logic app is relatively straightforward. The steps involved in doing so can be found in this introductory document. Once created, we will add a schedule trigger, an HTTP action, and a “Refresh a dataset” action. The complete Logic app is below.

Logic app to start the capacity and refresh a dataset

The recurrence step will kick off the run on a scheduled basis. You can set the recurrence to a number of time periods, and the run time will be based on that start date/time. In the example above, the process runs every day at 2:50 AM.

The HTTP step calls into Azure to start the capacity. The request is a POST request, and all that it requires is a single URL. You must provide 3 configuration values within that URL for it to work. The URL itself is:

https://management.azure.com/subscriptions/{SubscriptionGUID}/resourceGroups/{ResourceGroup}/providers/Microsoft.PowerBIDedicated/capacities/{CapacityName}/resume?api-version=2017-10-01

Where:

  • {SubscriptionGUID} is the GUID of the Azure subscription for the capacity
  • {ResourceGroup} is the name of the Resource Group for the capacity
  • {CapacityName} is the name of the capacity
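
For illustration, the same call can also be scripted outside of the Logic app. The following Python sketch is illustrative only, assuming the azure-identity and requests packages; the subscription, resource group, and capacity values are placeholders:

# Illustrative sketch: start or stop a Power BI Embedded capacity via the
# Azure management API. The azure-identity and requests packages are assumed
# to be installed, and all names below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
resource_group = "MyResourceGroup"                        # placeholder
capacity_name = "mycapacity"                              # placeholder

def set_capacity_state(action):
    # action is "resume" to start the capacity, or "suspend" to pause it
    token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
    url = ("https://management.azure.com/subscriptions/" + subscription_id +
           "/resourceGroups/" + resource_group +
           "/providers/Microsoft.PowerBIDedicated/capacities/" + capacity_name +
           "/" + action + "?api-version=2017-10-01")
    response = requests.post(url, headers={"Authorization": "Bearer " + token.token})
    response.raise_for_status()

set_capacity_state("resume")

The same sketch with “suspend” in place of “resume” performs the pause operation described later in this article.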

The other important item here is the Authentication setting. There are a number of options here, and your choice will match your requirements. For this example, we are using the relatively new “Managed Identity” in Azure. Managed identity allows one service (the Power BI capacity) to grant direct access to another (our Logic app). In this way, the Logic app is granted permission to start and stop the capacity.

More information on managed identities can be found in this Microsoft document.

The final action in this app is the “Refresh a dataset” action. Once added, you provide your credentials, select a workspace, and then the dataset to be refreshed. The Logic app waits for the HTTP action to finish before it tries to refresh the dataset, so you do not need to worry about adding pauses.
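
Behind the scenes, this action drives the Power BI REST API refresh endpoint. For those who prefer to script it, a rough Python equivalent of the documented call is sketched below; the IDs are placeholders, and the token is assumed to have been acquired separately with dataset permissions:

# Rough equivalent of the "Refresh a dataset" action, using the documented
# Power BI REST API endpoint. The IDs and token below are placeholders; the
# token is acquired out of band.
import requests

workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder
dataset_id = "11111111-1111-1111-1111-111111111111"    # placeholder
access_token = "Azure AD access token"                 # acquired out of band

url = ("https://api.powerbi.com/v1.0/myorg/groups/" + workspace_id +
       "/datasets/" + dataset_id + "/refreshes")
response = requests.post(url, headers={"Authorization": "Bearer " + access_token})
response.raise_for_status()  # HTTP 202 means the refresh request was accepted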

Subscription

The subscription for the paginated report needs to be set to run some time after the dataset above has refreshed if using a dataset as a data source, or after the capacity has started if not. There is nothing unique about subscriptions for this process, but you need to be aware of the timing. The subscription must run and complete in the window between the start of the capacity (and the refresh of any datasets) and the moment the companion Logic app fires to pause the capacity.

As of this writing, subscriptions can only be delivered to mailboxes within the same tenant as the Power BI service itself.

To create a subscription, select the “Subscribe” button in the upper right of the report toolbar.

Next, complete the subscription options. Note that in this example, the subscription is set to run at 3:15 AM, which is 25 minutes after the start capacity Logic app fires, allowing plenty of time to start the capacity and to refresh the dataset that this report connects to.

Pausing the capacity

A second Logic app is required to pause the capacity. It needs to be scheduled to allow for the subscription to be sent, ideally within an hour of the capacity starting so that only one hour of usage is billed.

The Logic app to pause the capacity is very similar to the one to start it, with a different URL being called.

In our example, the Recurrence trigger is set to run daily at 3:45 AM, which is 55 minutes after the start app.

The HTTP action settings are identical to those in the startup logic app, with the exception that the suspend method is called instead of the resume method. The full URL is below.

https://management.azure.com/subscriptions/{SubscriptionGUID}/resourceGroups/{ResourceGroup}/providers/Microsoft.PowerBIDedicated/capacities/{CapacityName}/suspend?api-version=2017-10-01

Where:

  • {SubscriptionGUID} is the GUID of the Azure subscription for the capacity
  • {ResourceGroup} is the name of the Resource Group for the capacity
  • {CapacityName} is the name of the capacity

Summary

While paginated reports are currently a “Premium” offering, it is possible to use automation techniques, along with the Power BI Embedded Azure service, to run the capacity for only an hour a day, turning a cost of hundreds of dollars per day into several dollars per day.

By taking advantage of the report rendering capabilities that paginated reports afford, and by building the reports on top of pre-existing Power BI datasets, paginated reports can become a print engine for analytical reports.

Keep your Power BI SharePoint reports current with Microsoft Flow
https://whitepages.unlimitedviz.com/2019/10/keep-your-power-bi-sharepoint-reports-current-with-microsoft-flow/ Fri, 18 Oct 2019

Power BI is without question the best way to report on data in SharePoint lists. The query tools available in Power Query make working with SharePoint data relatively painless, and the cached dataset means that reports are run against an optimized copy of the list data, not the data itself.

This latter distinction, while removing the performance issues of systems that query lists directly, also introduces problems with data latency. The report will never be fully “up to date”, as it needs to be refreshed on a periodic basis.

Consider the following scenario. A Power BI report has been built that uses data from a SharePoint list. That report has been embedded on a SharePoint page in the same site. A user adds an item to the list, and then navigates to the page to see the updated report. Unfortunately, that report won’t get updated until the next scheduled refresh.

This has been a significant problem, until the recent release of the new “Refresh a dataset” action in Microsoft Flow.

It is a relatively simple procedure to add a one-step flow to any SharePoint list that is triggered when an item is created, updated, or deleted. This flow simply needs the “Refresh a dataset” action, configured for the relevant dataset, and the embedded reports will be updated very shortly after the data is modified.

Alternatively, the flow can be triggered by a timer, allowing you to create your own schedule (every 5 minutes, etc) that is not hardwired to run at the top or bottom of any given hour.

A few caveats should be kept in mind when using this action, however.

While this action gives us much finer grained control over when refreshes happen, all of the current license restrictions remain in place. For datasets located in the shared capacity, only 8 refreshes per day are allowed.

For datasets in dedicated capacities (Premium), there are no limits to the number of refreshes. The limit of 48 per day is a UI restriction, not a licensing restriction. However, refresh can utilize significant resources, particularly memory, so you’ll want to ensure that you have sufficient resources to support the update frequency.

Finally, the load on the source data system should be considered. Refresh will pull a significant amount of data every time it is run.

Caveats aside, this new flow action is a welcome relief to those that need greater control of how their reports are updated.

Fixing Power BI Report Builder Connection Errors
https://whitepages.unlimitedviz.com/2019/10/fixing-power-bi-report-builder-connection-errors/ Fri, 11 Oct 2019

Power BI Report Builder is Microsoft’s design tool for building Paginated reports in Power BI. It is based on Microsoft Report Builder (formerly SQL Server Reporting Services Report Builder), but has been optimized for the Power BI service.

One of the most important capabilities of Power BI Report Builder is the ability to connect to datasets that have been published to the service. If you have done this, and spent any significant amount of time building reports, you may have come across some puzzling connection errors that are caused by the same thing.

After initially creating a connection and building a “Paginated dataset” (not to be confused with a Power BI service dataset), and then spending some time designing your report, you may be presented with the “Failed to preview report” error shown at the top of this article when you select the “Run” option from the ribbon. Selecting the details button reveals more information:

A similar error can be found under the same conditions when editing a Paginated dataset’s query with the Query designer tool. Selecting this tool can result in the error “Unable to connect to data source xxxxxxxx”, and the details button reveals another “Unauthorized” error.

What’s worse in this case is that when you select OK, a dialog box appears prompting you to enter a set of credentials.

There is no combination of credentials that you can enter that will fix the connection to the data source. This dialog box was designed for classic paginated report connections, not for connections to published datasets. You should select cancel if you see this dialog box.

What is happening in both of these cases is that the token acquired from the Power BI service has expired, and Report Builder does not automatically fetch a new one. There are a couple of ways to deal with this problem.

If you have saved the RDL file to a local file system, you can close Report Builder and reopen it. That will re-establish the data connection. You could also choose to save the RDL directly into the Power BI workspace. This will also re-establish the connection. You can do this by selecting File – Save as and selecting Power BI Service.

You can then choose which workspace to save the file in. This also removes the need to upload the file into the service when you want to publish it – saving and publishing are the same thing in this scenario.

If you are editing a file directly in the service, these errors will still appear after periods of no data retrieval activity, but the connection can be re-established simply by saving the report. You can look at the errors as a prompt to save your work.

Connecting Power BI workspaces and SharePoint sites
https://whitepages.unlimitedviz.com/2019/06/connecting-power-bi-workspaces-and-sharepoint-sites/ Fri, 07 Jun 2019

Power BI V2 workspaces recently (May 2019) entered general availability. The biggest difference between a V1 and a V2 Power BI workspace is the fact that a V2 workspace is not backed by an Office 365 group, and a V1 workspace is. One area that this change affects a great deal is the “Get data” experience in the Power BI service (browser). This post outlines the differences, and describes the configuration options.

Data connections to files stored in SharePoint and OneDrive have certain unique characteristics when they are created in the browser. For example, these connections are automatically refreshed hourly unless that option is disabled.

V1 workspaces automatically offer the connection to the Documents library in the underlying SharePoint site. V2 workspaces do not automatically offer this option, as there is no underpinning SharePoint site. However, any V2 workspace can be connected to any Modern SharePoint site, and in this way, the option is more flexible. For the sake of clarity, a Modern SharePoint site is one that is backed by an Office 365 Group, and has an email address.

Let’s explore what happens when using “Get Data” and then choosing “Files” in the Power BI service. There are 4 possible experiences, depending on the type and configuration of the workspace:

  1. Personal workspace
  2. V1 workspace
  3. V2 workspace not connected to a site
  4. V2 workspace connected to a site

In each example below, the options are reached by selecting “Get Data” and then choosing “Files”. The type of files that can be imported are CSV, Excel, PBIX (Power BI Desktop files) and RDL (paginated reports).

Personal workspace

The personal workspace is the only workspace available using the free Power BI license. It is not connected to any SharePoint sites, and provides 4 options for importing.

“Local File” can be used for importing files from a local file store. Files imported in this manner are not automatically refreshed, and without the use of a gateway, cannot be. This option is available for every workspace type and will not be discussed further. “Learn about importing files” is a simple help link, likewise available to all workspace types.

OneDrive – Business connects to the currently logged in user’s OneDrive for Business storage. This is the OneDrive that is associated with “School or Organization” account which is stored in Azure Active Directory.

OneDrive – Personal connects to a user’s personal, or consumer OneDrive account. This is the type of OneDrive that is accessed using a “personal” account (otherwise known as a Microsoft account, or MSA). The personal workspace is the only type of workspace that allows a connection to personal OneDrive content.

SharePoint – Team Sites allows files stored in any SharePoint Online library to be loaded. Files stored in SharePoint on-premises can be loaded into Power BI, but only through Power BI Desktop. This method is online only.

Data imported in this fashion will be updated hourly with the exception of “Local File”. This will also be true of any OneDrive or SharePoint source referenced below.

V1 workspace

A Power BI V1 workspace is connected to an Office 365 Group, and therefore backed by a SharePoint site. This is reflected in the Files experience in the service.

Here we see 3 import options. Local File, SharePoint – Team Sites, and “Learn about..” are exactly the same as with personal workspaces. However, both of the OneDrive options from the personal workspace are unavailable. The “OneDrive – XXXX” option is different, and bears some explanation.

In the image above, “Demos” is the name of the V1 workspace. Selecting this option will open the SharePoint library named “Documents” in the SharePoint site that is associated with this workspace and Office 365 group.

In my opinion, this option is poorly named, which leads to confusion. This container truly has nothing to do with OneDrive – it is a SharePoint library. We already have enough different “OneDrives” to keep track of, but I digress.

V2 workspace (not connected to a site)

The V2 workspace is not associated with a SharePoint site, and therefore, there is no Documents library to connect to. The option is instead replaced with the ability to connect to the user’s OneDrive for Business (OneDrive – Business) storage, as in the personal workspace. In essence, this experience is identical to the personal workspace experience minus the ability to connect to personal OneDrives.

V2 workspace (connected to a site)

Although a V2 workspace is not inherently connected to a SharePoint site, it can be manually connected to one. This restores the capability found in V1 workspaces, while being more flexible. The workspace is no longer bound to a specific site, but can be configured to work with any Modern SharePoint site. In addition, the same site can be bound to multiple workspaces.

The “Modern” distinction above is important. The SharePoint site itself must be backed by an Office 365 group, as that is how it is identified in Power BI.

Associating a workspace with a SharePoint site

With V2 workspaces, site connection is now a property of the workspace. To edit workspace properties, select either the workspace settings button in the ribbon, or the ellipsis beside the workspace in the workspace list.

The connection setting is in the advanced section, and is identified as the “Workspace OneDrive”.

The important thing to note here is that you do NOT enter the URL of the SharePoint site in this field. This field is expecting the address of the site in email format (i.e. demos@xxxx.com). All Modern SharePoint sites are bound to an Office 365 group, and the email address is the address of that group.

Get Data – File options for a V2 connected site

Once connected, the “Get Data” – “File” options will be much the same as with an unconnected workspace, but with the “OneDrive – SiteName” option added.

I still take exception to the name presented above; in my opinion it should be “Site – SiteName” or “SharePoint – SiteName site” and use a SharePoint option. However, once connected, files in the connected site can be imported easily into the Power BI service.

Usage

It is important to understand what the connected site is used for in Power BI. Connecting a site allows files stored in a SharePoint library to be either imported into the service (all supported file types), or connected to (Excel files). This feature does NOT allow Power BI content to be stored in a SharePoint library.

Connect to Power BI dataflows and datasets using Power BI Desktop with multiple accounts
https://whitepages.unlimitedviz.com/2019/06/connect-to-power-bi-dataflows-and-datasets-using-power-bi-desktop-with-multiple-accounts/ Tue, 04 Jun 2019

Power BI datasets and dataflows are the two native data sources for Power BI reports. Connecting to a dataset allows a report to be built against an existing Power BI dataset in place, while dataflows represent a source of data that has had transformations applied to it. When connecting to a Power BI dataflow, data is imported into a data model, but the connection to a Power BI dataset is a direct connection.

The two sources handle identity in drastically different ways, and this can lead to confusion when dealing with multiple accounts and tenants. This post is an attempt to help clarify this confusion.

Connecting to a dataset

To connect to a Power BI dataset, select the “Get Data” button from the ribbon, select the “Power BI” tab, select Power BI dataset, and finally the “Connect” button.

Connecting to a Power BI dataset

Next, if a user is signed into Power BI Desktop, a list of workspaces and their datasets is presented. If not, the user is prompted to sign in. The dataset can then be connected.

Selecting a Power BI dataset

The important thing to notice here is the list of workspaces itself. The list presented is a list of the workspaces available in the tenant belonging to the currently signed in user. It is the same list of workspaces that can be chosen as a publishing target. It should also be noted that the identity of the user is displayed in the upper right corner of the dialog box, and the identity can be changed directly from there.

Connecting to a dataset in a different tenant

The signed in user can be changed by selecting “Sign in” at the upper right of the Power BI Desktop client, or within the connection dialog itself. If the user signs into a different account (in a different tenant), a different list of workspaces and datasets will then be offered. The dataset source is hard linked to the currently signed in user. In this way, the Power BI dataset source behaves differently than all other data sources, which maintain connection credentials separately.

Connecting to a Dataflow

Connecting to a dataflow follows the same steps as a dataset, with the exception that the “Power BI dataflows” option is chosen.

Connecting to a Power BI dataflow

At this point, a Power BI data connection dialog will be shown.

Signing in to a Power BI dataflow

There is only one authentication option shown, because it is the only one that Power BI dataflows support.

Unlike datasets, dataflows are NOT linked to the currently signed in user. The connection is authenticated, not the current user. The “Sign In” button must be selected, and authentication completed to connect to a Power BI dataflow.

Once signed in, selecting the “Connect” button will display a list of workspaces that contain dataflows. Expanding the workspace and then the dataflow will expose a list of entities that can be imported into the Power BI data model.

Selecting a Power BI dataflow

The connection information for the dataflow is cached with Power BI Desktop, and subsequent connections to dataflows will not require the user to sign in. The same authentication credentials will be used.

It should be noted that unlike the dataset connection dialog, this one does not show the current credentials and does not allow those credentials to be changed. This makes the process of changing credentials to use dataflows in multiple tenants somewhat less than intuitive.

Connecting to a dataflow in a different tenant

With datasets, changing the currently signed in user will result in a different set of datasets being presented when the dataset option is chosen. This is different with dataflows. No matter what user is currently logged in, the cached credentials will be used.

This behaviour can be confusing when multiple tenants need to be accessed. With most other data sources, the cached credentials are linked with the specific data source. For example, when two different SQL databases are connected, Power BI caches two different sets of credentials.

To connect to dataflows in a different tenant, the current connection information needs to be cleared. This can be done with any data source, but it is particularly important to dataflows as it is the only way to switch connection credentials.

To clear the credentials for the dataflow, select “File”, “Options and Settings”, and then “Data source settings”. The Data source settings dialog will then be presented.

Power BI Data source settings
Clearing the credentials for a Power BI dataflow

Unlike most other data sources, which can have multiple entries in the list (one for each unique data source), there will only be one source for dataflows. It is named “Power BI dataflows”. For example, if the current instance of Power BI Desktop has authenticated to 3 different SQL servers, there will be three SQL Server connections in this list, but there will only be one for dataflows, no matter how many tenants have been connected.

To switch tenants, the current credentials must be either cleared, or edited. The cached credentials can be fully removed by selecting “Clear Permissions” or they can be changed by selecting “Edit Permissions”. If cleared, the user will be prompted for credentials the next time the dataflow option is selected. If edited, the new credentials will be stored.

Conclusions, and recommendations

It is possible to work with multiple tenants for both connected datasets and dataflows. However, the methods for doing so are completely different for either option. This can obviously lead to some confusion.

It is my opinion that this behaviour should be changed, and that the behaviour of connected datasets is the more intuitive. If the credentials for the currently logged in user were used for both types of connection, it would be much more intuitive, and also easier to use for report designers.

Creating complex real-time dashboards with Power BI
https://whitepages.unlimitedviz.com/2019/05/creating-complex-real-time-dashboards-with-power-bi/ Wed, 29 May 2019

If you are new to Power BI, or if you’ve worked with Power BI Desktop, you’re familiar with the concept of refreshing data. By default, Power BI caches data which needs to be refreshed on a periodic basis. Reports that use Direct Query datasets do not need to have their caches refreshed, but to see data changes, the report pages themselves need to be refreshed. If the requirement is to have visuals on screen refreshed without any user intervention at all, it is necessary to use a streaming dataset.

Unlike regular datasets, data is not “pulled” into a streaming dataset; rather, it is “pushed” in through the Power BI API, Microsoft Flow, Azure Stream Analytics, or third party services such as PubNub. This article aims to explore the various ways of working with streaming datasets.

Creating and populating a streaming dataset

Streaming datasets are created directly in the Power BI service itself or through the Power BI API. Unlike with other dataset types, there is no schema to read in from an external source.

To create a streaming dataset, choose “create” from a workspace menu in the upper right, and then select “Streaming dataset”.

Select the type of the streaming dataset. For both API and Flow, choose the API option, then select “Next”.

Streaming datasets contain only a small subset of the data types supported by regular datasets. These types are Text, Number and DateTime. Give the dataset a name and then create all the necessary fields.

It is important to choose the data types correctly, as there is no opportunity to transform fields into different types within reports or dashboards.

As fields are added, the JSON definition of the dataset is available. This can be copied and used by the data source that is pushing the data into the dataset. Note that Microsoft Flow can read the schema directly, so that copying is not necessary.

In order to use the techniques outlined below, it is critical to turn on the “Historic Data Analysis” switch. This switch changes the dataset from a streaming dataset to a push dataset.

With a streaming dataset, data is stored in a cache only long enough to display in a dashboard tile, and it expires very quickly. A push dataset retains the data permanently, up to a limit of 15 MB. In order to create more complex visuals, a report must be created, and a report requires a push dataset.

A push dataset is identified as “Hybrid” in the Power BI dataset list.

The most important option to select in the definition of the dataset is “retain historical data”. If this option is not selected, dashboard tiles will be able to display current data, but will not be able to display it over any significant time period. Data will be loaded into the dashboard cache for use with dashboard tiles, but when the cache expires, so does the data. In order to use reports of any kind with a streaming dataset, this option must be selected.

Once created, data can be added through the API, Microsoft Flow, Stream Analytics or PubNub.
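
To illustrate the API option, the service exposes a push URL for API-type datasets, and rows can be added with a simple HTTP POST. The following is a minimal Python sketch; the push URL and the field names belong to a hypothetical sensor dataset and are placeholders:

# Minimal sketch: push rows into an API (push) dataset through its push URL,
# which is shown on the dataset's "API info" page in the service. The URL and
# field names below are placeholders for a hypothetical sensor dataset.
from datetime import datetime, timezone
import requests

push_url = ("https://api.powerbi.com/beta/<tenant-id>/datasets/<dataset-id>" +
            "/rows?key=<dataset-key>")

rows = [{
    "SensorName": "Boiler-1",
    "Temperature": 71.3,
    "ReadingTime": datetime.now(timezone.utc).isoformat(),
}]

# The push endpoint accepts a JSON array of row objects
response = requests.post(push_url, json=rows)
response.raise_for_status()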

It should be noted that data stored in this way will only be available to Power BI, and only until the limits are reached. As the dataset fills, the oldest data will drop off. If there is any requirement to analyze the data over any significant amount of time, it is highly recommended that it also be stored in another location.

Adding a tile

Dashboard tiles can be created directly by opening a dashboard and selecting “Add tile” from the ribbon menu. Select the Real-Time Data tile, and then select the dataset to use.

Tiles created in this way are limited to several visual types. These types are:

  • Card
  • Line chart
  • Clustered bar chart
  • Clustered column chart
  • Gauge

There are a limited number of configuration options available to these tiles, depending on the tile type. Tiles created in this way will display data from the point of creation forward, according to the settings for the visual itself. These values will update in real time as data is added to the dataset, with no user intervention or refreshing required.

In order to display different types of visuals, or to customize them beyond what is available directly in the dashboard, it is necessary to create a report.

Adding a report in the service

Once created, the streaming dataset will appear in the service like any other. As with other datasets, selecting it from the dataset menu will open a new report canvas that can be saved. The report canvas in the service allows any of the Power BI visuals to be used with the streaming dataset.

Visuals on a report do not update automatically as data is pushed into the dataset, but these visuals can be pinned to a dashboard. Once pinned, the dashboard tile will update automatically, so in this way, practically any visual can be added to a dashboard and updated in real time. All that is necessary is to first create a report.

Creating a report in the service allows full fidelity access to the report canvas and all of the available visual types, but it does not allow for any editing of the data model. If things like calculated measures and columns are needed, it is necessary to create a report using Power BI Desktop.

Adding a report with Power BI Desktop

Power BI Desktop is able to connect directly to datasets in the Power BI service, and push datasets are no exception. To connect to a streaming dataset (or any other), select the “Get Data” button, select “More”, then select the Power BI tab. Finally, select the Power BI dataset option, then select “Connect”.

Next, select the workspace that contains the real-time dataset, and select the dataset itself. Selecting Load will establish a connection between the report and the dataset.

Because the report uses a direct connection to the dataset in the service, there is no data transformation opportunity, and Power Query cannot be used. Additionally, several DAX functions are not available; for example, most of the functions on the “Modeling” tab. It is also not possible to create calculated columns, but calculated measures can be created. Using Power BI Desktop, some relatively complex visuals can be created.

Once the report is published to the service, the visuals can be pinned to a dashboard, and once pinned, they will update automatically in real time.

Purging data

From time to time, it may be necessary to purge the data from the push dataset to reset the dashboard. To do this, the dataset can be temporarily changed from “push” mode to “streaming” mode. This will purge the stored data. Setting it back to “push” will start storing the data again.

To change the mode of the dataset, select the “Datasets” tab from the workspace menu, and then select the “edit” icon for the dataset that is to be changed.

The option that changes the mode is “Historic data analysis”. Switching it off changes the dataset to a streaming dataset, and switching it on changes it to a push dataset.

At first, it may seem that visualizing real-time data in Power BI is quite limited, due to the limited nature of the tiles created in dashboards. However, using push datasets along with reports built in Power BI Desktop allows relatively complex visuals to be viewed in real time.

Using Microsoft Graph Data Connect with Power BI Dataflows
https://whitepages.unlimitedviz.com/2019/05/using-graph-data-connect-with-power-bi-dataflows/ Tue, 07 May 2019

Microsoft Graph data connect (GDC) is a connector technology that allows an organization to extract data in bulk from the Microsoft Graph. Using Azure Data Factory, extraction jobs can be scheduled that can securely extract Graph data while respecting an organization’s data control policies. On a scheduled basis, GDC stages the data behind the scenes and stores it in an Azure storage account. The storage can be Azure Blob storage, Azure Data Lake Gen 1, or Azure Data Lake Gen 2. This article describes a procedure to process the output from GDC and store it in a Power BI dataflow.

Details on how to configure GDC can be found here, and an excellent video tutorial here.

Azure Data Lake Gen 2 Storage

Azure Data Lake Gen 2 (ADLG2) brings a hierarchical namespace to Azure Blob storage. This storage system is designed for big data analytics and is highly cost effective. It is one of the three data sink (destination) options for GDC, and it is the required storage system for the “bring your own”, or external storage option of Power BI dataflows. Given that an ADLG2 account is required for Power BI dataflows, it is logical to use the same account as the GDC data sink, but it is not required.

In order to use an ADLG2 account for external storage with Power BI dataflows, it must be in the same data center as the Power BI tenant. The data center for a tenant can be determined by navigating to the Power BI web application, selecting the “?” icon in the upper right, and then selecting “About Power BI”.

In order to be able to use an external storage account for Power BI dataflows, it MUST be created in the data center listed in “Your data is stored in”.

Connecting Power BI to ADLGen2 Storage

When a dataflow is created in Power BI, it is stored in an ADLG2 storage system managed by Microsoft. If Power BI is the only platform that will access the data, this is perfectly adequate, but an organization may wish to use the data with other tools. If this is the case, a Power BI tenant can be connected to an ADLG2 account that is accessible to other tools. A workspace administrator can then decide to have all the dataflows in that workspace store their data in the custom storage account. These are known as “external dataflows”. Dataflows are all stored in Common Data Model (CDM) folders which are described in detail here.

Detailed instructions on configuring external dataflow storage for Power BI can be found here. The process consists of several steps. It should be noted that as of this writing, external dataflows are in preview, and these steps could change.

  1. If one does not already exist, create an ADLG2 account in the same tenant as Power BI
  2. In Azure, grant the Reader role to the Power BI service identity for the account in #1
  3. Create a file system for Power BI. The file system MUST be named “powerbi”
  4. Using Azure Storage Explorer, grant file system access to three Power BI service principals: Power BI Premium, Power BI Service, and Power Query Online (see the above link for details)
  5. Connect the Power BI tenant to the ADLG2 account
  6. Enable workspace administrators to assign workspaces to external storage.

As of this writing, step #5 above is irreversible, so care should be taken when naming the account.

Once configured, a workspace administrator can assign their workspace to their external storage. This setting is a property of the workspace, and can be accessed via its settings with the “Storage” tab.

Once this setting has been enabled, all dataflows will be stored in external storage. A folder is created within the file system created in step #3 above with the name of the workspace. Each dataflow in the workspace will be added within that folder, and each entity of the dataflow as a folder of its own. The dataflow folder will contain a file named model.json which describes the entities, and the entity folders contain multiple csv files which house the data itself. Within Azure Data Explorer, the structure appears as below.

  1. Azure Data Lake Gen 2 account (connected to the Power BI tenant)
  2. File system created for Power BI dataflows (always named powerbi)
  3. Workspace folder
  4. Dataflow folder
  5. JSON file describing the dataflow
  6. Entity folder containing entity data

Once configured, Power Query Online (part of the process of creating a dataflow) can be used to acquire and transform data. The data will be stored in these folders according to the Common Data Model specification and can be accessed by other applications. However, the reverse of this is also true. Any CDM folder that is stored in the Power BI connected file system can be connected to Power BI as an external dataflow. The process for doing this is described here. The order of operations is important. The user that will make the connection needs to be granted access to the CDM folder before it is populated with data.

An external dataflow is read only with respect to Power BI (Power BI only sees the data; it does not transform it). The goal is therefore to transform the data created by Graph data connect into the CDM format. Azure Databricks provides support for doing so.

Azure Databricks

Azure Databricks is a suite of serverless big data technologies that encompasses Hadoop, Apache Spark, SQL, Python and Scala. Databricks clusters can be created and used when needed, and discarded or suspended when not. A discussion of how to create and use Databricks is beyond the scope of this post, but there is a great deal of documentation on it here. In addition, Microsoft provides a free 14-day trial of Azure Databricks.

Databricks is particularly useful in this scenario, as it has libraries that support Azure Data Lake Gen 2, and libraries that support the Common Data Model. Databricks notebooks can be called from Azure Data Factory, so that when a GDC extraction job is completed, the resulting files can be processed with Databricks to populate the CDM folders. 

An excellent tutorial on using Databricks with dataflows and CDM folders can be found on GitHub here. The scenario in the tutorial involves using dataflows to produce data instead of consuming it, but it does cover off several important concepts. The tutorial is part of the project that includes the CDM library for Databricks which is used to transform GDC data into CDM folders.

As of this writing, the CDM library requires a Databricks 4.3.x-scala2.11 cluster. This is an older configuration that is not available to the standard user interface when creating a Databricks cluster. Subsequent versions of the CDM library will most likely support newer clusters, but at present, it is necessary to take a few additional steps during cluster creation.

From the cluster creation UI, run window.prefs.set("enableCustomSparkVersions", true) in the browser debug console, then navigate to the cluster page and specify the image tag below. Refresh the browser, and 4.3.x-scala2.11 will be listed as a custom version.

Once a cluster has been created, and the CDM library loaded into it, a notebook can be created to process the GDC data. Processing consists of four main steps: connecting Databricks to ADLG2, reading the JSON files from GDC, extracting the desired data into dataframes, and writing the data out to CDM folders.

Connecting Databricks to ADLG2

The recommended way to connect Databricks to ADLG2 storage is through a Service Principal. The same principal that GDC itself uses can be used, and if the same ADLG2 account is being used, no further configuration is necessary.

Databricks will need to read from the file system that houses the GDC data. Several lines of code (Python) in a Databricks notebook will establish the required connections:

# Account name and access key for the ADLG2 storage account (placeholder values)
DLG2AccessKey = "Storage Account Access Key"
ADLG2AccountName = "ADLG2 Account"

# Register the account key with Spark, and build the abfss URI for the GDC file system
spark.conf.set("fs.azure.account.key." + ADLG2AccountName + ".dfs.core.windows.net", DLG2AccessKey)
spark.conf.set("fs.azure.createRemoteFileSystemDuringInitialization", "true")
filesystem = "abfss://GDCFileSystem@" + ADLG2AccountName + ".dfs.core.windows.net/"

Once connected, files in the GDC folder can be listed using the built in dbutils library:

dbutils.fs.ls(filesystem + "/GDCFolderName")

While the above and below examples show account names and keys being explicitly defined in the notebook, this is not recommended beyond any testing or demonstration environments. Instead, it is recommended to store such secure strings in Azure Key Vault and retrieve them at runtime. For instructions on how this is done, see the document Secret Scopes.
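
For example, a minimal sketch of retrieving the key from a Databricks secret scope; the scope and key names here are hypothetical:

# Hypothetical example: retrieve the storage key from a Key Vault backed
# secret scope instead of hard-coding it (scope and key names are placeholders)
DLG2AccessKey = dbutils.secrets.get(scope="adlg2-secrets", key="storage-account-key")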

Reading JSON Files

Databricks can read all JSON files in a folder (as well as other text-based formats) into a dataframe. A dataframe is an in-memory table that can be hierarchical and queried via standard SQL commands. The schema of the dataframe will be implied through the structure of the JSON files contained within. To load all of the GDC JSON files from a particular folder into a dataframe, the following line of Python can be used:

contactbasedf = spark.read.json(filesystem + "/Contacts Folder")

The read is recursive, which means that subfolders are interrogated as well. GDC folders typically contain a metadata folder with files of differing schemas than the data files themselves. For this reason, it is a good idea to move the data files to a dedicated folder before reading them into a dataframe. This can be done with the dbutils.fs.mv command, as sketched below.
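
A hypothetical example of such a move, assuming the data files were staged in a subfolder named "DataFiles":

# Move only the GDC data files into a dedicated folder so that the metadata
# files are not read into the dataframe (folder names are placeholders)
dbutils.fs.mv(filesystem + "/GDCFolderName/DataFiles", filesystem + "/ContactsData", recurse=True)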

Extracting the desired data

Once the files have been read into a dataframe, the dataframe can be saved to a temporary table. This table can be queried through standard SQL commands. For example, the query below creates a temporary table from the initial dataframe (contactbasedf) that was created by reading the JSON files produced by GDC for organizational contacts. The relevant details are then queried and saved into another dataframe, named df1 in this case.

contactbasedf.createOrReplaceTempView("contactbaseTemp")
df1 = spark.sql("SELECT AssistantName, Birthday, BusinessHomePage, CompanyName, Department, DisplayName, GivenName, Initials, JobTitle, Manager, MiddleName, NickName, PersonalNotes, Profession, Surname, Title from contactbaseTemp")

Writing to CDM folders

Once the CDM libraries are loaded into a Databricks cluster, writing data to CDM folders is a relatively simple method call from a dataframe. The call itself requires several parameters, and those parameters are:

  • cdmModelName – The name of the Model (dataflow) that houses all entities
  • entity – the name of the entity within the dataflow (a dataflow can contain multiple entities or tables)
  • cdmFolder – The folder in ADLG2 to save the model.
  • appId – The service principal ID of an application with Blob Contributor access to the ADLG2 account
  • appKey – The secret key for the appId specified above
  • tenantId – The tenant ID for the ADLG2 account

Using the dataframe defined above, the contents of the dataframe can be written out to the CDM folder with the following (Python) code:

# Service principal, tenant, and workspace details (placeholder values)
appID = "Service Principal ID"
appKey = "Service Principal Key"
tenantID = "Tenant ID"
Workspace = "Workspace Name"
cdmModelName = "AllContacts"
outputLocation = "https://" + ADLG2AccountName + ".dfs.core.windows.net/powerbi/" + Workspace + "/" + cdmModelName

# Write the df1 dataframe out to a CDM folder using the CDM library
(df1.write.format("com.microsoft.cdm")
                   .option("entity", "Contacts")
                   .option("appId", appID)
                   .option("appKey", appKey)
                   .option("tenantId", tenantID)
                   .option("cdmFolder", outputLocation)
                   .option("cdmModelName", cdmModelName)
                   .save())

The above code will output the contents of the dataframe to an entity named “Contacts” in a model named “AllContacts” stored in a folder named “AllContacts” within the workspace folder specified in the “Workspace” variable.

Creating an external Dataflow

Once the GDC data has been written to a CDM folder, it can be connected to Power BI as an external dataflow. In order to do so, as mentioned above, the user making the connection must have explicit access to the model folders.

From a Power BI V2 workspace (V1 workspaces are not supported), go to the dataflows tab, and select Create – Dataflow from the toolbar. If Power BI has been connected to the ADLG2 storage, and the workspace has been configured for external storage, the “Attach a Common Data Model folder” option should appear.

Selecting “Create and attach” brings up the Attach Common Data Model folder dialog box, where two items must be entered.

The Name of the dataflow is the name with respect to Power BI. It can be completely different than the name of the model folder, or the internal name of the model created above, but it’s likely a good idea to keep it consistent. The CDM folder path is actually the absolute path to the model.json file that describes the model, and it’s vital that model.json be included at the end of the path. Failing to do so will result in an error.

Finishing Up

Once completed, Power BI Desktop can be used to connect to the external dataflow, just like any other dataflow. The only difference is that external dataflows are not refreshed in the Power BI service, but will be updated by Databricks. The same Azure Data Factory jobs that extract data from Graph data connect can be used to call into the Databricks notebooks when the data has been extracted.

If you are interested in a product that leverages the data produced by Graph data connect, I would be remiss if I did not suggest our tyGraph for Exchange, which is currently in preview. It combines all of the technology listed above with a rich set of reports in concert with other Office 365 workloads. If you are interested, please contact me directly, or email sales@tygraph.com.

Finally, for an in depth conversation with Abram Jackson and Tyler Lenig from the Graph data connect team, check out their recent appearance on our BIFocal podcast.

Using Power BI to Report on Location Columns in SharePoint
https://whitepages.unlimitedviz.com/2019/01/using-power-bi-to-report-on-location-columns-in-sharepoint/ Tue, 15 Jan 2019

At the end of 2018, SharePoint received something that we haven’t seen for a long time – a new column type, Location. Location columns will look up an address and geocode it as it is being entered in a form. They will also separate all the constituent parts of the address, as well as the latitude and longitude, into separate display-only columns. These columns are used primarily in views but can also be used in reports. Given that I put together a series of posts recently on using Power BI to work with complex SharePoint column types, I was interested in how to report on this new column type. As it turns out, it is relatively straightforward.

This post will delve into the nuances involved with reporting on this new SharePoint Location column in Power BI.

The Location Column

To begin with, the Location column is a “modern” SharePoint column. This means that it can be added to a list via the Add column button in the list view, but NOT through the list settings page as other column types are.

List view creation

List settings creation

If the Add column button does not appear for you, you may be using a “classic” SharePoint site, or you may be using one or more column types that are not supported in “modern”, which causes a classic view to be used. Removing these columns from the view is often enough to light up the Add column button.

Once created, you will have the option to add any or all of the address components to the view. These are display elements only and will be available to reports (or other views) whether or not they are added to the view at creation time.

Once created, entering data is as simple as typing in an address, or the name of a location into the column. The typeahead feature will attempt to find the location and fill in the details.

Once selected, the full address will be filled in, and all the constituent address properties will be populated. If they are on the view, the list can be sorted, filtered, etc. by these elements.

Reporting on the Location Column

Internally, the location is saved as a BLOB of JSON content within a column. When the column itself is used in the view, its friendly display format is displayed. When constituent items are displayed (City for example), their values are extracted from the column and displayed as discrete elements. For other SharePoint column types, this can introduce complications, but the developers of the location feature seem to have had reporting in mind when it was built. Consider the following list that contains a Location column named Location:

Loading the Data

We first launch Power BI Desktop, select “Get Data” and then choose SharePoint Online list. We are then prompted for the URL of the SharePoint Site. The dialog is titled SharePoint lists, but the value is the URL of the site, NOT the list itself. Once this is entered, we are prompted for credentials if we haven’t connected to this site before. After entering credentials, we can select the list that we want to report on. In our case, it’s named Properties. We select it, and then click on the Edit button.


Once the data loads in, one of the first things that you’ll notice is that there are a lot of columns to choose from, and it’s a good idea to remove the columns that you don’t need. We can do this by right clicking on the desired column titles and selecting Remove.

Using FieldValuesAsText

With all other complex SharePoint column types, the FieldValuesAsText column will retrieve the textual representation of required column values. This is the way that the column value appears in a view. However, it appears that the Location column type is an exception to this rule. When the Location column is used, the JSON value itself is returned, which renders FieldValuesAsText relatively useless. This value is also available using the Location column value itself. The steps for extracting FieldValuesAsText are covered in previous posts in this series. Given that ultimately this will not be a good approach for the Location column, we won’t go into it further here.

Field value and value extracted from FieldValuesAsText

Using DispName

The text value of the location column is instead available through the derived DispName column.

With Power BI, it is possible to transform the JSON data contained in the original column, or the extracted FieldValuesAsText column. However, all of the extracted properties are available through more efficient means. The FieldValuesAsText column can therefore be ignored for the purposes of reporting on Location columns. In addition, in most cases, the original column (Location in this case) can be removed, and the DispName column should be renamed in its place.

This behaviour is inconsistent with the behaviour of other complex SharePoint fields. It does not affect capability, but in the interests of consistency, my strong suggestion would be for the SharePoint team to eliminate the DispName field, and leverage FieldValuesAsText for the text conversion in the data feed.

Using Location Components

All the text components of the location column are separated out automatically as columns in Power Query. They can be used as any other column, and no additional action is necessary.

Automatically extracted location components

Using GeoLoc

Power BI will automatically geocode data at the time the report is rendered. The text components can therefore be used by the reporting engine to place data on a map. However, geocoding is a relatively computationally expensive operation, especially if there is a lot of data or a poor internet connection. In addition, some visuals may require the use of specific latitude and longitude co-ordinates. These co-ordinates are available through the GeoLoc column if they are needed, but they do need to be extracted.

Within Power Query, locate the GeoLoc column, and click on the Expand icon at the right of the column header.

Select both the Latitude and Longitude columns, and deselect Use original column name as prefix. In my testing, neither Altitude nor Measure returns any meaningful data, so they can be safely ignored; however, this could change in the future.
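In M terms, these two steps amount to a record-column expansion followed by a type change. A minimal sketch, continuing the illustrative query above:

    let
        Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/RealEstate", [ApiVersion = 15]),
        Properties = Source{[Title = "Properties"]}[Items],
        // Expand only Latitude and Longitude from the GeoLoc record column
        Expanded = Table.ExpandRecordColumn(Properties, "GeoLoc", {"Latitude", "Longitude"}, {"Latitude", "Longitude"}),
        // Type the co-ordinates as decimal numbers so they can be categorized as Latitude/Longitude later
        Typed = Table.TransformColumnTypes(Expanded, {{"Latitude", type number}, {"Longitude", type number}})
    in
        Typed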

At this point, we are almost ready to do some reporting. Once all the required columns have been shaped, and their data types set, select the Close and Apply button from the ribbon.

Reporting

Before using the location data on a map, it is important to categorize each of the components so that Power BI knows how to use it on a map. To categorize a data field, select it from the fields list, then select the Modeling tab from the ribbon and click the Data Category dropdown.

The category for most of the fields is obvious, but below is a table of recommended choices. In addition, both the longitude and latitude fields need to be set to the Decimal Number type.

Field            Category
City             City
CountryOrRegion  Country/Region
Latitude         Latitude
Location         Place
Longitude        Longitude
PostalCode       Postal Code
State            State or Province
Street           Address

Once categorized, the data can be placed on a map according to any desired parameters. In this case, the image below shows a map of listings colour coded by asking price range.

The resulting report can then be published to the Power BI service, and then embedded into a SharePoint page through either the Power BI web part, or secure embedding if so desired.

Happy reporting!

Get the most out of Power Query in Power BI Dataflows https://whitepages.unlimitedviz.com/2018/11/most-power-query-power-bi-dataflows/ Fri, 16 Nov 2018 17:19:19 +0000

Power BI dataflows are the first place that many users will encounter the new Power Query web-based interface. Until now, Power Query has been restricted to Power BI Desktop and Excel. This new web interface is, well, new, and it doesn't contain all of the capabilities of the more mature client-based interface. The good news is that you can take advantage of both the reusability of dataflows and the maturity of the Power BI Desktop interface.

Two Interfaces

A quick glance at the user interfaces for Power Query on the web and in Power BI Desktop reveals the feature difference.

Power Query editor in dataflows

Power Query editor in Power BI Desktop

The Desktop editor has a full ribbon interface with a wide array of capabilities, while the web interface has a simple button bar with a subset of features. In the images above (which show the exact same set of queries in the two interfaces) it is easy to see that the combine binaries, or expand tables feature is not there for the “Content” column.

Advanced editor

The key to bringing all of the capabilities of Power BI Desktop to Power Query in dataflows is the Advanced editor. Power Query is, at its essence, an interface that is used to construct queries using the M language. This core code is available to you in both Desktop and dataflows.

In Desktop, the Advanced editor is available from the ribbon both in the Home tab and in the View tab. In the web based editor it is available by right clicking on an entity (query) and selecting Advanced editor.

 

Advanced Editor in Power BI Desktop - Home Tab
Advanced Editor in Power BI Desktop - View Tab
Advanced Editor in dataflow editor

The code revealed by this editor can appear rather daunting for a complex query, but all you really need to understand is how to copy and paste it. Build up whatever query you need using Desktop (or Excel!), open the Advanced editor and copy it to your clipboard. Then, either create a new dataflow or add an entity to an existing dataflow using the Blank Query data source. Once the editor is open, right click on the query, open the Advanced editor, and paste the query from your clipboard.
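As a concrete (and hypothetical) example, a simple query built in Desktop against the Listings list used elsewhere in this series might look like the following in the Advanced editor; the site URL and column names are assumptions for illustration.

    let
        Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/Sales", [ApiVersion = 15]),
        Listings = Source{[Title = "Listings"]}[Items],
        // Keep only the columns that the entity actually needs
        Selected = Table.SelectColumns(Listings, {"Id", "Title", "Created", "Modified"})
    in
        Selected

The same text can then be pasted, unchanged, into the blank query entity.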

Done. Well, almost. While both environments execute M code, there are a few differences to be aware of.

Some functions may not work

There are subtle differences between the M engine in Excel and the one in Power BI Desktop. This approach works very well between those two products, but occasionally an incompatibility can crop up. This is no different with the M engine for dataflows. If you do encounter an incompatibility, try achieving the same result a different way in Desktop and try again.

Data Sources

Dataflows do not support all of the data sources that Power BI Desktop and Excel do. This will of course change over time but, as of this writing, dataflows are in preview and currently support 24 data sources, compared to the almost 100 in Power BI Desktop.

Data sources supported in dataflows – Nov 2018

Queries pasted into a dataflow that use an unsupported data source will therefore likely not work. However, there's nothing stopping you from trying; I've been pleasantly surprised by a few.

Functions

Functions ARE supported in dataflows. They can be created using a blank query (and copying a function in from Power BI Desktop). However, if that's all that you do, you may receive an error like “This dataflow contains computed entities, which require Premium to refresh” or “We cannot convert a value of type Table to Function”.

You do NOT need Premium to use functions, but a function must have its “Enable Load” option disabled. This is done by right clicking on the function and toggling the Enable Load item to off.

Turn off Enable Load for a function
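For illustration, a function pasted into a blank query might look like the following in the Advanced editor; the function itself is a hypothetical example, not one from the original posts.

    let
        // A trivial function: applies a tax rate to a price, defaulting to 13%
        AddTax = (price as number, optional rate as nullable number) as number =>
            price * (1 + (if rate = null then 0.13 else rate))
    in
        AddTax

Once the Enable Load toggle is off for this entity, other entities in the dataflow can invoke AddTax without triggering the Premium requirement.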

Computed Entities

Computed entities (or calculated queries) are supported by dataflows but because the type of calculation can’t be predicted, they require the isolation that dedicated capacity (Premium) provides.

Referenced tables are an example of computed entities. If you are in the habit of designing a base query that does not load data, and then creating variants of that table that do load data in your reports, you will need to change that design in dataflows in order to avoid the Premium requirement.

In Power BI Desktop, this is the difference between Duplicate and Reference when creating a new query from a base query. Duplicate will simply create a new query with the same steps, while Reference will create a computed entity. If you want to avoid Premium, you’ll need to use Duplicate.

New query create options in Power BI Desktop
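To make the distinction concrete: in the Advanced editor, an entity created with Reference is simply a query whose source is another entity, as in the sketch below (BaseListings is a hypothetical entity name). A Duplicate, by contrast, repeats the full set of steps, so its source remains the external data source.

    // A referenced entity - its source is another entity in the same dataflow,
    // which makes it a computed entity and triggers the Premium requirement
    let
        Source = BaseListings
    in
        Source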

The Power Query capabilities in dataflows are more powerful than they might appear at first glance. Power BI Desktop is the key to unlocking them, unless you're already a total wizard at writing M code. Even then, the new editing features in Power BI Desktop likely put it over the top as an editor.

For now we need to cut and paste, but I would love to see a day when Power BI Desktop could connect directly to a dataflow and edit it in place.

Use Power BI dataflows to warehouse SharePoint list data https://whitepages.unlimitedviz.com/2018/11/power-bi-dataflows-sharepoint-list-data/ Thu, 15 Nov 2018 22:00:05 +0000

Reporting on SharePoint data has been a requirement for a long time, and there have been many approaches to fulfill this need. Custom web parts, Data View web parts and SSRS direct connected reports have historically been some of the solutions, but they all suffer from the same problem. If you have any serious amount of SharePoint data, you’ll quickly begin to bump into capacity limits and performance limitations, and in some cases, you can impact the performance of the overall system. In order to avoid this problem, it is necessary to warehouse SharePoint data first, as I argued in this post from 2012.

Once your list-based data is in a relational database, the performance issue is taken care of. However, the means of getting it moved there have traditionally been problematic. For a long time, there was a CodePlex project called the SharePoint List Source and Destination. This solution provided read and write access to SharePoint lists from SQL Server Integration Services (SSIS). Unfortunately, it was last updated in 2012, it was unsupported by Microsoft, and it did not support authentication for Office 365. This of course rendered it useless for use with SharePoint Online. In 2015, SQL Server Integration Services got an OData source, and given that SharePoint lists have OData endpoints, this became a viable option, particularly given that it did support Office 365 authentication. The OData connection from SharePoint did however have some limitations as well.

For cloud scenarios, Power BI has emerged as a very competent way of reporting against SharePoint data. It has native connectors for SharePoint list data, both on premises and in the cloud, and Power BI reports can be hosted in the cloud through the SharePoint Power BI web part. On premises, the same can be done with Power BI Report Server. The structure of Power BI reports means that the data is cached in a data model, so reports are not run directly against the list data source. This avoids the performance issues listed earlier.

Earlier this year I published a series of articles detailing how to do exactly this. The only issue with this approach is that the data shaping and preparation is always specific to a single report. If I have 5 different reports that use one list, I must query and shape that data 5 different times – one for each report. This is where Power BI dataflows come in.

In this context, dataflows are essentially a data warehousing layer with transformation capability. Instead of each report connecting back to a source list, the dataflow connects to the list, shapes the data with Power Query online and stores it in a data lake. The Power BI reports then connect to the dataflow as their data source. Transformation and storage only need to happen once.

As of this writing, dataflows are in public preview, so be warned – some things could change.

Creating a dataflow

Creating a dataflow from a SharePoint list is relatively straightforward. In the examples below, we will work with the same sample list used in the series of articles on SharePoint data from earlier this year. To begin, open Power BI and navigate to a workspace (your personal workspace will not have dataflows). Click on the workspace name in the navigation pane and the dataflows tab should be available.

To create a new dataflow, Select the Create button, and click dataflow.

Select the Add new entities button and the data source selection will appear. SharePoint list and SharePoint online list are both options. SharePoint list is for on-premises list data, which works with the On-Premises Data Gateway. In our case we are working with SharePoint Online, so we select the SharePoint Online source.

At this point, you enter the URL for the site that you want to connect to (NOT the URL for the list) and select the Next button. Power BI will connect to the site and you can then select which list you want to work with. In our case, we need our Listings data, so we select that list and click Next.

Finally, we’re in the Power Query editing screen. This should be quite familiar to those used to working with Power Query in either Power BI Desktop or in Excel. From here you can select the columns that you want to include in the dataflow.

Although this experience is similar to building queries in Power BI Desktop, there are a few noticeable differences. Queries in a PBIX file are referred to as queries, but within a dataflow they are referred to as entities. These entities can be custom, or they can be mapped to Common Data Model object types. The Power Query web editor also does not include the full featured editing ribbon found in Power BI Desktop, but instead has a button bar. Many of the editing options available in Power BI Desktop are not available in the Power Query web experience.

If you have read through some of my earlier articles on working with SharePoint data in Power BI, you will notice that there are fewer columns available than we see in the Desktop Power Query editor. Most notably for those of us working with SharePoint data, the missing columns include FieldValuesAsText, which is the convenient way of retrieving the text representation of complex SharePoint list column types. At first glance, this would appear to be quite limiting.

However, by right-clicking on the entity name, we can access the Advanced Editor.

This Advanced editor allows you to write queries by hand using the M language. The side benefit of the Advanced editor is that it makes queries portable between platforms: Desktop, Excel, and now dataflows. You can therefore build your queries in Power BI Desktop using its fully functional editor and then copy and paste them into a new blank query in the dataflow editor. Using this approach allows you to take advantage of the SharePoint helpers built into Power BI Desktop, as the FieldValuesAsText column and other helper columns are available. Using this technique, the Listings example can be transformed into several normalized tables in the dataflow, as sketched below.
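For example, a query built in Desktop that extracts text values from FieldValuesAsText might look like the sketch below before being pasted into a dataflow entity; the site URL and the expanded column name are assumptions for illustration.

    let
        Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/Sales", [ApiVersion = 15]),
        Listings = Source{[Title = "Listings"]}[Items],
        // FieldValuesAsText holds the text rendering of complex columns; expand only the ones needed
        Expanded = Table.ExpandRecordColumn(Listings, "FieldValuesAsText", {"Neighbourhood"}, {"Neighbourhood"})
    in
        Expanded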

Click on Done to save your entities, and then the Save button to save your dataflow. You will be prompted to Refresh Now, which is a good idea because, by default, the dataflow has no data contained within it. To keep the data up to date, you need to set a refresh schedule by clicking the schedule refresh icon under actions for the dataflow in question. From here, you schedule data refresh in the same manner as you would with a Power BI report.

Using the dataflow

Once data is loaded into the dataflow, it becomes a source for a Power BI report. You must use Power BI Desktop to create this report; there is no way to connect a report to a dataflow in the pure web interface. Start Power BI Desktop and select “Get Data”. Choose the Power BI blade and then Power BI dataflows.

After clicking Connect, you will be presented with a set of Power BI workspaces that contain dataflows. Opening the workspace will allow you to open the dataflow and select the desired entities.

Once loaded, the report can be built just like any other. When it is refreshed, it will be refreshed from the data stored in the dataflow, NOT directly from the SharePoint list. It is therefore important to keep the dataflow itself up to date.

Any number of reports can be created from the dataflow. Instead of having all the transformation logic tied up within a single report, dataflows allow it to be centralized and consistent. With a little work, these transformations allow you to work with your SharePoint data just as though it were relational. Power BI dataflows really are the best way to perform data warehousing with your SharePoint data, whether your SharePoint is online or on-premises.

Business Intelligence in SharePoint 2019 https://whitepages.unlimitedviz.com/2018/08/business-intelligence-sharepoint-2019/ Thu, 02 Aug 2018 15:24:13 +0000

The recent availability of the SharePoint 2019 public preview, and the supporting information that accompanies it, has clarified the status of Business Intelligence features in SharePoint 2019. This release, with one exception, is the culmination of the process of decoupling BI from SharePoint that began in SharePoint 2016 with the removal of Excel Services. This decoupling strategy was initially articulated in the fall of 2015 with the document Microsoft Business Intelligence – our reporting roadmap, which stated that SQL Server Reporting Services (and not SharePoint) was to be the cornerstone of Microsoft's on-premises BI investment.

The embedded BI features now run with SharePoint as opposed to on SharePoint. These changes do, however, require some planning and effort on the part of those who have already invested in the current platform and wish to move forward on-premises. With this in mind, and given that concise information around these changes is a bit difficult to find, I wanted to put this reference together. This post does not get into migration strategies, only the changes themselves.

The source for much of the below comes from discussions with the relevant product teams, and official information is found (today) in two primary places: the document What's deprecated or removed from SharePoint Server 2019 Public Preview, which was published concurrently with the SharePoint 2019 public preview, and Christopher Finlan's presentation at the Microsoft Business Application Summit 2019 entitled Self-service BI and enterprise reporting on-premises with Power BI Report Server.

A summary of the changes to BI features, and a brief discussion of each is below.

Feature                                         Status
SQL Server Reporting Services Integrated Mode   Removed
Power View                                      Removed
BISM file connections                           Removed
PerformancePoint                                Deprecated
PerformancePoint – Decomposition Trees          Removed
PowerPivot for SharePoint                       Removed
Scheduled workbook data refresh                 Removed
Workbook as a data source                       Removed
PowerPivot management dashboard                 Removed
PowerPivot Gallery                              Removed

SQL Server Reporting Services Integrated Mode

SSRS Integrated mode was deprecated in November 2016 and was not a part of SQL Server 2017. However, organizations could continue to use SSRS versions from 2016 and prior with SharePoint 2016. This is not supported in SharePoint 2019, which means that integrated mode isn't an option at all with SharePoint 2019. The good news is that the recent Report Viewer web part fully replicates the capabilities of the SSRS Integrated mode web part.

Power View

Power View was a feature of SSRS Integrated mode and is available in Excel. When Excel Services was removed in SharePoint 2016, Power View in Excel required SSRS Integrated mode to work. Both supporting platforms are now gone, and thus Power View is not supported in SharePoint 2019.

BISM file connections

The BISM file connection type was used by Excel and SSRS to connect Power View reports to SQL Server Analysis Services data sources. This connection type has been removed along with Power View.

PerformancePoint Services

PerformancePoint is a combination of capabilities that includes dashboarding, scorecards, and analytic reports. Very few new features have been added to PerformancePoint in the last few versions, and this one even loses a few. Many of these features are also available in Power BI and Power BI Report Server, and Microsoft has taken the decision to deprecate this product. This gives customers with a PerformancePoint investment time to migrate their assets, but is a clear indication that it will also be removed in a subsequent release.

PerformancePoint – Decomposition Trees

The Decomposition Tree feature in PerformancePoint came originally from ProClarity – one of the three products that made up the original PerformancePoint product. These visuals are based on Silverlight, and have been removed from the product accordingly.

PowerPivot for SharePoint

PowerPivot for SharePoint is not supported in SharePoint 2019. PP4SP was originally a combination of two technologies: a specialized version of SQL Server Analysis Services, and a SharePoint service application. In the 2016 version, these two parts were split; the SSAS component became a part of the SQL Server installation media as SSAS PowerPivot mode, and the service application continued under the name PowerPivot for SharePoint. To be clear, it is the second of the two that has been removed. SSAS PowerPivot mode continues to be an important component and is used by Office Online Server for working with Excel files that have embedded models.

Scheduled workbook data refresh

This feature allowed for the automatic refresh of the data stored within Excel workbooks in SharePoint. It required a PowerPivot data model to work, but the refresh operation would refresh all connected data in the workbook on a scheduled basis. This was a component of PowerPivot for SharePoint. It has recently been announced that this capability will soon be available in Power BI Report Server.

Workbook as a data source

With PowerPivot for SharePoint deployed, it is possible to use the data model in a published Excel workbook as the data source for another workbook. This feature will no longer be available, and there are no plans at present to reintroduce it.

PowerPivot Management Dashboard

Originally a part of SharePoint Central Administration, the management dashboard provided status updates on all PowerPivot for SharePoint operations. Being a part of PowerPivot for SharePoint, this has been removed accordingly.

PowerPivot Gallery

The PowerPivot Gallery is a modified SharePoint Document library form that displays worksheet thumbnails contained in published Excel workbooks. This component is Silverlight based, and part of PowerPivot for SharePoint. It has been removed accordingly.

Power View, Decomposition trees, and the PowerPivot gallery were the last remaining features that carried a Silverlight dependency. SharePoint 2019 no longer has any Silverlight dependencies.

These changes are significant for anyone with an existing Business Intelligence investment who plans to move to SharePoint 2019. I intend to write more about migration strategies and will be addressing these topics at various conferences in the future.

Using Excel Files with Power BI Desktop and SharePoint https://whitepages.unlimitedviz.com/2018/07/excel-files-power-bi-desktop-sharepoint/ Tue, 31 Jul 2018 12:51:32 +0000

As discussed in a previous post, Working with Excel Files in the Power BI Service, Excel and Power BI have a rich, complex relationship. Power BI Desktop is the primary design tool for Power BI, and it has many feature overlaps with Excel as an analytic tool. Excel can be used both as an analytic tool and as a data source, and the structure of the Excel file will dictate the way that Power BI Desktop can be used with it. If Excel is being used as an analytic tool (i.e. connected to data), the appropriate items in the file can be imported into Power BI Desktop. If it is being used as a data source (data in worksheets), Power BI Desktop will connect to it and use its data to build a model. This post attempts to articulate the nuances of both scenarios.

Importing from Excel

Power BI Desktop has an unfortunate name in my opinion. It is a design tool and is not meant to replicate the capabilities of the Power BI service on the desktop, as the name might suggest. A better name for the product would I believe be Power BI Designer. Its purpose is to connect to and transform data (Power Query), build data models for Analysis (Power Pivot) and build reports (report designer).

Used as an analysis tool, Excel has all these capabilities as well. In fact, the first two (Power Query and Power Pivot) are identical to what is already in Power BI Desktop. Excel also has Power View for analytic reporting. Power View is very similar to the type of reporting in Power BI Desktop, but uses a different technology and has been deprecated for some time. As a result, Excel charts and pivot tables are the primary means of visualizing data in Excel.

So why would you need to use Power BI Desktop if you are using Excel? As explained in my previous post, the Power BI service can fully interact with Excel as an analysis tool, and allows you to interact with Excel right from the Power BI Service. If Excel is meeting all your analytics needs, then there may be no need to introduce Power BI Desktop at all. However, if you wish to take advantage of Power BI’s analytic reporting capabilities, and you have existing Excel assets, you may wish to convert them to the native Power BI format.  Whatever the reason, moving from Excel to Power BI is relatively straightforward with Power BI Desktop.

From the File menu in Power BI Desktop, select Import, and then Excel Workbook Contents.

Importing Excel Files contents

You are then prompted to select an Excel file. Once selected, you are then presented with a warning dialog.

Excel files contents warning dialog

The dialog does a very good job of explaining what will happen, specifically the fact that data from workbooks will not be brought into this new file. Any Power Pivot data models or Power Queries will be brought in. If the workbook contains legacy Power View sheets, they will be converted to native Power BI visuals. In addition, any legacy (non-Power Query) data connections used by the source file’s Power Pivot data model will be converted to Power Query and imported.

Imported Power View sheet

Legacy Power View Sheet converted to Power BI visuals

A complete list of workbook content and what is/isn’t converted is below:

Excel Content              Import to PBI Desktop Support
Data in sheets             Not imported
Data model (Power Pivot)   Imported
Data connections           Converted to Power Query and imported
Power Queries              Imported
Power View sheets          Converted to PBI visuals and imported
Pivot charts/tables        Not imported
Excel charts               Not imported
Macros                     Not imported

Once imported, the new Power BI file (PBIX) lives on its own and contains no connection or any other type of relationship to the original source Excel file. If the source Excel file is changed, there is no way to update the PBIX file. Any imported data connections are between the PBIX file and the original data source. The new PBIX file can be published to the Power BI service like any other.

Connecting to Excel

Connecting to Excel as a data source is a very different thing from importing from it. In this scenario, the data in the worksheets, and only the data in the worksheets, is brought into the data model. This is very different behaviour from connecting Excel files to the Power BI service, where both the model and the worksheet data are brought in.

Using the Excel Connector

The easiest, and most obvious way to connect to Excel worksheet data is by using the Excel connector. From the ribbon in Power BI Desktop, select Get Data. The Excel connector is right at the top of the list.

Connecting to Excel files on the file system

Selecting it allows you to select your source file, and then the worksheets within it, and then build out the data model.

This approach works well but carries with it an important limitation: the new queries are connected to the file using a local file system path. This means that to be refreshed, an On-Premises Data Gateway is required. In order to eliminate the gateway requirement, you can connect to the file in SharePoint using the SharePoint folder connector.

Using the SharePoint Folder Connector

The SharePoint folder connector connects to all the files stored in the libraries of a SharePoint site. It allows you to report on file metadata, but it also allows you to drill into file contents.

From Power BI Desktop select Get Data but instead of selecting Excel, Search for SharePoint and select SharePoint folder.

Using the SharePoint folder connector

Once selected, enter the URL of the SharePoint site (NOT the URL of a library or folder) in the dialog box.

Next, you will be presented with a preview of all the files in your site. Unless you are only interested in file metadata, click on the Edit button to bring up the Power Query editor.

The initial view will contain all the files in the site, but we are interested in the content of just one of those files. Every file in this view will contain the hyperlinked value “Binary” in the Content column. Clicking that link for the file that you want to connect to will drill down into the contents.

Site contents using the SharePoint folder connector

From this level, you can build your Power Query, data model, and report as needed just as if you had used the Excel connector. The difference is that now when you publish your report to Power BI, it will know the file is stored in SharePoint and will connect directly to it. It will not require a gateway for refresh purposes. Once credentials are registered, the report will refresh itself directly from the workbook stored in SharePoint.
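In M, the pattern looks roughly like the sketch below; the site URL and file name are assumptions for illustration.

    let
        // Connect to the site (not a library or folder)
        Source = SharePoint.Files("https://contoso.sharepoint.com/sites/Sales", [ApiVersion = 15]),
        // Locate the workbook of interest and take its binary content
        Book = Source{[Name = "Listings.xlsx"]}[Content],
        // Drill into the workbook; its sheets, tables, and named ranges become rows
        Contents = Excel.Workbook(Book, null, true)
    in
        Contents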

XLS vs XLSX

A note of caution: the above SharePoint folder approach only works for XLSX files. Power BI Desktop and the Power BI service both support both Excel file formats (XLS and XLSX). However, refresh does not. If the source file format is XLS and a refresh is attempted, you will receive the classic “microsoft.ace.oledb.12.0 provider” error in the Power BI service.

Excel files refresh error with XLS file types

The older Excel file format (XLS) requires an Access driver to refresh, which is not a part of the Power BI service. The newer XLSX file does not require this driver. As a result, if the source file is XLS, refreshing it requires going through an On-Premises Data gateway, and that gateway machine must also have the ACE components installed.

To recap, you can bring Excel assets into Power BI Desktop by using the import function, and you can load data from Excel files through Power Query. The two operations have very different results, and they can be combined if a source workbook contains both analyses and data.

Working with Excel files in the Power BI Service https://whitepages.unlimitedviz.com/2018/07/excel-files-power-bi-service/ Fri, 20 Jul 2018 16:06:24 +0000

Power BI has been able to work with Excel files since it was first introduced. Indeed, it was born from the analytic capabilities in Excel. Users can connect directly to Excel files by using the Power BI service and nothing but a browser. However, depending on the content of the Excel file and the method of connecting, the resulting products can be very different. In this post I will attempt to clarify this behaviour. A subsequent post will detail the options available when working with Excel files in Power BI Desktop.

File Structure

Excel is a multi-purpose tool. It contains all the building blocks of Power BI, and as such, it is an excellent Business Intelligence client. Excel files are also often used (much to my chagrin) as a data storage container, or as a data transport medium. Understanding how the file is structured, and what you want to do with it is key to making the right choice when combining it with Power BI.

Originally, Excel files (workbooks) were collections of worksheets. Analysts could import data into those worksheets and then analyze it with the tools that Excel provided. Although Excel was never intended to be a database, its ease of use and familiarity led many people to begin using it that way, and “spreadmarts” (spreadsheet data marts) quickly became a problem. The problems arose because the instant data was extracted from a source it became stale, and because storage in worksheets meant that it could be edited (changing history) and was subject to the data size limitations of a worksheet.

To take advantage of Excel's analytic capabilities without being subject to the issues involved in worksheet data storage, the data model was introduced, initially through PowerPivot. The data model is a “miniaturized” version of the SQL Server Analysis Services tabular engine that runs in Excel. This data model is read only, refreshable, and highly compressed, which means that its only data limitation is the amount of memory available on the machine running it. Importantly, this engine is the same engine that is used by Power BI, the advantages of which we'll explore shortly.

Excel of course still needs to be able to use worksheets and be Excel, so we can’t just remove the worksheet capability (which incidentally is effectively what Power BI Desktop is – Excel without worksheets). Therefore, today from a data perspective, Excel files can have data in the data model, worksheets or both. From the Power BI service perspective, the important thing is whether the file contains a data model, as it treats the two cases differently.

Getting Excel Data

From the Power BI service, you click the Get Data button, and then the Get button in the Files tile. You are then presented with one of two dialogs depending on whether you are using a personal workspace, or an app workspace.

Personal workspace

Importing files into a Power BI Personal Workspace

Connecting file-based data to a personal workspace

When importing into a personal workspace, there are 4 possible data sources.

A local file is one that is stored on a file system local to the machine being used. Selecting this option will allow you to work with an Excel file stored in that location, but if the file is being used as a data source (data is in the worksheets), then a Data Gateway will be required for any data refreshes. Power BI will also connect to a file stored in OneDrive, either Personal or Business (through Office 365). Finally, the service can work with files stored in any accessible SharePoint site (not simply Team sites as the name would indicate).

App workspace

Importing files into a Power BI App Workspace

Connecting file-based data to an App workspace

When importing into an App workspace, there are 3 possible data sources. The Local File and SharePoint – Team Sites options are precisely the same as when importing into a personal workspace. The difference is that the OneDrive – Workspace name option replaces the two other OneDrive options. Choosing this option allows you to work with files stored in the “Group OneDrive”. Since every App workspace is backed by an Office 365 or “Modern” group, it also has access to the SharePoint site for that group. The “Group OneDrive” is the Documents library within that SharePoint site. Therefore, choosing SharePoint – Team Sites and navigating to the Documents library will render the same results in a few more mouse clicks, but will also give access to all other document libraries within that site.

Connect vs Import

Once you navigate to the Excel file that you want to work with, you select it, and click connect. You will then be presented with two options for the file, Import or Connect.

This choice dictates how the file is brought into the Power BI service. The structure of the file determines exactly what is brought in to the service in both cases.

Connect

Clicking the Connect button allows Power BI to connect to and work with the Excel file in place. The workbook is displayed as an Excel workbook in full fidelity in the Power BI interface using Excel Online. The file itself is shown in the Workbooks section in the Power BI interface, and it stands alone from other Power BI elements (except that regions of it can be pinned to a dashboard). Connecting to an Excel report will not create a Power BI Dataset, Report, or Dashboard. All operations, including refresh (see below) are controlled through the workbook.

At no point is the file moved, or “brought in” to the Power BI service. If the file is being stored in SharePoint, or OneDrive, anything done to the file in the Power BI service will be visible to anyone with access to the file itself, whether they are a Power BI user or not. This includes refresh, which will be discussed further below, but the important part to remember here is that if the data in the connected file is refreshed through the Power BI service, and it is being stored in SharePoint (or OneDrive), all users will be able to see updated data the next time that they open the file.

Connecting to an Excel file behaves the same way whether the file contains a data model or not, but the file must contain a data model in order to be refreshed by the Power BI service.

Connected Excel file within Power BI

Import

Importing an Excel file behaves totally differently from connecting to it. When an Excel file is imported, it is treated as a data source to Power BI, and the assets within that file are brought into the Power BI service. Subsequent changes to the source file are not immediately reflected within the Power BI service, but are retrieved through the refresh process.

The way that the assets are brought into the service depends very much on the structure of the file, specifically whether it contains a data model or not. If the file does not contain a data model, then Power BI will use the data contained in the Excel worksheets to construct a new one. This is similar to what happens when a CSV file is imported into the service. If the file does contain a data model, then the worksheet data is imported, and that data model is brought into the service as-is. One important exception: if worksheet data uses the same query as an existing model, the worksheet data is ignored, and the data model is brought in as-is. This matters because Excel's Power Pivot editor can be used to edit the model, creating calculated columns, calculated measures and relationships prior to import, whereas the model that is automatically created when the file does not contain one has no editing capabilities.

When an Excel file with a data model is imported, the data model (imported or created) is added to datasets, and a link to the dataset is added to the default dashboard for the workspace. If no default dashboard exists, one will be created. A report can then be authored in the service. If the workbook contains any Power View reports, these will be converted to native Power BI reports and added to the service as well. Any embedded 3D maps are not brought in.

Imported Excel File showing calculated measures

Refresh

Data refresh options, and behavior depend on both the Get Data choice (connect or import) and the structure of the Excel file.

Connected Workbooks

If the workbook is connected to the service, and it does not contain a data model, it cannot be refreshed. This is true even if the worksheets in the workbook contain data from Power Query queries. This is the only scenario that does not support refresh in any way.

If the workbook contains a data model, refresh is supported. The interesting part is that refresh will be triggered not only for the data model itself, but also for any worksheets that have Power Queries as a data source. Therefore, a workaround to the lack of refresh support for a workbook with no data model is to add a blank data model.

For refresh to work, the data source must be available to the Power BI service. This means that the source must be available in the cloud or registered on an available On-Premises Data Gateway.

The important thing to note about connected workbooks is that the refresh operations performed on them are permanent – refreshed data is stored with the workbook. This means that if the connected workbook is stored in SharePoint, or shared through OneDrive, updated data is available to all users with access, regardless of whether they are Power BI users.

Imported Workbooks

Refresh options for imported workbooks are slightly more complicated. As mentioned above, data is either imported from the worksheets, a data model is imported into the service, or both.

If data was imported from worksheets, then the Excel file is the data source from the standpoint of Power BI. If the file is stored in SharePoint or OneDrive, it will automatically be refreshed every hour by default. This means that changes to the underlying Excel file will be reflected in the Power BI service within an hour. This feature can be disabled, but it is not possible to change the hourly schedule, nor precisely when it will occur.

Refresh options for workbooks in OneDrive/SharePoint

If the file is stored on a file system, it can be scheduled more granularly, but you will need to connect to it through an On-Premises Data Gateway.

If the file contained a data model that was imported into the service, then the original source of data for that data model (the query) is what the Power BI service will refresh from (NOT the Excel file itself). In this case, the refresh options are the same as with most other Power BI data sources – Excel is taken out of the picture completely, and any changes to the source Excel file will not be reflected into the service. The exception to this is if the file had both a data model, and worksheet data that was imported.

In the case of an Excel file with both a data model and worksheet data, both cases above apply. The workbook is used as a data source for the table that was created by Power BI on import, and the original data model's source is updated independently. This means that changes to the worksheet data are reflected in the Power BI service when refreshed, but any model changes in the original Excel file are not. Both the OneDrive and regular refresh schedules are used for imported files of this type.

Refresh options for a combined data source

The following table summarizes the refresh options available for file structure and connection type.

File Structure                   Get Data: Connect                                Get Data: Import
Worksheet data only              None                                             Refresh from worksheet
Data model only                  Refresh from model source                        Refresh from model source
Data model plus worksheet data   Refresh from model source and worksheet source   Refresh from model source and worksheet source

Summary

Both Excel and Power BI are powerful tools in their own right, and the decision to use one does not preclude using the other; in fact, there are many good reasons for using both. Bringing refreshability to Excel files stored in SharePoint is just one of them. It is, however, important to understand how it all works in order to get the maximum benefit.

Using Power BI to Report on Lookup Fields in SharePoint https://whitepages.unlimitedviz.com/2018/04/using-powerbi-report-lookup-fields-sharepoint/ Wed, 18 Apr 2018 14:10:40 +0000

This post is the sixth and final post in a series exploring Power BI and complex data types in SharePoint. It examines the various options in Power BI for working with lookup fields. The previous posts are:

A lookup field in SharePoint contains values looked up from another list in the same SharePoint site. Strictly speaking, the field contains only the ID from the item in the source list, and the value(s) is/are looked up whenever the field is displayed. The lookup field can also be used to display multiple field values from the target list items.

Defining a lookup field in SharePoint

The List

Consider the following list that contains a lookup field named “Neighbourhood”:

The lookup field neighbourhood in the SharePoint view

We can see from the screenshot above that the text value for neighbourhood is displayed in the view, although only the row identifier is stored in the column. We will be able to get both values and more if desired in a Power BI report, but first we need to build the report using Power BI Desktop.

Loading the Data

We first launch Power BI Desktop, select “Get Data” and then choose SharePoint Online list (if connecting to SharePoint Online) or SharePoint List (if using SharePoint Server). We are then prompted for the URL of the SharePoint Site. The dialog is titled SharePoint lists, but the value is the URL of the site, NOT the list itself. Once this is entered we are prompted for credentials if we haven’t connected to this site before. After entering credentials, we can select the list that we want to report on. In our case, it’s named “Listings”. We select it, and then click on the Edit button.

Loading data rom the Listings list

Once the data loads in, one of the first things that you'll notice is that there are a lot of columns to choose from, and it's a good idea to remove the columns that you don't need. We can do this by right clicking on the desired column titles and selecting “Remove”. In this case, we can remove the ContentTypeId column and everything to the right of it, with two important exceptions: we want to keep the “FieldValuesAsText” column and the special “Neighbourhood” column at the far right, as we'll be needing them for the options below.

Using FieldValuesAsText

Examining our columns, we can see that amongst the simpler text fields, we don't have a “Neighbourhood” column, but instead a “NeighbourhoodId” column with numeric values. We do have a Neighbourhood column further off to the right, but it doesn't display simple text (we'll come back to this shortly). If we simply want the text value of our lookup target, the “FieldValuesAsText” column is the quickest route.

Scrolling right in the Query editor view, we find the “FieldValuesAsText” column. The record values represent a one-to-one relationship with the text values of the list row, so we can click on the column expander at the right of the column title. From there we can extract the text value of any column, including our lookup field, “Neighbourhood”.

Extracting text using FieldValuesAsText

With “Neighbourhood” checked, and nothing else, including the “Use original column name…” option, we can click OK, and the “FieldValuesAsText” column is replaced by a new column, “Neighbourhood”, that contains the text values for Neighbourhood.

If this value is all that is needed, then this is a totally valid approach, and we can move on to report building. However, this is only one way to achieve this goal. If more information is needed, then other methods may be more suitable.

Retrieving all Lookup Field Values from the Extended Column

Given that the lookup target item is a SharePoint list item, all that item’s properties are available to us. We can access them from the extended column set up for the field. In our case, the original “Neighbourhood” column is the extended column. We can expand this column by selecting its column expander.

Extracting field values using the extended column

We then deselect all of the columns except the ones that we want to use in the report. The fields available are the fields available in the target list. In our case, we select the “Title” field, as it is the one being looked up. We can however retrieve any of the fields that we need from the target list.

Keep in mind that “Title” in our example is a simple text field, so no further action is necessary. The retrieved fields can be complex (person, MMS, etc.), but if a complex field type is retrieved, it will need to be transformed just like any column from the list in question.

The field name in the target list may not adequately describe its function for the report. In our case, “Title” actually means “Neighbourhood” in this report. It’s a good idea to rename it.

Finally, if multiple field values are to be retrieved, the data model could grow significantly. This is because the values for every field are repeated in every row of data. Given that the original lookup column adds a measure of relational behaviour to SharePoint, using this relationship is the most efficient way to work with this data. Power Query allows us to do just that.

Working with Related Tables

To work with related tables, we need not only the original data table (in our case, “Listings”) but also the table for the lookup list itself. To do this, from the Query Editor, we create a new data source like the one created above for “Listings”, but instead we select the lookup list (“Neighbourhoods”).

Once imported, we can remove any extraneous columns, and then set the data type for the ID field to be “Whole Number”.

Setting the ID column to whole number

We also need to set the data type of the “NeighbourhoodId” column in the Listings table to “Whole Number”. Once these options have been set, we are ready to work with the data model and the report. We select “Close and Apply” from the ribbon to load the data into the model.
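A minimal sketch of what the Neighbourhoods query might look like at this point (the site URL is illustrative; the list name and the Id and Title columns come from this example):

    let
        Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/Sales", [ApiVersion = 15]),
        Neighbourhoods = Source{[Title = "Neighbourhoods"]}[Items],
        // Keep only the key and the value being looked up
        Selected = Table.SelectColumns(Neighbourhoods, {"Id", "Title"}),
        // The Id must be a whole number so it can relate to NeighbourhoodId in the Listings table
        Typed = Table.TransformColumnTypes(Selected, {{"Id", Int64.Type}})
    in
        Typed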

Once loaded, we launch the relationship builder in the design pane in order to establish the relationship between the two tables.

Power BI relationship editor with default relationship defined

We can see that Power BI has already detected a relationship. However, it is not correct. The model designer assumes that because both tables contain an “Id” column, they must be related. However, the true relationship is between the “Id” column in our “Neighbourhoods” table and the “NeighbourhoodId” column in our “Listings” table.

We must first delete the detected relationship by selecting the connector between the two tables and pressing “Delete”. We can then create the proper relationship by dropping one of our related columns onto the other. Once this is created, we also need to ensure that the “Cross filter direction” is set to “Both”. We do this by double clicking on the relationship connector and selecting the appropriate option.

Setting cross filter direction

Power BI relationship editor with correct relationship defined

Once the relationship has been established, we can return to the design pane and construct a rudimentary report. We drag a few fields from Listings into a table, create a calculated measure for the number of listings, and we add the “Title” field (renamed to “Neighbourhood”) to the canvas separately. Once we set the visualization for “Neighbourhood” to a slicer, we can easily slice our listings data by neighbourhood.

Slicing report with the values from a lookup field

We can therefore see that there are several options for accessing data for a lookup field, ranging from simple to complex. The trade-off for simplicity is flexibility. Which approach to use will depend on your requirements, but storing the lookup table separately is the most efficient, as the data is stored only once and referenced.

Using Power BI to Report on Hyperlink or Picture Fields in SharePoint https://whitepages.unlimitedviz.com/2018/04/powerbi-report-hyperlink-picture-fields-sharepoint/ Thu, 12 Apr 2018 08:15:33 +0000

This post is the fifth in a series exploring Power BI and complex data types in SharePoint. This one details the use of SharePoint Hyperlink or Picture fields. The previous posts are:

A hyperlink or picture field in SharePoint consists of a name-value pair. The value is always a URL, and the name is the descriptor for that URL. When the field is created, the creator can specify the character of the field, whether it is a hyperlink, or a picture.

Hyperlink or Picture field in SharePoint

If picture is chosen, then all values will be treated as images, and SharePoint renders them as such wherever they are displayed: in forms, views, etc. The name part of the name-value pair is used as the alt tag for the image when it is written. If hyperlink is chosen, the name portion will be displayed wrapped in a link to the value. These behaviours are particularly suitable to the way that Power BI works with both links and images, as we'll see shortly. In the example below, we'll be working with a list that contains two instances of this field type, one configured as a picture, and the other as a hyperlink.

The List

Consider the following list that contains two of these fields. The first, named “Picture”, is (not surprisingly) configured as a picture type, and the other, “More Info”, is configured as a hyperlink:

Both Picture and hyperlink fields in a SharePoint view

This view renders a thumbnail of the image for the “Picture” field, and a clickable hyperlink using the link name for the “More Info” field. Behind the scenes however, the data is simply stored as that name-value pair. We will be able to get both field types to render with appropriate behaviours in a Power BI report, but first we need to build the report using Power BI Desktop.

Loading the Data

We first launch Power BI Desktop, select “Get Data” and then choose SharePoint Online list (if connecting to SharePoint Online) or SharePoint List (if using SharePoint Server). We are then prompted for the URL of the SharePoint Site. The dialog is titled SharePoint lists, but the value is the URL of the site, NOT the list itself. Once this is entered we are prompted for credentials if we haven’t connected to this site before. After entering credentials, we can select the list that we want to report on. In our case, it’s named “Listings”. We select it, and then click on the Edit button.

Loading the Listings data

Once the data loads in, one of the first things that you'll notice is that there are a lot of columns to choose from, and it's a good idea to remove the columns that you don't need. We can do this by right clicking on the desired column titles and selecting “Remove”. In this case, we can remove the ContentTypeId column and everything to the right of it, with two important exceptions: we want to keep the “FieldValuesAsText” column, as we'll be needing that to extract the text values, as well as the extended “Picture” and “More Info” columns, which we'll use later.

Using FieldValuesAsText

In our example, both the “Picture” and “More Info” fields are displayed with a linkable value of “record” for every row. We will explore using these columns below, but to use FieldValuesAsText, it is best to remove them to avoid naming conflicts. As with most complex field types, the “FieldValuesAsText” column can be used to extract the URL for the Hyperlink or Picture field.

We scroll right and select the expander icon for the “FieldValuesAsText” column, then deselect all available fields except the “Picture” and “More Info” columns. In addition, we want to ensure that the “Use original column name as prefix” option is deselected to avoid a lot of messy renaming later.

Retrieving data with FieldValuesAsText

We then select OK, and two new columns are added in place of FieldValuesAsText named “Picture” and “More Info”. These columns contain the value portion of the name-value pair that make up the Hyperlink or Picture field, but the name portion is dropped.

Raw data for the Hyperlink or Picture Field

If all that is needed for the report is the URL portion, then this approach is sufficient, and you can continue working with the data model and report as below. However, to retrieve both the name and the value from the field, an alternate method is required.

Retrieving all Field Values

Instead of removing the “Picture” and “More Info” columns as described in the previous section, retrieving all of the values requires us to use them. In this case, we can safely remove the “FieldValuesAsText” column first, as it won’t be needed. The “Record” values shown for the field value on each row are Power BI’s way of expressing a one-to-one relationship. In this case, each relationship is with a record that has 2 fields, “Description” and “Url”. To use them in a report, they must first be flattened. We do this by selecting the column expander in the upper right of the column title, selecting both fields, and clicking OK.

Expanding the Hyperlink or Picture field

All Hyperlink or Picture fields will have the same properties, and in our case, this needs to be done for both the “Picture” and “More Info” fields. Because of this, it's likely a good idea to check the “Use original column names as prefix” box to help keep everything straight. The columns can be renamed at any time if desired. Once this is done for both columns, we will see both the description and the actual URL value for each of our “Hyperlink or Picture” fields, as seen below:

Extracting all Hyperlink or Picture properties
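
Expressed in M, the record expansion approach looks roughly like the sketch below (again with a placeholder site URL). Note the prefixed column names that the "Use original column names as prefix" option produces.

let
    Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/realty", [ApiVersion = 15]),
    Listings = Source{[Title = "Listings"]}[Items],
    // this time it is the FieldValuesAsText column that gets removed
    RemovedText = Table.RemoveColumns(Listings, {"FieldValuesAsText"}),
    // each Hyperlink or Picture field is a record with Description and Url fields
    ExpandPicture = Table.ExpandRecordColumn(RemovedText, "Picture", {"Description", "Url"}, {"Picture.Description", "Picture.Url"}),
    ExpandMoreInfo = Table.ExpandRecordColumn(ExpandPicture, "More Info", {"Description", "Url"}, {"More Info.Description", "More Info.Url"})
in
    ExpandMoreInfo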

At this point, we are ready to load the data into the data model by selecting the “Close and Apply” button from the ribbon. Once loaded, we are placed into the report design canvas. From here, we need to do a small amount of model editing.

Using Picture or Link Data in the Report

We can add a new table to the design surface, and then add “Picture.Url” to the table. We can quickly see that the default behavior is not optimal – it only displays the URL, not the rendered picture.

Raw picture data in a Power BI report

This is because the data model only knows the contents of the field to be text. We need to tell the model that this is a picture, and we can do that by selecting the Modeling tab in the ribbon, selecting the field in the field selection pane, and then setting its "Data Category" property to "Image URL".

Setting the field properties in Power BI

Once flagged in this manner, the URLs will automatically render as images whenever they are used in tables and several other visuals.

Picture field rendering in a Power BI table

The hyperlink field must be categorized in a similar fashion as the picture field, with one difference – instead of Image URL as the data category, we pick Web URL. Once we have done this, we can add it to our table above along with a couple of other fields, including the link description.

Adding Web URL data to the Power BI table

The hyperlinks are active and clickable, but they're not the nicest to look at. They also take up a significant amount of space on the visual. Happily, there is a table feature that we can take advantage of to help us with this. To turn it on, open the table properties pane (the paint roller), open the Values section, and turn on the option for "URL icon". All of the long links in the table will be reduced to a compact link icon.

Web URL data formatted as an icon

Ideally, I would like to be able to recombine the link description and the link in the visual, the same way that it is rendered in SharePoint. While that isn't currently possible, this approach does work well, and it lends a nice level of interactivity to our reports.

As we can see above, the SharePoint “Hyperlink or Picture” field is not only available to Power BI, but much of its utility can also be brought forward into Power BI reports.

Using Power BI to Report on Rich Text Data Fields in SharePoint https://whitepages.unlimitedviz.com/2018/04/power-bi-report-rich-text-sharepoint/ https://whitepages.unlimitedviz.com/2018/04/power-bi-report-rich-text-sharepoint/#comments Tue, 10 Apr 2018 10:42:16 +0000 https://whitepages.unlimitedviz.com/?p=15591 Continue readingUsing Power BI to Report on Rich Text Data Fields in SharePoint]]> This post is the fourth in a series exploring Power BI and complex data types in SharePoint. It discusses using Power BI to report on rich text fields from SharePoint. The previous posts cover multi-value fields, person fields, and managed metadata fields.

A rich text field in SharePoint is a special instance of the multi-line column type which contains formatting attributes. The column becomes “rich” when either the “Rich text” or “Enhanced rich text” options are selected in the field’s definition.

SharePoint rich text fields

The List

Consider the following list that contains a rich text field named “Description”:

Rich text field displayed in SharePoint view

This view displays the value of the rich text field and retains its formatting. Internally, all of the formatting commands are stored as HTML, so rendering is a simple task for SharePoint. However, none of the default Power BI visuals support HTML rendering. We are left with two options: we must either retrieve the raw text from the rich text field and lose the formatting, or find a visual that supports HTML rendering. Happily, both options are possible. As with any SharePoint data, we must first start with Power BI Desktop.

Loading the Data

We first launch Power BI Desktop, select “Get Data” and then choose SharePoint Online list (if connecting to SharePoint Online) or SharePoint List (if using SharePoint Server). We are then prompted for the URL of the SharePoint Site. The dialog is titled SharePoint lists, but the value is the URL of the site, NOT the list itself. Once this is entered we are prompted for credentials if we haven’t connected to this site before. After entering credentials, we can select the list that we want to report on. In our case, it’s named “Listings”. We select it, and then click on the Edit button.

Loading the Listings data

Once the data loads in, one of the first things that you'll notice is that there are a lot of columns to choose from, and it's a good idea to remove the columns that you don't need. We can do this by right clicking on the column titles and selecting "Remove". In this case, we can remove the ContentTypeId column and everything to the right of it, with two important exceptions: we want to keep the "FieldValuesAsText" column, as we'll be needing that to extract the text values, and the "Description" column, which we'll need later to render the rich text in full fidelity.

Extracting Plain Text from Rich Text

One thing that you will notice right away is that the simpler column types like "Title" show their value directly in the Query editor. Rich text fields also show their values directly in the editor, but they include both the text and the HTML formatting commands. The rich text field in our example is named "Description", and each entry begins with "<div class=…..".

Rich text field contents in Power BI

Given that none of the standard Power BI visuals support HTML rendering, this is clearly not what we need for our report.

We could perform a series of text substitutions to strip out all the HTML formatting from the column, but that process would be highly tedious, not to mention messy. Happily, the Power BI SharePoint connector can do this for us automatically through the FieldValuesAsText column.

First, we can remove the Description column altogether. Next, with our example in the Query Editor, we scroll right and select the expander icon for the "FieldValuesAsText" column. We then deselect all available fields except the "Description" column. In addition, we want to deselect the "Use original column name as prefix" option.

Expanding FieldValuesAsText

We then select OK, and we once again have a Description column, but this time it is free of all HTML formatting tags.

Description field with all formatting removed
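
In M, the whole plain-text approach boils down to two steps after the list is loaded. A sketch follows, assuming a placeholder site URL:

let
    Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/realty", [ApiVersion = 15]),
    Listings = Source{[Title = "Listings"]}[Items],
    // drop the HTML-laden Description column...
    RemovedHtml = Table.RemoveColumns(Listings, {"Description"}),
    // ...and pull the plain text version back out of FieldValuesAsText
    Expanded = Table.ExpandRecordColumn(RemovedHtml, "FieldValuesAsText", {"Description"}, {"Description"})
in
    Expanded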

At this point, we can click on “Close & Apply” in the ribbon, add a table to the design surface, and add a number of dimensions, including our “Description” field. It is displayed in the visual free of the HTML formatting.

Unformatted rich text in a Power BI table

Showing Rich Text in Full Fidelity

There may be cases where we want to use the rich text formatting, and not remove it. As mentioned above, that’s not possible using the out-of-box visuals. We are therefore left to find a custom visual to do this, and at the moment, there is only one such visual to the best of my knowledge. This is the HTML Viewer visual by Pragmatic Works, and its purpose is to do exactly as the name suggests.

To begin with, we need to start at the beginning of the previous section, prior to the removal of the Description column. In this case, we want to use the HTML codes, so the initial “Description” column is perfectly adequate as-is. All that we need to do is to select “Close and Apply” in the ribbon to load the data into the data model.

We must now get the HTML Viewer from the custom visuals marketplace. We click on the ellipsis in the visuals pane, select “marketplace” and then search for the visual using the search box. Once located, we select it, and click “Add”.

Acquiring the HTML Viewer

A new icon for the visual then appears in the gallery.

The HTML viewer displays and renders the values as a list. Only one dimension is allowed, so it is not possible to use it as a replacement for a table. The best way to use this visual is to make it the target of a selection. For example, to see the description of our listings, we can add a slicer on the page using “Title” and the HTML Viewer using “Description”.

Full fidelity HTML in a Power BI report

While it is limited, it is possible to render rich text fields from SharePoint in full fidelity. However, if only the text is necessary (as is likely the case for reporting), Power BI gives us a rich set of tools to make this process relatively painless.

Using Power BI to Report on Managed Metadata Fields in SharePoint https://whitepages.unlimitedviz.com/2018/04/using-powerbi-report-managed-metadata-fields-sharepoint/ https://whitepages.unlimitedviz.com/2018/04/using-powerbi-report-managed-metadata-fields-sharepoint/#comments Mon, 09 Apr 2018 09:49:52 +0000 https://whitepages.unlimitedviz.com/?p=15573 Continue readingUsing Power BI to Report on Managed Metadata Fields in SharePoint]]> This post is the third in a series exploring Power BI and complex data types in SharePoint. The first post explores working with multi-value columns, and the second post covers working with person fields. In this one, we'll explore some of the nuances of working with managed metadata fields.

Managed metadata fields in SharePoint are implemented as a special case of a lookup field. As managed metadata is used in SharePoint lists, a hidden list in the root of the site, TaxonomyHiddenList, is populated with the values used. The Power BI SharePoint list connector is aware of this, and it provides helpers to make it relatively easy to get the value of these fields. It is also possible to retrieve some of the extended properties of these fields, specifically all the language variants of the managed metadata values. We'll examine several approaches to extracting managed metadata information.

The List

Consider the following list that contains a managed metadata field named “City”:

Example list of listings

This view displays the value of the managed metadata field in the language of the site. If multiple language values have been defined for the term, the appropriate one will be used. While not shown in the view, these multiple language values, along with other attributes of the term are available to Power BI as needed. Depending on requirements, there are several ways to access the terms values and attributes in Power BI. To do so, the report must be built using Power BI Desktop.

Loading the Data

We first launch Power BI Desktop, select “Get Data” and then choose SharePoint Online list (if connecting to SharePoint Online) or SharePoint List (if using SharePoint Server). We are then prompted for the URL of the SharePoint Site. The dialog is titled SharePoint lists, but the value is the URL of the site, NOT the list itself. Once this is entered we are prompted for credentials if we haven’t connected to this site before. After entering credentials, we can select the list that we want to report on. In our case, it’s named “Listings”. We select it, and then click on the Edit button.

Importing Listings data

Once the data loads in, one of the first things that you'll notice is that there are a lot of columns to choose from, and it's a good idea to remove the data that you don't need. In this case, we can remove the ContentTypeId column and everything to the right of it, with two important exceptions: we want to keep the "FieldValuesAsText" column (we'll come back to that shortly) and the "City" column. Our simplest requirement will be to extract the value of the metadata column in the default language of the site, the same way that it is represented in the SharePoint list view. We can accomplish this using the FieldValuesAsText column.

Extracting the Term Value

One thing that you will notice right away is that the simpler column types like "Title" show their value directly in the Query editor. In our case, our managed metadata column, "City", shows a clickable "Record" value for all rows. Clearly, "Record" is not what we are looking for, but this is how Power BI represents a one-to-one relationship. In this case, the relationship is between the source data row and the row representing the managed metadata term. We will explore using this column in the next section, but if all we want is the value of the term, we can have Power Query (the Power BI Query editor is really a user interface for Power Query) automatically extract it for us using the "FieldValuesAsText" column.

We scroll right and select the expander icon for the "FieldValuesAsText" column, then deselect all available fields except the "City" column. We also deselect the "Use original column name as prefix" option.

Using FieldValuesAsText

We then select OK, and a new column, "City.1", is added containing the value of the term.

Extracting the Managed Metadata value

The reason that ".1" is added to the column name is that we already have a column named City. If the term's value is all that is needed, the original "City" column can safely be removed and this new column renamed. However, if additional term data is required, a more complex approach is needed.

Extracting the Managed Metadata Properties

As noted above, the “City” column links to the managed metadata term. It can be used to get at the underlying term properties by clicking on the column expander to the right of the column title and selecting the TermGuid column.

Extracting the field properties

The Term column will display the id of the label of the managed metadata field, which is not what we need here. TermGuid will ultimately link to the term properties that we need, but when we click OK, we can see that the resulting values are simply a collection of GUIDs. Since we will use these values to link to the data we need, we must first set their data type. We do this by selecting the column, then the “Transform” tab in the query editor and selecting the “Text” data type.

Setting to Text type
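
The expansion and type change correspond to two steps in M. Here is a sketch, with a placeholder site URL and illustrative step names:

let
    Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/realty", [ApiVersion = 15]),
    Listings = Source{[Title = "Listings"]}[Items],
    // expand the managed metadata record down to just its TermGuid
    ExpandedCity = Table.ExpandRecordColumn(Listings, "City", {"TermGuid"}, {"TermGuid"}),
    // type the GUIDs as text so they can participate in a relationship
    Typed = Table.TransformColumnTypes(ExpandedCity, {{"TermGuid", type text}})
in
    Typed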

At this point, we are ready to link to our terms list.

Linking to the Hidden Taxonomy List

As mentioned above, SharePoint adds items to a hidden list whenever managed metadata terms are used. This list, “TaxonomyHiddenList” is used for all lists in the site. We therefore need to add this list to our query. To do so, we repeat the procedure used above to get our listings data, substituting “TaxonomyHiddenList” for “Listings”.

As mentioned above, this list will contain values for all instances of managed metadata in the site, not just the field that we're after. While not strictly necessary, it's a good idea to filter out the values that are unnecessary, particularly if we want to use the term as a slicer. If we don't, all sorts of values could appear that make no sense in our report. In our example, we have listings in two cities, "Guelph" and "Elora", but there is also a value for "North America" that comes from another managed metadata instance. Examining the data, we can see that the "IdForTermSet" column uniquely identifies the term set behind our "City" column. Filtering on the GUID for our term set will remove all extraneous data.

Loading the TaxonomyHiddenList

The Unique ID of the term in question is stored in the “IdForTerm” column. The term’s value is stored in the “Term” column, and the hierarchical path to the term is stored in the “Path” column. These columns will hold the value of the term in its default language. Scrolling right, we can see that there are many columns titled “Termxxxx” and “Pathxxxx”. These correspond to the term’s value in various languages, the language being identified by the xxxx value – it is the code of the language in question. For example, the language code of French is 1036, so if we want the French values for the term, we will need to use the columns Term1036 and Path1036.

If the term is not in a hierarchy, both the Term and the Path values will be identical. If values have not been defined for the additional languages, then the default language value will be returned. It is important to identify the data needed, and to remove all redundant column data at this point to avoid bloating the data model. In our case, we only need the default language, and there is no hierarchy, so we remove all columns with "Path" in the name, and all of the "Termxxxx" columns. In addition, we can remove all the extended columns and system columns. In the end, we are left with the GUID for the term (named IdForTerm), and the term itself in the default language.

Removing Extraneous fields

The final step is to change the data type of the “IdForTerm” and the “Term” columns (and any other columns necessary) to Text, using the procedure for “TermGuid” above. Given that “Term” is generic, and we have filtered ours down to the city terms, we also rename the column to “City”.
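
Putting the taxonomy query together, the M looks roughly like the sketch below. The site URL and the term set GUID are placeholders; use the IdForTermSet value observed in your own data.

let
    Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/realty", [ApiVersion = 15]),
    Taxonomy = Source{[Title = "TaxonomyHiddenList"]}[Items],
    // keep only the terms belonging to the City term set (placeholder GUID)
    CityTerms = Table.SelectRows(Taxonomy, each [IdForTermSet] = "00000000-0000-0000-0000-000000000000"),
    // keep just the key and the default language label
    Trimmed = Table.SelectColumns(CityTerms, {"IdForTerm", "Term"}),
    Typed = Table.TransformColumnTypes(Trimmed, {{"IdForTerm", type text}, {"Term", type text}}),
    Renamed = Table.RenameColumns(Typed, {{"Term", "City"}})
in
    Renamed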

When ready, we select the "Close and Apply" button from the Query Editor ribbon. At this point, we have two tables in the model, Listings and Taxonomy (renamed from TaxonomyHiddenList). We then select the relationship editor tab. By default, Power BI may assume a relationship for us. In our case, we retained the id field from the taxonomy list, so it established a relationship between the id fields in the two tables, which is incorrect. We delete the relationship by clicking on the line between the two tables and pressing Delete. Once deleted, we need to establish a relationship between "TermGuid" in the Listings table and "IdForTerm" in the Taxonomy table, which we do by dragging one column onto the other. Once established, we double click on the relationship line to set the value of "Cross filter direction" to "Both".

Establishing the relationship

We can now return to the design pane, add a table visual, and add columns from both tables. In this way, we can slice a listing report on the value of the managed metadata column, in any language that we have defined for it.

Working with the report

Clearly, in this case, if we simply want to get the default language value of the term, the FieldValuesAsText approach is far simpler, but linking to the hidden taxonomy list makes it possible to access all available attributes of the managed metadata term.

The Road Ahead for SQL Server Reporting Services and Power BI Report Server https://whitepages.unlimitedviz.com/2018/03/road-ahead-sql-server-reporting-services-power-bi-report-server/ https://whitepages.unlimitedviz.com/2018/03/road-ahead-sql-server-reporting-services-power-bi-report-server/#comments Tue, 20 Mar 2018 17:46:57 +0000 https://whitepages.unlimitedviz.com/?p=15554 Continue readingThe Road Ahead for SQL Server Reporting Services and Power BI Report Server]]>

Power BI has garnered a lot of attention in the past few years as the "place to be" for Microsoft business intelligence. Power BI is of course Microsoft's cloud platform for dashboarding and reporting. On premises, Microsoft's reporting platform has traditionally been SQL Server Reporting Services (SSRS), and it is now joined by Power BI Report Server (PBIRS). While there are differences between the two in the way that they are licensed and distributed, the main technical difference is that PBIRS includes SSRS, plus the ability to render both Power BI reports (pbix) and Excel based reports through Excel Online.

PBIRS is therefore able to render all four report types as defined by Microsoft:

Paginated – RDL (classic SSRS style reports)

Interactive – PBIX (Power BI Desktop)

Mobile – RDLX or PBIX (Datazen and Power BI Desktop)

Analytical – XLSX (Excel)

Given that PBIRS is the on-premises reporting platform, and Power BI is the cloud platform, they should theoretically be able to perform the same task. However, this isn’t the case. Currently, there is no way to render paginated (or as I like to refer to them, operational) reports in the cloud. PBIRS can render all report types, but today, the Power BI Service cannot.

Jason Himmelstein and I recently hosted Riccardo Muti, Microsoft Group Program Manager for Power BI Report Server on our BI Focal podcast (you can hear the whole show here) and we asked him a few questions about the future of paginated reports.

While it has been hinted at before, Riccardo explicitly stated that the ability to render paginated reports in the cloud was not only on the roadmap, but was actively being worked on. This will complete the picture for BI in both the cloud and on premises. This is important because in the ideal world, the choice of on-prem, cloud, or hybrid should be driven by the data in question, not by the available features. No more standing up an Azure virtual machine just to get printable, paginated, structured reports.

Riccardo sees this service rolling out in three distinct phases. Initially, for the preview, RDL reports will be able to connect to cloud data sources like Azure SQL Database, Azure SQL Data Warehouse, Azure Analysis Services, and Power BI data models. The next stage will leverage the On-Premises Data Gateway in order to connect to on-premises data like SQL Server, Oracle, Teradata and more. The final phase will begin to leverage Power Query to connect to the world of data sources that Power Query offers.

With paginated reports adopting Power Query, and Excel having moved to Power Query as a default, it is hard to argue with making an investment in learning Power Query. It all seems to be coming together.

One of the top requested features for SSRS on-premises for years has been the ability to authenticate to it with claims-based authentication. This will be the native authentication mechanism for paginated reports in the service. When this is combined with the fact that these reports will ultimately leverage Power Query, and the vast number of data connection options that it brings, it's hard not to wonder when these features will be coming down to the on-premises product. When asked about this, Riccardo confirmed that this is certainly the vision for the product.

The future looks bright for both cloud based, and on-premises reporting.

Using Power BI to Report on Person Fields in SharePoint https://whitepages.unlimitedviz.com/2018/01/using-power-bi-to-report-on-person-fields-in-sharepoint/ https://whitepages.unlimitedviz.com/2018/01/using-power-bi-to-report-on-person-fields-in-sharepoint/#comments Tue, 30 Jan 2018 10:52:10 +0000 https://whitepages.unlimitedviz.com/?p=15545 Continue readingUsing Power BI to Report on Person Fields in SharePoint]]> This post is the second in a series exploring Power BI and complex data types in SharePoint. The first post explores working with multi-value columns. In this one, we'll explore some of the nuances of working with person fields.

Person fields in SharePoint are just a special case of the lookup field, and the Power BI SharePoint list connector is aware of them. As such, it provides helpers to make it relatively easy to get the person’s name. However, more information is also available. We’ll examine three approaches to extracting this information. It is worth noting that all SharePoint lists contain person fields for “Created By” and “Modified By”, and they are always available.

The List

Consider the following list that contains a person field named "Agent":

The view displays the person's name, although the column is a complex data type. There is more information available behind it than just the person's name, but this is unavailable to the SharePoint view. Power BI can, however, access this information in reports. Report requirements will ultimately dictate the best approach to extracting this information, but the good news is that there are several to choose from. In all cases, the data first needs to be brought into Power BI Desktop.

Loading the Data

We first launch Power BI Desktop, select “Get Data” and then choose SharePoint Online list (if connecting to SharePoint Online) or SharePoint List (if using SharePoint Server). We are then prompted for the URL of the SharePoint Site. The dialog is titled SharePoint lists, but the value is actually the URL of the site, NOT the list itself. Once this is entered we are prompted for credentials if we haven’t connected to this site before. After entering credentials, we can select the list that we want to report on. In our case, it’s named “Listings”. We select it, and then click on the Edit button.

Once the data loads in, one of the first things that you'll notice is that there are a lot of columns to choose from, and it's a good idea to remove the data that you don't need. In this case, we can remove the ContentTypeId column and everything to the right of it, with two important exceptions: we want to keep the "FieldValuesAsText" and "Agent" columns (we'll come back to those shortly). Remember, for our purposes here, we want to report on the person column, "Agent". The simplest way to represent this data is with the person's full name, as it is displayed in the SharePoint view. As noted above, it is also possible to use this data in a more sliceable, or structured, way. Let's start with the simplest.

Extracting the Full Name

One thing that you will notice right away is that the simpler column types like "Title" show their value directly in the Query editor. In our case, there are two fields related to Agent: the "AgentId" and "Agent" columns. The "AgentId" column displays a number, and the "Agent" column displays a record data type. We will explore these columns below, but if all we need is the user's full name, we can use the highly useful "FieldValuesAsText" column.

We scroll right and select the expander icon for the “FieldValuesAsText” column, then deselect all available fields except the Agent column.

We then select OK, and rename the column to "Agent Name". The full name of the person is retrieved and used for the column. At this point, it's ready to use in a report.

Linking to the User Information List

In many cases, the user name of the person may not be enough. As mentioned above, the person field is really just a lookup column that automatically looks up data from a specific list. That list is the User Information List, which is a hidden list that exists in the root site of every SharePoint site collection. This list gets populated automatically when the site collection is accessed. When Power BI loads a person column, it automatically creates a ColumnNameId column as well, containing the ID value of the person field from this list. In our example, this is the "AgentId" column.

To leverage the data in this list it must first be loaded into the model. Following the same steps taken for loading the Listing data above, we select the “User Information List” which does get exposed to the Power BI Query editor. Once loaded, we remove all of the unnecessary columns from the query, being sure that we leave the ID column.

When ready, we select the "Close and Apply" button from the Query Editor ribbon. At this point, we have two tables in the model, Listings and User Information List. We then select the relationship editor tab. The "AgentId" column in the Listings table is related to the "Id" column in the User Information List table, and we establish this relationship by dragging one onto the other. Once established, we double click on the relationship line to set the value of "Cross filter direction" to "Both".

We can now return to the design pane, add a table visual, and add columns from both tables. In this way, we can show the agent's name, email, phone, etc.

Expanding the Person Column

Although linking to the User Information List is powerful, an easier, and arguably better, way to do the same thing is to use the automatically generated person column. This column is named the same as the original person column and contains a series of "Record" type values. The records in question are the corresponding records from the User Information List.

To access the data in this column, we click on the column expander and then select all of the columns that we will work with. Values from the related User Information List will be added to the table automatically.
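
In M, this is a single Table.ExpandRecordColumn step. A sketch follows, with a placeholder site URL; the two fields chosen here are illustrative – pick whichever User Information List columns the expander offers.

let
    Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/realty", [ApiVersion = 15]),
    Listings = Source{[Title = "Listings"]}[Items],
    // expand the person record into regular columns (field names are illustrative)
    Expanded = Table.ExpandRecordColumn(Listings, "Agent", {"Title", "EMail"}, {"Agent.Title", "Agent.EMail"})
in
    Expanded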

This approach is clearly simpler than manually loading the entire User Information List, and only loads the records that are related. It will however likely result in a large amount of repeated data that the two table approach avoids. It is possible to achieve a two table solution with the person field using the technique outlined in my earlier article on working with multi-value fields, but the resultant table will still only contain related records. If it is necessary to show people regardless of whether or not there is a related record, then the manual approach is the only way.

Which approach is ultimately used will depend on the requirements of the report, but it is possible to reach deep into the person object in a SharePoint person field.

Using Power BI to Report on Multi-Value SharePoint Fields https://whitepages.unlimitedviz.com/2018/01/using-power-bi-to-report-on-multi-value-sharepoint-fields/ https://whitepages.unlimitedviz.com/2018/01/using-power-bi-to-report-on-multi-value-sharepoint-fields/#comments Wed, 24 Jan 2018 21:49:34 +0000 https://whitepages.unlimitedviz.com/?p=15528 Continue readingUsing Power BI to Report on Multi-Value SharePoint Fields]]> Power BI has direct support for reporting on SharePoint lists and documents whether SharePoint is on-premises or in the cloud. Getting at your data is relatively straightforward if you know the URL, but as I’ve written about before, SharePoint data can be idiosyncratic. Text and number fields work right away, but more complex data types require a little more thought, and this is certainly true of multi value fields.

Multi-value fields are really SharePoint's way of approximating the behavior of a one-to-many relationship in a relational database. SharePoint can store list-based data, and even though it has the concept of a lookup field, it is not a relational database. Chances are, however, that if we're reporting on this type of field, we want it to behave as though it were. Fortunately, Power BI gives us several options for doing this.

The List

Consider the following list that contains a multi-value choice field named Amenities:

In the view, the different values are flattened, and displayed as a single value using a comma to separate them. How do we go about reporting on this data in Power BI? There’s no right answer, as it depends on the report requirements, but there are several options. In all cases the data first needs to be brought into Power BI Desktop.

Loading the Data

We first launch Power BI Desktop, select “Get Data” and then choose SharePoint Online list (if connecting to SharePoint Online) or SharePoint List (if using SharePoint Server). We are then prompted for the URL of the SharePoint Site. The dialog is titled SharePoint lists, but the value is actually the URL of the site, NOT the list itself. Once this is entered we are prompted for credentials if we haven’t connected to this site before. After entering credentials, we can select the list that we want to report on. In our case, it’s named “Listings”. We select it, and then click on the Edit button.

Once the data loads in, one of the first things that you'll notice is that there are a lot of columns to choose from, and it's a good idea to remove the data that you don't need. In this case, we can remove the ContentTypeId column and everything to the right of it, with one important exception: we want to keep the FieldValuesAsText column (we'll come back to that shortly). Remember, for our purposes here, we want to report on the multi-value column, Amenities. The simplest way to represent this data is as a string of comma separated values, which is the same way it is shown in a SharePoint view. It is also possible to use this data in a more sliceable, or structured, way. Let's start with the simplest.

Extracting a Simple Text Field

If our report is to show all the amenities associated with a listing, then a simple text string will do the job. Examining the data in the Query Editor, you will notice that there are no values displayed for Amenities, the way that there are for simpler field types like "Sold" or "Asking Price". Instead, each cell contains a "List" value. Each of these lists contains the Amenities values for its record, and clicking on one will extract those values for that one item. However, we want to extract the values for all items, so we need to expand the column by clicking on the expand icon in the column header.

We are then given two options, "Expand to New Rows" or "Extract Values". To flatten the values into a single text value, we select "Extract Values".

We are then given the opportunity to choose a delimiter, and after that, the values in the column are replaced with simple text values.
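
The step that the query editor generates for "Extract Values" with a comma delimiter looks roughly like this sketch (the site URL is a placeholder):

let
    Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/realty", [ApiVersion = 15]),
    Listings = Source{[Title = "Listings"]}[Items],
    // flatten each list of amenities into a single comma separated string
    Flattened = Table.TransformColumns(Listings, {{"Amenities", each Text.Combine(List.Transform(_, Text.From), ", "), type text}})
in
    Flattened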

Another way to accomplish the same thing is to use the FieldValuesAsText column (referenced above). Each cell in this column contains a single record holding the textual values for all of the complex data types in a SharePoint list item. If you are doing reporting with SharePoint data in Power BI, FieldValuesAsText is your friend. To use it, we just click the expand icon in its column header, deselect all of the columns that we don't need (which in our case is everything except Amenities) and click OK. We are returned the contents of the list in a comma separated string for each item.

At this point, we can report on the data.

We click on Close and Apply in the ribbon to return to the report canvas. We add a table visual, and add some of the fields like address, street number, and Amenities to it. You’ll see that the table looks much like it would in a SharePoint list. Next, we add a slicer to the report page, and use one of the other columns, like City. Clicking on a city will filter the table by that city as expected. Now, if we want to filter by Amenity, we can add the Amenity column to the slicer, but it won’t behave the way we want it to. Each combination of amenity values will be represented as a single value. We are unable to slice on a single value like “Garage”.

Flattened multi value field as slicer

To use discrete values for our multi-value field, we need to do a little more work in the Get Data (Power Query) interface.

Extracting Discrete Values from a Multi-Value Field

We follow the initial steps from the "Simple Text Value" section above, but when the expand icon is selected, we select "Expand to New Rows" and NOT "Extract Values". When this is done, each row is copied to accommodate all of the individual Amenity values. If an item has three Amenity values, we wind up with three rows that are identical except for the Amenity value.

Expanding to new rows
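
In M, "Expand to New Rows" is a single call to Table.ExpandListColumn. A sketch, assuming a placeholder site URL:

let
    Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/realty", [ApiVersion = 15]),
    Listings = Source{[Title = "Listings"]}[Items],
    // one row per amenity; the values of all other columns are repeated
    Expanded = Table.ExpandListColumn(Listings, "Amenities")
in
    Expanded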

We can now go to the design surface and add a table, and a slicer that uses the multi-value column, and now we can filter on discrete values. This may be sufficient in many cases. However, if we create a measure based on an aggregate of the table (like COUNTROWS or SUM), the value will not be correct when no slicer value is selected, because the measure operates on all of the duplicated rows.

Incorrect number of listings

Again, this approach may work in many situations, but for ultimate flexibility, we need a true one-to-many relationship. Luckily, Power Query gives us a few simple tools to do this.

Creating a One-to-Many Relationship

Again, we follow the initial steps from the "Simple Text Value" section above, but do not click on the expand icon in the Amenities column. A one-to-many relationship requires at least two tables, so now is the point where we need to add a second one. We do so by copying the first: we right click on the query in the Queries pane and select "Duplicate".

This creates a second query identical to our first, which will load into another table. We can then give the new query a better name, like “Amenities” in our case. Next, we select the Id, and the Amenities columns, right click and select “Remove Other Columns”, and we are left with only the ID and Amenities columns. Now we select the expand icon and select “Expand to New Rows”.
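
The resulting Amenities query, sketched in M (placeholder site URL again), ends up as little more than a column selection and a list expansion:

let
    Source = SharePoint.Tables("https://contoso.sharepoint.com/sites/realty", [ApiVersion = 15]),
    Listings = Source{[Title = "Listings"]}[Items],
    // keep only the key and the multi-value column
    KeyAndAmenities = Table.SelectColumns(Listings, {"Id", "Amenities"}),
    // one row per Id/Amenity combination
    Expanded = Table.ExpandListColumn(KeyAndAmenities, "Amenities")
in
    Expanded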

We now go back to the original query and remove the Amenities column. The result is two tables, Listings and Amenities, with Listings containing the bulk of the information. We then click on Close and Apply in the ribbon to load the data into the model, and on the relationships icon to take us to the relationship builder. We need to establish a relationship between the two tables, and we do so by clicking on the Id field in one table and dragging it to the Id field of the other. Finally, we double click on the relationship line between the tables, and make sure to select "Both" for Cross filter direction – that way, Amenity values can filter Listing values. If "Both" is selected, the directional arrow will point both ways.

Now, the same visuals from above can be added using the Amenity column from the second table as a slicer, and everything should work as expected. Listings will be filtered by the selected value, if no amenities are selected, the correct number of listings will be shown in the data card, and no duplication will appear in the table.

SharePoint multi-value columns are an attempt to replicate the capability of a relational database. With a few simple steps, Power BI can take SharePoint data, normalize it, and analyze it as if it were a true relational database.

Using Power BI for Facebook Location Tracking https://whitepages.unlimitedviz.com/2017/11/facebook-location-tracking-power-bi/ https://whitepages.unlimitedviz.com/2017/11/facebook-location-tracking-power-bi/#comments Wed, 29 Nov 2017 21:32:47 +0000 https://whitepages.unlimitedviz.com/?p=15493 Continue readingUsing Power BI for Facebook Location Tracking]]> Three years ago, I wrote an article about how to map your Facebook check-ins in Excel using Power Query and Power View. Given that Power View has, for all intents and purposes, gone the way of the Dodo, I have been meaning to rewrite it using Power BI to do Facebook location tracking. During my recent Power BI tutorial at the European SharePoint Conference, I was also informed that the Power Query behind it no longer works. Something had to be done!

The Facebook Graph API that Power Query uses changed significantly about a year ago, making location data much more difficult to discover. However, it is still possible to do, and below, I outline the process for working with Facebook data in Power BI. The example maps and analyzes Facebook check-ins, but the principles apply to all sorts of Facebook content.

Building the Query

To begin with, you’ll need to be running Power BI Desktop. You can do everything necessary with Desktop, but if you want to deploy to the service for automatic updates or for sharing, you’ll also need a Power BI account at app.powerbi.com.

First, launch Power BI Desktop, select "Get Data" and scroll down to Facebook. Alternatively, you can select the "Online Services" node and select Facebook from there. Click the Connect button and, if prompted, enter your credentials. Once that's done, you are prompted to enter an object id and a connection.

Connecting Power BI to Facebook data

There was a time when this could be used to gather data from any Facebook user, but with changes to the Facebook API, the Me object or a Facebook page are the only objects that will work. The connection parameter allows you to select the data stream that will be returned. In this case, you will need the "Posts" connection. Once you click OK, you will be presented with a preview of the results. You will want to transform this data, so the next step is to select Edit.

At this point, you will notice that there are columns for message, story, time, Facebook ID, and object link, which contains further data about comments and likes. However, what you won't find is any locational data, or even an indication as to the type of Facebook post. This will come as a surprise to people that used earlier versions of the Facebook connector. The data is, however, still available.

If you look at the data source, you'll notice that the "Me" and "Posts" parameters entered above were used to form a URL. This URL queries the Facebook Graph, and the default result of that query is what you receive.

Power BI connecting to the Facebook Graph

Checking the Facebook Graph documentation, you will find many different options for the Posts query. Most important is the ability to specify what data it returns, and indeed, there is a field named "place" that will return any location data for a post. Appending "?fields=place" to the query will cause the place field to be included in the result. However, if you do this, you will notice that most of the default fields are no longer included. You therefore need to include all wanted fields explicitly. Additional fields are specified by separating them with commas. In our case, we will also return the fields for the picture and the post URL. Our query is therefore:

Source = Facebook.Graph("https://graph.facebook.com/v2.8/Me/posts?fields=place,message,story,created_time,id,permalink_url,picture")

This query will return the posts and any available location data. If you want to only get the posts with location data, you can also add a filter specification to the query by appending “&with=location” to it. The full query therefore becomes:

Source = Facebook.Graph("https://graph.facebook.com/v2.8/Me/posts?fields=place,message,story,created_time,id,permalink_url,picture&with=location")

Adding query parameters to the Facebook Graph in Power BI

At this point, we have location data, but it’s in the form of a record, and you need to expand it. You can do so by clicking the expander icon at the top right of the column header. The only field that you want to retain is location, so first deselect the “Use original name…” checkbox and the “(Select All Columns)” checkbox, then select location, and click “OK”.

Expanding the Facebook Place field in Power Query

This returns location, also in the form of a record column. Repeat the same steps to expand this column, but this time select all of the fields.

Expanding the Facebook location field in Power Query

After selecting OK, your data has 4 new columns for the geographic information.

At this point, you need to flag the data types of your columns. Set the latitude and longitude to the decimal type, the created time column to Date/Time/Timezone and delete the object id column. If like and comment data is wanted, it can be added later in a separate query. Once the correct data types are set, select Close and Apply to load the data into the data model.
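
Pulling the whole query together, the M ends up looking roughly like the sketch below; the step names are illustrative, and the two expansions mirror the dialog steps described above.

let
    Source = Facebook.Graph("https://graph.facebook.com/v2.8/Me/posts?fields=place,message,story,created_time,id,permalink_url,picture&with=location"),
    // first expansion: keep only the location field from the place record
    ExpandPlace = Table.ExpandRecordColumn(Source, "place", {"location"}, {"location"}),
    // second expansion: bring out the four geographic columns
    ExpandLocation = Table.ExpandRecordColumn(ExpandPlace, "location", {"city", "country", "latitude", "longitude"}, {"city", "country", "latitude", "longitude"}),
    // set the data types described above
    Typed = Table.TransformColumnTypes(ExpandLocation, {{"latitude", type number}, {"longitude", type number}, {"created_time", type datetimezone}})
in
    Typed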

Building the Report

At this point you have all the data you need. If you want to use the permalink url and the picture columns to display links and pictures, you’ll need to flag the data category appropriately. You’ll also need to do this for your geographic columns. This is done by selecting the field in the fields selector, then opening the “Modeling” tab, and selecting the appropriate value for “Data Category”. Once done, the url field will render as a link or icon, the picture field will render as an image, and the geo fields will work on a map.

Setting Data Category properties in the Power BI data model (PowerPivot)

The precise values to set these fields to are as follows:

Field – Category
city – City
country – Country/Region
latitude – Latitude
longitude – Longitude
permalink_url – Web URL
picture – Image URL

Once set, it is possible to build a report displaying check-ins on a map, as a function of time, along with slicers and detail. By adding comment and like data to the model as well, it is possible to create a report like the one below. To see how it was built, download this Power BI Desktop template file, open it in Power BI Desktop, and enter your credentials. The report will populate with your check-in data, and you can then see precisely how this report was built. If you just want to see all my Facebook check-in data, you can slice and dice the report below. If everything is working properly, it should be updated daily. You can also see it fullscreen here – http://link.tygraph.com/FBJPW

The important thing to note is that although the Power BI connector may not retrieve the data that you need for analysis by default, the full power of the Facebook graph is there for you to take advantage of. This is particularly valuable for doing analysis of Facebook pages.

Get Power BI Desktop from the Microsoft Store https://whitepages.unlimitedviz.com/2017/10/get-power-bi-desktop-microsoft-store/ https://whitepages.unlimitedviz.com/2017/10/get-power-bi-desktop-microsoft-store/#comments Thu, 12 Oct 2017 18:07:25 +0000 https://whitepages.unlimitedviz.com/?p=15476 Continue readingGet Power BI Desktop from the Microsoft Store]]> If you’re a Power BI Desktop user, you are likely familiar with the monthly update cadence. Once per month, a new Power BI Desktop is released, and normally it contains at least one new feature that you WILL want to use. In case you miss it, a little indicator will light up in the bottom right hand corner that will prompt you to update. Of course, you normally don’t notice this until you’re in the middle of designing something, and it’s never a good time.

When you do click on the prompt, you're taken to a web page from which to download PBI Desktop. Doing so can take a few minutes (it's relatively large), and when the download starts, more often than not, you've forgotten to exit the current PBI Desktop. Once you do, you can run the installer, and then you're up to date. For 30 days. While I think that most of us welcome the new features, the update process itself is cumbersome. Thankfully, the availability of Power BI Desktop in the Microsoft Store removes this pain.

Power BI Desktop – Microsoft Store Edition

Getting Power BI Desktop from the Windows store allows you to have it automatically updated in the background whenever new versions are available, just like any other app in the store. To be sure, it is not a separate “Microsoft Store Edition” of the product, but precisely the same product in a Windows Centennial (presumably) wrapper.

Installing Power BI Desktop from the Store

Before installing the store version of Power BI Desktop, it’s a good idea to uninstall the older version. This is not required, and the two can operate side by side, but this will most likely be confusing. The non-store version is uninstalled by opening “Programs and Features” in Control Panel and uninstalling from there. As an aside, the store version will not appear in this list – like other apps, it will only appear in the start menu, and if necessary can be uninstalled by right-clicking on the tile.

To install PBI Desktop, open the store and search for Power BI. At the moment, PBI Desktop does not appear in the search dropdown, but the Power BI app does. The app is not what you're looking for here. Execute the search and you should see two results, Microsoft Power BI Desktop and Microsoft Power BI. The icons for the two are very similar (but not identical!). Be sure that you select the one with "Desktop".

Installing Power BI Desktop from the Microsoft Store

The Power BI app is the Windows 10 app for Power BI and is designed for a mobile experience. It is like the versions found on iOS and Android. Selecting the Desktop tile will download and install Power BI Desktop.

Copy Your Saved Settings

One of the downsides of this new isolated model is that all the stored credentials, cached data and saved settings from your prior versions of Power BI desktop do not come forward into the store version. However, the good news is that you can bring them forward manually. To do so, navigate to your AppData folder for Power BI, likely at:

C:\Users\[YourUserName]\AppData\Local\Microsoft\Power BI Desktop

Copy everything in this folder (including subfolders) into:

C:\Users\[YourUserName]\AppData\Local\Packages\Microsoft.MicrosoftPowerBIDesktop_8wekyb3d8bbwe\LocalCache\Local\Microsoft\Power BI Desktop Store App\

The latter is simply the isolated store app version of the former. This content can be relatively large, so it’s a good idea to delete it from the source once finished. Uninstalling the original application does not clean up this content – in fact in my case, as advised above, I uninstalled it prior to the installation of the store app itself.
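
If you prefer to do the copy from a command prompt, a single robocopy command will handle it, assuming both installs used the default locations shown above:

robocopy "%LOCALAPPDATA%\Microsoft\Power BI Desktop" "%LOCALAPPDATA%\Packages\Microsoft.MicrosoftPowerBIDesktop_8wekyb3d8bbwe\LocalCache\Local\Microsoft\Power BI Desktop Store App" /E

The /E switch copies all subfolders, including empty ones.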

Once you’re up and running with the store version, you should never need to worry about manually updating Power BI Desktop again.

OneDrive is Ready – It's Seriously Time to Ditch the X: Drive https://whitepages.unlimitedviz.com/2017/09/onedrive-is-ready-its-seriously-time-to-ditch-the-x-drive/ https://whitepages.unlimitedviz.com/2017/09/onedrive-is-ready-its-seriously-time-to-ditch-the-x-drive/#comments Fri, 29 Sep 2017 14:33:49 +0000 https://whitepages.unlimitedviz.com/?p=15429 Continue readingOneDrive is Ready – It's Seriously Time to Ditch the X: Drive]]> If you are an information worker of any sort, and have been at it for more than a couple of years, you've experienced it – the X: drive. Or the S: drive, P, R, whatever the letter. It's the drive letter that is mapped to a network based file share that contains most of the company's documents.

My first experience with IT was in 1989, setting up and managing a Novell Netware 3 network for a university department. Logging into the network (through a DOS prompt) would automatically add a drive to your machine with all of the resources that you needed, and all the storage you could ever want (measured in MB). It was "magical".

This basic model exists to this day. We’ve tried to move away from it, we’ve tried very hard. We’ve had large monolithic document management systems imposed from above like FileNet and Documentum. These solutions gained success in specific areas, as they were mandated from above. SharePoint itself came along and democratized document management to a much broader degree, but the pesky X: drive still persists. Why?

One word. Usability.

End users want to be able to open up File Explorer, navigate to their drive, browse their folder structure, and work with their documents. The drive mapping metaphor has succeeded so well because it fits this scenario perfectly, and because of its familiarity. Ever since personal computing began, we've accessed file storage using drive letters.

Users use formal document management systems reluctantly. This is often due to overzealous metadata requirements (just fill out this 20-field form to store your document), burdensome procedures, or performance issues. However, given the choice, they retreat to the familiarity of their file systems, and their X: drives, more often than not.

Consultants and vendors preaching a new way of doing things are, in the end, shouting against thunder. We can't expect users to adapt to new systems quickly – what we need, at least transitionally, is for the systems to adapt to the users. This is where OneDrive comes in.

One of the most compelling features of SharePoint 2013, in my opinion, was OneDrive for Business. The reason I felt that way was that, for the first time, SharePoint document management was tightly integrated into File Explorer. There had been previous attempts at synchronizing (SharePoint Workspace), but those required separate client software and a lot of manual dragging and dropping.

The implementation of OneDrive for Business was initially hobbled by restrictions and limits, and was rather confusing, limiting adoption. However, through the combination of the current OneDrive sync client, with the Files On Demand feature available in the Windows Fall Creators update, OneDrive is truly ready for widespread adoption. Over the past few years, OneDrive has become both reliable and fast, and Files on Demand combined with in-context sharing put it over the top.

OneDrive Files on Demand

Files On Demand in File Explorer

Screenshot taken while flying on airplane mode from a laptop containing a single 256 GB drive.

Files on Demand allows you to easily control what files are synchronized to your local device, while still being able to see all your file assets, directly from File Explorer.

In the figure above, the cloud icon indicates that that folder is not currently stored locally. That’s a good thing, because opening up the folder properties reveals that it is over 1 TB in size, and the drive on the laptop (that I’m currently writing this with) is only 256 GB. Moreover, that screen shot was taken while flying, and totally disconnected. I could still see things that were not on my local device.

If connected, cloud files can be interacted with the same way that local files can, by any application. Opening them just requires a little more time as the file is downloaded in the background. If you will be offline, bringing a file local is a simple matter of right clicking on that file, or folder, and selecting “Always keep on this device”.

Files on Demand is currently available to Windows Insiders, and will be generally available with the Windows Fall Creators Update on October 17, 2017.

In Context Sharing

OneDrive sharing from File Explorer

The new OneDrive in context sharing experience

Up until very recently, sharing a file from Explorer was a rather frustrating experience. You could identify your file, right click on it, click on share, and then a browser window would pop open, and if you were lucky, you would be presented with a view of the OneDrive web user interface. This experience was jarring, and required multiple steps.

Over the past summer, OneDrive rolled out updates that change this behaviour significantly. Now clicking on share brings up the dialog above and sharing is done completely from there. No context switching, and no authentication failures.

Time to move

These two new features, combined with the performance and reliability improvements of the last few years, put OneDrive over the top. Finally, all of the usability issues have been addressed. End users can live completely in File Explorer should they wish to do so, and be oblivious to the workings of OneDrive and/or SharePoint. At the same time, however, they will gain significant benefits compared to the shared file system.

However, OneDrive provides much more than simply an alternate storage location for your files. Once content is stored in OneDrive, a whole host of options opens up. The organization benefits because all this content is immediately made discoverable through technologies like Delve and Search. File access can be tracked, helping users understand what content others find valuable. There are, however, many immediate benefits that accrue directly to the users themselves. I want to call out three of them, but there are many more.

OneDrive File Viewing

OneDrive comes with a long list of file viewers. These viewers allow the contents of a file to be viewed without opening the underlying application, which tends to be significantly faster. In fact, the application itself does not need to be installed. This is valuable in itself, but when combined with Files On Demand, a file can be viewed whether or not it is even stored locally. A large Adobe Illustrator file can be viewed without it even being present on disk. This capability is also available on the Mac and in the OneDrive mobile clients, and therefore this very same file can be viewed on iPhone, Android, or iPad – anywhere the OneDrive application is available. This, to my mind, is a game changer.

Sharing

Sharing with OneDrive is not new, but sharing directly from the Explorer window is. That sharing experience is also now being consolidated across devices, and embedded into Microsoft Office applications. In addition, OneDrive files can be shared externally with an anonymous link, or securely with others that have a Microsoft account (personal or organizational), and what will shortly be available is the ability to securely share files externally with people that have any type of account.

Files Restore

Announced at Ignite 2017, Files Restore provides end users with the ability to easily track all versions of their files for the past 30 days, and to instantly restore them to any point in time in that window. Administrators have long had this capability through a set of operations, but Files Restore puts this capability into the hands of the end user with a simple user interface. Users can rest assured that their files are safe not only from disaster, but from their own mistakes, malware, ransomware, or anything else. Files stored in OneDrive are safe.

These are but three compelling benefits that users can enjoy by moving content to OneDrive. There are many more. For a good overview, and to hear all of the OneDrive announcements from Ignite 2017, be sure to check out “OneDrive – Past, Present and Future“.

It’s time to ditch the shared X: drive once and for all.

Understanding the Power BI Capacity Based SKUs
https://whitepages.unlimitedviz.com/2017/09/understanding-the-power-bi-capacity-based-skus/ (Tue, 26 Sep 2017)

Power BI licensing has changed again. This week at Microsoft Ignite, Microsoft introduced a new capacity based SKU for Power BI Embedded, intended for ISVs and developers: the A SKU. This brings the number of capacity based SKU families to three, each with numerous sub-SKUs, which means there are now several ways to embed content using Power BI Pro, Power BI Embedded, or Power BI Premium. The trick is to know what is needed in which circumstances. This post will attempt to help with the distinctions.

The SKUs are additive in nature, with A (Power BI Embedded) providing a set of APIs for developers, EM (Power BI Premium) adding ad hoc embedding features for organizations, and P (Power BI Premium) providing a SaaS application that contains everything that the Power BI service offers. For some background, the EM SKU was initially introduced to serve the needs both of Independent Software Vendors (ISVs) and of organizations that needed to do some simple sharing within the organization, giving them access to the latest Power BI features. However, ISVs have a different business model than enterprises, which is why the A series was introduced.

Power BI Embedded A SKUs

The A SKU (A is for Azure) is a Platform-as-a-Service offering and a set of APIs for ISVs who are developing an application to take to market. These ISVs use Power BI as the data visualization layer to add value to their own application. As such, Power BI assets contained in Power BI Embedded capacities cannot be accessed by licensed Power BI users; they are accessed only by customers of the ISV’s application.

Power BI Embedded capacity is billed hourly, can be purchased hourly, and can be paused – meaning no long-term commitments to a specific capacity. This pausing capability is critical for small ISVs that don’t yet have the revenue stream to support monthly commitments, and it addressed one of my largest concerns over moving Power BI Embedded over to the Premium model. Power BI Embedded is purchased through Microsoft Azure. Additionally, Power BI Embedded can scale up and down as needed to accommodate the requirements of the ISV business model as the vendor’s application grows.

Running the entry-level A1 capacity for a full month equates to approximately $750, so while the capacity of the Power BI Embedded A1 SKU is equivalent to that of the Power BI Premium EM1 SKU, ISVs pay a slightly higher effective monthly price for the flexibility mentioned above.

There are 6 sizes of Power BI Embedded available, each capacity mapping to an existing Power BI Premium capacity so ISVs can grow their business as needed. Pricing starts at about $1/hour:

Name Virtual cores Memory (GB) Peak renders/hour Cost/hour
A1 1 3 300 ~$1
A2 2 5 600 ~$2
A3 4 10 1,200 ~$4
A4 8 25 2,400 ~$8
A5 16 50 4,800 ~$16
A6 32 100 9,600 ~$32
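
To illustrate why the hourly billing and pausing capability matter, here is a minimal back-of-the-envelope sketch in Python. It assumes the approximate $1/hour A1 rate from the table above, a 730 hour month, and a hypothetical usage pattern of 10 hours a day across 22 weekdays; actual Azure pricing varies by region and currency.

    # Rough monthly cost of an A1 capacity at ~$1/hour (rate taken from the
    # table above; approximate, and varies by Azure region and currency).
    A1_HOURLY_RATE = 1.00     # ~US$ per hour
    HOURS_PER_MONTH = 730     # average hours in a month

    always_on = A1_HOURLY_RATE * HOURS_PER_MONTH       # ~$730/month
    # Hypothetical pattern: paused outside business hours
    # (10 hours/day, 22 weekdays/month).
    business_hours_only = A1_HOURLY_RATE * 10 * 22     # ~$220/month

    print(f"Running 24/7:         ~${always_on:,.0f}/month")
    print(f"Business hours only:  ~${business_hours_only:,.0f}/month")

In this example, pausing the capacity for nights and weekends cuts the bill by roughly two thirds, which is exactly the flexibility that the monthly-billed SKUs cannot offer.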

Power BI Premium EM SKUs

The EM SKU (EM is for embedding – NOT Embedded) covers everything contained in the Power BI Embedded A SKU, but also offers the ability to share Power BI reports within an organization through content embedding. Currently, this can be accomplished through the SharePoint Power BI web part for modern pages, or through tabs in Microsoft Teams.

There are three EM SKUs, and while the largest, EM3, can be purchased monthly through Office 365, the smaller two (EM1 and EM2) must be purchased through Volume Licensing. Volume licensing represents an annual commitment, which may be an incentive for ISVs to remain on the A SKU even if they are not pausing their service. EM SKUs cannot be paused – a month is the smallest available billing unit. Additionally, scaling on EM SKUs requires that you retain your monthly or annual commitment to the initially purchased SKU until the end of the contract term.

Details on the EM SKUs are below:

Name Virtual cores Memory (GB) Peak renders/hr Cost
EM1 1 3 1-300 $625/mo
EM2 2 5 301-600 $1,245/mo
EM3 4 10 601-1,200 $2,495/mo

Power BI Premium P SKUs

The P SKU (P is for Premium, but it helps to think of it as “Power BI Service”) is the “all in” version of Power BI licensed through capacity. It offers everything that is available with Power BI, which includes everything available in the A and EM SKUs. It also offers the ability to share Power BI assets in the Power BI service through apps, or if personal workspaces are in a Premium capacity, through dashboard sharing.

The entry point of the P SKU is significantly higher than EM as well, but you’re getting a business application vs a set of APIs. It also comes with significantly more resources attached to it. For example, P1 comes with 8 virtual cores and 25 GB of RAM, whereas the largest EM offering is EM3, with 4 cores and 10 GB RAM.

All the P SKUs can be purchased through the Office 365 administration center, and can be billed monthly. Details are below:

Name Virtual cores Memory (GB) Peak renders/hr Cost
P1 8 25 2400 $4,995/mo
P2 16 50 4800 $9,995/mo
P3 32 100 9600 $19,995/mo

What to use when

Sharing capabilities by SKU:

Capability | PBI Embedded A | PBI Premium EM | PBI Premium P
Embed PBI reports in your own application | Yes | Yes | Yes
Embed PBI reports in SaaS applications (SharePoint, Teams) | No | Yes | Yes
Share PBI reports, dashboards and datasets through Power BI apps (workspaces) | No | No | Yes
Ad hoc dashboard sharing from personal workspaces | No | No | Yes

With the addition of the Power BI Embedded capacity based SKUs, many of the concerns around Premium pricing have been addressed. I would still like to see all EM SKUs available monthly, and to see a “P0” premium SKU, but it is now fairly clear which scenarios call for which licenses.

An ISV embarking on the use of Power BI Embedded will at the very least need a Power BI Pro license. When development gets to the point where sharing with a team is necessary, a Power BI Embedded A SKU can be purchased from Azure. Once 24/7 availability is required, the ISV may wish to switch to a Premium EM capacity. An ISV should never require a P SKU unless capacity demands it, or they have additional requirements.

An organization that has a few data analysts or power users who need to share reports with a broader audience would likely be well served by one of the EM SKUs. This scenario assumes that the organization is also using SharePoint Online, Microsoft Teams, or both. This approach allows the power users (who will also require a Pro license) to embed Power BI content within a SharePoint page or a Microsoft Teams tab, where it can be accessed by users without a Pro license. The organization would need more than 63 users accessing the reports for this to be financially viable.

Finally, larger organizations with a significant investment in Power BI, or organizations that don’t currently utilize SharePoint Online or Microsoft Teams, would benefit from a Premium P capacity. With this, the Power BI interface itself can be used by end users to access shared content without a Pro license. Given its monthly cost, compared to the monthly cost of Pro, the organization would need at least 500 active report consumers for this approach to be practical.
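
The break-even arithmetic behind the 63 and 500 user figures above is simple; here is a minimal sketch using the US list prices cited in this post ($10/user/month for Pro, $625/month for EM1, and $4,995/month for P1). Real-world decisions will of course also weigh features, not just price.

    # Break-even: dedicated capacity cost vs. per-user Pro licensing.
    # Prices are the US list prices cited in this post.
    PRO_PER_USER_MONTHLY = 10   # $/user/month for Power BI Pro
    EM1_MONTHLY = 625           # $/month for the entry-level EM SKU
    P1_MONTHLY = 4995           # $/month for the entry-level P SKU

    em1_breakeven = EM1_MONTHLY / PRO_PER_USER_MONTHLY  # 62.5 -> 63+ consumers
    p1_breakeven = P1_MONTHLY / PRO_PER_USER_MONTHLY    # ~500 consumers

    print(f"EM1 beats individual Pro licenses beyond {em1_breakeven:.1f} users")
    print(f"P1 beats individual Pro licenses beyond {p1_breakeven:.0f} users")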

SharePoint and the New SSRS/PBIRS Native Mode Report Viewer Web Part
https://whitepages.unlimitedviz.com/2017/09/sharepoint-and-the-new-ssrspbirs-native-mode-report-viewer-web-part/ (Fri, 22 Sep 2017)

When SQL Server 2016 was released, I wrote a post that covered how to integrate SharePoint 2016 with SQL Server Reporting Services (SSRS) in native mode. I wrote that post because although SSRS was still available in SharePoint Integrated Mode, many of the new features were only in Native mode. It also wasn’t difficult to surmise that Integrated mode had a limited lifespan at that point, a fact that was confirmed in November 2016.

As my integration article pointed out, one of the glaring problems with integrating Native mode with SharePoint was that the Native mode web part had significantly less capability than the one available with Integrated mode, not to mention the fact that it was deprecated. Parameter support was first and foremost among the missing features. Organizations that had incorporated SSRS reports into their SharePoint based dashboards couldn’t simply move to Native mode, because those dashboards relied heavily on the ability to pass parameter values to the SSRS reports through the web part. SSRS did make it easier to embed reports into pages without a web part through the rs:Embed URL directive, but there was no real way to make that dynamic.
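
For reference, the rs:Embed directive is simply appended to a report’s web portal URL; the server name and report path below are hypothetical:

    https://reportserver.contoso.com/reports/report/Sales/MonthlySales?rs:Embed=true

This renders the report without the surrounding portal chrome, which makes it suitable for placing in a page, but the URL is static; there was no real way to vary its parameter values dynamically from within a SharePoint page.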

The introduction of Power BI Report Server (PBIRS), with its ability to render Power BI reports, also left the SharePoint dashboard folks without a solution. Although PBIRS is a superset of SSRS, it is only available in Native mode; in fact, there’s no longer any need to mention modes at all. SharePoint dashboards that used SSRS report components were stuck at the 2016 version.

Until now.

Today, Microsoft unveiled the new, fully capable SSRS/PBIRS Report Viewer web part for SharePoint.

The web part configuration options: Integrated mode and Native mode

“Meet the new boss, same as the old boss” – Pete Townshend

The web part allows paginated reports from either SSRS in Native mode, or Power BI Report Server to be embedded into a SharePoint page. Essentially, it brings all the capabilities of the integrated mode web part to the native mode web part. Organizations that have bet on SSRS running in Integrated mode can now seamlessly move their reports to SSRS Native mode, or to Power BI Report Server. Let’s have a quick look at some of the details.

Parameter and view support

All the options that we’ve come to expect from the integrated mode web part are here in the new web part. Toolbar options can be controlled at a fine level and web parts can be connected to pass in parameter values. In fact, as seen above, the only difference between the two modes now is the report location option.

C2WTS and Kerberos required… unless

With Integrated mode, Kerberos constrained delegation (KCD) was only required if you needed the user’s identity passed through to the data source. This was because Integrated mode and SharePoint itself shared the same access mechanism. Native mode and PBIRS have their own membership providers, and this new (server side) web part needs to connect to the report server itself to authenticate the user. This is an additional “hop”, and therefore requires Kerberos constrained delegation. That only gets the user as far as the report server itself; to pass the user’s identity further downstream, KCD will also need to be set up between the report server and the data source(s).

SharePoint 2016 in most cases uses claims authentication, and SSRS/PBIRS does not. This also means that the Claims to Windows Token Service (C2WTS) must be running in your SharePoint environment; it allows the user’s Windows (NTLM) identity to be extracted and sent via KCD to the SSRS/PBIRS server.

There is one case where KCD is not required: when SSRS is running on a machine in the SharePoint farm along with C2WTS. In a small environment, this may be suitable, but if scaling is a factor, you may need to crack open your Kerberos books.
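
As a very rough sketch of what that setup involves (the server and account names below are hypothetical; consult the Reporting Services and C2WTS documentation for the complete procedure):

    REM Register an SPN for the report server's service account, so that
    REM Kerberos tickets can be issued for its HTTP service:
    setspn -S HTTP/pbirs.contoso.com CONTOSO\svcPBIRS

    REM Then, in Active Directory, configure the SharePoint web application
    REM pool account and the C2WTS account for constrained delegation to that
    REM HTTP service, selecting "Use any authentication protocol" so that the
    REM claims-derived Windows identity can be used for the hop.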

Paginated reports only

Even though SSRS supports the new RSMobile report format, and PBIRS now also supports both Interactive (Power BI) and Analytical (Excel) reports, this web part is built to render paginated reports only. Up until 2016, paginated was the only report type supported by SSRS, so this will not be a problem for those looking to move forward. For those looking to embed the other formats in SharePoint on premises, however, there are other options.

Both Power BI and mobile reports can be embedded into a SharePoint page using the script editor web part along with the rs:Embed=true URL directive. Parameters can be passed to mobile reports using this technique, and text filters can be passed to interactive reports as well. Excel reports can be embedded with the Excel Services web part, as they always have been, which itself allows for passing slicer values. For this to work, however, the Excel files need to be stored in SharePoint, not PBIRS.
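
As an illustration of the script editor approach, the markup dropped into the web part might look something like the following minimal sketch; the server name and report path are hypothetical:

    <!-- Embed a Power BI report from PBIRS in a classic SharePoint page.
         Server and report path are placeholders. -->
    <iframe width="100%" height="600" frameborder="0"
            src="https://pbirs.contoso.com/reports/powerbi/Sales/MonthlySales?rs:Embed=true">
    </iframe>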

On-premises only

The new web part deploys to the SharePoint farm as a WSP solution file, which means that this is an on-premises solution only. Given that SSRS/PBIRS are also on-premises products, this shouldn’t pose much of a problem. It IS possible to embed on-premises reports into SharePoint Online pages with the script editor web part for classic pages, and the embed web part for modern pages. A complete summary of the report/SharePoint embed options can be seen below.

Report type | SharePoint Online modern page | SharePoint Online classic page | SharePoint on-premises
PBIX on Power BI Service | Power BI WP | Anonymous* | Anonymous*
PBIX on PBIRS | Modern embed content WP | Script editor WP | Script editor WP
RDL on PBIRS or SSRS Native | Modern embed content WP | Script editor WP | Report Viewer WP (NEW!)
RDL on SSRS Integrated mode | Modern embed content WP | Script editor WP** | SSRS WP
XLSX (stored in SharePoint) | N/A | Excel Services WP | Excel Services WP
RSMobile on SSRS/PBIRS (Native) | Modern embed content WP | Script editor WP** | Script editor WP

*  NOT recommended
** Requires adding the server to the HTML Field Security options in site collection settings

In summary, the new native mode web part will be very welcome for those organizations that have been feeling stranded by the deprecation of SharePoint Integrated mode. The very fact that it is now available also underscores that SharePoint still plays an important role in the Microsoft Business Intelligence ecosystem. It may no longer be a prerequisite, but it still adds value, and those are both very good things.

SQL Server Reporting Services vs Power BI Report Server – What’s the Difference?
https://whitepages.unlimitedviz.com/2017/09/sql-server-reporting-services-vs-power-bi-report-server-whats-the-difference/ (Mon, 18 Sep 2017)

Power BI Report Server (PBIRS) was first introduced in May 2017. Based on SQL Server Reporting Services (SSRS), it brings the ability to work with Power BI reports completely on premises, in addition to all the other capabilities of SSRS. Given this, it would be reasonable to conclude that PBIRS is the next version of, or a replacement for, SSRS, but that is not the case. I have heard people state that SSRS is “going away”, but SSRS remains a core part of the Microsoft BI stack. So, what are the differences between the two platforms? They boil down to features, licensing, and update cadence.

Features

Early builds of SSRS 2017 (V.Next at the time) contained the ability to render Power BI (Interactive) reports in addition to the “classic” RDL (Paginated) reports that SSRS is well known for, and the recently added RSMOBILE (Mobile) report type. However, when PBIRS was introduced, SSRS lost that capability, and from a feature standpoint, that really was the only difference between the two. The recent introduction of the Excel report type (Analytical) to PBIRS has further differentiated the two products.

From a features standpoint, the differences between the two products are straightforward: PBIRS is a superset of SSRS. It contains everything that SSRS has, and it adds the ability to render both Interactive (PBIX) and Analytical (XLSX) reports.

Licensing

Licensing is where things get a little more involved. SSRS was always included on the SQL Server installation media, but with SQL Server 2017, this is no longer the case; it is now a separate download (the RC version of SSRS 2017 is currently available for download here). However, the license for SSRS is still tied to your version of SQL Server. If you have a license for SQL Server Standard edition, you will be able to use the Standard features of SSRS; Enterprise unlocks the Enterprise features, and so on. As of the 2017 version, there is also no longer an Integrated mode of SSRS; it is Native mode only.

Power BI Report Server is licensed in one of two ways. Purchasing Power BI Premium capacity gives you a license to run the same number of on-premises cores as you have in the capacity (a P1 purchase, for example, covers an 8 core PBIRS server). This applies ONLY to the Premium P SKUs, not to others such as EM. The other way it can be licensed is by purchasing SQL Server Enterprise Edition with Software Assurance.

Release cadence

Just as with licensing, the timing of SSRS releases is tied to that of SQL Server: whenever a new version of SQL Server is released, a new version of SSRS is as well. This is not the case for PBIRS. Since PBIRS is considered a standalone product, this makes sense, and the constant pace of change in the Power BI service itself necessitates a more frequent update cadence.

As an example, PBIRS first came into General Availability (GA) in June 2017, and as of this writing (Sept 2017) is already in preview for its next release, whereas SSRS 2017 hasn’t yet gone to GA.

How to choose

The choice of platform will likely be straightforward, driven by requirements. If your organization only uses paginated reports on premises, you may find that SSRS is the more cost-effective option. If, on the other hand, you need to render interactive or analytical reports on premises, or you already have SQL Server Enterprise Edition with Software Assurance, then PBIRS will likely be your best choice. There are no circumstances that I can think of where both products would be advisable; if you have PBIRS, you have everything that SSRS offers and more.

Which Premium SKU is Needed to Embed Power BI Reports in SharePoint and Microsoft Teams
https://whitepages.unlimitedviz.com/2017/09/which-premium-sku-is-needed-to-embed-power-bi-reports-in-sharepoint-and-microsoft-teams/ (Sun, 17 Sep 2017)

A short time ago, I posted an article explaining the difference between a Power BI Pro license and Power BI Premium capacity, and the fact that you’ll need at least one or the other in order to share a Power BI report on a SharePoint page via the Power BI web part. Although that article didn’t mention it, the same requirement also holds for embedding a report in a Microsoft Teams tab.

Power BI Report embedded in a Microsoft Teams tab

Power BI Report embedded in a SharePoint Page

Since there are two major SKU types for Power BI Premium, and there was (and is) a fair amount of confusion in this area, I also published another article attempting to clear things up. While that article was based on all the information publicly available at the time, new information has shown that it is incorrect.

The two major SKU types are P and EM, with P standing for Premium and EM for Embedded. This matters because the two SKU types have significantly different entry points, and therefore costs.

The P SKU was the only one introduced when Premium was originally announced. It gives organizations the ability to place Power BI assets in a premium capacity container (a Power BI “app”), and once this is done, anyone can consume these assets whether or not they have a license.

Subsequent to this, an additional SKU (EM) was introduced to address Power BI Embedded. Power BI Embedded allows an ISV to use Power BI to add reports to their own applications. In this scenario, the reports run from the ISV’s tenant. Originally the assets were housed in Azure, but with the availability of Premium capacity, the decision was made to shift Power BI Embedded to this new model. Given that the requirements of an ISV are not the same as those of a general organization, the new EM SKU was introduced. It comes with a significantly lower entry point and cost, but also with significant restrictions. This is where the confusion sets in.

The wording around the restrictions on the EM SKU indicated that it could not be used for the SharePoint web part, and that a P SKU or a Pro license would be required for that use case. This is where my earlier article was incorrect. I have since had conversations with the product team, and have been informed in no uncertain terms that the EM SKU CAN be used for both SharePoint web part and Teams tab embedding of Power BI reports.

This is a very significant difference. An organization that uses Power BI casually, but has a few reports that it wants to share with a broad audience, was looking at a cost of almost $5,000 per month to do this. Given that the cost of a Pro license is $10/user/month, the organization needed at least 500 frequent report consumers before Premium was even worth considering. And given that the embedding features in both SharePoint and Teams require a Pro or Premium SKU, this could be a real disincentive to their use. However, with the EM SKUs starting at approximately $625/month for the entire organization, this becomes much more approachable, and it lowers the bar to entry significantly. This should result in significantly greater adoption of these Power BI embedding features, and consequently, of Power BI as a whole.

To be clear, there are still restrictions around the EM SKU. You cannot share Power BI apps with this SKU, but you CAN use it to embed reports in both SharePoint and Microsoft Teams.

Power BI Report Server Completes the Vision for On-Premises Reporting
https://whitepages.unlimitedviz.com/2017/08/power-bi-report-server-completes-vision-onpremises-reporting/ (Sat, 26 Aug 2017)

Microsoft today made available the August 2017 preview of Power BI Report Server 2017. This preview includes the long awaited support for embedded data models, as well as the ability to render Excel reports natively. This is a major step forward: with this release, Microsoft has completed the vision for its on-premises reporting platform that it first articulated in October of 2015.

Excel content being rendered in Power BI Report Server

The big news at the time was that the platform was stated to be SQL Server Reporting Services (SSRS). Not SharePoint, not PerformancePoint, but SSRS. SSRS was a mature product that quite competently provided a platform for operational reports. What it needed was some modernization and the addition of some analytical and self-service reporting capabilities. Several of these capabilities were subsequently included with the release of SSRS 2016.

Gone would be the days of configuring complex SharePoint farms just to be able to work with analytical reports (i.e. Power View, Excel). New features were being added to SSRS to make it a complete platform for both analytical and operational reports.

The Vision

The roadmap articulated four different report types, three of them analytical (by my definition) and one of them operational. These types line up with the reporting tools in the Microsoft BI stack:

Name | Type | Primary authoring tool | Extension
Paginated | Operational | SSRS Report Builder, SQL Server Data Tools | .RDL
Interactive | Analytical | Power BI Desktop | .PBIX
Mobile | Analytical | Mobile Report Designer | .RDLX
Analytical | Analytical | Excel | .XLSX

Therefore, reading between the lines, in order to be a complete reporting platform, SSRS needed to be able to render all of these report types. Paginated reports were of course always native to SSRS, and the roadmap announced that Mobile reports would be included in SSRS 2016 through the integration of Datazen. The roadmap further committed to SSRS being able to render Power BI files in the future.

SSRS 2016

SSRS 2016 shipped with some significant modernization improvements, including a much awaited HTML5 rendering engine, and it included Mobile reports. Mobile reports are delivered through the Power BI mobile application, and SSRS visuals can be pinned to Power BI dashboards.

Significant plumbing was done to move the platform forward in 2016, but it still only rendered 2 of the 4 report types.

In November 2016, it was further announced that the 2016 version of SSRS running in SharePoint Integrated mode would be the last; moving forward, Reporting Services will only run in Native mode. In the same announcement, as I noted in another post, the SSRS team committed for the first time to providing Excel report rendering capability as well.

Power BI Reporting Server

We first saw the on premises rendering of Power BI reports in the first community preview of SSRS V.Next in the fall of 2016. Those previews required that the reports be directly connected to SSAS tabular models, but they were ground-breaking just the same. A user could be totally disconnected from the web, and still render Power BI reports.

In May of 2017, Power BI Report Server (PBIRS) was announced. A less confusing name might have been SSRS Premium, because that is in essence what it is. PBIRS is everything that SSRS is, plus the ability to render Power BI reports. SSRS will continue forward as a product without Power BI rendering capabilities; it is just a licensing distinction.

The release today of the August 2017 preview of PBIRS allows for embedded data models, and therefore a much wider breadth of capabilities. These models cannot be automatically refreshed yet, but they will be at release; this is, after all, just a preview. If you need automatic refresh of these data models in the meantime, there is an excellent third party solution: PowerPivot Pro’s Power Update.

The inclusion of Excel report rendering capabilities means that PBIRS is a complete report rendering platform, more complete even than the Power BI cloud service.

Moving Forward

Now that the basic on-premises capability has been delivered, SSRS/PBIRS needs to pay back the debt incurred when SharePoint Integrated mode was deprecated. Chief among the missing features is the SSRS web part; the lack of a decent web part is a blocker for many organizations looking to move forward with this strategy. Some migration tools for moving from Integrated to Native mode (like this one that migrates in the other direction) would be highly useful as well.

Now with on-premises covering all the bases, it’s easy to spot a glaring hole in the cloud Power BI offering. While it supports all three types of analytical reports, there is currently no way to render operational reports in the cloud. Until this capability is provided, it appears that on-premises will have the most complete solution.

Power BI Embedded is not for Embedding Power BI Reports
https://whitepages.unlimitedviz.com/2017/08/power-bi-embedded-embedding-power-bi-reports/ (Wed, 23 Aug 2017)

NOTICE, Sept 17 2017 – The central thrust of this post is incorrect. I am leaving it here because it still contains valid information, but for an update, please see this article: Which Premium SKU is Needed to embed Power BI Reports in SharePoint and Microsoft Teams

I have run into this point of confusion several times since the GA of the Power BI Premium SKUs. As I mentioned in my post about licensing, the Power BI web part for SharePoint requires the user viewing the report to have a Pro license. Alternatively, if the organization has purchased Power BI Premium capacity, and the report has been deployed to that capacity, then all organizational users will be able to view the report in the web part.

The initial announcement about Premium licensing laid out three different SKUs: P1, P2 and P3. These SKUs are the “normal” SKUs intended for use by Power BI customers; the “P” stands for Premium. Subsequently, three additional SKUs were announced at the Data Insights Summit for use by ISVs. These SKUs are EM1, EM2, and EM3; the “EM” stands for embedded. The “embedded” in this case means Power BI Embedded, and that’s where the confusion sets in.

Power BI Embedded is the ISV offering for Power BI. With Power BI Embedded, software vendors can use Power BI as the reporting engine in their own applications. A number of vendors have taken advantage of this capability recently, including Nintex with their Hawkeye product, and ourselves with tyGraph for Yammer Reporting. With Power BI Embedded, all of the processing for the application is done in the vendor’s Power BI tenant; customers don’t require a Power BI license of any sort to use the applications. Recently, Power BI Embedded has moved to a premium model as well, which is why the EM SKUs exist. They are for purchase by software vendors to power their own applications.

If we look at the pricing for each of these SKUs (in US$/month), we can see that the EM SKUs are significantly cheaper, but they also come with the important restriction that they can ONLY be used by ISVs.

Capacity node | Total cores | Back end cores | Front end cores | Cost (US$/month)
P1 | 8 | 4 cores, 25 GB RAM | 4 cores | $4,995
P2 | 16 | 8 cores, 50 GB RAM | 8 cores | $9,995
P3 | 32 | 16 cores, 100 GB RAM | 16 cores | $19,995
EM1 | 1 | 0.5 cores, 3 GB RAM | 0.5 cores | $625
EM2 | 2 | 1 core, 5 GB RAM | 1 core | $1,245
EM3 | 4 | 2 cores, 10 GB RAM | 2 cores | $2,495

It may be natural to think that because your goal is to “embed” a Power BI report in SharePoint, you will be able to use one of the cheaper “embedded” SKUs. Microsoft loves to overload terms when naming things, and this is one of those times where that tendency leads to confusion. Make no mistake: in order to embed a Power BI report in a SharePoint page, and to have other users be able to view it, you will need a Pro license, and your users will either need Pro licenses as well, OR your organization will need to have purchased a Power BI Premium “P” SKU, not an “EM” SKU.

What License is Needed to Use the Power BI Web Part?
https://whitepages.unlimitedviz.com/2017/08/license-needed-power-bi-web-part/ (Mon, 21 Aug 2017)

The Power BI web part is now a part of SharePoint Online for the majority of Office 365 users. This web part allows Power BI reports to be embedded on SharePoint pages, putting them in greater context. These web parts are rendered on the client in the context of the consuming user, not on the server like old style web parts. This means that in order for the report to render properly, the user needs not only access to the report, but also a license for it.

The Power BI web part is a feature that requires a Pro license for both producers and consumers. This makes sense, given that any sharing feature in Power BI requires Pro. Also, because the consuming user must have access to the report, the report will be contained in a Group workspace, and Group workspaces themselves require a Pro license. So, what happens when a non-Pro user opens a SharePoint page containing a Power BI web part report?

Quite simply, the content doesn’t show up.

Premium Capacity

However, on June 1, 2017, the premium pricing model for Power BI became available. Premium allows organizations to purchase premium capacity in the service. When reports are deployed to this premium capacity, users can access these reports without a Pro license. The act of publishing the report still requires a Pro license, but viewing it does not. Therefore, the Pro requirement for the web part goes away if the report is deployed to premium capacity.

This is in fact how it works. To date, I have seen no official announcement or post from Microsoft on this topic. The closest thing is a response to a forum post in the Power BI community forums:

“If the user that is trying to consume the embedded report does not have a Power BI Pro license but is part of a Power BI Premium instance, same viewer rights apply meaning that the user can view the report but collaboration features such as Analyze from Excel are not available, in line with regular Power BI Premium related features.”

The bottom line is that in most cases, all users, both producers and consumers, will need a Power BI Pro license to use the Power BI web part. The only exception is when an organization has purchased premium capacity, and the report is deployed to that capacity; in that case, only the producer requires a Pro license. It should also be noted that even then, some features (like exporting data) will still not be available to free users.
