Tag: PowerPivot

It’s time to stop using Power Pivot

Excel is an excellent tool for analyzing data. An analyst can easily connect to and import data, perform analyses, and achieve results quickly. Export to Excel is still one of the most used features of any Business Intelligence tool on the market. The demand for “self-service BI” resulted in a lot of imported data being stored in overly large Excel files. This posed several problems. IT administrators had to deal with storage requirements. Analysts were restricted by the amount of data they could work with, and the proliferation of these “spreadmarts” storing potentially sensitive data created a governance nightmare.

A little history

Power Pivot was created to provide a self-service BI tool that solved these problems. Initially released as an add-in for Excel 2010, it contained a new analytical engine that would soon be introduced to SQL Server Analysis Services as well. Its columnar compression meant that millions of rows of data could be analyzed in Excel without requiring massive amounts of storage. Data in Power Pivot is read-only and refreshable, ensuring integrity. It allowed analysts to build their own analytical data sets and analyze them using a familiar-looking language (DAX) and a visual reporting canvas (Power View), all from within Excel.

The original version of Power BI brought Power Pivot to Office 365 through Excel, before Power BI’s relaunch gave it its own consumption interface (the service) and design client (Power BI Desktop). Both the Power Pivot engine and Power Query were incorporated into the service and Power BI Desktop, while the Silverlight-based Power View was replaced with a more web-friendly reporting canvas.

Excel support

Throughout all these changes, Excel has continued to be well supported in the Power BI service. Analyze in Excel allows an analyst to connect to a deployed Power BI dataset (built with Power BI Desktop) and analyze it using pivot tables, charts, etc. Recent “connect to dataset” features have made this even simpler. Organizational Data Types allow Excel data to be decorated with related data in Power BI.

Excel workbooks containing Power Pivot models have always been supported by the service. These models can even be refreshed on a regular basis, and if the source data resides on premises, they can be refreshed through the on-premises data gateway. This is all because the data engine in Power BI is essentially Power Pivot.

It’s that word “essentially” that causes a problem.

Datasets that are created and stored within Excel workbooks are functional, but can only be accessed by that workbook. Contrast this with a dataset created by Power BI Desktop, which can be accessed by other interactive (pbix) reports, by paginated reports, and, as mentioned above, by Excel itself. The XMLA endpoint also allows these datasets to be accessed by a myriad of third-party products. None of this is true for datasets created and stored in Excel.
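
As an illustration, an XMLA-capable client reaches a dataset in a Premium workspace through a connection string along the lines of the hypothetical one below (the workspace and dataset names are made up):

Data Source=powerbi://api.powerbi.com/v1.0/myorg/Sales Analytics;Initial Catalog=Sales Model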

So why would anyone continue to create models in Excel? Until now, the reason has been that although Excel can connect to Power BI datasets to perform analysis, those connected workbooks were not updated when the source dataset changed. This meant that analysts who really care about Excel needed to work with the Excel-created models. That changed recently with an announcement at Microsoft Ignite Spring 2021. In the session Drive a data Culture with Power BI: Vision, Strategy and Roadmap, it was announced that very soon, Excel files connected to Power BI datasets will be updated automatically. This removes the last technical reason to continue using Power Pivot in Excel.

Tooling

Building a dataset with Power BI Desktop is fundamentally the same as building one with Excel. The two core languages and engines (M with Power Query, and DAX with Power Pivot) are equivalent between the two products. The main difference is that the engine versions found in Excel tend to lag behind those found in Power BI Desktop and the Power BI service itself. I’d also argue that the interfaces for performing transforms and building models are far superior in Power BI Desktop, not to mention its third-party add-in capability.

In this “new world” of Excel data analysis, datasets will be created using Power BI Desktop and deployed to the service, and Excel will connect to them to provide deep analysis. These workbooks can then be published to the Power BI service alongside other interactive or paginated reports for use by analysts. With this new capability, Excel truly resumes its place as a full-fledged, first-class citizen in the Power BI space.

What to use when

With this change, the decision of what tool to use can be based completely on its suitability to task, and not on technical limitations. There are distinct types of reports, and different sorts of users. The choice of what to use when can now be based completely on these factors. The common element among them all is the dataset.

With respect to report usage, the typical breakdown can be seen below.

Tool | Used by | Purpose
Power BI Service | Report consumers | Consuming all types of reports: interactive, paginated, and Excel
Excel Online | Report consumers | Consuming Excel reports from SharePoint, Teams, or the Power BI service
Power BI Desktop | Model builders, interactive report designers | Building Power BI datasets; building interactive reports
Power BI Report Builder | Paginated report designers | Building paginated reports
Excel | Analysts | Building Excel reports; analyzing Power BI datasets

Making the move

Moving away from Power Pivot won’t require any new services or infrastructure, and existing reports and models don’t need to be converted. They will continue to work and be supported for the foreseeable future; Microsoft has neither said nor indicated that Power Pivot in Excel is going anywhere. However, by building your new datasets in Power BI Desktop, you will be better positioned moving forward.

If you do want to migrate some or all of your existing Excel-based Power Pivot datasets, it’s a simple matter of importing the Excel file into Power BI Desktop. This is completely different from connecting to an Excel file as a data source. From the File menu in Power BI Desktop, select Import, then select Power Query, Power Pivot, Power View, and then select the Excel file that contains your dataset.

Power BI will then import all of your Power Query queries and your Power Pivot dataset, and, if you have any, it will convert Power View reports to Power BI report pages. The new report can then replace your existing Excel file. Once deployed to the Power BI service, other Excel files can connect to it if desired.

Building your datasets with Power BI Desktop allows you to take advantage of a rich set of services across a broad range of products, including Excel. Building them in Excel locks you into an Excel-only scenario. If you already use Power BI, there’s really no reason to continue building Power Pivot datasets in Excel.


The Difference Between Reporting and Analytics is 42

In his novel “The Hitchhiker’s Guide to the Galaxy”, Douglas Adams envisioned a giant supercomputer named “Deep Thought” that was built to calculate the answer to the ultimate question of life, the universe, and everything. For the 5 people out there that are unfamiliar with the story, I’ll relate the important bits here. Deep Thought was commissioned by a race of pan-dimensional beings, and required seven and a half million years to complete its calculations. When it was finally complete, Deep Thought informed the descendants of the original creators that the answer was 42. The receivers were understandably disappointed with this response, and when they questioned Deep Thought further, the computer postulated that perhaps the problem was that they never really knew what the question was.

Undeterred, the race then commissioned a second computer (which happened to be the Earth) that would calculate the ultimate question. After a couple of 10 million year attempts, the ultimate question was determined to be “What do you get when you multiply six by nine”. Of course, Adams never claimed that the universe made sense.

To my mind, this is an excellent demonstration of the difference between reporting and analytics. The accurate answer (report) provided a result, but not meaning. Further analytics were necessary to determine context.

Like many information technology terms (Big Data, machine learning, CRM), Business Intelligence (BI) is one of those umbrella terms that many people use regularly without fully understanding its meaning. BI comprises many tools that help to glean information and insights from raw data. Thus, an ETL package that moves data from one location to another is just as much a BI tool as a fancy-looking infographic. Combine this lack of clarity with the overloading of the term “reporting”, and we wind up with some real confusion in this space.

Reporting is the process of using data to highlight things or trends that have already happened. This can be contrasted with monitoring, which does the same for things that are happening now, and predictive analytics, which tries to predict what will happen in the future based on the same data. The difference between reporting and monitoring is only one of data latency, and as such, monitoring is often referred to as real-time reporting, which further muddies the water. For the purposes of this article, however, I want to focus on historical reporting.

Reports are typically one of two types: operational or analytical. Tools that are good at producing one type are usually not so good at producing the other. What’s the difference? Operational reports are designed to provide information that we know we need, while analytical reports are designed to help us discover things that we didn’t know, or to help answer unanticipated questions. Operational reports are designed to be printed; they are well paginated, pixel perfect, and provide a single view of the data within any given report. Analytical reports are just the opposite. They are designed with visuals as a starting point, but allow for the ability to pivot on or drill down into the data as appropriate to answer ad hoc questions. Printing is typically a weakness for analytical reports, whereas drilldown is a weakness for operational reports.

Both report types have their place, but they have very different design points. The data that backs an operational report should ideally be relatively flat, as that best reflects the report layout and helps with performance. Conversely, cubes and data models exist precisely because a flat data structure does not adequately support analytical reporting. With analytical reporting, a user may at any point decide to view quantitative data (a measure) through the lens of a different facet (dimension). This difference is so great that we need a different type of engine to support it; OLAP cubes and tabular models are both examples of this.

Another difference is the data that is necessary to support both report types. Operational reports tend to concern themselves with various levels of subtotals per the predefined facets. In a case like that, the data mart that backs the report only needs to store those subtotals. The granularity, or resolution, of the data stored in the data mart does not need to exceed that of the report that references it. Analytical reporting is different. Since users will be expected to drill down on data, from one dimension to another, or to filter the data according to increasingly granular facets, it is critical to store all of the data in the data mart backing the data model. We don’t know the level of resolution the analyst will need; therefore, all detail is required.

As a simple example of this, consider the case where we want to analyze some server log data over a period of time. We can pre-aggregate the data in the data model such that it stores daily totals of the various log entry types. There would need to be a total for each dimension, but the overall data storage would be less than for the raw data. Such data would allow an analyst to spot trends across several days, but the decrease in resolution means that it will be impossible to spot any usage trends within a given day. If intra-day trends will never be necessary, then this doesn’t matter, but the nature of analytical reports means that the designer can never be sure.
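
To make the trade-off concrete, here is a minimal PowerShell sketch, assuming a hypothetical serverlog.csv with a Timestamp column (the file name and columns are made up for illustration):

# Hypothetical input: serverlog.csv with one row per log entry and a Timestamp column.
$log = Import-Csv .\serverlog.csv

# Daily grain: compact to store, and sufficient for day-over-day trend lines.
$daily = $log |
    Group-Object { ([datetime]$_.Timestamp).Date } |
    Select-Object @{n='Day';e={$_.Name}}, Count

# Hourly grain: only possible while the raw rows are still available.
# If the data mart keeps only $daily, this question can no longer be answered.
$hourly = $log |
    Group-Object { ([datetime]$_.Timestamp).ToString('yyyy-MM-dd HH:00') } |
    Select-Object @{n='Hour';e={$_.Name}}, Count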

The more that the source data for the report is pre-aggregated, the less that report becomes analytical in nature, and the more it approaches operational. This is regardless of the tool used; you can build either report type with any tool, it’s just that it may not be optimal.

The issue here is one of semantics, but semantics are important in knowing what you are getting when reports are being provided to you. Calling something “Analytics” does not make it so. If you spin up a content pack in Power BI and find that the underlying data model provides just enough dimensions and measures to construct the provided report, and that you can’t deconstruct the data in any meaningful way, then what you have is a report, not analytics, no matter what the platform. As with anything, there is a trade-off between complexity and power. Given the nuances of this topic, it’s important to look under the hood to know what you are getting.

The answer “42” is perfectly acceptable if you already knew that the question was “what is 6×9?”. But if you want to know why, that takes a little more digging. You’d also know that there might be a data problem…


SQL Server 2016–Which Edition Do You Need for Business Intelligence?

For the past several releases, SQL Server has come in six possible editions: Developer, Express, Web, Standard, Business Intelligence, and Enterprise. Developer, Express, and Web are for specific workloads, which leaves Standard, BI, and Enterprise. The choice of which edition to use would seem to be obvious – the one named Business Intelligence. However, Enterprise contained all of the features that the BI edition did, and in many cases wound up being a better choice from a licensing perspective. Standard edition also provided many BI capabilities, but not all.

The biggest difference (but not the only one) from a BI standpoint between Standard and either the BI or Enterprise edition was support for Tabular mode in SQL Server Analysis Services. For those unaware, Tabular mode is the engine behind PowerPivot and, increasingly importantly, Power BI. From a price standpoint, the difference between Standard and either BI or Enterprise is quite significant. This has put the tabular model out of reach for some small and medium-sized businesses, which is unfortunate, given that tabular is at the center of Microsoft’s future BI efforts.

SQL Server 2016 removes the BI Edition as an option, leaving us with a choice between only Standard and Enterprise. The biggest news, in my opinion, from a licensing perspective with 2016 is that Tabular mode will now be supported in Standard edition. This puts the tabular model within reach of all organizations and closes the licensing gap in the BI stack. This is fantastic news.

There are, of course, limitations in Standard edition. Tabular in Standard is restricted to 16 GB of RAM, which may seem like a lot, but keep in mind that tabular is an in-memory technology. It’s possible to bump into this limit fairly quickly, but it’s a limit that serves the small/medium business space rather well.

PowerPivot for SharePoint also remains an Enterprise only feature. However, given the capabilities available in Power BI, and the upcoming rendering capabilities of SSRS, this may be less important than it previously was.

Given that it’s relatively simple to move from Standard to Enterprise (from a technology perspective), this approach allows organizations to get up and running and then scale up if necessary. It removes the up-front Enterprise cost barrier, and it’s much easier to get budget for an Enterprise license when its value has already been proven.

Another difference between Standard and Enterprise in SSAS is that Standard does not support partitioning, perspectives, or DirectQuery. DirectQuery allows for real-time analytical reports by removing cached data storage from the picture; all queries go directly back to the source. An explanation of partitions and perspectives is beyond the scope of this post, but if you don’t know what they are, chances are that you don’t need them.

From an SSRS standpoint, the traditional differences between Standard and Enterprise are still in place. These include data alerting, data-driven subscriptions, Power View support, and scale-out capability. All of the new features of SSRS 2016 are available in both Standard and Enterprise with one very notable exception: the new Mobile Reports are only available with Enterprise.

Mobile reports are the result of last year’s acquisition of Datazen, which has been fully integrated into SSRS. It allows on-premises SSRS servers to provide rich mobile reports on a variety of platforms. If your organization is using Power BI already, then you likely have a mobile solution, but if not, Mobile reports may fill that gap.

A complete summary of the differences between all of the different SQL Server editions can be found here. A quick PDF chart of what’s new in SQL Server can be found here.

In summary, both Standard and Enterprise editions of SQL Server 2016 are now suitable for use in Business Intelligence solutions. The decision to move to Enterprise can now be based on scale and enterprise requirements, not on basic functionality. This, in my opinion, is all to the good.


What you need for Business Intelligence in SharePoint 2016

Over the past few weeks, I’ve put together a number of posts that outline the intricacies of setting up SharePoint 2016 with its BI workloads, in particular Excel, PowerPivot, and SQL Server Reporting Services. With the full release today of SharePoint 2016, I wanted to summarize these posts, and to provide some context.

The major change to the BI world is of course the fact that Excel Services is no longer included, its capabilities having been replaced by Office Online Server (OOS). The posts below discuss the implications of this change, as well as how to configure all of the BI features in the new platform.

Article | Description
Rethinking Business Intelligence in SharePoint and SQL Server 2016 | My take on the changes to on-premises BI in the Microsoft world, and what the implications are for the present and future
Adding Excel Services Capabilities to a SharePoint 2016 Farm | How to set up Office Online Server to support the services previously available in Excel Services
Enable PowerPivot Support in Office Online Server 2016 and SharePoint 2016 | How to set up SharePoint 2016 and Office Online Server to support Excel workbooks with embedded PowerPivot data models
Using PowerPivot for SharePoint with SharePoint 2016 | How to configure the PowerPivot for SharePoint 2016 service application
Configuring SSRS 2016 Integrated Mode with SharePoint 2016 | How to configure SQL Server Reporting Services 2016 Integrated mode in SharePoint 2016
Integrating SharePoint 2016 with SSRS Native Mode | How to configure SQL Server Reporting Services 2016 Native mode and integrate it with SharePoint 2016

Just a quick glance at the articles above will show a deep dependency on SQL Server 2016. For example, prior versions of SharePoint supported multiple versions of SSRS; this is no longer the case with SharePoint 2016. To be clear, I am talking about the BI components (SSRS, PowerPivot for SharePoint) and not the core database server for SharePoint. SharePoint 2016 requires the SQL Server 2016 versions of both PowerPivot for SharePoint and SSRS. This means that if you’re invested in Business Intelligence in SharePoint 2013, you’re going to need to wait for SQL Server 2016 before you upgrade in a production environment.

SQL Server 2016 is currently at the Release Candidate (RC0) stage, and its release won’t be that far off. You can get started today on your test migrations, knowing that the full release will likely be available by the time your testing is complete. The articles above were all written while using the CTP 3.3 version of SQL Server 2016.

Looking through the articles, you’ll find a number of configurations and requirements that line up with specific scenarios. Below is a quick guide outlining what is required to support each feature in the SharePoint 2016 BI space.

Feature | Requirements
Excel workbooks connected to SSAS data sources | Kerberos Constrained Delegation (KCD) between OOS and the SSAS data source, OR EffectiveUserName enabled on the OOS server(s) and the OOS server account(s) added to the Admin list on the SSAS server(s)
Connected Excel workbooks with Windows-authenticated SQL Server data sources | KCD between OOS and SQL Server; Claims to Windows Token Service running on the OOS server with Network Service enabled
Connected Excel workbooks using stored credentials (Excel Services authentication options) | Secure Store Service (SSS) credential created; OOS machine account added to the SSS Members list; “AllowHttpSecureStoreConnections = true” set on the OOS server if HTTP is used
PowerPivot-enabled Excel workbooks | SSAS PowerPivot mode server available; SSAS PP mode server added to the BI server list on the OOS server via the New-OfficeWebAppsExcelBIServer cmdlet; OOS server account added to the Administrators list of the SSAS PowerPivot mode server
Automatic refresh of PP-enabled workbooks | PowerPivot for SharePoint; Silverlight (client side)
PowerPivot Gallery | PowerPivot for SharePoint; Silverlight (client side)
Excel files as a data source | PowerPivot for SharePoint; PP4SP must have admin access on the SSAS PP mode server; KCD between OOS and the SharePoint application; Claims to Windows Token Service running on the OOS server with Network Service enabled
External ODC file support | S2S trust configured between OOS and SharePoint
PowerPivot Management Dashboard | S2S trust configured between OOS and SharePoint
Power View reports | SSRS Integrated mode; Silverlight (client side)
Power View in Excel | SSRS Integrated mode; Silverlight (client side)
Power View with Excel as a data source | SSRS service account added to the Admin group on the BI server; Silverlight (client side)
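
Two of the OOS-side requirements above amount to single PowerShell commands run on the Office Online Server. A minimal sketch, assuming a hypothetical SSAS PowerPivot mode instance name:

# Run in an elevated PowerShell session on the Office Online Server.
Import-Module OfficeWebApps

# Register the SSAS PowerPivot mode server with OOS (the instance name is hypothetical).
New-OfficeWebAppsExcelBIServer -ServerId "NAUTILUSSSAS\POWERPIVOT"

# Permit Secure Store connections over HTTP, if the farm is HTTP only.
Set-OfficeWebAppsFarm -AllowHttpSecureStoreConnections:$true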

I’ll update this post if anything significant changes between now and the release of SQL Server 2016, but this should help those interested get up to speed today on Business Intelligence in SharePoint 2016.


Using PowerPivot for SharePoint with SharePoint 2016

While the capabilities previously provided with Excel Services have been moved to Office Online Server (OOS) in the 2016 version of SharePoint, PowerPivot for SharePoint (PP4SP) has not. PP4SP remains a SharePoint service application in the 2016 edition of the product. This service application is responsible for providing the automatic data refresh capability for PowerPivot for SharePoint enabled workbooks. As an aside, it can also refresh connected workbooks, as I discuss here. Given that the rendering engine now exists on a separate server, there are a few additional steps to perform, and this article aims to walk through them.

Basic Installation

Prior to setting up PowerPivot for SharePoint, you’ll need a SharePoint farm that has been enabled for PowerPivot workbooks, as I have previously outlined here. The Add-In is available from Microsoft here.

Installing is a simple matter of downloading the add-in and running setup. You’ll be presented with a straightforward dialog box with 4 options.

This should be installed on every SharePoint server in the farm, whether or not it will run the service application. Technically, the first option is not required for front-end web servers, but it is small, and I like to keep my options open. After clicking Next, the bits will be installed.

Like SharePoint itself, once the bits are installed, they must be configured. This is done through the PowerPivot for SharePoint configuration tool, which the earlier setup installed; it should be available from the application list on the server. It works much the same as it did with earlier versions of PP4SP and SharePoint. Run it and you’ll be prompted for the installation type. Select “Configure or repair..” and click OK. Next, you’ll be presented with the configuration detail dialog. The dialog contains a number of configuration nodes, which drive a series of PowerShell scripts that are used to perform the configuration (tip – click on the Script tab to see the scripts in question). An exclamation point icon indicates that parameters need to be supplied.

The first node is mandatory – Configure or Repair.

Here, you enter the credentials of a user that can perform the configuration – I normally use the account that was used to configure SharePoint in the first place, spSetup in my demo environments. This is the only critical step. However, I find it good practice to also change the name of the service application and its database.

The default values begin with “Default Power Pivot…” and the database name contains a GUID. When searching alphabetically for PowerPivot, I tend to look under P, not D, so I remove the word “Default” from both names, remove the GUID from the database name, and further change the database name to conform with naming conventions. Finally, it’s a good idea to check the site collection that will be activated.

The configuration tool will activate the PowerPivot solution in one site collection by default. It can be activated later in others, but it’s worth starting off on the right foot.

Click the Validate button, and if all of the indicators are green, go ahead and complete the configuration. Once configured, no further Central Administration work should be necessary, at least not at this point.

PowerPivot Gallery

A PowerPivot Gallery is not required; all of the PowerPivot for SharePoint features can be used in a regular document library, but the gallery centralizes things and makes these features more discoverable. It should be noted that, just as with SharePoint 2013, the PowerPivot Gallery is a customized document library that uses Silverlight to display its contents. This dependency means that workstations must have Silverlight installed in order to use it, and neither Google Chrome nor Microsoft Edge supports it.

To create a new PowerPivot Gallery, navigate to the site contents of the target SharePoint site and select “Add an app”. Select the PowerPivot Gallery and give it a name. If you don’t see PowerPivot Gallery as an option, you may need to enable the PowerPivot Feature for Site Collection in the Site Collection features list. Once the gallery is added, upload a PowerPivot-enabled workbook. This workbook should contain a data model where the data was imported directly into PowerPivot (not via Power Query). After a few moments, thumbnails of the workbook objects should show up in the gallery. It should be possible to interact with the workbook, as PP4SP is not required for that, but the two (or three, depending on whether SSRS has been installed) icons to the right of the workbook provide access to PP4SP capability.

To set up scheduled refresh, click on the calendar icon (the Excel icon is for using Excel as a data source – see below). This opens the data refresh history for the workbook. To configure it, click the “Configure Schedule” link. On the configuration screen, select the Enable check box, enter the desired schedule, and enter the credentials needed to connect to the source data. For testing purposes, it is more deterministic to explicitly enter credentials here, but refresh also supports a “refresh account” (configured via the Secure Store Service) or any other Secure Store Service credentials. Selecting “Also refresh as soon as possible” will immediately force a refresh cycle, which will begin within 5 minutes of saving, and is useful for testing.

Once complete, open the refresh history for the workbook, and you should see either a stopwatch icon, indicating that a refresh is in progress, a green check mark, indicating successful completion, or a red X, indicating failure.

One thing should be noted – data models created using Power Query in Excel will always fail to refresh; this is true as of March 2016. Power Query refresh has been stated as a planned feature for PowerPivot for SharePoint 2016, but as of this writing, it has not yet been included.

Workbook as a Data Source – Kerberos Enablement

The URL of a workbook that contains a data model can be used in a connection string in another workbook, and PowerPivot for SharePoint can intelligently route that connection to the backing SSAS PP Mode server. To the consuming workbook, it looks just like a regular SSAS server.
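
For example, the consuming workbook’s connection simply uses the workbook URL where a server name would normally go; a hypothetical connection string (the site and file names are made up):

Provider=MSOLAP;Data Source=http://home.nautilus.local/Shared Documents/SalesModel.xlsx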

In prior versions of SharePoint and PP4SP, using a workbook as a data source “just worked”, because the service and the workbook were all on the same server. With OOS, the rendering engine is on a different server. OOS needs to connect to the source workbook (the one with the data model) with the credentials of the consuming user, which means that for this to work, Kerberos Constrained Delegation (KCD) must be configured between OOS and SharePoint.

To be sure, you only need to configure KCD if you wish to use Excel files as a data source. If not, this step can be safely skipped.

You need to allow the computer account for the OOS server to delegate credentials to the account running the HTTP service for the SharePoint application that contains the workbooks to be used as data sources. In the example below, the OOS Server is NAUTILUS2016OOS, the service account is NAUTILUS\spApps, and the application is http://home.nautilus.local. This PowerShell can be run on any Domain Controller server.

# Requires the Active Directory module (RSAT); run on a domain controller.
Import-Module ActiveDirectory

# Allow the OOS computer account to delegate to the application pool identity.
$allowedPrincipals = @()
$allowedPrincipals += Get-ADComputer -Identity NAUTILUS2016OOS

# Set the delegation property on the application pool identity.
Set-ADUser spApps -PrincipalsAllowedToDelegateToAccount $allowedPrincipals

# Register the Service Principal Names for the application pool identity.
setspn -S HTTP/home NAUTILUS\spApps
setspn -S HTTP/home.nautilus.local NAUTILUS\spApps
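
Assuming the commands succeed, you can verify the registration by listing the SPNs now held by the account:

# List the SPNs registered on the application pool account.
setspn -L NAUTILUS\spApps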

Once successfully configured, it should be possible to use Excel files that contain data models as a data source for other Excel files. To create a new one, click the Excel icon beside the data refresh history icon in the PowerPivot gallery.

Wrapping Up

One other feature requires further configuration to work, and that is the PowerPivot Management Dashboard. Security constraints now prevent the use of Central Administration as a container, which means that the dashboard must be set up in a regular site collection. This requires Server-to-Server (S2S) trust to be configured between OOS and SharePoint. Given that this is not a user-facing feature, it’s out of scope for this article, but details on how to do it can be found in the Deploying SQL Server 2016 PowerPivot and Power View in SharePoint 2016 document.

Setting up PowerPivot for SharePoint still will not give you the ability to render Power View reports in a browser, whether they are created standalone or in an Excel workbook. For that, it is necessary to set up SQL Server Reporting Services (SSRS) in SharePoint mode, as Power View rendering is part of SSRS. That will be the topic of an upcoming article.
