Problems Installing PowerPivot for SharePoint and Reporting Services From SQL Server 2012 SP2 on SharePoint 2013

I recently performed an installation of both PowerPivot and Reporting Services for a customer running SharePoint 2013. In preparation for the installation, the customer had made the SQL media available to us (something that I always appreciate), so installation looked to be quick and easy. Since SP2 for SQL Server 2012 had been released over a month earlier (June 2014), they had downloaded an ISO file from MSDN that already included SP2.

I went ahead and installed the bits for both PowerPivot for SharePoint and SQL Server Reporting Services. The next step in my case was to run the PowerPivot for SharePoint Configuration Tool. There are normally two such tools available: the original one, for SharePoint 2010, and, as of SQL Server 2012 SP1, the “PowerPivot for SharePoint 2013 Configuration Tool”, which, as the name implies, is for configuring on SharePoint 2013. After this installation, however, there was only the one tool, which led me to believe that they had been consolidated. Running it resulted in the “Assembly not found” error that you get when installing on the wrong platform.

The next step in installing SSRS is normally to add the Service Application in Central Administration. However, it wasn’t available to add, and the typical PowerShell commands to register it (Install-SPRSService and Install-SPRSServiceProxy) resulted in “not found” errors. Something was clearly amiss.
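
For reference, on a healthy installation the registration normally comes down to the two cmdlets below, run from the SharePoint 2013 Management Shell. This is only a sketch; the Where-Object filter at the end assumes the service instance’s display name contains “Reporting Services”.

    # Run from the SharePoint 2013 Management Shell, on a server where
    # the SP1-or-later SSRS bits have been installed.
    Install-SPRSService         # registers the Reporting Services service
    Install-SPRSServiceProxy    # registers the service proxy

    # Start the service instance so the service application can be created
    Get-SPServiceInstance |
        Where-Object { $_.TypeName -like "*Reporting Services*" } |
        ForEach-Object { Start-SPServiceInstance $_ }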

After poking around a bit, I was able to determine that the PowerPivot engine that was running was 11.0.2100.6, which corresponds to the RTM version of SQL Server 2012. (You can find a list of build numbers here.) SharePoint Server 2013 requires at least SP1 for both PowerPivot for SharePoint and SSRS integrated mode. The problem had been identified, but what caused it?
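
If you want to check this yourself, one way is to query the instance with AMO from PowerShell. A minimal sketch, assuming the default POWERPIVOT instance name; the server name is a placeholder:

    # Check the build of the PowerPivot (Analysis Services) engine via AMO.
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

    $server = New-Object Microsoft.AnalysisServices.Server
    $server.Connect("YourSPServer\POWERPIVOT")   # placeholder server name
    $server.Version                              # anything below 11.0.3000 predates SP1
    $server.Disconnect()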

After speaking with Todd Klindt (keeper of the SharePoint patches log), it turns out that he had had a similar experience with the SQL SP1 installation media. There is, or at least was, a bug in the slipstream version of SQL Server 2012 SP1 that was available through Microsoft’s standard distribution channels. This bug caused only the RTM bits to get installed, not the patches, and has been documented in this Microsoft KB article. The bug was fixed, and the download media was updated within a few weeks. However, it appears that the same production glitch that caused the problem with SP1 happened again with the initial release of SP2, as documented in this blog post.

Happily, the problem has since been corrected, and any new downloads of SQL Server 2012 with SP2 should not be affected. However, if you have used an affected version, the fix is simple: patch your installation with the standalone SQL Server 2012 SP2 installer, available here. Ensure that you patch all elements, including your PowerPivot for SharePoint instance. You’ll know that you have the problem if you don’t see the PowerPivot for SharePoint 2013 Configuration Tool – a correct installation shows two configuration tools, one for SharePoint 2010 and one for SharePoint 2013.
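
If you prefer to patch from the command line, the service pack installer supports an unattended mode. A sketch – the package file name below is illustrative, so use whatever you actually downloaded:

    # Patch every SQL Server instance on the box, including the POWERPIVOT instance.
    .\SQLServer2012SP2-KB2958429-x64-ENU.exe /quiet /action=patch /allinstances /IAcceptSQLServerLicenseTerms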

Perhaps the SQL team should talk to the SharePoint team about patching. Just sayin’…..

Changes to Data Loading Features in Power Query – May 2014

The Power BI team continues to deliver new features at a rapid pace. The May 2014 release of Power Query is no exception. There are quite a few new features packed into this release, which you can read about at your leisure, but I’m particularly interested in the ones pertaining to data loading, as I’ve discussed several of the limitations in this area in the past.

This release is major indeed. There are three significant changes to the data loading features in this build.

Configurable defaults for data loading

I’ve just posted an article on how to do this and why. To put it simply, the default data load behaviour is to load data into a worksheet in most cases. This leads to larger than necessary workbooks, to the point where they may not work properly with Power BI. Now you can change this default behaviour, which will be welcome for anyone doing serious data modeling.

Worksheet Size Warning

If you do decide that you want to load data to the worksheet, or you’re simply unaware of the issues, you will be prompted to consider loading to the data model once your data hits 10 MB, which is the maximum non-model workbook size in Power BI. Prior to this update, the user wouldn’t know that there was a problem until after they tried to use the workbook in Office 365.

Data Model Preservation

Prior to this update, it wasn’t possible to modify a query without losing any changes made to the data model, or any formatting applied to worksheet tables. This release of Power Query remedies the situation: you can now go back and make changes to your query without having to recreate the model.

Power BI is being built in a surprisingly collaborative way, with a large amount of input from the community. I’ve never seen this done to this extent at Microsoft, and it’s very good to see. I know that all of these features have been asked for and discussed by the community in the last few months, and here they are. Kudos to everyone involved, and keep it up. This product keeps getting better and better.

Set Your Data Load Defaults in Power Query

When using Power Query to load data from a data source, the user has three choices as to where that data can be stored. It can go into a worksheet in the workbook, into the data model, or into both. The default behaviour (and therefore the one that will be used most commonly) is to load into a worksheet when one table is being imported, or into the data model when multiple tables are selected.

The reason for the load to worksheet default is understandable, as casual users will expect to see the data immediately after it is imported – they won’t want to open the PowerPivot model editor (which is itself turned off by default in Excel 2013). However, loading to the worksheet introduces quite a few limitations, as I’ve described (OK, whined about) mostly here, but also here and here. These limitations mostly revolve around the million row limit in Excel, the fact that data is stored in the model much more efficiently, and the file size limits when working with Power BI.

The great news is that, as of the May 2014 build of Power Query, these data loading defaults are now user configurable. If you’re taking the time to read this post, chances are that you should set them.

The defaults apply across Power Query, and are therefore located in the Power Query options. Once Excel is open, go to the Power Query tab and click the Options button.

image

Now, in addition to joining the Customer Experience Program, you can set your data load settings. I always import only to the model, so I select the custom option and check Load to Data Model.

image

I can’t be completely sure, but in my limited testing, it appears that this option does not sync between instances of Excel on different machines the way that other (non add-in) settings do. You will therefore need to set it for every Excel installation that you have. Now if only there were a Group Policy setting that could be pushed out….

Kudos to the Power Query team for implementing this – it’s a welcome addition to the product.

How to Connect to PowerPivot Workbooks With PerformancePoint

Scorecards, KPIs, and analytic charts and grids are at the core of PerformancePoint’s value proposition. In order to work effectively with them, we need a multidimensional data source, which means Analysis Services. Traditionally, this has meant building OLAP cubes, which is a daunting prospect for those unfamiliar with the process. More recently, the appearance of the tabular data model in Analysis Services has lowered the bar significantly, but it still requires a connection to a full-blown Analysis Services server.

At the same time, the proliferation of PowerPivot and PowerPivot for SharePoint has democratised the development and sharing of multidimensional data models for the power user. Unfortunately, there is no “PowerPivot” connection type in PerformancePoint, so it would appear that the advanced PerformancePoint tooling is beyond the reach of our shared PowerPivot models. The good news is that it’s not – it’s just not obvious how it can be done.

First, we need to step back a bit and talk about how PowerPivot for SharePoint works.

PowerPivot for SharePoint actually consists of two components. First, there is the Service Application that runs in the SharePoint farm, which is responsible for data refreshes and usage analytics. The main part, however, is actually an instance of Analysis Services using the tabular engine. It’s properly referred to as Analysis Services SharePoint Mode and, as of SharePoint 2013/SQL Server 2012 SP1, it can be installed standalone. However, it is most commonly installed on SharePoint front end servers.

You can see this in action by opening up SQL Server Management Studio and connecting to the PowerPivot instance on a SharePoint front end server. The instance is normally named POWERPIVOT:

image

In the case above, the SharePoint front end server is named NautilusSP. You can also see that there is a model already being hosted by the server. The model is named by taking the workbook’s name and appending a GUID to it. This is done by Excel Services the first time that a model is interacted with. For example, if we add the file Health.xlsx, which contains an embedded PowerPivot model, and immediately refresh the Object Explorer in Management Studio, we will see that nothing has changed. However, if we then interact with the model at all, by clicking a slicer or opening a pivot table category, we will see that the model has been automatically created for us.

image

The first interaction with the model will be noticeably slower than all subsequent interactions for this reason.
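
If you’d rather check from a script than from Management Studio, AMO can list the hosted models. A minimal sketch, using the NautilusSP server name from the example above:

    # List the models currently hosted by the PowerPivot instance.
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

    $server = New-Object Microsoft.AnalysisServices.Server
    $server.Connect("NautilusSP\POWERPIVOT")
    $server.Databases | Select-Object Name, LastProcessed   # names carry the appended GUID
    $server.Disconnect()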

Now, since this is actually an instance of Analysis Services, we should be able to connect Excel to this model and do analyses from it. In fact, we should also be able to create a PerformancePoint data connection that points to this model, allowing us to use PerformancePoint scorecards and analytic charts and grids. We can in fact do both things, but there is a major problem with doing so.

These models are temporary. If they haven’t been used for a period of time, they get deleted. Also, if the source workbook is updated, a new model is automatically created upon first interaction. This can be seen if we edit and save our Health.xlsx workbook, and then open it in the browser and interact with it.

image

The original model will be deleted in a garbage collection process. We therefore cannot reliably target these models, as any reference will become invalid relatively quickly.

The good news is that we can use Excel to analyze these models by using the “Open New Excel Workbook” button in the PowerPivot Gallery. This is the leftmost of the three icons to the right of any workbook in the gallery.

image

Clicking this action will download an odc (Office Data Connection) file, which will open up Excel and establish a connection with the underlying model, allowing us to do further analysis on it as if it were hosted in Analysis Services (because it IS hosted in Analysis Services). This connection will work no matter what the underlying model is named, and if the model doesn’t yet exist, it will be created.

Unfortunately, nothing like this automated connection creation exists for PerformancePoint.

To see what’s going on, we can open the connection itself within Excel, and then view its properties to find the connection string.

image

We can see that it is a standard Analysis Services connection string, with an interesting twist. While the value for “Initial Catalog” is in fact our temporary model, the value for “Data Source” is the URL of the workbook. Excel Services will automatically direct calls to this workbook to the appropriate data model. If the model has been changed, it knows, and will serve the appropriate content, so our new analysis workbooks will not become invalid.
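
Stripped of the noise, the interesting parts of the connection string look something like the following. The provider token, workbook path, and GUID suffix are all illustrative; your own string will differ in the details:

    Provider=MSOLAP;
    Data Source=http://nautilussp/PowerPivot%20Gallery/Health.xlsx;
    Initial Catalog=Health_<GUID>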

The good news is that we can use this within PerformancePoint as well. All we need to do is open up PerformancePoint Dashboard Designer, create (or edit) a connection using the Analysis Services type, and select “Use the following connection”, which allows for a connection string.

image

For the connection string, all that we really need is the Data Source parameter, which is the URL of the workbook. Once it is entered, we see the appropriate model name in the dropdown for “Cube”. Once that is selected, this data connection will work like any other Analysis Services data connection. We can now build KPIs, scorecards, and analytic charts and grids from the model embedded in our Excel workbook, hosted in SharePoint. If the workbook changes, or the temporary model gets deleted, Excel Services will take care of recreating it on the next interaction.
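
In other words, a connection string as minimal as the one below (the workbook URL is illustrative) is enough for Dashboard Designer:

    Data Source=http://nautilussp/PowerPivot%20Gallery/Health.xlsx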

It is possible to use PowerPivot for SharePoint with PerformancePoint.

Using the SSIS OData Source Connector With SharePoint Online Authentication

Last week, Microsoft released the OData Source for Microsoft SQL Server 2012. What is it? It allows SQL Server Integration Services (SSIS) to use an OData feed as a first-class data source, in the same manner as SQL Server, Oracle, etc. Until now, it was necessary to code OData connections using a script component.

This matters to those of us in the SharePoint world because any SharePoint list data can be expressed as OData.

I’ve written before about how SharePoint data can be extracted into a data warehouse using SSIS and the SharePoint List Source and Destination adapters, available from CodePlex. These adapters plug in to SSIS and wrap the SharePoint SOAP web services, and therefore do not need to be installed on a SharePoint server. We have used them for years, and they work very well. However, they are a CodePlex project, and therefore not fully supported.

These CodePlex adapters have more recently been bumping into another limitation. While SOAP web services are supported in Office 365, the adapters don’t support the Office 365 authentication mechanism, which effectively renders them useless there. The OData services require the same authentication, but the new OData Source supports it. It’s also an official Microsoft product, and is fully supported.

There is, however, a trick to getting it working. Once you install the OData Source, open up SQL Server Data Tools, open an SSIS project, and add or edit a data flow task. In the SSIS Toolbox, you should see the OData Source.

image

Drag the tool onto the design surface, and double-click to configure it. You’ll first need to configure an OData Connection Manager, and you’ll do that by clicking the New button.

image

Give the connection a name. The connection will be common to all lists and libraries within a site, so something based on the name of the site is likely appropriate. The Service document location is the OData endpoint; it takes the form of the URL of the site, followed by the suffix /_vti_bin/listdata.svc. If the connection is on premises, you can use Windows authentication, but if it is Office 365, you must use a stored name and password.
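
For example, the service document locations for a hypothetical on-premises site and an Office 365 site would look like the following (the contoso names are placeholders). On premises, you can also sanity-check the endpoint from PowerShell; note that this check won’t work as-is against Office 365, since the claims authentication handshake is exactly what the connection manager handles for you:

    # Hypothetical endpoints - the site URL plus the listdata.svc suffix
    # On premises:  http://intranet.contoso.com/sites/sales/_vti_bin/listdata.svc
    # Office 365:   https://contoso.sharepoint.com/sites/sales/_vti_bin/listdata.svc

    # On premises, the service document can be fetched with default credentials:
    Invoke-RestMethod -Uri "http://intranet.contoso.com/sites/sales/_vti_bin/listdata.svc" -UseDefaultCredentials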

image

If you are using Office 365, and you click Test Connection at this point, you’ll receive an error “Test connection failed –> The remote server returned an error: (400) Bad Request.”

image

This is due to Office 365’s “unique” authentication mechanism. In order to authenticate to Office 365, you must first select the “All” button in the toolbar, and set the value of “Microsoft Online Services Authentication” to true.

image

This option may not be available to you. If the Online Services Authentication option is disabled, or greyed out, as it was for me when I first tried to use it, it’s because a prerequisite is missing. In order to authenticate to Office 365, the machine must have the SharePoint Server 2013 Client Components SDK installed on it.
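
A quick, unofficial way to check whether the client components are present on a machine is to try loading the client runtime assembly from the GAC. A sketch:

    # Check for the SharePoint 2013 client runtime (installed by the
    # SharePoint Server 2013 Client Components SDK).
    try {
        [System.Reflection.Assembly]::Load(
            "Microsoft.SharePoint.Client.Runtime, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c") | Out-Null
        "Client components found"
    } catch {
        "Client components not found - install the SDK"
    }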

Once the client components are in place and the option is set, the data source should be able to connect, and the connection manager can be closed. Lists are exposed as Collections, so if you want to work with list data, you can then select the list from the list of Collections.

image

At this point, the data source will act like any other SSIS data source; you can select and transform columns at will. More importantly, this will help you get SharePoint data, both on-prem and in the cloud, into a central data warehouse.