

Upgrading and Moving Integrated Mode Reporting Services With SharePoint

There is a plethora of instructions out there on upgrading from SharePoint 2007 to SharePoint 2010, but relatively little on doing the upgrade where Reporting Services has been set up in SharePoint integrated mode. Given that there are a few gotchas that you can run into when doing this, I decided to put together this step-by-step, complete with the gotchas.

The most common scenario that will be encountered, given the vintages of the products will be an RS upgrade from SQL Server 2005 to 2008 R2. In addition, the in place upgrade is relatively painless (in the short term….) so I’ll be walking through a DB attach upgrade, which is just as applicable to RS as it is to SharePoint. Finally as I’ve written about previously, in a small farm, it’s likely a better idea to add the Reporting Services bits to a SharePoint front end server, than to add the SQL server to the SharePoint farm, and that will also be a part of our scenario.

The first question to answer is “why bother”? One of the advantages to using RS in SharePoint Integrated mode is that unlike Native mode, the reports, data connections and models are stored directly in SharePoint. It is therefore possible to just create a new RS database, and move forward. However, since subscriptions, schedules, and cache profiles are still in the database, it’s likely worth it to do the upgrade.

Step 1 – Back Up the Encryption Key

Reporting Services itself uses two SQL databases. One of them is used for temporary operations, but the other stores a number of important and sensitive items. For this reason, all sensitive items in the database are encrypted with a key. If we want to get access to these items on the new server, we need that key, so we back it up from the source server before we move ahead.

Run the Reporting Services Configuration Manager on the source RS server, and select “Encryption Keys”. Click the Backup button, select (and remember) a password, and then save the key to the file system. 

image

Once saved, copy the key file to the destination RS server (likely your SharePoint 2010 front end server).

Step 2 – Back Up the Reporting Services Databases

Run SQL Server Management Studio on the Server where the Reporting Services databases are located. Run full backups of the two RS databases. When complete, copy the backups to the destination SQL server (likely the server that will host the SharePoint 2010 databases).

image
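If you prefer to script the backups rather than use the SSMS dialogs, a minimal T-SQL sketch looks like the following. It assumes the default integrated-mode database names (ReportServer and ReportServerTempDB) and a hypothetical backup folder – substitute your own names and path.

-- Full backups of the two Reporting Services databases (names and path are assumptions)
BACKUP DATABASE [ReportServer]
  TO DISK = N'C:\Backups\ReportServer.bak'
  WITH INIT, NAME = N'ReportServer full backup';

BACKUP DATABASE [ReportServerTempDB]
  TO DISK = N'C:\Backups\ReportServerTempDB.bak'
  WITH INIT, NAME = N'ReportServerTempDB full backup';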

Step 3 – Restore the Reporting Services Databases

Using SSMS, restore the two databases to the host SQL server. Once restored, it’s likely a good idea to set the recovery model to Simple, and the Compatibility level to SQL Server 2008. These steps aren’t required, but are recommended, unless you have a reason for not doing so.

image
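This step can also be scripted. The sketch below assumes the backup files were copied to a hypothetical C:\Backups folder and that the default database names are kept (you may need WITH MOVE if the file paths differ on the destination server); the recovery model and compatibility level changes match the recommendations above.

-- Restore both databases, then apply the recommended (optional) settings
RESTORE DATABASE [ReportServer]
  FROM DISK = N'C:\Backups\ReportServer.bak' WITH RECOVERY;
RESTORE DATABASE [ReportServerTempDB]
  FROM DISK = N'C:\Backups\ReportServerTempDB.bak' WITH RECOVERY;

ALTER DATABASE [ReportServer] SET RECOVERY SIMPLE;
ALTER DATABASE [ReportServer] SET COMPATIBILITY_LEVEL = 100;        -- SQL Server 2008
ALTER DATABASE [ReportServerTempDB] SET RECOVERY SIMPLE;
ALTER DATABASE [ReportServerTempDB] SET COMPATIBILITY_LEVEL = 100;  -- SQL Server 2008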

Step 4 – Run the RS Configuration

If Reporting Services hasn’t yet been installed on the SharePoint server, install it now; otherwise, proceed to configuration by running the Reporting Services Configuration Manager on the destination RS server. Work through the basic configuration steps, and when you reach the database configuration, select the option to choose an existing database.

image

Select the server where you restored the databases in Step 3, and select the primary RS database (the one without the word “Temp” in it).

image

Complete the configuration wizard.

Step 5 – Connect SharePoint to Reporting Services

From SharePoint Central Administration, select General Application Settings, and then Reporting Services Integration.

image

Complete the integration configuration, and then select OK.

image

So far so good – now we’re ready for some gotchas. If you now click on the “Set Server Defaults” link in the Reporting Services section, you will likely get a rather nasty looking error. You’ll also experience this error if you access either of the two RS URLs defined in the RS configuration wizard. The error is:

The report server installation is not initialized. (rsReportServerNotActivated)

This error happens when the server can’t access configuration information, and the most common cause of that is that it can’t decrypt the content. In our case, it can’t because we haven’t yet restored our key.

Step 6 – Restore the Encryption Key

On our new RS server, we need to run the RS Configuration Manager, select Encryption Keys, and then click the Restore button. You will be prompted for the file that you created and copied in Step 1, and this is where remembering the password comes in very handy.

image

Once this is done, we can close the Configuration Manager and return to Central Administration. However, now when we try to access any aspect of RS, we get a new error:

The feature: “Scale-out deployment” is not supported in this edition of Reporting Services. (rsOperationNotSupported)

The reason for this error is that when we restored the key, it added an entry in the Keys table in the Reporting Services database, causing RS to think that we’re using multiple Reporting Services servers. This is what’s known as a scale-out deployment, and is only supported in the Enterprise version of Reporting Services. Obviously this isn’t a problem for anyone running Enterprise, but if not, it’s a showstopper.

The way to fix this is to remove the old server’s entry from the Keys table. Using SQL Server Management Studio, connect to the Reporting Services database, and open the dbo.Keys table. The old entry should be easy to spot, as it will have the old server name. Simply delete the row.

image
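The same cleanup can be done with a couple of T-SQL statements. This is a hedged sketch that assumes the Keys table exposes the old server via a MachineName column (as it does in 2008 R2) – inspect first, then delete only the stale row.

USE ReportServer;

-- Look at the existing key entries first; the stale one carries the old server name
SELECT * FROM dbo.Keys;

-- Then remove only the row for the old server (hypothetical server name shown)
DELETE FROM dbo.Keys WHERE MachineName = 'OLDRSSERVER';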

Once the offending entry is deleted, RS should be good to go.

Step 7 – Fix Up the Content Type Names

I have posted about this already, but often, an upgrade will break the Content Type names for the Reporting Services content types. Just follow the steps in this post to clean them up.

Step 8 – Reconnect Reports with Data Sources and/or Republish

Moving connection files and reports around in SharePoint can cause them to become disconnected from each other, or cause the connection files to be disabled. It’s a good idea to navigate to all of your reports to make sure that they are connected, or better yet, to republish them from the source if you previously used BIDS to publish your reports.


How to Get SharePoint Data Into a Data Warehouse (And Back Again)

With apologies to J.R.R. Tolkien……

Yesterday, as part of the ongoing Panellist Spotlight series for SharePoint Shoptalk, I presented a session on how to move your data out of SharePoint and into a data warehouse, and then consume the warehoused data directly within SharePoint. These presentations are recorded and posted online, and if you’re interested, you can view the entire presentation below:

In it, I demonstrate how SQL Server Integration Services (SSIS) can be used to extract the data from SharePoint, and to store it in a SQL Server data warehouse. Then I walk through the creation of an external content type, and an external list using SharePoint Designer and SharePoint Business Connectivity Services (BCS). Finally, I create a report using SQL Server Reporting Services (SSRS) in SharePoint Integrated mode.

Thanks to James Wright for organizing the session, and to my fellow panellist, Laura Rogers.


Archiving Your SharePoint Workflow History Lists

If you’ve worked at all with SharePoint declarative workflows (the ones that you create with SharePoint Designer), or with others that are based on them, such as Nintex workflows, you are undoubtedly aware of their ability to log items to the history list. These items are the ones that appear in the Workflow History section of the workflow status page.

image

What may be less commonly known is how this works. The Workflow History section is really just a view of items contained in a hidden list on the site, and whenever a workflow logs an item, it gets created in that hidden list. For regular SharePoint workflows, this list is located at http://yoursiteurl/Lists/Workflow History, and for Nintex workflows, you can find it at http://yoursiteurl/Lists/NintexWorkflowHistory. So why does this matter? Well, if you need to audit what has happened with your workflows, this is where the information is contained.

There is, however, a catch. By default, SharePoint runs a timer job named “Workflow Auto Cleanup” daily that removes all of the tasks associated with a workflow, and all of the history links, for workflows that are over 60 days old. This is done for performance reasons. Unless your audit requirements only go back two months, this isn’t going to work for you.

Try doing a search for “Workflow History” and you’ll see that this has caused a number of issues (especially for those who have found out about it after the fact). The good news for those people is that the workflow history list isn’t actually purged (which is also bad news, as we’ll see shortly), and those links can be recreated through reporting. However, the most common guidance found on this topic is to simply disable the automatic cleanup job, as outlined in this very poorly named TechNet article.

The problem with disabling the job is that performance will suffer, potentially badly. Assume that we have an approval workflow that runs on a list that receives 2,500 approvals annually. This is a reasonably sized list (for SharePoint 2010). Now let’s also assume that during the life of each workflow, 10 items get logged to the history list. This means that in a given year, 25,000 items are being logged to the history list, which is beyond even the default list view threshold for administrators, and would be considered a very large list.

What is needed is a way to balance auditing requirements with the list size constraints of SharePoint. 25,000 items may be a large SharePoint list, but it’s trivial for a relational database like SQL Server. The remainder of this article discusses how to use Microsoft’s Business Intelligence tools to extract workflow history data into a data warehouse, and then safely purge it from the workflow history list. This will be a lengthy one.

Step 1 – Extract And Load

In my opinion, one of the most underutilized tools in Microsoft’s arsenal is SQL Server Integration Services (SSIS). Almost every SharePoint installation has it, and very few people know about it. It is Microsoft’s ETL (Extract, Transform, and Load) tool, and it is used for taking data from source systems, performing operations on it, and loading it into a destination system, which is typically a data warehouse in the form of one or more SQL databases. This is precisely what we need to do with our workflow history data. You can read more about SSIS here.

The problem, however, is that SSIS does not support SharePoint list data as a data source. Yes, ultimately all SharePoint data is stored in SQL content databases, but we all know that we’re supposed to stay out of there. SharePoint data should only be accessed via UI constructs, the SharePoint APIs, or the SharePoint web services.

Happily, a CodePlex project was created several years ago that adds both source and destination SSIS adapters for SharePoint list data, and yes, it works well with SharePoint 2007 data. What this project does is encapsulate calls to the SharePoint web services into SSIS data adapters. Because it uses the SharePoint web services (not the API), there is no requirement for the SharePoint bits to be installed where the package is being run.

You can find an excellent tutorial on how to set this up here, so I’m not going to go into any details on that. I will however cover the basic steps below. First, however I want to outline the logic involved.

What we want to be able to do is to maintain a complete log of workflow history. We also want to be able to keep the history in SharePoint for a period of time (60 days by default), and then be able to purge it, knowing that it’s secure in the data warehouse. Therefore, we need to take an initial dump of the data, and then be able to add only new items to it. The design of the data warehouse will also support multiple site history lists.

The solution will consist of 2 tables in a SQL data warehouse (a staging table and the actual archive table). The SSIS package will perform the following steps:

  1. Empty the staging table
  2. Extract the entire Workflow History List (SP) into the staging table (SQL)
  3. Query the archive table for the most recent entry
  4. Extract all items from the staging table more recent than the entry in step 3 into the archive table, and add in a site identifier.

First, open up Business Intelligence Development Studio (BIDS). BIDS is really just Visual Studio with all of the SQL BI project types added, and it is normally installed when SQL Server is installed. If not, you can install it from the SQL media. You do not need SQL Server installed on the same machine to use it, although having it there does offer some advantages.

From the Business Intelligence Projects section, select “Integration Services Project”, and give it a solution and project name. You’ll then be presented with the SSIS design canvas. The first thing that you’ll want to do is create two connection managers – one for SharePoint, and one for SQL. In the Connection Managers pane, right click anywhere in the window and select “New Connection”.

image

Scroll down, and select SPCRED – Connection manager for SharePoint connections, give it a name, and select the credentials. If you use the credentials of the executing process, it will use your credentials when you test it, but the credentials of the SQL Server Agent process if you schedule it to run automatically. Alternatively, you can enter the credentials of a proxy account, which is what I typically do. Repeat this process, only this time select OLEDB and configure the connection to your SQL Data Warehouse database (if you haven’t already done so, you’ll need to create a SQL database to house the archive).

Next, from the Toolbox, drag a Data Flow Task onto the Design surface. Your surface should look something like below:

image

Double click on the Data Flow task, and the Data Flow window will open (you can also click on the Data Flow tab). Here, from the toolbox, drag a SharePoint List Source and an OLE DB Destination task onto the surface. Double click on the SharePoint List Source, then click in the area to the right of the SharePoint Credential Connection, and set the connection manager to the one that you created above.

image

Next, click on the Component Properties tab, and enter valid values for the SharePoint source site URL and the list name. The list name will be either WorkflowHistory for standard SharePoint workflows, or NintexWorkflowHistory for Nintex workflows.

image

Click OK. Next, grab the green arrow at the bottom of the SharePoint List Source, and connect it to the OLE DB Destination. Double click on the OLE DB Destination, and click the New button beside the name of the table field. This allows us to create our staging table in the data warehouse with the appropriate schema for our Workflow History list. Once the Create Table window is open, simply change the name of the table to what you want (in this case, wfhStaging).

image
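For reference, the table that the Create Table window generates looks roughly like the abbreviated sketch below. The actual columns mirror whatever fields your Workflow History list exposes, so treat the column names and types here as illustrative assumptions only.

-- Abbreviated, illustrative version of the generated staging table;
-- your columns will follow the SharePoint list schema.
CREATE TABLE wfhStaging (
    ID int,
    Modified datetime,
    Created datetime,
    [Date Occurred] datetime,
    [Event Type] nvarchar(255),
    [User ID] nvarchar(255),
    Description nvarchar(max),
    Outcome nvarchar(max)
);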

As soon as you click OK, the table is created in SQL. Next, click the Mappings tab on the left, and confirm that all of the fields are mapped correctly from the SharePoint list to the SQL table. No changes should be required. When complete, click OK, and the data flow is ready for testing. From the BIDS Debug menu, select Start Debugging. After a pause, the process will run, and the boxes will turn yellow and then green as the process executes. If all works properly, you will see something like the screen below:

image

Both boxes turning green indicates that the process completed successfully, and there will be an indicator showing the number of rows that were transferred. You can confirm this by opening SQL Server Management Studio, selecting your data warehouse database, and running the following query:

SELECT Count(id) From wfhStaging

At this point, we need to stop the debugging process and switch back to the Control Flow tab. Given that we want to repopulate the staging table whenever we run this package, we need to clear it at the beginning of each run. Drag an “Execute SQL Task” from the toolbox onto the design surface, above or to the left of the Data Flow task. Use its arrow to connect it to the Data Flow task, and then double click on it. Select your OLE DB connection as its connection property, and enter the following SQL (substituting your table name) as its SQL statement:

TRUNCATE TABLE wfhStaging

image

Next, we will need to create and populate our actual archive table. To do this, drag another Data Flow task onto the design surface. Connect the output from the first data flow task to it and then double click on it. Drag an OLEDB Source, a derived column, and an OLEDB destination onto the design surface.

We want to be able to store the workflow history for multiple sites in the same data warehouse table. To do this, we need to add another identifier column to the schema of the workflow history list that will uniquely identify the source site. In our case we will use the relative site URL. The derived column action will add this column to each row as it is processed.

Configure the OLEDB Source to read from the staging table. Then, connect the OLEDB Source to the Derived Column action with the green arrow. Double click on the Derived Column action. Under Derived Column Name, enter the name of the new column. Leave the Derived Column setting as “add as new column”, and for the expression, simply use a literal string containing the site’s relative URL. When complete, the action should appear as below.

image

Click OK to close the dialog, and then connect the Derived Column action to the OLEDB Destination action with the green arrow. Double click the OLEDB Destination action and repeat the steps taken above to create the staging table, only this time you’ll create the actual archive table (wfhArchive). This time, when you click on the Mappings tab, note that the SiteURL column has been added at the bottom. Don’t run debug at this point, as it will run the entire package. Click back on the Control Flow tab, right click on the new Data Flow task, and select Execute Task. Just that task will run, and if you move back to the Data Flow tab, you should see that the same number of rows have been added to the archive table.
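A quick way to confirm that in SSMS, using the table names from this walkthrough, is a simple count comparison:

-- Row counts should match after the initial load
SELECT
    (SELECT COUNT(ID) FROM wfhStaging) AS StagingRows,
    (SELECT COUNT(ID) FROM wfhArchive) AS ArchiveRows;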

Now we need to ensure that only the new rows from the staging table are moved into the archive table. To do this, we will change the SELECT statement in the OLEDB Source of the second Data Flow task. First, we’ll need to know the date/time of the latest record in the archive table for this site. The SQL statement for this looks like:

SELECT MAX(Modified) From wfhArchive 
Where SiteURL='/SalespersonChangeRequest'

We can therefore embed that statement into the SELECT statement for our staging table. However, we still need to accommodate the case where there are no records in the archive table, in which case the above statement returns a NULL value. To deal with this, we can use the ISNULL T-SQL function, and our complete staging table SELECT statement becomes:

SELECT * FROM wfhStaging 
Where Modified > 
  ISNULL(
      (SELECT MAX(Modified) From wfhArchive 
       Where SiteURL='/SalespersonChangeRequest'
       ),
       '1900-01-01')

Translated into English, this basically says: “Find the Modified value of the most recent record with SiteURL set to /SalespersonChangeRequest. If you don’t find any, use 1900-01-01. Then, get me everything from the staging table with a more recent Modified date.”

Now that we have our SQL, we need to modify our OLEDB Source action. Double click on it, and then change the Data access mode from “Table or view” to “SQL command”. Then, add the select statement to the SQL command text window. At completion, the window should appear as follows:

image

Once done, click back to the Control Flow tab, and then start debugging (you can also just press the F5 key). The first Data Flow task should write all of the source records to the staging table, and the second should write none (assuming nothing has happened to the source since you did the initial extract). You can try deleting some of the records from the archive table and rerunning the package – they should get replaced. That was Step 1.

Step 2 – Schedule the package

Now that we have our package, we want it to run periodically (usually nightly). We do this by deploying the package to the server, and then scheduling it to run with the SQL Server Agent.

To deploy the package, we first need to create a deployment utility for it. To do this, select the project in the Solution Explorer pane, and then select Project – Properties from the menu. The Configuration Properties window opens. In the left pane, select the Deployment Utility tab, and ensure that CreateDeploymentUtility is set to True.

image

Also – take note of the DeploymentOutputPath value.

Next, build the project from the Build menu; this creates the deployment utility, which is stored in a subfolder of the package project. You can find the folder of the project by selecting the project in the Solution Explorer pane and examining its FullPath property in the Properties pane. Open the project path in Windows Explorer, and then navigate to the DeploymentOutputPath noted above. In that folder, you’ll find a file named yourprojectname.SSISDeploymentManifest. When ready to deploy, double click on it, and the Deployment Wizard will start.

The Deployment Wizard is straightforward and self-explanatory. You’ll want to select “SQL Server deployment” on the first screen, then the SQL Server that you wish to deploy to (usually (local)), and finally a location for the package path (the root is likely fine). Once the wizard is complete, you are ready to schedule the package.

Open up SQL Server Management Studio, and connect to the destination server. If the SQL Server Agent service is not running, start it (you’ll also want to make sure that it starts automatically). Expand the SQL Server Agent node, and then expand the Jobs node. Right click on Jobs, and select New Job.

image

Give the Job a name, then click on the Steps tab. Click the New button to create a new step, and give it a name. In the Type dropdown, select “SQL Server Integration Services Package”. In the General section, select the SQL server that holds the package, and then use the ellipsis in the Package field to select the package you deployed above.

image

Next, click the Schedules tab, click the new button, give the schedule a name, and select when you want the job to run. Save the schedule, and then save the job (click OK). Your job should now appear in the Jobs folder. To test it, right click on it and select “Start Job at step”. The job will run and you will see its progress in a dialog.

There are many options for scheduling SSIS jobs, and for error handling, and I would strongly recommend investigating them.
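For what it’s worth, the same job can also be scripted against the msdb job stored procedures rather than built through the dialogs. The sketch below is illustrative only – the job, step, schedule, and package names are hypothetical, and the /SQL path must match wherever you deployed the package.

USE msdb;

-- Create the job and its single SSIS step (package path is an assumption)
EXEC dbo.sp_add_job @job_name = N'Archive Workflow History';
EXEC dbo.sp_add_jobstep
    @job_name  = N'Archive Workflow History',
    @step_name = N'Run archive package',
    @subsystem = N'SSIS',
    @command   = N'/SQL "\WorkflowHistoryArchive" /SERVER "(local)"';

-- Run it nightly at 2:00 AM, on the local server
EXEC dbo.sp_add_jobschedule
    @job_name  = N'Archive Workflow History',
    @name      = N'Nightly',
    @freq_type = 4,               -- daily
    @freq_interval = 1,
    @active_start_time = 020000;  -- HHMMSS
EXEC dbo.sp_add_jobserver @job_name = N'Archive Workflow History';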

Step 3 – Purge the Workflow History Data

As mentioned above, the workflow cleanup job removes workflow history associations, but does not actually delete the items from the history list, allowing that list to grow large. If you use Nintex, there’s an NWAdmin command that will take care of this for you:

NWAdmin.exe -o PurgeHistoryListData -siteUrl urlToSite
 [-workflowName workflowName] [-days daysSinceLastActivity]
 [-lastActivityBefore datetime DateFormat] [-state All|Running|Completed|Cancelled|Error]
 [-deletedLists] [-clearAll [-workflowItemId id -workflowListName "list name"]] [-verbose]
 [-reportOnly] [-batchSize numberDefaultIs500]
 [-pauseAfterBatch] [-maxItemsToDelete number]

This command is run on a front end server, and to keep things up to date, it would need to be scheduled. If you’re using out of the box workflows, however, there is no equivalent command. You could just access the history list and remove old data manually, but since SharePoint has built-in tools for this, I recommend using them. These features are contained in the Information management policy settings of any list.

Open the workflow history list (your site URL + Lists/WorkflowHistory or Lists/NintexWorkflowHistory). Open the List Settings page, and select “Information management policy settings” in the Permissions and Management section. If you don’t see this option, you may need to enable the relevant features. In the Content Type Policies section, select the Workflow History content type, and then select “Enable Retention”. Once enabled, you will be able to select “Add a retention stage”.

The retention stage is what we will use to delete the workflow history items (which, given the name, is somewhat counter-intuitive, don’t you think?). Date Occurred is when the event was logged, so it is likely our best time indicator, and I would suggest a period at least double that of the automatic cleanup task’s 60 days. Finally, we want the item to be deleted at this point, so we select “Permanently Delete” from the Action dropdown. When complete, the stage will appear as follows:

image

Once we save our policy, the expired items will be deleted the next time the timer jobs run.

And that’s all there is to it!

Now that we’ve taken the data out of SharePoint, it’s no longer readily available to end users. If this is important, we will need to build some Reporting Services reports and integrate them back into the appropriate locations in SharePoint. That will (hopefully) be the subject of an upcoming post.


Announcing the Premiere of The Data Queen

I’m happy to announce that a new UnlimitedViz blog went live today: The Data Queen. This blog is authored by Martina White, who has over 10 years of experience working with Business Intelligence products – originally with Cognos, but for the past 6 years she has worked exclusively with the Microsoft BI stack of products.

Martina is a cube designer extraordinaire and can form data into all kinds of informative shapes, hence the nickname. We’ve convinced her to start putting her techniques out in public, and this blog is the result. She’s already posted a couple of interesting articles, and there will be many more to come.

Have a look when you get a chance – it’s at http://dataqueen.unlimitedviz.com.


Second Foundation Intelligence named as a finalist for the Business Intelligence Solution of the Year in the 2010 Microsoft IMPACT Awards

Time for a little self promotion here, I think! I’m pretty proud to announce that my company, Second Foundation Intelligence (in case you haven’t noticed the logo on the upper right of this page…), has been named a finalist for Business Intelligence Solution of the Year at the upcoming Microsoft Canada IMPACT Awards ceremony.

We’re pretty excited and pretty proud of it. For the solution in question, we built out an end-to-end budgeting system for a local municipality using SharePoint Server 2010, SQL Server 2008 R2, and Excel 2010. Here is a link to the press release.

Wish us luck!
