
Month: September 2011

Archiving Your SharePoint Workflow History Lists

If you’ve worked at all with SharePoint declarative workflows (the ones that you create with SharePoint Designer), or with others that are based on them, such as Nintex workflows, you are undoubtedly aware of their ability to log items to the history list. These items are the ones that appear in the Workflow History section of the workflow status page.

[screenshot]

What may be less commonly known is how this works. The Workflow History section is really just a view of items contained in a hidden list on the site; whenever a workflow logs an entry, an item gets created in that list. For regular SharePoint workflows, this list is located at http://yoursiteurl/Lists/Workflow History, and for Nintex workflows, you can find it at http://yoursiteurl/Lists/NintexWorkflowHistory. So, why does this matter? Well, if you need to audit what has happened with your workflows, this is where the information is contained.
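
If you just want a sense of how much history has accumulated, the hidden list can be inspected from the SharePoint 2010 Management Shell. The following is a minimal sketch (the site URL is a placeholder, and the Nintex list may need to be retrieved by its own title or URL):

$web = Get-SPWeb http://yoursiteurl
$list = $web.Lists["Workflow History"]   # for Nintex, the hidden list lives at Lists/NintexWorkflowHistory
"{0}: {1} items" -f $list.Title, $list.ItemCount
$web.Dispose()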

There is, however, a catch. By default, SharePoint runs a timer job named “Workflow Auto Cleanup” daily that removes all of the tasks associated with a workflow, and all of the history links, for workflows that are more than 60 days old. This is done for performance reasons. Unless your audit requirements only go back two months, this isn’t going to work for you.
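
If you want to see how this job is configured in your farm, you can locate it with PowerShell. This is just a rough sketch; it matches on the display name rather than assuming the job’s exact internal name:

Get-SPTimerJob -WebApplication http://yoursiteurl |
    Where-Object { $_.DisplayName -like "*Workflow*" } |
    Format-Table DisplayName, Name, Schedule -AutoSize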

Try doing a search for “Workflow History” and you’ll see that this has caused a number of issues (especially for those who have found out about it after the fact). The good news for those people is that the workflow history list isn’t actually purged (which is also bad news, as we’ll see shortly), and those links can be recreated through reporting. However, the most common guidance found on this topic is to simply disable the automatic cleanup job, as outlined in this very poorly named TechNet article.

The problem with disabling the job is that performance will suffer, potentially badly. Assume that we have an approval workflow that runs on a list that receives 2,500 approvals annually. This is a reasonably sized list (for SharePoint 2010). Now let’s also assume that during the life of each workflow, 10 items get logged to the history list. This means that in a given year, 25,000 items are being logged to the history list, which is beyond even the default list view threshold for administrators and auditors (20,000 items), and would be considered a very large list.

What is needed is a way to balance the auditing requirements with the list size constraints of SharePoint. 25,000 items may be a large SharePoint list, but it’s trivial for a relational database like SQL Server. The remainder of this article discusses how to use Microsoft’s Business Intelligence tools to extract workflow history data into a data warehouse, and then safely purge it from the workflow history list. This will be a lengthy one.

Step 1 – Extract And Load

In my opinion, one of the most underutilized tools in Microsoft’s arsenal is SQL Server Integration Services (SSIS). Almost every SharePoint installation has it, and very few know about it. It is Microsoft’s ETL (Extract, Transform, and Load) tool, used for taking data from source systems, performing operations on it, and loading it into a destination system, typically a data warehouse in the form of one or more SQL databases. This is precisely what we need to do with our Workflow History data. You can read more about SSIS here.

The problem, however, is that SSIS does not support SharePoint list data as a data source. Yes, ultimately all SharePoint data is stored in SQL content databases, but we all know that we’re supposed to stay out of there. SharePoint data should only be accessed via UI constructs, the SharePoint APIs, or the SharePoint web services.

Happily, a Codeplex project was created several years ago that adds both source and destination SSIS adapters for SharePoint list data, and yes, it works well with SharePoint 2007 data. What this project does is to encapsulate calls to the SharePoint web services into SSIS data adapters. Because it uses the SharePoint web services (not the API), there is no requirement for the SharePoint bits to be installed where it is being run.

You can find an excellent tutorial on how to set this up here, so I’m not going to go into any details on that. I will, however, cover the basic steps below. First, I want to outline the logic involved.

What we want to be able to do is to maintain a complete log of workflow history. We also want to be able to keep the history in SharePoint for a period of time (60 days by default), and then be able to purge it, knowing that it’s secure in the data warehouse. Therefore, we need to take an initial dump of the data, and then be able to add only new items to it. The design of the data warehouse will also support multiple site history lists.

The solution will consist of two tables in a SQL data warehouse (a staging table and the actual archive table). The SSIS package will perform the following steps:

  1. Empty the staging table
  2. Extract the entire Workflow History List (SP) into the staging table (SQL)
  3. Query the archive table for the most recent entry
  4. Extract all items from the staging table more recent than the entry in step 3 into the archive table, and add in a site identifier.

First, open up Business Intelligence Development Studio (BIDS). BIDS is really just Visual Studio with all of the SQL BI project types added, and is normally installed when SQL Server is installed. If not, you can install it from the SQL media. You do not need SQL Server installed on the machine where you use it, although having it locally does have some advantages.

From the Business Intelligence Projects section, select “Integration Services Project”, and give it a solution and project name. You’ll then be presented with the SSIS design canvas. The first thing that you’ll want to do is to create two connection managers: one for SharePoint, and one for SQL. In the Connection Managers pane, right click anywhere in the window and select “New Connection”.

[screenshot]

Scroll down, and select SPCRED – Connection manager for SharePoint connections, give it a name, and select the credentials. If you use the credentials of the executing process, it will use your credentials when you test it, but the credentials of the SQL Server Agent process if you schedule it to run automatically. Alternatively, you can enter the credentials of a proxy account, which is what I typically do. Repeat this process, only this time select OLEDB and configure the connection to your SQL Data Warehouse database (if you haven’t already done so, you’ll need to create a SQL database to house the archive).

Next, from the Toolbox, drag a Data Flow Task onto the Design surface. Your surface should look something like below:

[screenshot]

Double click on the Data Flow task, and the Data Flow window will open (you can also click on the Data Flow Tab). Here, from the toolbox, drag a SharePoint List Source, and an OLE DB Destination task onto the surface. Double click on the SharePoint List source, then click in the area to the right of the SharePoint Credential Connection, and set the Connection manager to the manager that you created above.

[screenshot]

Next, click on the Component properties tab, and enter valid values for the SharePoint source site URL, and the list name. The List name will either be WorkflowHistory for standard SharePoint workflows, or NintexWorkflowHistory, for Nintex workflows.

[screenshot]

Click OK. Next, grab the green arrow at the bottom of the SharePoint List source, and connect it to the OLE DB Destination. Double click on the OLE DB Destination, and click the New button beside the Name of the table field. This allows us to create our staging table in the data warehouse with the appropriate schema for our Workflow History list. Once the Create Table window is open, simply change the name of the table to what you want (in this case wfhStaging).

[screenshot]

As soon as you click OK, the table is created in SQL. Next, click the Mappings tab on the left, and confirm that all of the fields are mapped correctly from the SharePoint list to the SQL table. No changes should be required. When complete, click OK, and the data flow is ready for testing. From the BIDS Debug menu, select Start Debugging. After a pause, the process will run, and the boxes will turn yellow and green as the process executes. If all works properly, you will see something like the screen below:

[screenshot]

Both boxes turning green indicates that the process completed successfully, and there will be an indicator showing the number of rows that were transferred. You can confirm this by opening SQL Server Management Studio, selecting your data warehouse database, and running the following query:

SELECT Count(id) From wfhStaging

At this point, we need to stop our debugging process and switch back to the Control Flow tab. Given that we want to repopulate the staging table whenever we run this package, we need to clear it at the beginning of the run. Drag an “Execute SQL Task” from the toolbox onto the design surface, above or to the left of the data flow task. Use its arrow to connect it to the Data Flow task, and then double click on it. Select your OLE DB connection as its connection property, and enter the following SQL (substituting your table name) as its SQL Statement:

TRUNCATE TABLE wfhStaging

[screenshot]

Next, we will need to create and populate our actual archive table. To do this, drag another Data Flow task onto the design surface. Connect the output from the first data flow task to it, and then double click on it. Drag an OLEDB Source, a Derived Column transformation, and an OLEDB Destination onto the design surface.

We want to be able to store the workflow history for multiple sites in the same data warehouse table. To do this, we need to add another identifier column to the schema of the workflow history list that will uniquely identify the source site. In our case we will use the relative site URL. The derived column action will add this column to each row as it is processed.

Configure the OLEDB Source to read from the staging table. Then, connect the OLEDB Source to the Derived Column action with the green arrow. Double click on the Derived Column action. Under Derived Column Name, enter the name of the new column (in this case SiteURL). Leave the Derived Column setting as “add as new column”, and for the expression we will simply use a literal string containing the relative site URL. When complete, the action should appear as below.

[screenshot]

Click OK to close the dialog, and then connect the Derived Column action to the OLEDB Destination action with the green arrow. Double click the OLEDB Destination action and repeat the steps taken above to create the staging table, only this time you’ll create the actual archive table. This time, when you click on the Mappings tab, note that the SiteURL column has been added at the bottom. Don’t run debug at this point, as it will run the entire package. Click back on the Control Flow tab, right click on the new Data Flow action, and select Execute Task. Just that task will run, and if you move back to the Data Flow tab, you should see that the same number of rows have been added to the archive table.

Now we need to ensure that only the new rows from the staging table are moved into the archive table. To do this, we will change the select statement in the OLEDB source of the second data flow task. First, we’ll need to know the date/time of the latest record in the archive table for this site. The SQL statement for this looks like this:

SELECT MAX(Modified) From wfhArchive 
Where SiteURL='/SalespersonChangeRequest'

Therefore, we can embed that statement into the select statement for our staging table. However, we still need to accommodate the case where there are no records in the archive table, in which case the above statement returns a NULL value. To deal with this, we can use the ISNULL T-SQL function, and our complete staging table select statement becomes:

SELECT * FROM wfhStaging 
Where Modified > 
  ISNULL(
      (SELECT MAX(Modified) From wfhArchive 
       Where SiteURL='/SalespersonChangeRequest'
       ),
       '1900-01-01')

Translated into English, this basically says “Find the Modified value of the most recent record of any items with SiteURL set to /SalespersonChangeRequest. If you don’t find any, use 1900-01-01. Then, get me everything from the staging table with a more recent Modified date.”

Now that we have our SQL, we need to modify our OLEDB Source action. Double click on it, and then change the Data access mode from “Table or view” to “SQL command”. Then, add the select statement to the SQL command text window. At completion, the window should appear as follows:

[screenshot]

Once done, click back to the Control Flow tab, and then start debugging (you can also just press the F5 key). The first Data Flow task should write all of the source records to the staging table, and the second should write none (assuming nothing has happened to the source since you did the initial extract). You can try deleting some of the records from the archive table and rerunning the package – they should get replaced. That was step 1.

Step 2 – Schedule the package

Now that we have our package, we want it to run periodically (usually nightly). We do this by deploying the package to the server, and then scheduling it to run with the SQL Server Agent.

To deploy the package, we need to first create a deployment utility for it. To do this, select the project in the Solution Explorer pane, and then select Project – Properties from the menu. The Configuration Properties window will open. In the left pane, select the Deployment Utility section, and ensure that CreateDeploymentUtility is set to True.

[screenshot]

Also, take note of the DeploymentOutputPath value.

Next, build the project (from the Build menu), which will create the deployment utility, and then run it. The deployment utility is stored in a subfolder of the package project. You can find the folder of the project by selecting the project in the Solution Explorer pane, and then examining its FullPath property in the Properties pane. Open the project path in Windows Explorer, and then navigate to the DeploymentOutputPath noted above. In that folder, you’ll find a file named yourprojectname.SSISDeploymentManifest. When ready to deploy, double click on it, and the Deployment wizard will start.

The deployment wizard is straightforward and self explanatory. You’ll want to select “SQL Server deployment” on the first screen, then the SQL server that you wish to deploy to (usually (local) ), and select a location for the Package Path (the root is likely fine). Once the wizard is complete, you are ready to schedule the package.
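
Before wiring the package into a job, it can be worth a quick smoke test from the command line with dtexec, which runs packages that have been deployed to SQL Server. This is only a sketch; the package name and server below are placeholders for whatever you deployed:

# run the deployed package from msdb on the local instance (hypothetical package name)
dtexec /SQL "WorkflowHistoryArchive" /SERVER "(local)"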

Open up SQL Server Management Studio, and connect to the destination server. If the SQL Server Agent service is not running, start it (you’ll want to make sure that it is set to start automatically). Expand the SQL Server Agent node, and then expand the Jobs node. Right click on Jobs, and select New Job.

[screenshot]

Give the Job a name, then click on the Steps tab. Click the New button to create a new step, and give it a name. In the Type dropdown, select “SQL Server Integration Services Package”. In the General section, select the SQL server that holds the package, and then use the ellipsis in the Package field to select the package you deployed above.

[screenshot]

Next, click the Schedules tab, click the New button, give the schedule a name, and select when you want the job to run. Save the schedule, and then save the job (click OK). Your job should now appear in the Jobs folder. To test it, right click on it and select “Start Job at Step”. The job will run, and you will see its progress in a dialog.

There are many options for scheduling SSIS jobs, and for error handling, and I would strongly recommend investigating them.

Step 3 – Purge the Workflow History Data

As mentioned above, the workflow cleanup job removes workflow history associations, but does not actually delete the items from the list, allowing that list to grow large. If you use Nintex, there’s a Nintex command that will take care of this for you:

NWAdmin.exe -o PurgeHistoryListData -siteUrl urlToSite
 [-workflowName workflowName] [-days daysSinceLastActivity]
 [-lastActivityBefore datetime DateFormat] [-state All|Running|Completed|Cancelled|Error]
 [-deletedLists] [-clearAll [-workflowItemId id -workflowListName "list name"]] [-verbose]
 [-reportOnly] [-batchSize numberDefaultIs500]
 [-pauseAfterBatch] [-maxItemsToDelete number]

This command is run on a front end server. To keep things up to date it would need to be scheduled. However, if you’re using out of the box workflows, there is no equivalent command. You could just access the history list and remove old data, but since SharePoint has built in tools for this, I recommend using them. These features are contained in the Information management policy settings of any list.
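
If you do want to script the cleanup of the out of the box history list yourself, a few lines of PowerShell along these lines will do it. Treat this as an illustrative sketch only (the site URL and the retention window are placeholders, it assumes the list title is “Workflow History”, and walking a very large list item by item will be slow); the retention policy approach described below is the route I recommend:

$web = Get-SPWeb http://yoursiteurl
$list = $web.Lists["Workflow History"]
$cutoff = (Get-Date).AddDays(-120)   # for example, keep 120 days of history
# collect the IDs first so we aren't deleting while enumerating the collection
$oldIds = $list.Items | Where-Object { $_["Modified"] -lt $cutoff } | ForEach-Object { $_.ID }
foreach ($id in $oldIds) { $list.GetItemById($id).Delete() }
$web.Dispose()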

Open the workflow history list (your site url + Lists/WorkflowHistory or Lists/NintexWorkflowHistory). Open the List Settings page, and select “Information management policy settings” in the Permissions and Management section. If you don’t see this option, you may need to enable the relevant features. In the Content Type Policies section, select the Workflow History content type, and then select “Enable Retention”. Once enabled, you will be able to select “Add a retention stage”.

The retention stage is what we will use to delete the workflow history items (which, given the name, is somewhat counter-intuitive, don’t you think?). Date Occurred is when the event was logged, so it is likely our best time indicator, and I would suggest a period at least double the 60 days used by the automatic cleanup task. Finally, we want the item to be deleted at this point, so we select “Permanently Delete” from the Action dropdown. When complete, the stage will appear as follows:

[screenshot]

Once we save our policy, the expired items will be deleted the next time the timer jobs run.
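
If you don’t want to wait for the nightly schedule while testing, the policy processing jobs can be kicked off manually. A rough sketch (the job names are matched loosely here rather than assumed exactly):

Get-SPTimerJob -WebApplication http://yoursiteurl |
    Where-Object { $_.DisplayName -like "*Expiration*" -or $_.DisplayName -like "*Information management*" } |
    ForEach-Object { Start-SPTimerJob $_ }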

And that’s all there is to it!

Now that we’ve taken the data out of SharePoint, it’s no longer obviously available to end users. If this is important, we will need to build some Reporting Services reports, and integrate them back into the appropriate locations in SharePoint. This will (hopefully) be the subject of an upcoming post.


Upgrading From WSS 3.0 to Search Server Express

About a year ago, I wrote a couple of articles (here and here) that discuss the merits of using Search Server Express 2010 instead of SharePoint Foundation. It really boils down to the fact that you get more stuff, and it’s still free. As opportunities for Foundation arise, we have been installing SSE and our customers are quite pleased with the result.

I recently had the opportunity to perform an in place upgrade of a WSS 3.0 site that we had built a few years ago to Foundation, and I of course decided to use SSE instead. As it turns out, the upgrade wasn’t quite as straightforward as I had hoped.

Normally, when you perform an in place upgrade from WSS to Foundation, you first install the bits, and then run the Products and Technologies Configuration Wizard, which in turn detects the pre-existing WSS installation and offers to upgrade it. Unfortunately, this doesn’t happen with SSE. The Wizard only prompted for a new or existing Farm. 

The next step was to uninstall SSE and install Foundation. Once this was done, the wizard did detect the existing installation, and properly upgraded the entire farm. At that point I thought “why wait?”, and went ahead and laid down the bits for Search Server Express, and then for Office Web Applications.

Everything seemed alright, but when I tried to start the services on the server, they simply weren’t there. It also wasn’t possible to create the corresponding service applications for either SSE or Office Web Applications. After much head pounding, I decided to uninstall everything (OWA, SSE, and Foundation, carefully, as outlined here), and then install SSE alone, joining it to the pre-existing farm.

Once that was done, everything showed up properly, and I was able to start the appropriate search and OWA services, and to create the corresponding service applications.

So as it turns out, order of operations is pretty important in this scenario. If you want to upgrade from WSS 3.0 to Search Server Express 2010 (using the in place upgrade approach), you’ll want to follow these steps:

  1. Install SharePoint Foundation 2010 on your server
  2. Run the Products Configuration Wizard, and perform the upgrade
  3. Uninstall SharePoint Foundation from the server, removing it from the farm
  4. Install Search Server Express 2010 on the Server
  5. Run the Products Configuration Wizard, and re-join the existing farm
  6. Test the site to ensure that it’s functional
  7. (optional) Install any appropriate Service Packs and/or hot fixes for SSE
  8. (optional) Run the Products Configuration Wizard to update the databases (if step 7 was performed)
  9. (optional) If desired, Install Office Web Applications, and any appropriate Service Packs for OWA
  10. Run the Products Configuration Wizard to complete the OWA installation
  11. Start all necessary services and create the necessary service applications (search is a big one); see the sketch after this list for a quick way to check what still needs starting
  12. Create a basic Search Center and configure your site collection to use it.
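
For step 11, a quick way to see which service instances on the server still need to be started is a couple of lines of PowerShell, run from the SharePoint 2010 Management Shell (a sketch only):

# show service instances on this server that are not yet online
Get-SPServiceInstance -Server $env:COMPUTERNAME |
    Where-Object { $_.Status -ne "Online" } |
    Format-Table TypeName, Status -AutoSize
# once identified, a given instance can be started with Start-SPServiceInstance -Identity <id>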

Hopefully this helps any other folks in the same situation.


SharePoint 2010 Upgrade breaks Microsoft Access Client Application

I have been doing quite a few 2007 to 2010 upgrades lately, and suffering the appropriate slings and arrows. A recent upgrade resulted in a few issues, the strangest of which was that Microsoft Access could no longer open a SharePoint list.

For quite some time now, Microsoft Access has been able to read and write data from SharePoint lists as if they were native Access tables. This is distinct from Access Services, which ships with the Enterprise version of SharePoint Server. Access Services lets you “convert” your entire Access application to a SharePoint site, at which point the Access client is no longer required (for a user).

Our situation was much simpler. We were dealing with a power user who was good with Access, and had leveraged the list read/write capability quite heavily with 2007. However, after the upgrade, Access 2007 couldn’t open some of the lists that it could previously. Compounding the confusion, Access 2010 didn’t have this problem with the lists in question, and the browser could open these lists just fine.

The answer to this one came from what appeared to be a different problem. Some of the other lists in the site couldn’t be opened by the browser. Instead, the user received the message “The query cannot be completed because the number of lookup columns it contains exceeds the lookup column threshold enforced by the administrator”:

[screenshot]

SharePoint has a bad reputation for “Unknown Error” messages, but this one is really quite good. It pointed squarely at the list throttling features available in SharePoint 2010 that of course weren’t there in 2007. Basically, 2010 allows an administrator to throttle, or prevent, poorly performing operations from slowing down the system for everyone. One such expensive operation is performing lookups, and the default limit is set to 8.

Dina Ayoub has a good post here on the throttling features if you would like to learn more, but the important thing to note is that this setting affects not just lookup fields, but Person/Group and Workflow Status fields as well, so if you have more than 8 of them, the list will simply stop working.

This setting is scoped at the web application level, so if it is changed, you affect all site collections in that application. (It also means that you can’t change it at all in Office 365.) You set it through the Resource Throttling settings in Central Administration. Once in Central Administration, click on Application Management, highlight the application to be changed, and in the General Settings dropdown, pick Resource Throttling.

[screenshot]

Scroll down to the section titled “List View Lookup Threshold”:

[screenshot]

Here, you can simply increase its value to where you need it.
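
The same setting can also be changed with PowerShell, which is handy if you need to apply it consistently across environments. A minimal sketch (the URL and the new limit are placeholders):

# raise the list view lookup threshold for one web application
$wa = Get-SPWebApplication http://yourwebapp
$wa.MaxQueryLookupFields = 12   # hypothetical new limit
$wa.Update()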

Changing these values should be done with considerable care. The throttling features were implemented for very good reasons, and changing them risks overloading your SQL server. A much better approach would be to go back and rethink the design of your list, if that’s an option. If it isn’t, then this is a decent plan B. You can always buy more hardware…….

So this fixed our post-upgrade list issue in the browser, but how does it relate to our Access problem? Well, it turns out that they were one and the same, just manifesting differently. It seems that when Access opens a list, it does something that adds a few more lookup-type items to the query, or at least it behaves that way. It also appears that Access 2010 and Access 2007 behave differently in this regard. In the end, increasing this value sufficiently solved the Access problems.

I haven’t found anything definitive out there, but anecdotally at least, you should be aware that when you use Access to open a SharePoint list, you pay a “List View Lookup Threshold” penalty.


How To Replace A Custom Field In SharePoint 2010

One of the benefits of SharePoint is its extensibility. It comes with a cornucopia of tools and features that provide fantastic business value, but inevitably, you’ll come across a requirement that the out of the box features don’t address. The good news is that its extensible nature allows for custom development, so these features can be added.

Customization is not without its drawbacks. Just because you can build on SharePoint doesn’t mean that you should. With custom code come all of the ownership costs of running custom code, and the upgrade difficulties that go with it. If you can possibly use a no-code solution, you probably should. A recent customer challenge demonstrates this well, and provides a good example of a highly customized solution being replaced with a no-code solution.

Back in the days of SharePoint 2007 (!) we ran across a number of requirements for a cascaded combo box control. When filling out a form, we wanted the options available to one combo box to be driven by the selection in another. Unfortunately, there was no real way to do this with SharePoint forms. It can be done with InfoPath through filtering, but that was only available through InfoPath form libraries, and we needed to provide this capability on standard lists. Our solution was to implement a custom column that did just that.

This worked perfectly well for a few years, but when it came time to upgrade to SharePoint 2010, we found that the custom control didn’t quite work. One option would of course be to fire up Visual Studio and make the necessary modifications. However, since SharePoint 2010 now allows you to use InfoPath to edit the forms of standard SharePoint lists, that was determined to be the better choice, and we could ditch the custom controls. Another drawback of the custom columns is that InfoPath simply will not work with them, making solid form design difficult.

However, given that this was an upgrade, we wanted to make sure that all of the existing data was retained, so we couldn’t just trash it and start over. We needed to somehow convert the old column data to standard SharePoint data. We were able to do this with the PowerShell export and import cmdlets, with a few tweaks.

Step 1 – Upgrade the Content Database

Before you can use this method, the content databases will need to be in SP2010 format. If you’re doing an in place upgrade, this will happen automatically, but if you’re using the DB attach method, you’ll need to first install the custom solutions on the new farm, or else the database mount procedure could fail.
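
If you are going the database attach route, it’s worth running Test-SPContentDatabase first, since it will flag missing solutions and features before you attempt the mount. A sketch with placeholder names:

# check the database against the target web application, then attach it
Test-SPContentDatabase -Name WSS_Content_Accounts -WebApplication http://server1
Mount-SPContentDatabase -Name WSS_Content_Accounts -DatabaseServer SQL01 -WebApplication http://server1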

Step 2 – Export the List Data

Most SharePoint administrators are familiar with the STSADM import and export commands that allow you to export and import either single sites or entire site collections. This capability still exists in SharePoint 2010 with both STSADM and with PowerShell, but the PowerShell commands also allow you to export individual lists.

When ready, export your list with the following PowerShell command:

Export-SPWeb -Identity siteurl -Path outputpath -ItemUrl listurl -IncludeUserSecurity -IncludeVersions All -NoFileCompression -Verbose

where:
siteurl is the url to the containing site (or web), e.g. http://server1/accounts/2010
outputpath is the file system path for the output files
listurl is the relative url of the list (e.g. Lists/customers)

 

The NoFileCompression is important because we need to edit one of the output files after the export is complete.
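
As a concrete (and purely illustrative) example, an export of a hypothetical customers list might look like this, with the output going to a folder rather than a compressed .cmp file because of -NoFileCompression:

Export-SPWeb -Identity http://server1/accounts/2010 -Path C:\Export\Customers `
    -ItemUrl "Lists/customers" -IncludeUserSecurity -IncludeVersions All `
    -NoFileCompression -Verbose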

Step 3 – Modify The List

Now that the data has been saved outside of the list, you can go ahead and remove the custom columns from the list. Once you’ve done so, be sure to replace them with identically named columns. If possible, you should also use the SharePoint column type that the custom columns were originally derived from. In my example below, I replace a Cascaded Lookup (custom) column with a Text column.

Once this is done, you’ll also want to delete all of the current items in the list. You could simply delete the list itself, but if you have any workflows, or custom forms, they’ll get deleted too. Also, a new list will have a new internal GUID, which may not work for you.
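
If you want to empty the list without deleting it, a few lines of PowerShell will do it. This is a sketch only (the URL and list name are placeholders), and it will of course destroy the items, so make sure the export from step 2 is good first:

$web = Get-SPWeb http://server1/accounts/2010   # hypothetical site
$list = $web.Lists["Customers"]                 # hypothetical list
# collect the IDs first so we aren't deleting while enumerating the collection
$ids = $list.Items | ForEach-Object { $_.ID }
foreach ($id in $ids) { $list.GetItemById($id).Delete() }
$web.Dispose()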

Another option is to use the Data Sheet view to copy the old field values to Excel, so that you can copy them back after the fields are changed. This approach will update the last edited date and the last edited by values, which may not be acceptable. If it is, you can omit steps 2, 4 and 5.

Step 4 – Edit the Manifest File

In the path that was used as outputpath in step 2, there will be a number of files. Find any named manifest.xml or manifestx.xml (where x is a number) and edit them. These files contain the metadata for your list. Simply search for the name of your custom field type (in my case, CascadingDropDown) and replace it with the standard type name (like Text). Once all occurrences have been modified, save the file(s).
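
A plain text editor is fine for this, but if there are several manifest files, a quick PowerShell sketch can do the substitution in one pass (the path and the two type names are from my example, so adjust them, and note that -replace will change every occurrence of the string):

$outputPath = "C:\Export\Customers"   # the outputpath folder from step 2
Get-ChildItem -Path $outputPath -Filter "manifest*.xml" | ForEach-Object {
    (Get-Content $_.FullName) -replace "CascadingDropDown", "Text" |
        Set-Content $_.FullName
}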

Step 5 – Import the List

Once the manifest has been modified to match the new list schema, you can bring in the exported list, with essentially the reverse of the PowerShell used in Step 2:

Import-SPWeb -Identity siteurl -Path inputpath -IncludeUserSecurity -UpdateVersions Overwrite -NoFileCompression -Verbose

where:
siteurl is the url to the containing site (or web) i.e. http://server1/accounts/2010
inputpath is the file system path for the input files

 

Once the PowerShell completes, you should have all of your data in a list with no custom columns. From there, you can use InfoPath to modify your forms, or anything else that is possible with the standard column types.

One caveat though. We had customized the standard New/Edit/View aspx forms with SharePoint designer. Before we could move toward our InfoPath goal, we needed to recreate these forms as standard forms.
