
The Power BI Premium Pricing Model – The Good, The Bad and the Ugly

On May 3, Microsoft announced sweeping changes to the pricing of Power BI by introducing a new “Power BI Premium” SKU. The announcement itself can be found here, and there are a number of other related resources worth reviewing that I am listing here for convenience:

Distribute to large audiences with Power BI apps
Changes to Power BI embedded
Power BI Premium White Paper
Power BI Pricing
Premium pricing calculator
Introducing Power BI Report Server

Power BI Premium is intended to address deficiencies in the current pricing model primarily with respect to sharing content. In my opinion, the new model succeeds in this goal for the most part, but it leaves a significant number of customers behind, and it also leaves many unanswered questions. These problems need to be addressed for Power BI to succeed in its goal of bringing BI to the masses. Overall, I like what Microsoft is trying to do with this new pricing model, and with a few tweaks, I think that it can resonate.

First, we need to understand the new model, and to do that, we need to understand the former model and the need for Premium. Given that the former model (consisting of free and Pro licenses) has not been replaced (although it is changing significantly), we will refer to it as the original model, and when Premium is added to it, we will refer to that as the Premium model. The original model is still completely relevant moving forward.

The original model and the need for change

The original model is relatively simple, and fairly unique within the industry. Power BI users are licensed for either free or Pro features. If a report contains any Pro capabilities, any consuming user requires a Pro license. A free user can create a report that uses Pro features, but that same user will not be able to consume that report in the free service. This is a very important distinction to understand. The author of a report (using Power BI Desktop) could be a free user, use a Pro feature, and after deploying the report to the service, be unable to use it in the service.

The difference between free and Pro from a feature standpoint is no longer (as of this writing) available on the Power BI pricing page; however, prior to June 1, 2017, it was as shown in the list below.

Therefore, if a report is configured to be refreshed more than once per day, or even if the time of day is specified, or if the report uses on-premises data, then all users accessing that report require a Pro license. Given that Power BI is all about bringing Business Intelligence to the masses, when each one of those masses needs to pay $10/month, it tends to constrain adoption, particularly if a report’s audience is organization wide, and you are in a very large organization.

Report sharing is also relatively limited. Reports can be shared anonymously, which is insecure. Dashboards and their constituent reports can be shared either internally or externally, but they are read only. Finally, both dashboards and reports can be shared through Group workspaces (now app workspaces). Currently, Group workspaces do not allow for external sharing, but they are the preferred means of sharing. However, they too require Pro licenses, which constrains adoption. For the free user, anonymous and dashboard sharing are the only real options.

New model

The introduction of Power BI Premium aims to solve some of the sharing issues listed above, and therefore to help drive adoption. Premium capacity is an add-on to a Power BI tenant (organization), and is different from free or Pro licenses, which are assigned to users. An organization can purchase Premium capacity, and then a Pro user (this is restricted to Pro) can move or publish content to the Premium capacity. Once the content is in Premium storage, all users can utilize all the features in the dashboards and reports. Premium effectively removes all feature barriers from the reports.

Premium storage also brings many performance enhancements, such as the ability to refresh data up to 48 times per day (vs the previous 8), and the effective removal of data model size limits.

Without Premium, there are also several changes to the original licensing model. According to the May 3 Announcement FAQ on the Power BI community site:

“Beginning June 1, the free service will have capabilities equivalent to Power BI Pro. This includes the same 1 GB workbook size limit, up to 8 daily scheduled refreshes for datasets, and maximum 1 million rows/hour streaming data rate. We’re also providing access to all data sources, including those available through the on-premises data gateway. Peer-to-peer dashboard sharing, group workspaces (now called app workspaces), export to PowerPoint, export to CSV/Excel, and analyze in Excel with Power BI apps are capabilities limited to Power BI Pro.”

Therefore, after June 1, 2017, Pro features are effectively an addition to the free features, and the feature differences should be as below:

From the May 3, 2017 announcement:

“Going forward, we will improve the free service to have the same functionality as Power BI Pro, but will limit sharing and collaboration features to only Power BI Pro users.”

The only features that Pro will have that free will not are those that are related to sharing. The above feature list reflects that.

Power BI Embedded

Power BI Embedded is the way that developers can embed Power BI in their applications. Until now, developers using Power BI Embedded have built reports, deployed them to their Azure instance, and called them from their applications. End users do not need any sort of Power BI license, and the developers are charged per report “render session”. This charging model has been one of the criticisms of Power BI Embedded, in that it makes costs very difficult to predict. ISVs are at the mercy of the end users viewing reports, and any measure that is put into place to curb these render sessions is, by definition, a disincentive to adoption.

The fact that Embedded runs in a different namespace than the core Power BI service is another criticism, as it leads to differences between the capabilities of Power BI Embedded and the core Power BI service. For example, the current iteration of Power BI Embedded cannot use the On-Premises Data Gateway, which can be quite restrictive.

Power BI Embedded is changing to use the new Premium capacity model. ISVs will purchase Premium capacity, and serve reports to their end users from that space. There will only be a single namespace for all Power BI content.

What’s Good

Power BI Premium solves the sharing problem for organizations that want to distribute their BI assets across the organization. For organizations accessing on-premises data, a key feature of Power BI for enterprises, the Pro license requirement has discouraged adoption. With Premium capacity, a report publisher can share content with as many users as necessary without worrying about licensing the target users. Even better, those target users can be external, further extending the reach of that content.

For large enterprises, this has the potential to turn Power BI from a niche solution to something that is used by everyone.

The changes to the original model also make things clearer for report designers and publishers. These publishers can work with the full range of Power BI features while the report is being built, and while they are themselves using it. When it comes time to share the report with a wider audience, they can publish it to Premium capacity, where anyone can access it. If the organization has not purchased Premium, then the original model applies, and all recipients will still require a Pro license.

On the Power BI Embedded side, switching to Premium capacity completely eliminates the unpredictability of the current model. The fact that the reports will be rendered from the core Power BI service means that it will be fully on par with other Power BI reports, and developers will be able to take advantage of the full spectrum of Power BI features as they appear in the service.

What’s not so good

If you are a large company, there is very little not to like with this new model. It was large organizations that felt most of the pain with the original model, and it is they that benefit most from the Premium model. In fact, in my opinion, they are the only ones that benefit from the Premium model. Well, they and organizations that have no sharing requirements. The issue here is cost.

The Premium pricing estimator can be found online, but at present, it boils down to this. The smallest block of capacity that can be purchased by an organization is “P1”. To publish content to Premium capacity, you must also have a Power BI Pro license. Therefore, the minimum cost of entry is $4,995 (P1) plus $9.99 (Pro) for a total of $5,004.99 per month. This is well out of the reach of most small to medium sized organizations. In fact, an organization needs to be larger than 500 users (and those would be active Power BI users) for Premium to begin to make sense from a licensing perspective. The model size limit removal and the increased refresh frequency are also compelling reasons to move to Premium, but it’s easy to see that Premium is only for larger organizations.
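As a rough back-of-the-envelope check (list prices only, and ignoring the Pro licenses that publishers still require): $4,995 per month ÷ $9.99 per user per month ≈ 500 users, which is approximately the point at which a P1 capacity becomes cheaper than simply buying Pro for every consumer.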

Compounding this issue for small to medium sized organizations is the fact that as of June 30, dashboard sharing has been removed from the free SKU of the original pricing model. Any dashboards that had previously been shared broadly with free users will cease to function as of the cut-off date. If Premium does not make sense for these organizations, then they do have the option of purchasing Pro licenses for the consumers. To help ease this transition, Microsoft is offering a year’s worth of Power BI Pro to all active free users that signed up prior to May 3, 2017.

However, dashboards can be shared with external users, and it’s a pretty tall order to expect an external user to subscribe to Pro just to be able to read your report.

With Power BI Embedded switching to the Premium model, the ISV now needs to buy Premium capacity. Given that the entry price for Premium is so high, it is (in my opinion) out of reach of most of the services that would rely on it, not to mention those developers that simply want to get up to speed on it or do some testing. There have recently been some indications on the forums that the barrier to entry won’t be as high for developers, but even a figure as low as $600/month may still be too high for many to swallow.

Conclusions

Overall, I think that the Premium pricing model solves a problem that desperately needed to be solved. This approach opens the door to Power BI truly democratizing Business Intelligence and becoming almost as ubiquitous as Excel. The opening up of features to the free SKU and focusing the Pro SKU on sharing means less confusion for report designers.

Unfortunately, for the moment, price stands in the way of that goal for many small to medium sized businesses. These businesses may be small in stature, but they are many in number. The removal of sharing from the free SKU actually represents a step backward for them. The floodgates have been opened for large businesses, but the stream has been dammed for smaller ones.

Fortunately, pricing is a simple problem to solve. My hope is that the entry point for Premium comes down to something that would make sense for even a 10-person company, and that the cost for developers using Embedded could scale with far more elasticity, starting at $0 to encourage investment. These changes would, in my opinion, truly set the stage for Power BI dominance.

Use Power BI to Help Manage Your SharePoint Sites

Note – This article first appeared on April 12, 2017 on the Microsoft Partner Network

When it comes to Business Intelligence, SharePoint is most often used as a platform to access dashboards and reports. With the recent availability of the Power BI web part, Power BI joins SQL Server Reporting Services and Excel as a go-to reporting tool within SharePoint.

Occasionally, list data is used as a data source for these reports. This doesn’t work for large amounts of data, but for smaller lists, this is perfectly adequate. Given that Power BI has native connectors for both SharePoint lists and libraries, it is perfectly suited for this sort of task. Combining these two results in some interesting possibilities, as the following article demonstrates.

We work extensively with modern Groups in Office 365. Each group gets its own SharePoint site, and within that, its own OneDrive, or “Shared Documents” library. Depending on the usage of the group, the storage in that library can grow quickly, and it’s not always easy to spot where all the content is being stored. By building a Power BI report that uses the OneDrive as a data source, we can create a report of storage allocation by file and folder, and then show that report on the home page of the SharePoint site.

There are several steps to building this report. It all starts with Power BI Desktop.

Get the Data

To start with, we launch Power BI Desktop and select “Get Data”. Then we select the “SharePoint Folder” file source, and enter the URL of the SharePoint site. Even though we are prompted for the URL of the folder, we must enter the URL of the site itself. The query editor can be used later to filter out any unwanted folders. Only user-created document libraries and folders will be returned.

The query will return a number of columns that are irrelevant to this report, and they can be removed. We need to create a column for the URL to the files themselves. The Attributes column can be expanded to get the size of each file in bytes. We also use the split function to split the folder path by the “/” delimiter, which will allow us to create a folder hierarchy. Finally, we set the appropriate data types on the columns, and give them user friendly names.

The scope of this article does not allow for a complete step by step walkthrough of the query editor, but the code below can be pasted into the advanced editor (after replacing the URLs appropriately).

let
  // Connect to the site (not a specific folder) and list all files in user-created libraries
  Source = SharePoint.Files("https://yoursharepointsiteurl", [ApiVersion = 15]),
  #"Removed Columns" = Table.RemoveColumns(Source,{"Content"}),
  // Build a full URL for each file from its folder path and file name
  #"Added Custom" = Table.AddColumn(#"Removed Columns", "Folder", each [Folder Path]),
  #"Added Custom1" = Table.AddColumn(#"Added Custom", "URL", each [Folder Path] & [Name]),
  #"Removed Columns1" = Table.RemoveColumns(#"Added Custom1",{"Folder Path"}),
  // Strip the library root from the folder path, leaving a relative folder string
  #"Replaced Value" = Table.ReplaceValue(#"Removed Columns1","https://unlimitedviz.sharepoint.com/sites/Presentations/Shared Documents/","",Replacer.ReplaceText,{"Folder"}),
  #"Renamed Columns" = Table.RenameColumns(#"Replaced Value",{{"Folder", "FolderBase"}}),
  #"Added Custom2" = Table.AddColumn(#"Renamed Columns", "Custom", each Text.Trim([FolderBase],"/")),
  #"Renamed Columns1" = Table.RenameColumns(#"Added Custom2",{{"Custom", "Folder"}}),
  #"Removed Columns2" = Table.RemoveColumns(#"Renamed Columns1",{"FolderBase", "Date accessed"}),
  // Pull the file size (in bytes) out of the Attributes record and convert it to KB and MB
  #"Expanded Attributes" = Table.ExpandRecordColumn(#"Removed Columns2", "Attributes", {"Size"}, {"Size"}),
  #"Changed Type" = Table.TransformColumnTypes(#"Expanded Attributes",{{"Size", Int64.Type}}),
  #"Renamed Columns2" = Table.RenameColumns(#"Changed Type",{{"Size", "Size (bytes)"}}),
  #"Added Custom3" = Table.AddColumn(#"Renamed Columns2", "Size (KB)", each [#"Size (bytes)"] /1024),
  #"Changed Type1" = Table.TransformColumnTypes(#"Added Custom3",{{"Size (KB)", type number}}),
  #"Added Custom4" = Table.AddColumn(#"Changed Type1", "Size (MB)", each [#"Size (KB)"] /1024),
  #"Changed Type2" = Table.TransformColumnTypes(#"Added Custom4",{{"Size (MB)", type number}, {"Date created", type datetime}, {"Date modified", type datetime}}),
  // Split the folder path on "/" to build a three-level folder hierarchy
  #"Split Column by Delimiter" = Table.SplitColumn(#"Changed Type2","Folder",Splitter.SplitTextByDelimiter("/", QuoteStyle.Csv),{"Folder.1", "Folder.2", "Folder.3"}),
  #"Changed Type3" = Table.TransformColumnTypes(#"Split Column by Delimiter",{{"Folder.1", type text}, {"Folder.2", type text}}),
  #"Renamed Columns3" = Table.RenameColumns(#"Changed Type3",{{"Folder.1", "Folder"}, {"Folder.2", "Subfolder 1"}, {"Folder.3", "Subfolder 2"}})
 in
  #"Renamed Columns3"

Build the Report

When the query is complete, we load the data into the model. We don’t need to do a lot of model editing for this report; it’s relatively straightforward. There is only one table, and the Date Created field gives us enough time intelligence that we don’t need to create a date table. There are, however, two edits to the model that bear mention.

One thing that I wanted to show was the accumulation of storage over time. With the size of the file and the create date, I could show the total size that was added for a given day, month or year, but that doesn’t show the accumulation. To do that we need to create a calculated measure, “Cumulative Size”. The formula below calculates a running total of file size based on date:

Cumulative Size (MB) =
 CALCULATE (
  SUM ( Files[Size (MB)] ),
  FILTER (
   ALL ( Files[Date modified] ),
   Files[Date modified] <= MAX ( Files[Date modified] )
  )
 )

It’s not strictly necessary, but it’s convenient to create a folder hierarchy by dragging Subfolder 1 onto Folder, and then dragging Subfolder 2 to the bottom of it. That allows all levels of the folder hierarchy to be managed as one.

Finally, we add our visual elements to the report. The report itself can be seen above. In this case, the Size by Folder chart uses the folder hierarchy as the x axis so that clicking on a data bar (while in drill down mode) will open a lower level folder. Marking the data category of the URL field will cause the report to display a clickable URL in any tabular visuals, and setting the “URL icon” property (in the Values section) of the table will display a link icon instead of the long URL. Doing this allows the user to open any of these files directly from the report. The Growth Rate chart uses the Cumulative Size calculated measure created above.

Embed the Report

Once completed, we publish the report into Power BI. It is important to select the correct workspace for this. Since we will be embedding the report into a SharePoint page, it is important to ensure that all viewers will have access to the report. By publishing the report to the same Power BI workspace that is used by the SharePoint site in question, this will be automatic. In this case, we are reporting against the “Presentations” team site that is associated with the “Presentations” group, so we publish this report to the “Presentations” Power BI workspace.

Once published, we need to get the embed URL for SharePoint. This can be determined by opening the report in Power BI and selecting File > Embed in SharePoint Online.

Once we have the URL, we navigate to the SharePoint site and edit the home page (note – the home page needs to be a modern SharePoint page). Once in edit mode, we add a new web part and select the Power BI web part. When prompted, we enter the embed URL retrieved above. Once the page is published, all is complete.

Finally, the data source in Power BI will need to be set up to refresh on the frequency required.

With a few simple steps, we have not only gained insights into the storage patterns of our team sites, but we have made those insights available to all members of the site in a highly interactive fashion, without making them open another application.

The Difference Between Reporting and Analytics is 42

In his novel “The Hitchhiker’s Guide to the Galaxy”, Douglas Adams envisioned a giant supercomputer named “Deep Thought” that was built to solve the answer to the ultimate question of life, the universe and everything. For the 5 people out there that are unfamiliar with the story, I’ll relate the important bits here. Deep Thought was commissioned by a race of pan-dimensional beings and required seven and a half million years to complete its calculations. When it was finally complete, Deep Thought informed the descendants of the original creators that the answer was 42. The receivers were understandably disappointed with this response, and when they questioned Deep Thought further, the computer postulated that perhaps the problem was that they never really knew what the question was.

Undeterred, the race then commissioned a second computer (which happened to be the Earth) that would calculate the ultimate question. After a couple of 10 million year attempts, the ultimate question was determined to be “What do you get when you multiply six by nine”. Of course, Adams never claimed that the universe made sense.

To my mind, this is an excellent demonstration of the difference between reporting and analytics. The accurate answer (report) provided a result, but not meaning. Further analytics were necessary to determine context.

Like many information technology terms (Big Data, machine learning, CRM), Business Intelligence (BI) is one of those umbrella terms that many people use regularly without fully understanding its meaning. BI comprises many tools that help to glean information and insights from raw data. Thus, an ETL package that moves data from one location to another is just as much a BI tool as a fancy looking infographic. Combine this lack of clarity with the overloading of the term “reporting”, and we wind up with some real confusion in this space.

Reporting is the process of using data to highlight things or trends that have already happened. This can be contrasted with monitoring, which does the same for things that are happening now, and predictive analytics, which tries to predict what will happen in the future based on the same data. The difference between reporting and monitoring is only one of data latency, and as such, monitoring is often referred to as real time reporting, which further muddies the water. However, for the purposes of this article, I want to focus on historical reporting.

Reports are typically one of two types, either operational or analytical. Tools that are good at producing one type are typically not so good at producing the other. What’s the difference? Operational reports are designed to provide information that we know we need, and analytical reports are designed to help us discover things that we didn’t know, or to help answer unanticipated questions. Operational reports are typically designed to be printed. They are typically well paginated, pixel perfect, and provide a single view of the data within any given report. Analytical reports are just the opposite. They are designed with visuals as a starting point, but allow for the ability to pivot on or drill down into the data as appropriate to answer ad-hoc questions. Printing is typically a weakness for analytical reports, whereas drilldown is a weakness for operational reports.

Both report types have their place, but they have very different design points. The data that backs an operational report should ideally be relatively flat, as that best reflects the report layout and helps with performance. Conversely, cubes and data models exist simply because a flat data structure does not adequately support analytical reporting. With analytical reporting, a user may at any point decide to view quantitative data (a measure) through the lens of a different facet (dimension). This difference is so great that we need a different type of engine to support it. OLAP cubes and tabular models are both examples of this.

Another difference is the data that is necessary to support both report types. Operational reports tend to concern themselves with various levels of subtotals across the predefined facets. In a case like that, the data mart that backs the report only needs to store those subtotals. The granularity, or resolution, of the data stored in the data mart does not need to exceed that of the report that references it. Analytical reporting is different. Since users will be expected to drill down on data, from one dimension to another, or to filter the data according to increasingly granular facets, it is critical to store all of the data in the data mart backing the data model. We don’t know the level of resolution the analyst will need; therefore, all detail is required.

As a simple example of this, consider the case where we want to analyze some server log data over a period of time. We can pre-aggregate the data in the data model such that it stores daily totals of the log entries. There would need to be a total for each dimension, but the overall data storage would be less than for the raw data. Such data would allow an analyst to spot trends over several days, but the decrease in resolution means that it will be impossible to spot any usage trends within a given day. If intra-day trends will never be necessary, then this doesn’t matter, but the nature of analytical reports means that the designer can never be sure.
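To make this concrete, here is a minimal Power Query (M) sketch of that trade-off, using a small hypothetical log table (the column names and values are invented purely for illustration), that pre-aggregates entries to a daily grain. Once only the daily table is stored, any trend within a single day can no longer be recovered.

let
  // Hypothetical raw log data: one row per log entry, with a full timestamp
  Raw = #table(
    {"Timestamp", "Entries"},
    {
      {#datetime(2017, 5, 1, 9, 0, 0), 120},
      {#datetime(2017, 5, 1, 14, 0, 0), 480},
      {#datetime(2017, 5, 2, 10, 0, 0), 200}
    }),
  // Reduce the resolution from timestamp to date
  WithDate = Table.AddColumn(Raw, "Date", each DateTime.Date([Timestamp]), type date),
  // One row per day; the hourly detail is gone from this point forward
  Daily = Table.Group(WithDate, {"Date"}, {{"Total Entries", each List.Sum([Entries]), Int64.Type}})
 in
  Daily

The storage savings are real, but so is the loss: an analyst looking at the Daily table can compare May 1 to May 2, yet has no way to see that most of May 1’s traffic arrived in the afternoon.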

The more that the source data for the report is pre-aggregated, the less that report becomes analytical in nature, and the more it approaches operational. This is regardless of the tool used; you can build either report type with any tool, it’s just that it may not be optimal.

The issue here is one of semantics. Semantics, however, are important in knowing what you are getting if reports are being provided to you. Calling something “Analytics” does not make it so. If you spin up a content pack in Power BI, and find that the underlying data model provides just enough dimensions and measures to construct the provided report, and that you can’t deconstruct the data in any meaningful way, then what you have is a report, not analytics, no matter what the platform. As with anything, there is a trade-off between complexity and power. Given the nuances of this topic, it’s important to look under the hood to know what you are getting.

The answer “42” is perfectly acceptable if you already knew that the question was “what is 6×9?”. But if you want to know why, that takes a little more digging. You’d also know that there might be a data problem…

Enabling the new OneDrive Sync Client for SharePoint

I recently wrote about the fact that the new OneDrive sync client now supports the synchronization of SharePoint libraries, and the benefits that it brings. Since the release however, I have heard from several people that even though they have the new client, their libraries continue to sync with the older OneDrive for Business client. Microsoft has documented all of the procedures for getting it to work in this article, but I wanted to call out a few common issues here. If you’ve been using the old OneDrive for Business Sync client, and you want to move to the Next Generation Sync Client (NGSC), you’ll want to check the items below.

Make sure you have the correct version

The Next Generation Sync Client has been available for over a year, but the ability to synchronize SharePoint libraries was only added in January 2017. If you use Windows 10, the client is updated automatically, but you may not have it yet. To check your version, right click on one of the OneDrive clouds in the system tray (not any OneDrive for Business icons) and select “Settings”.

Next, click on the “About” tab and check the version.

If you have version 17.3.6743.1212 or above, you’re good to go. If not, or you’re not running Windows 10, you can download the latest version here.

Ensure That Your Tenant is Configured for the New Client

Administrators can configure their tenant to use either the new OneDrive Sync client or the old OneDrive for Business Sync Client. This configuration setting is in the SharePoint administration of Office 365. To change this setting, log into the Office 365 Admin portal (or have your tenant admin do this if you don’t have rights). The URL for the portal is https://portal.office.com/adminportal/. Once there, launch the SharePoint admin center by clicking SharePoint in the Admin Centers section.

The setting that we’re after is in the “settings” section of the SharePoint admin center. Select it, then scroll to the “Sync Client for SharePoint” section. The options are straightforward – Start the new client, or start the old one. Once selected, click on Save (scroll down for the button). This setting controls what happens when the “Sync” button is selected in a SharePoint library.

Initiate the Takeover Process

Even with this setting turned on, the old OneDrive for Business sync client may still be active. You’ll need to take action to have the new client take over. This can be done in one of several ways. Firstly, running the setup process for the new sync client will do it (the download link is above). You can also run “OneDrive.exe /takeover” to accomplish this, but the easiest approach is to simply sync a new library by clicking on its Sync button. Doing so will not only sync the new library, but will take over syncing anything that the older client is doing.
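If you prefer to run the takeover command directly, it can be launched from a command prompt. The path below assumes the default per-user install location of the new client; adjust it if your copy of OneDrive.exe lives elsewhere.

"%localappdata%\Microsoft\OneDrive\OneDrive.exe" /takeover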

Once the takeover process is complete, the old client will be removed on the next system restart. That’s the last you’ll see of GROOVE.EXE.

OneDrive and SharePoint – Together Again

A little over a year ago I wrote a post entitled “OneDrive, TwoDrive, ThreeDrive” in which I took a slightly cheeky look at what has become known as the “Next Generation Sync Client” (NGSC) for OneDrive, and its many idiosyncrasies. I then turned that post into a speaking session (changing the title slightly to OneDrive, TwoDrive, White Drive, BlueDrive), and that session has been presented at many events over the past year. In that post and session, I pointed out that the NGSC still had some work to do, and that it would get done.

True to their word, it seemed that every time I presented that session, I had to modify the slides in one way or another, as another feature was added, a bug was squished, or an idiosyncrasy was clarified. In September, at Microsoft’s Ignite event in Atlanta, Reuben Krippner announced the public preview of a new sync client (as I like to call it, the “Next Next Generation Sync Client”). This version of the client addresses the principal shortcoming of its predecessor – namely that it didn’t synchronize SharePoint libraries. With SharePoint libraries forming the backbone of all document storage in Office 365, including Office 365 Groups, this shortcoming was particularly glaring. I’ve been running the preview ever since.

The good news is that this new version of the NGSC is now generally available. You can download it from the OneDrive site, or, if you use Windows 10 and are frequently updating, you’ll get it automatically. With the general availability of the new client today, it seemed like a good time to circle back and see how many of my original criticisms have been addressed.

SharePoint Library Sync

Obviously, the biggest disappointment with the original NGSC was the fact that while it added OneDrive for Business repositories in addition to OneDrive personal stores, it was unable to sync SharePoint libraries. Any library contained in an Office 365 Group or a SharePoint site was therefore excluded, and this resulted in users needing to run a mix of old and new clients. We had this odd situation where you would sync OneDrive for Business with the OneDrive sync client, and all your SharePoint libraries with the OneDrive for Business sync client. Add to that the fact that Group libraries were referred to as the Group OneDrive, and it was quite confusing for end users. Apart from the technical limitations of the old sync client (no more than 5,000 items per library, no more than 20,000 items across all libraries), adding SharePoint libraries to the new sync client greatly reduces confusion for end users and complexity for administrators.

System Tray Inconsistencies

After the rollout of the original NGSC, once I had connected my personal OneDrive, my OneDrive for Business, and SharePoint libraries, I would wind up with three OneDrive icons in my system tray.

The white cloud represented the sync process for my personal OneDrive, the blue cloud with the bright white border represented my OneDrive for Business, and the blue cloud with the slightly dimmer white outline (really – look at the picture) represented all the SharePoint libraries that I was synchronizing, including Group OneDrives. If I were interacting with two different Office 365 tenants as I do today, I would have five icons for everything, and while I can certainly cope with it, the inconsistencies made it rather confusing for the end user.

Adding SharePoint libraries to the modern client reduces this complexity. Now the same scenario will show two icons – one white, one blue. The white icon represents the personal account, and the blue icon includes the OneDrive for business as well as all SharePoint libraries being synced. If two tenants are being used, as in the image below, there will be two blue icons, one for each tenant. Hovering over the icon will identify the tenant in question.

The icon styles are also now more consistent, and as an added bonus, they always line up at the top of the system tray, which is a nice touch. While we still have more than “One” drive, it’s much more understandable and usable.

File Explorer Inconsistencies

The user interface inconsistencies extended to the File Explorer integration as well. In the same scenario as above, syncing a personal OneDrive and a OneDrive for Business with SharePoint libraries from a single Office 365 tenant, I previously wound up with three root nodes in the Windows File Explorer.

“OneDrive – Personal” was my consumer, or personal, OneDrive, “OneDrive – UnlimitedViz” was my OneDrive for Business storage connected to my UnlimitedViz tenant, and “SharePoint” contained all my SharePoint synced libraries. One inconsistency is the fact that the personal icon is white in the system tray but blue in the File Explorer. In an organization, people also tend to distinguish content stored in their OneDrive from organization content by referring to it as “personal”, so the use of the word “Personal” here can cause confusion as well. Finally, the OneDrive branding is completely thrown out the door when it comes to SharePoint libraries. Keep in mind that at the time, the only way to synchronize SharePoint libraries was with the “OneDrive for Business Sync Client”. However, the resulting node is called “SharePoint”.

The latest client makes some significant improvement in this area as well.

“OneDrive – Personal” remains my personal (consumer) OneDrive. The two nodes named “OneDrive – Serendipity” and “OneDrive – UnlimitedViz” are my two OneDrive for Business locations on the two tenants named “UnlimitedViz” and “Serendipity”. Finally, the two nodes “Serendipity” and “UnlimitedViz” contain all the synchronized SharePoint libraries in those two tenants. While the personal icon remains stubbornly blue, the nodes here make significantly more sense and, in my opinion at least, are much more intuitive.

Selective Sync for SharePoint Libraries

It almost goes without saying, but the all-or-nothing approach of the OneDrive for Business sync client (previously Groove) rendered a lot of large libraries un-syncable. By bringing SharePoint libraries into the NGSC, they too get to participate in the selective folder sync that the consumer client has had for quite some time.

Pause

The previous OneDrive for Business sync client wasn’t all bad, and the NGSC wasn’t all good. One very useful feature that the older client had that the NGSC didn’t was the ability to pause a sync. Pausing is a relatively frequent need for various reasons, but originally the only way that the NGSC could be paused was by shutting it down. Given the time it sometimes required to start back up, this was a problem. Luckily, at some point over the past year, the NGSC picked up pausing capability, and you can now pause a sync for 2, 8 or 24 hours.

Stability and Performance

Apart from features, stability and performance are probably the most obvious areas where the new client outshines its predecessor. There are countless tales of users having their work “eaten” by the older client. While this hasn’t happened to me, I can point to many times that a sync got corrupted, and the only way to fix it was to resync the entire library. This would necessarily mean a new repository, as the older client couldn’t work with pre-existing content. Having used the new client for several months now, I have yet to experience any issue that required the total resync of a library.

The sync performance of the new client is acceptable as well. To be sure, it could still be better. Startup times are quite long for me (keep in mind that I’m syncing quite a lot of content), and occasionally the sync process gets bogged down and needs to be restarted. However, it was good enough for me to decide to move my almost 1 TB of content back into OneDrive for Business. That very same content made the move in the other direction 2 years ago, due to performance issues.

Overall, my impression of the new OneDrive sync client is that it is finally ready for prime time. Shortly after the preview was announced in September, I was sufficiently impressed to move my relatively large Dropbox file system (where I had a 1 TB limit) over to OneDrive for Business (with its unlimited storage). I then heaped quite a bit more storage on top of it, and it seems to be performing well now. My main OD4B storage account is currently at 3.3 TB, and my personal OneDrive is at 600 GB. I even have several Groups set up in my Office 365 MVP tenant for managing my household, and those libraries are synced by both myself and my wife.

Stability is fine, and performance is good enough, apart from the occasional “looking for changes” hang-up. Its value and integration have tipped the scales in its favour, certainly with respect to Dropbox in my opinion. The Office team said they were going to fix it, and they did. Good for them.