Azure Data Explorer (ADX) is a great platform for storing large amounts of transactional data. The Incremental Refresh feature (now available for Pro users!) in Power BI makes it much faster to keep data models based on that data current. Unfortunately, if you follow the standard guidance from Microsoft for configuring Incremental Refresh, you’ll quickly bump into a roadblock. Luckily, it’s not that difficult to get around.
Incremental Refresh works by setting up data partitions in the dataset in the service. These partitions are based on time slices. Once data has been loaded into the dataset, only the data in the most recent partition is refreshed.
To set this up in Power BI Desktop, you need to configure two parameters, RangeStart and RangeEnd, both of which must be Date/Time parameters. Once set, the parameters are used to filter the Date/Time columns in your tables accordingly and, once published to the service, to define the partitions that the data is loaded into.
When Power Query connects to ADX, all Date/Time fields come in as the Date/Time/Timezone type. This is a bit of a problem. When you use the column filters to filter your dates, the two range parameters won’t show up because they are of a different type (Date/Time). Well, that’s not a big problem, right? Power Query lets us change a column’s data type simply by selecting the type picker on the column header.
Doing this does indeed allow you to use your range parameters in the column filters. Unfortunately, data type conversions don’t get folded back to the source ADX query. You can see this by right-clicking on a subsequent step in the Power Query editor. The “View Native Query” option is greyed out, which indicates that the query cannot be folded.
Query folding is critical to Incremental Refresh. Without it, the entirety of the data is brought down locally so that it can be filtered there, rather than having the filter occur at the data source. This would completely defeat the purpose of implementing Incremental Refresh in the first place.
The good news is that you can in fact filter a Date/Time/Timezone column with a Date/Time parameter, but the Power Query user interface doesn’t know that. The solution is to simply remove the type conversion Power Query step AFTER performing the filter in the Power Query UI.
Alternatively, if you’re comfortable with the M language, you can simply insert something like the following line using the Advanced Editor in Power Query (where CreatedLocal is the name of the column being filtered).
#"Filtered Rows" = Table.SelectRows(Source, each [CreatedLocal] >= RangeStart and [CreatedLocal] < RangeEnd),
If the filter step can be folded back into the source, Incremental Refresh should work properly. You can continue setting up Incremental Refresh using the Incremental refresh dialog for the table in Power BI Desktop. You will likely see some warning messages indicating that folding can’t be detected, but these can safely be ignored.
Application Insights (AI) is a useful way of analyzing your application’s telemetry. Its lightning-fast queries make it ideal for analyzing historical data, but what happens when you start to bump into the limits? The default retention for data is 90 days, but that can be increased (for a fee) to 2 years. However, what happens when that’s not enough? If you query too much, or too often you may get throttled. When you start to bump into these limits, where can you go?
The answer lies in the fact that Application Insights is backed by Azure Data Explorer (ADX or Kusto). Moving your AI data to a full ADX cluster will allow you to continue using AI to collect data, and even to analyze recent data, but the ADX cluster can be sized appropriately and used when the AI instance won’t scale. The fact that it is using the same engine and query language as AI means that your queries can continue to work. This article describes a pattern for doing this.
Requirements
We’ll be working with several Azure components to create this solution. In addition to your AI instance, these components are:
Azure Data Explorer cluster
Azure Storage Account
Azure Event Hubs namespace and at least one Event Hub
Azure Event Grid
The procedure can be broken down into a series of steps:
Enable Continuous Export from AI
Create an Event Grid subscription in the storage account
Create an ADX database and ingestion table
Create an Ingestion rule in ADX
Create relevant query tables and update policies in the ADX database
Enable Continuous Export from Application Insights
AI will retain data for up to 2 years, but for archival purposes, it provides a feature called “Continuous Export”. When this feature is configured, AI will write out any data it receives to Azure blob storage in JSON format.
To enable this, open your AI instance, and scroll down to “Continuous Export” in the “Configure” section. Any existing exports will show here, along with the last time data was written. To add a new destination, select the “Add” button.
You will then need to select which AI data types to export. For this example, we will only be using Page Views, although multiple types can be selected.
Next, you need to select your storage account. First select the subscription (if different from your AI instance), and then select the storage account and container. You will need to know what data region the account is in. Once selected, save the settings.
Initially, the “Last Export” column will display “Never”, but once AI has collected some data, it will be written out to your storage container, and the “Last Export” column will display when that occurred. Once it has occurred, you should be able to open your storage account using Storage Explorer, and then the container to see the output. In the root of the container selected above, you’ll see a folder that is named with the AI Instance name, and the AI instance GUID.
Opening that folder, you’ll find a folder for each data type selected above (if there has been data for them). Each data type is further organized into folders named for the day and the hour. Multiple files with the .blob extension are contained within. These are multiline JSON files and can be downloaded and opened with a simple text editor.
The next step is to raise an event whenever new content is added to this storage container.
Create an Event Grid subscription in the storage account
Prior to this step, ensure that you have created, or have available, an Event Hubs namespace and an Event Hub. You will connect to this hub in this step.
From the Azure portal, open the storage account and then select the “Events” node. Then click the “Event Subscription” button at the top.
On the following screen, you’ll need to provide a name and schema for the subscription. The name can be whatever you wish, and the schema should be “Event Grid Schema”. In the Topic Details section, you will provide a topic name which will pertain to all subscriptions for this storage account. In the “Event Types” section, you select the types of actions that will fire an event. For our purposes, all we want is “Blob Created”. With this selection, the event will fire every time a new blob is added to the container. Finally, under “Endpoint Details”, you will select “Event Hubs” from the dropdown, and then you click on “Select an endpoint” to select your Event Hub.
Once created, an event will fire any time a blob is created in this storage account. If you wish to restrict this to specific folders or containers, you can select the Filters tab, and create a subject filter to restrict it to specific file types, containers, etc. More information on Event Grid filters can be found here. In our case, we do not need a filter.
When ready, click the “Create” button, and the Event subscription will be created. It can be monitored from the storage account and can also be monitored in the Event hub. As new blobs are added to the storage account, more events will fire.
Create an ADX database and ingestion table
From the Azure portal, navigate to your ADX cluster and either select a database or create a new one. Once the database has been created, you need to create at least one table to store the data. Ultimately, Kusto will ingest data from the blobs added above whenever they are added, and you need to do some mapping to get that to work properly. For debugging purposes, I find it useful to create an intermediate ADX table to receive data from the blobs, and then transform the data afterward.
In this case, the intermediate table will have a single column, Body that will contain the entirety of each ingested record. To create this table, run the following KQL query on your database:
.create-merge table Ingestion (Body: dynamic)
The dynamic data type in ADX can work with JSON content, and each record will go there. For this to work, you also need to add a mapping to the table. The mapping can be very complex, but in our case, we’re doing a simple load in, so we’re matching the entire JSON record to the Body column in our database. To add this mapping, run the following KQL command:
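A minimal sketch of such a mapping command is shown below. It assumes the mapping is named “RawInput” (the name referenced later when configuring ingestion) and maps the root of each JSON record ($) to the Body column:

.create table Ingestion ingestion json mapping "RawInput" '[{"column": "Body", "path": "$", "datatype": "dynamic"}]'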
At this point, we are ready for an ingestion rule.
Create an Ingestion rule in ADX
From the Azure portal, open your ADX cluster, and select the “Databases” node in the “Data” section, then click on your database.
The setting that we need is “Data ingestion” in the resulting window. Selecting that takes you to the ingestion rules. Now you want to create a new connection by selecting the “Add data connection” button.
The first selection is the data connection type. The options are Event Hub, Blob storage, or IoT Hub. We need to select Blob storage. Both it and Event Hub will connect to an Event Hub, but the difference is that with “Blob storage”, the contents of the blobs are delivered, while “Event Hub” delivers only the metadata of the blob being added.
Once the type is selected, you give it a name, and choose the event grid to connect to (the one that you created above) and the event type. Next, you select “Manual” in the Resources creation section. Selecting “Automatic” will create a new Event Hubs namespace, hub, and event grid, and you won’t have any control over the naming of these resources. Selecting “Manual” lets you keep those names under control. Select your event grid here.
Next, select the “Ingest properties” tab, and provide the table and mapping that you created above (in our case, the “Ingestion” table and the “RawInput” mapping). Also, you need to select “MULTILINE JSON” as the data format.
Once these values are complete, press the Create button and the automatic ingestion will commence. Adding a new blob to the storage account will fire an event, which will cause ADX to load the contents of the blob into the Body column of the Ingestion table. This process can take up to 5 minutes after the event fires.
Create relevant query tables and update policies in the ADX database
Once ingestion happens, your “Ingestion” table should have records in it. Running a simple query in ADX using the table name should show several records with data in the “Body” column. Opening a record will show the full structure of the JSON contained within. If records with different schemas are being imported, a query filter can be employed to limit the results to only the relevant records.
For example, the pageViews table in AI will always contain a JSON node named “view”. The query below will return only pageView data from the ingestion table:
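A sketch of such a query, using the isnull check on the view node, is:

Ingestion
| where isnull(Body.view) == false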
This ingestion table can be queried in this manner moving forward, but for performance and usability reasons, it is better to “materialize” the views of this table. To do this, we create another table, and set an update policy on it that will add relevant rows to it whenever the ingestion table is updated.
The first step is to create the table. In our case, we want to replicate the schema of the pageViews table in Application Insights. This is because we want to be able to reuse any queries that we have already built against AI. All that should be necessary is to change the source of those queries to the ADX cluster/database. To create a table with (mostly) the same schema as the AI pageViews table, the following command can be executed in ADX:
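A minimal sketch is shown below; it includes only a subset of the columns found in the AI pageViews table, so extend it with any additional columns that your queries rely on:

.create-merge table pageViews (
    timestamp: datetime,
    name: string,
    url: string,
    duration: real,
    session_Id: string,
    user_Id: string,
    client_Browser: string,
    client_City: string,
    client_CountryOrRegion: string,
    customDimensions: dynamic
)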
Once the table is created, we need to create a query against the Ingestion table that will return pageViews records in the schema of the new table. Without getting deep into the nuances of the KQL language, a query that will do this is below:
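A sketch of such a query is shown below. The JSON paths used in the project clause are illustrative, so verify them against a sample Body record from your own Ingestion table; the Continuous Export format nests values under elements such as view and context:

Ingestion
| where isnull(Body.view) == false
| extend view = Body.view[0], context = Body.context
| project
    timestamp = todatetime(context.data.eventTime),
    name = tostring(view.name),
    url = tostring(view.url),
    duration = toreal(view.durationMetric.value),
    session_Id = tostring(context.session.id),
    user_Id = tostring(context.user.anonId),
    client_Browser = tostring(context.device.browser),
    client_City = tostring(context.location.city),
    client_CountryOrRegion = tostring(context.location.country),
    customDimensions = context.custom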
The “where isnull(Body.view) == false” statement above uniquely identifies records from the pageViews table. This is useful if multiple tables use the same Ingestion table.
Next, we need to create a function to encapsulate this query. When we add an update policy to the pageViews table, this function will run this query on any new records in the Ingestion table as they arrive. The output will be added to the pageViews table. To create the function, it’s a simple matter of wrapping the query from above in the code below and running the command:
.create-or-alter function pageViews_Expand() {
Query to run
}
This creates a new function named pageViews_Expand. Now that the function has been created, we modify the update policy of the pageViews table to run it whenever new records are added to the Ingestion table, and its output will be added to the pageViews table. The command to do this can be seen below:
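A sketch of the update policy command, using the source table and function names from above, is:

.alter table pageViews policy update
@'[{"IsEnabled": true, "Source": "Ingestion", "Query": "pageViews_Expand()", "IsTransactional": false, "PropagateIngestionProperties": false}]'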
After the next ingestion run, not only will you see records in the Ingestion table, but if there were page views, you should see the results show up in the pageViews table as well.
If you have data already in the Ingestion table that you want to bring in to the pageViews table, whether for testing or for historical purposes, you can use the .append command to load rows into the table from the function:
.append pageViews <| pageViews_Expand
Finally, if you don’t want to maintain data in the Ingestion table for very long, or at all, you can set a retention policy on it. Data will be automatically purged from it at the end of the time limit. Setting the value to zero will purge the data immediately, and in that case, the Ingestion table simply becomes a conduit. To set the retention policy on the Ingestion table to 0, you can run the following command:
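A sketch of the retention command is below; it sets the soft-delete period on the Ingestion table to zero days:

.alter-merge table Ingestion policy retention softdelete = 0d recoverability = disabled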
There are several steps involved, but once everything is wired up, data should flow from Application Insights to Azure Data Explorer within a few minutes. This example only worked with the pageViews table, but any of the AI tables can be used, although of course their schemas will be different.
The combination of Power BI and Application Insights (AI)/Log Analytics (LA) is a powerful one. These tools provide a quick, convenient, and relatively cheap way to collect and analyze telemetry on a wide variety of applications. One drawback of AI/LA is that any data query will return a maximum of 500,000 rows, which can be quite constraining in some cases. This article describes a way to work around this limit.
In this example, we’ll be working with an Application Insights instance that is being populated by the WordPress Application Insights plugin – in fact, it’s the one used on this very blog. There are a couple of ways to connect Power BI Desktop to AI data. The Power Query code is downloadable directly from Application Insights, and you can also use the Azure Data Explorer proxy address as outlined in my post on the topic here. This approach will work for both methods, and for our purposes, we’ll be using the generated Power Query code approach.
To begin, access your Application Insights instance, and open the Logs window. If necessary, dismiss the “Queries” window that pops up. Next, form your query using Kusto Query Language (KQL). In our case, we want a simple dump of all rows in the “pageViews” table, so the query is simple – just pageViews.
Once we have the query the way that we want it, we select the Export button, and choose “Export to Power BI (M query)”. M is the name of the language that Power Query uses. Once chosen, a text file will be downloaded that contains the Power Query code that we will need in Power BI Desktop.
At this point, we launch Power BI Desktop, and choose “Get Data”. Since we already have the query that we need, we will choose “Blank Query”.
Next, we name our query “Page Views”, and select the Advanced Editor. This is where we paste in the query generated by Application Insights above: open the file that was downloaded, copy the contents, and paste them into this window (the top comments can be excluded).
Of note here is the value that will be automatically set for timespan. By default, this will be set to P1D, which means data will be retrieved only for the previous day. In our example above, we have changed it to show data for the past 365 days.
Selecting “Done” will load a preview of our data into Power Query. However, if we want to then load it into the data model, it will do so in a single pull, and we will be subject to the 500,000 row limit. What we need to do is break up our query into multiple queries, and Power Query lets us do this through the use of functions.
The first thing that we’ll need to do is to decide on how to segment the AI data. In our case, it is unlikely that we will have more than 500,000 page views per month, so if we performed one query per month, we should be able to retrieve all of our data. In order to do this, we need to go back to Application Insights, and form up a query that will return a list of year and month for our data. In our case, this query is:
pageViews
| where timestamp > now(-365d)
| summarize by
Year = datetime_part('Year',timestamp),
Month = datetime_part('Month',timestamp)
Note that the number of days in the where clause above should match the number of days in the larger query above. Next, export this query to Power BI, and create another query in Power Query. Leave the name as default for now. Selecting Done should return a list of years and months for your data. These values are all numbers, and Power Query recognizes them as such. However, we need to work with them as text later on, so we change their types to text.
Now we will return to our original query, and modify it so that it only returns data for a single month. Reopen the advanced editor and replace the query “pageViews” with:
pageViews | where datetime_part('Month',timestamp) == 10 and datetime_part('Year',timestamp) == 2020
The values chosen don’t matter, but they should return data. In the end, the only part of the generated code that changes is the query text itself.
Selecting Done, we verify that we have data restricted to the specified month. This is where the fun begins. We are now going to turn this query into a function. To do so, we right-click on our pageViews query and select “Create Function”.
We are then presented with a dialog box that asks if we want to create the function without parameters. We can go ahead and select “Create”. We are then prompted to name the function, and we’ll call it “GetViewsByMonthAndYear”. We now need to edit the function. To do so, with the function selected in the query pane, we select the Advanced Editor once again. We then dismiss the warning that appears, and edit the function in two places. First, we need to define two parameters to be passed to the function, Month and Year, and then we use them in our query.
In the function declaration we add “Month as text” and “Year as text”. We then replace the explicit month and year that we originally queried for with these new variables, Month and Year. Our function code now appears as below:
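A simplified sketch of such a function is shown below. <APP_ID> is a placeholder for your Application Insights application ID, and the code that Application Insights actually generates includes additional type-mapping steps (and your configured authentication) that are omitted here for brevity:

(Month as text, Year as text) =>
let
    // Build the KQL query text with the month and year parameters embedded
    KustoQuery = "pageViews | where datetime_part('Month',timestamp) == " & Month & " and datetime_part('Year',timestamp) == " & Year,
    // Call the Application Insights query API; <APP_ID> is a placeholder
    Source = Json.Document(
        Web.Contents(
            "https://api.applicationinsights.io/v1/apps/<APP_ID>/query",
            [Query = [#"query" = KustoQuery, #"timespan" = "P365D"], Timeout = #duration(0, 0, 4, 0)]
        )
    ),
    // The API returns a "tables" collection; take the first table and build rows from it
    DataTable = Source[tables]{0},
    Columns = Table.FromRecords(DataTable[columns]),
    Rows = Table.FromRows(DataTable[rows], Columns[name])
in
    Rows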
Now we are ready to use our function. We select our query that contains the list of years and months, select the “Add Column” tab from the ribbon, and choose “Invoke Custom Function”. We give the new column a name “Views”, select our function from the dropdown, and then we select our column containing years and the column containing months to be passed to the function.
At this point, selecting “OK” will cause the function to be executed for each of the listed months. These are individual queries to AI, not one large one. Each query is still subject to the 500,000 row limit, but provided that no specific month exceeds that limit, all of the data will be returned.
Initially, the data is returned as a single table per month, but selecting the expand icon at the right of the column header allows us to retrieve the row values. It’s also a good idea to turn off the “Use original column name” option.
Selecting OK at this point displays all of the appropriate column values. We can then remove the “Year” and “Month” columns, as well as the original Page Views table that we used to create the function. We also need to set the data types for all of our columns because Power Query is unable to detect them using this approach.
Renaming our combined query to Views gives us the following result:
We still have a single table, but there is no longer a 500,000 row limit. At this point, we can load the data into the model and build our report.
Have you ever wanted to show your time data in different time zones simultaneously? Or allow users of the same report to display time values in their own time zone? This article outlines one approach for doing so.
If you’ve spent much time building reports for users in more than one time zone, you’ve likely come across a few of the idiosyncrasies of Power BI and date/time values. In fact, if you’ve worked with time zone values in Power Query and you don’t happen to live in the UK, you’ve likely noticed that your reports show different time-based values when they get published to the service. This is because the Power BI service operates in the UTC time zone, and evaluates all locale-based time functions in that time zone. Power BI Desktop evaluates them according to the locale of the user.
For that reason, UTC date/time values are paramount. Luckily, most source data is available in UTC format, and it’s up to report designers to convert it as necessary. However, what happens when a single report is meant to serve users in different time zones? Alternatively, what if you want to use a single data model to serve reports in different time zones?
Time calculations can be performed both in Power Query and in DAX. However, if we want our users to be able to select their time zone from filters or slicers, we’re going to be restricted to using DAX. We’re also going to need a good source of time zone data. In the end, we need the time offset from UTC so that our time calculations can adjust time accordingly.
One good source of time zone offsets is the Time Zone Database. You can register for an API key (it’s free), and call it directly using Power BI’s web connector. This means that when we refresh our data, we will get up-to-date offset data when daylight saving time changes, or there are local changes to the time zone rules.
To retrieve the time zone data, connect to it using Get Data in Power BI Desktop, then select the Web connector. If prompted, choose “Anonymous” as the authentication type, and enter the following for the URL:
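The URL is likely of the following form (the exact endpoint may differ depending on the Time Zone Database API version you registered for):

http://api.timezonedb.com/v2.1/list-time-zone?key=key&format=json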
Where key is the API key that you received when registering at the Time Zone Database.
As of October 2020, Power Query will then convert the resultant JSON data into a simple table. Some of the columns are unnecessary, and we can safely remove status, message, and timestamp. I like to rename the columns to something a little more user-friendly. The offset value returned is in seconds. DAX does its date calculations in days, so I create another column with the same value converted to days (the listed value divided by 86,400). It’s also a good idea to rename the query. When complete, your table should contain the zone names along with their offsets in both seconds and days.
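As an illustration, the custom column step for the conversion might look like the following, assuming the offset column still carries the Time Zone Database’s gmtOffset name and the previous step is named #"Renamed Columns":

#"Added Offset (days)" = Table.AddColumn(#"Renamed Columns", "Offset (days)", each [gmtOffset] / 86400, type number)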
At this point, we can select Close and Apply to load the data into the model.
Our report will show the current time for any selected time zone. We therefore need to know which time zone is selected. We will assume that a filter, slicer, or row filter has been applied, and that there is only one currently selected value. We need to use an aggregate function in order to return the offset value, so in this case, we will use MAX. We can therefore create a calculated measure to hold the selected offset value:
Current Offset = MAX('Time Zones'[Offset (days)])
Next, we need the current time. DAX has a Now() function that will return this value, but it will be returned in the locale of the user. When it runs on the service, it will return UTC time. We want this to work properly everywhere, so instead of Now() we will use UTCNow() which always returns the current time in UTC. We will next create two calculated measures – Current time (UTC) and Current time (Local).
Current Time (UTC) = UTCNOW()
Current Time (Local) = UTCNOW() + [Current Offset]
Now we can add a slicer to our report page, and use the “Zone” dimension. Next, we add two card visuals: one displaying the current time in the UTC time zone, and the other displaying the current time in the zone selected in the slicer. It’s a good idea to set the slicer’s selection control to “Single select” to prevent multiple zones from being selected. Every slicer selection will update the two “clocks”, and the local time should reflect the currently selected time zone.
To see row filters in action, simply open a new page, and add a table that displays the Zone name (and any other relevant dimensions) along with the Current time (Local).
Given that slicer selections and filter values can be selected by users and persisted, a single report can be used by multiple users in different time zones, with each user seeing the data in their own local time zone right in the Power BI service.
At Microsoft Ignite this week, the Power BI team unveiled the next generation of the architecture for their “dedicated capacity” customers. This architecture promises to resolve many of the issues experienced with the first generation of Premium, and opens up several possibilities moving forward.
Gen-1
The Power BI dedicated capacity SKUs (which include Premium) were introduced 3 years ago in order to provide a scalable pricing model, and to provide predictable performance. Unlike the Pro SKU, which is licensed per user, these SKUs represent specific sets of resources running in Azure. There are currently 3 SKUs in this category: the A SKU (purchased hourly from Azure), the EM SKU (for embedding), and the well-known P, or Premium, SKU.
When an organization purchases one of these SKUs, they are essentially purchasing memory, CPU cores, and storage. The isolation allows for predictability, but it also means that when the resources become over-allocated, catastrophic errors can occur. Refreshing a dataset can also be particularly memory intensive, and the memory usage during a refresh can be more than double what is normally required. This has an impact on normal operations during refresh, and means that capacities need to be over-sized to accommodate refresh in some cases.
Once acquired, Gen-1 capacities need to be configured, and decisions made as to what services to allow, and how many resources to allocate to them. It’s not always obvious what those settings should be. I’ve also seen situations where an overzealous administrator gets so excited about the new Premium capacity that they assign hundreds of workspaces to it, bringing reports to their knees.
Gen-2 – A Different Approach
The new “Gen-2” architecture aims to deliver the isolation and predictability that dedicated capacity brings, without the drawbacks. Gen-2 is, in fact, not dedicated, as resources are drawn from a massive pool as needed. However, the performance level is guaranteed, and is focused on CPU cycles. In fact, memory is not even a consideration, apart from per-dataset overall size limits.
Memory will be allocated as needed to refreshes, ending the need to worry about refreshes failing, or impacting end-user experiences. The CPU charge for refreshes will be allocated immediately, but the usage allocation will be spread out throughout the day. This provides consistently fast performance without requiring capacities to be over-sized to handle refresh peaks. Memory is no longer a factor whatsoever for refresh.
This architecture also allows for automatic scale up/down for intensive or unpredictable workloads. Administrators will no longer need to make so many decisions up front, or react to changes as frequently. If autoscale is not enabled, queries can potentially be slowed down, but a refresh kicking off can no longer make reports unavailable. The new architecture is moving back to a SaaS model, after being rather close to IaaS with Gen-1.
In the past, services that required full isolation like Paginated reports were not available on some of the lower end A and EM SKUs. With this new architecture, they will be available on all of them. In fact, with the newly announced Premium per user SKU, they will even be available on a per user basis.
This new architecture will be available to all of the “dedicated” SKUs, A, EM and P. The preview of the new P SKU will begin later in 2020. As an ISV, I am particularly interested in this new architecture for the A SKUs. The memory spikes caused by large refreshes have been particularly problematic for us in the past. Gen-2 architecture appears to be just what the doctor ordered.
I have often referred to this group of SKUs as the dedicated capacity SKUs in the past, but with this change, that term is no longer accurate. With the term Premium being so pervasive, I think we’ll just have to call them all Premium SKUs, whether they are P or otherwise.