Microsoft Fabric August 2023 update
Welcome to the August 2023 update. We have lots of features this month including the new layout switcher for Power BI, SSD caching in Synapse Data Warehouse, in-line Python support for KQL in Synapse Real-time Analytics, lookup activity for Data Factory Dataflows, and much more. Continue reading for more details on our new features!
- Data Connectivity
- Data Warehouse
- Data Engineering
- Data Science
- Real-time Analytics
- Dataflows Gen2
- Power Query editor
- Refresh history
- Other improvements
- Data pipelines
- Trigger and property design template
- Quickly assign columns as properties in an object
- Data Activator now supports Power BI visuals with a time axis
- Trigger Power Automate flows from Data Activator
In our latest update, we’ve introduced an enhancement that preserves all Fabric items opened in a single browser tab on the left navigation bar, even after a page refresh. You can now refresh the page without losing your context.
We have updated Monitoring Hub to allow users to personalize activity-specific columns. You now have the flexibility to display columns that are relevant to the activities you’re focused on.
We’ve added new buttons that make it easy for you to quickly switch between web and mobile layouts while you’re developing your reports. You’ll find the new switcher buttons down at the bottom of the screen, right next to the page navigator.
We are introducing the new bubble range scaling setting for scatter chart and map visuals! This setting gives report creators more control over how the bubble (marker) sizes respond to the data, making it more accurate or distinctive based on preference.
With the magnitude option, the bubble areas closely follow the data proportions. With the data-range option, the bubble size limits are mapped to data minimum and maximum. The auto option, which is the default setting for new reports, selects the appropriate option based on data characteristics. For more information, visit our docs.
This setting can be adjusted in the formatting panel, under Markers > Shape > Range scaling for scatter charts or Bubbles > Size > Range scaling for maps.
For reports authored in earlier Power BI versions, these settings default to (Deprecated) for scatter charts (which differs in handling negative values), and Data range for map charts.
Azure Maps charts will also include this feature in a coming product update.
In the figure above, the size of each country represents Urban Population, which is also shown on the y-axis.
The new on-object interaction feature was released to preview back in March. We’ve been busy adding even more improvements to the preview; here’s what’s part of the August release:
We’ve added the ability to resize the on-object menus horizontally. This is especially helpful when you’re working with long field names.
We’ve also improved the positioning of the on-object menus to make better use of the canvas space. Previously, when a visual was near the bottom of the canvas, the on-object menu was super small and required scrolling to be able to see and use the field wells. Now, the on-object menu moves up and stretches into the canvas to bring the field wells into view without needing to scroll.
When spotlighting a visual or expanding the visual in focus mode, you can now use on-object formatting to subselect and format styles.
In focus mode, it can be hard to tell when you’ve entered format mode with just the subtle border. To address this, we’ve added a button to the header to better indicate when you’re in format mode and how to exit format mode while staying in focus mode.
Thanks for continuing to try out the new preview! We’re working hard to react to your suggestions and add the necessary changes to make on-object work for you. Please continue to provide your comments directly in this blog post or in our community forum via the “Share feedback” button next to the preview switch.
If there are blanks in the data, you can specify where to order them by adding ‘BLANKS LAST’ or ‘BLANKS FIRST’. For example, wrapped in a window function such as RANK, this is a perfectly valid expression:
RANK (
    DENSE,
    ALLSELECTED ( 'DimCustomer' ),
    ORDERBY ( SUM ( 'FactInternetSales'[SalesAmount] ), DESC BLANKS LAST )
)
Specifying how blanks are handled is optional and can be combined with specifying the order direction (DESC/ASC). Valid values include:
- BLANKS DEFAULT. The default value. For numerical values, blanks are ordered between zero and negative values. For strings, blanks are ordered before all strings, including empty strings.
- BLANKS FIRST. Blanks are always ordered at the beginning, regardless of ascending or descending sort order.
- BLANKS LAST. Blanks are always ordered at the end, regardless of ascending or descending sort order.
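Conceptually, the three modes behave like the following sketch (Python is used purely as an illustration of the numeric rules above; the function name is ours, and `None` stands in for BLANK()):

```python
def dax_order(values, desc=False, blanks="DEFAULT"):
    """Illustrative sketch of ORDERBY blank handling; None models BLANK()."""
    def default_key(v):
        # DEFAULT: blanks sort between negative values and zero
        if v is None:
            return (1, 0)
        return (0, v) if v < 0 else (2, v)

    if blanks == "DEFAULT":
        return sorted(values, key=default_key, reverse=desc)
    # FIRST/LAST: pin blanks to one end regardless of sort direction
    present = sorted((v for v in values if v is not None), reverse=desc)
    holes = [None] * sum(v is None for v in values)
    return holes + present if blanks == "FIRST" else present + holes
```

For example, ascending DEFAULT ordering of 3, BLANK, -2, 0 yields -2, BLANK, 0, 3, while BLANKS LAST keeps the blank at the end even when sorting descending.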
We are excited to introduce the new data connectivity and discovery experience in Dataflow, Dataflow Gen2, and Datamart. Today, users spend a lot of time finding the right data, the right connection info and credentials. With the new Get Data experience, we make it easy to browse different Fabric artifacts through the OneLake data hub. This improved experience aims to expedite this process and get you closer to the data that you’re looking for in the quickest way possible.
Learn more in our blog post, Announcing a new modern data connectivity and discovery experience in Dataflows.
This update includes significant performance improvements to the Lakehouses connector. Be sure to update to the August version of Power BI Desktop and Gateway to experience these improvements!
We are excited to announce that Direct Lake datasets now support XMLA-Write operations. Now you can use your favorite BI Pro tools and scripts to create and manage Direct Lake datasets.
Whether you prefer SQL Server Management Studio (SSMS), Tabular Editor, DAX Studio, or something else, you can connect to your Direct Lake datasets using XMLA endpoints and perform operations such as deploying, customizing, merging, scripting, debugging, and testing. You can use tools like Azure DevOps or GitHub to implement source control, versioning, and continuous integration for your data models. You can automate and streamline your development and deployment processes. You can also use PowerShell or REST APIs to automate tasks such as refreshing or applying changes to your Direct Lake datasets. XMLA Write is incredibly powerful and the key to data modelling efficiency and productivity. For more information about XMLA Write support in general, check out the article Dataset connectivity with the XMLA endpoint in the product documentation.
We are excited to announce that we have finalized Dataset Scale-Out configuration APIs and completed the replica synchronization feature. Specifically, you no longer need to enable Scale-Out at the workspace level by using a burdensome XMLA request. The XMLA command is deprecated and will no longer work. You can now enable Scale-Out on a dataset-by-dataset basis using the Power BI REST API for datasets. You also no longer need to synchronize read replicas manually if you want to take advantage of automatic replica synchronization. Automatic replica synchronization is enabled by default. However, it is also possible to disable automatic synchronization to synchronize the read/write and read replicas of a dataset manually for controlled refresh isolation.
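As a sketch of what the per-dataset configuration looks like, the dataset-update REST call takes a `queryScaleOutSettings` body. The field names below follow the public Power BI REST API; the helper function itself is our own illustration, not part of any SDK:

```python
import json

def scale_out_payload(max_read_only_replicas=-1, auto_sync=True):
    """Build the request body for updating a dataset's scale-out settings.

    Field names follow the public Power BI REST API; -1 asks the service
    to choose the replica count automatically. This helper is our own
    illustration, not part of any SDK."""
    return {
        "queryScaleOutSettings": {
            "maxReadOnlyReplicas": max_read_only_replicas,
            "autoSyncReadOnlyReplicas": auto_sync,
        }
    }

# Sent as, for example:
# PATCH https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}
body = json.dumps(scale_out_payload(auto_sync=False))
```

Setting `autoSyncReadOnlyReplicas` to false corresponds to the manual-synchronization mode described above for controlled refresh isolation.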
Learn more in the automatic replica synchronization announcement blog
Power BI mobile app users can now choose which item they want to have open automatically whenever they launch the Power BI Mobile app. This feature saves time for users who mostly view a specific item on their mobile app, and don’t want to waste time navigating from the app’s home page every time they open the app.
To configure a launch item for yourself, open the item you want to see when you launch the app. This can be a specific report page, dashboard, scorecard, report in an app or entire app. When the item is open, open the More options (…) menu from the header and select Set as launch item. This will mark the item as the launch item. Only one item at a time can be marked as the launch item.
Administrators can also use mobile device management (MDM) tools to remotely configure a launch item for a group of users (front-line workers, for example) to simplify their experience with the app.
Until recently, your customer leads may have come solely from customers downloading your offers from AppSource.com. Now, however, you have access to even more leads through Power BI – from both the desktop and web-embedded AppSource experiences.
To access these new leads, simply navigate to the Referrals workspace in Partner Center. Here, you can see all the leads you receive from Power BI, as well as those from AppSource.com. Plus, if you’ve connected your CRM, you’ll see them there too.
By utilizing these new leads from Power BI, you can potentially reach more customers and increase your business’s growth. So be sure to check your Referrals workspace and CRM regularly to stay on top of your leads!
You can now publish your Power BI Project (PBIP) files directly from Power BI Desktop, eliminating the need to save them as PBIX in order to activate the Publish feature.
From an opened Power BI Project, choose: File > Publish > Publish to Power BI or select Publish on the Home ribbon.
Select the destination workspace:
And that’s it, your PBIP dataset and report (or just report if it’s a Live Connect) will be published to the selected workspace:
To learn more about Power BI Projects, visit: https://aka.ms/pbidesktopdevmode
The following are new visuals this update:
- Performance Flow – xViz
- Time-lines by BI-Champ
- Composed Line Area Bar Chart by Devlup Funnels
- Galigeo For Power BI
- Radial chart by Devlup Funnels
- accoPLANNING Enterprise – Planning Power BI Writeback
- Sunburst Chart by Powerviz
- Spider Chart for Power BI by VisioChart
- Advanced Trellis / Small Multiples – xViz
- Drill Down Combo PRO
- Zebra BI Cards
Drill Down Combo Bar PRO by ZoomCharts offers a wide selection of customization options, letting creators build everything from regular bar charts to box and whisker plots. This visual also offers powerful cross-chart filtering capabilities combined with intuitive on-chart interactions.
- Multiple chart types – choose between column, line, and area charts
- Full customization – customize X and Y axes, legend, outline and fill settings
- Stacking and clustering – choose normal, 100% proportional, or zero-based stacking
- Static and dynamic thresholds – set up to 4 thresholds
- Multi-touch device friendly – get the same experience on mouse and touch input devices
POPULAR USE CASES:
- Sales and marketing – sales strategies, sales results, and campaign-by-campaign marketing metrics
- Human resources – hiring, overtimes and efficiency ratios by department
- Accounting and finance – financial performance by region, office, or business line
- Manufacturing – production efficiencies and quality metrics by product line
ZoomCharts Drill Down Visuals are known for their interactive drilldowns, smooth animations, and rich customization options. They are mobile friendly and support: interactions, selections, custom and native tooltips, filtering, bookmarks, and context menu.
We are thrilled to present the new Sunburst Chart by Powerviz, a powerful visualization designed to display hierarchical data in a user-friendly and intuitive format. With its concentric circle design, you can easily display part-to-whole relationships and gain valuable insights from your data.
- Rich Customization: Control display style, labels, center circle, fill patterns, and dynamic images.
- Color Options: Choose from 30+ color palettes, including color-blind safe options.
- Ranking: Easily filter the Top/Bottom N by each level and show the remaining categories as “Others”.
- Conditional Formatting: Easily identify outliers based on value or category rules.
- Interactive Features: Enjoy full interactivity with zoom, drill down, and cross-filtering for detailed exploration.
Many other features are included, such as annotations, grid view, show condition, and accessibility support.
Business Use Cases:
- Sales and Marketing: Market share analysis and customer segmentation.
- Finance: Department budgets and expenditures distribution.
- Operations: Supply chain management, identify inefficiencies in manufacturing process.
- Education: Course structure, curriculum creation.
- Human Resources: Organization structure, employee demographics.
Check out our video Introducing Sunburst Visual by Powerviz – A Powerful Power BI Custom Visual
Get Sunburst Visual for FREE from AppSource
Check out the visual features in the sample file.
Step by Step instructions and documentation can be found here.
To learn more, visit Powerviz website
xViz Performance Flow by Lumel is an integrated business flow monitoring visual with an interactive KPI tree visualization for organizational performance management use cases. It offers insights into People, Places, Processes and Entities with Performance Indicators, Trendlines, and Advanced Alerting on Goals, Metrics and their Variances.
Performance Flow is suitable for a wide range of use cases:
- Organization Performance – Unveil HR/Employee Performance insights through an interactive pictorial Org Tree Chart with performance attributes and trends.
- Process Flow – Visualize business flows in various stages using Swim lanes with connector lines, icons, KPI metrics, and trends in one single integrated view.
- Financial Performance – Cost Center analysis with Scorecards or KPI Trees.
- Sales Performance – Dive deep into Sales Performance at Regional, Departmental, and Salesperson Levels with Decomposition Trees.
The visual is packed with analytical and interactive features like:
- Interactive hierarchy navigation including quick search, zooming, subtree analysis, etc.
- Streamline Processes using Swim Lanes like in Visio
- Uncover hidden/Dotted connections using Links
- Data-driven conditional formatting rules
- Custom Tabs for end-users
T-SQL queries targeting massive amounts of data that don’t fit in the in-memory cache suffer from cache misses and higher latency due to repetitive reads from remote storage. SSD caching stores frequently accessed data on local disks in a highly optimized format, significantly reducing IO latency and accelerating query processing.
For more information check https://learn.microsoft.com/en-us/fabric/data-warehouse/caching.
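The benefit of keeping hot data on local disk can be illustrated with a toy read-through cache (this is only an illustration of the principle, not Fabric’s implementation): the first read of a page hits remote storage, and repeat reads are served locally.

```python
class ReadThroughCache:
    """Toy read-through cache, only to illustrate the idea: the first
    read of a page hits remote storage, repeat reads are served locally.
    This is not Fabric's implementation."""

    def __init__(self, remote):
        self.remote = remote      # page_id -> bytes, the "remote storage"
        self.local = {}           # the "local SSD cache"
        self.remote_reads = 0

    def read(self, page_id):
        if page_id not in self.local:
            self.remote_reads += 1                  # slow path: remote IO
            self.local[page_id] = self.remote[page_id]
        return self.local[page_id]                  # fast path thereafter
```

A workload that scans the same pages repeatedly pays the remote-storage cost only once per page, which is exactly where repetitive T-SQL reads gain the most.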
We are thrilled to announce session sharing in Fabric through High Concurrency mode for Data Engineering and Data Science workloads. You can run multiple notebooks simultaneously on the same cluster without compromising performance or security, while paying for a single session. Session sharing is strictly scoped to a single user boundary, offering enhanced security and isolation while letting you do more for less.
To learn more about High Concurrency mode in Fabric for Data Engineering and Data Science, check out the documentation: High Concurrency mode in Fabric Spark.
We’ve made several usability enhancements to our Model & Experiment tracking features. You can now stay informed with real-time notifications for model and experiment updates. Plus, users can now enjoy a more seamless browsing and comparison experience with improved Run List and Model List views.
We’re thrilled to announce that Microsoft Fabric is introducing two exciting new Data Science Samples that showcase the power of Microsoft Fabric capabilities.
The first sample focuses on the bank customer churn problem and builds a machine learning model to predict whether bank customers will churn.
The second sample is about machine failure: it uses machine learning to take a more systematic approach to fault diagnosis, proactively identifying issues and taking action before a machine actually fails.
Both samples provide a comprehensive display of the end-to-end data science workflow, demonstrating Microsoft Fabric’s versatility in addressing diverse real-world challenges with AI-driven solutions. We can’t wait to see the incredible possibilities these samples will unlock for our customers! To check out these new Data Science samples along with others, visit Microsoft Fabric and select Synapse Data Science, then click Use a Sample to access all Data Science samples.
Fabric KQL Database supports running Python code embedded in Kusto Query Language (KQL) using the python() plugin. The plugin runtime is hosted in a sandbox, an isolated and secured environment on KQL Database compute nodes. The sandbox contains the language engine as well as common mathematical and scientific packages. The plugin extends KQL’s native functionality with a huge archive of OSS packages, enabling Fabric users to run advanced algorithms, such as machine learning, artificial intelligence, statistical tests, and time series analysis, as part of a KQL query.
The Python plugin runs a user-defined function (UDF) written as a Python script. The script takes tabular data as its input and produces tabular output. The plugin is disabled by default; before you start, enable it in your KQL database by browsing to the database, selecting Manage > Plugins, and toggling the plugin to On.
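Per the plugin’s contract, the embedded script reads its tabular input from a pandas DataFrame named `df` and must leave its tabular output in a DataFrame named `result`. Run outside KQL, that same contract looks like this (the DataFrame contents here are our own stand-in data):

```python
import pandas as pd

# Stand-in for the tabular input the plugin passes to the script:
# the script reads a DataFrame named `df` and must leave its tabular
# output in a DataFrame named `result`.
df = pd.DataFrame({"x": [1, 2, 3]})

# The embedded script body itself: derive a new column from the input.
result = df.assign(y=df["x"] * 2)
```

Whatever ends up in `result` is what the surrounding KQL query sees as the plugin’s output table.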
The KQL Database provisioning process has been optimized. Now you can provision a KQL Database within a few seconds.
All you need to do to create a new KQL database is give it a name; after a few seconds you’ll have a fully functional KQL database into which you can start ingesting and querying your data.
Edit connection in Manage Connections
The Manage Connections feature was recently released with only the ability to view the connections linked to your dataflow and to unlink a connection. We’ve now added the ability to edit a connection’s credentials and gateway from within the dialog.
We’ll continue improving this experience throughout the year; give the feature a try, and look forward to future improvements.
The concept of staging data was introduced in Dataflows Gen2 for Microsoft Fabric and now you have the ability to define what queries within your Dataflow should use the staging mechanisms or not.
Learn more about the staging mechanism used in Dataflows Gen2 from the Data Factory Spotlight blog post.
With the introduction of the ability to set the behavior of a query to be staged or not, you can now set all your queries to be evaluated without any staging and load the data directly to a destination of your choice.
Note that your Dataflow Gen2 must have at least one query with a data destination defined.
The main benefit of this pattern is that your data doesn’t need to be staged first. This can save time when your Dataflow evaluates quickly and doesn’t need the overhead of the staging mechanism, or when skipping staging simply fits better with how you want to design your solution.
When creating a Dataflow Gen2, we are now modifying the maximum number of entities that can be part of a particular Dataflow. The new maximum number of entities is 50. If you have 51 or more entities in your Dataflow, you will receive a warning letting you know that you need to reduce the number of entities to a maximum of 50 before you can publish your Dataflow. You are still able to save your Dataflow as draft before you make any changes.
If you are ever in a situation where your Dataflow refresh failed, you can now click the warning sign right next to the timestamp in the Refreshed column and get taken directly to the Refresh history dialog for that particular refresh attempt.
The goal of this new improvement is to reduce the number of clicks to get you to the last failure.
Inside the refresh history dialog you are now able to drill down to a particular table and see the volume processed for it as well as the endpoint where the volume was processed. More information such as Duration, Start Time and End Time are still available to you in this dialog.
We are actively listening to your feedback and the feedback of thousands of customers who are trying out Dataflow Gen2 today. Some of the feedback doesn’t directly translate to new features, but rather to fixes or quality improvements to our backend and how reliable our service can be.
Our team has been able to triage and work on more than 600 fixes and improvements in the past month. The list below highlights some of the categories of these fixes and the impact they’ll have on how you use Dataflow Gen2 today:
- Better error messages: We are actively improving error messages and have reworked some of the most common ones in the past few weeks. There’ll be multiple improvements on the error-message and categorization front in the coming months, but we’re happy to be making tactical changes today that make it clearer to users what could be causing an error.
- Multitasking efforts: There have been some issues regarding multitasking in Fabric. A few of those have been addressed, and we’re actively working on a much better multitasking experience for Dataflow Gen2, similar to how other artifacts leverage multitasking capabilities.
- Reliability and performance: We’re continuously working to improve the reliability and performance of Dataflow Gen2 in Microsoft Fabric. This will translate into faster refresh times and greater reliability.
We’re excited to announce that the FTP connector is now available to use in your Data Factory data pipelines. In your data pipeline, you can create a new connection to your FTP data source to copy, extract, and transform your data.
The Lookup activity now connects to Fabric Lakehouse, Data Warehouse, and KQL Database. This now makes it easier for you to read or look up records, table names, and other values from your Fabric artifacts to use in downstream activities in your data pipeline.
The Get Metadata activity now connects to Fabric Lakehouse and Data Warehouse, making it easy to retrieve metadata from your data in your Fabric artifacts to use downstream in your data pipeline.
We’ve recently added Pipeline run status so that developers can easily see the status of the pipeline run. You can now view your Pipeline run status from the Output panel.
We’ve added advanced settings for the Set Variable activity called Secure input and Secure output. When you enable secure input or output, you can hide sensitive information from being captured in logs.
As mentioned last month, we have been working on a new experience for designing triggers and it’s now available in our preview!
You now see 3 cards in every trigger: Select, Detect, and Act:
‘Select’ is where you choose the value you want to monitor. It can be a direct reference to a column from an event stream, or to an existing property.
Once you’ve selected an input, you’ll see a preview and can add grouping/smoothing or filters to get the right value:
The Detect card is where you specify the conditions and thresholds that you want to take action on. You select the type of threshold, enter the values and can optionally choose things like firing the trigger every time the condition is met, or only when it’s met a certain number of times over a longer period.
The top chart shows, for the instances you have selected in the preview, when this trigger would have fired. The bottom chart shows the overall number of times the trigger would have fired for all instances. This gives you an idea of how many emails or Teams messages would have been generated!
Finally the Act card lets you set the action you want Data Activator to take. You can choose the recipient, optional information etc. as you build it out.
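The “fire only when the condition is met a certain number of times over a longer period” option on the Detect card can be sketched as a small rolling counter. The class and parameter names below are our own, not Data Activator’s API:

```python
from collections import deque

class ThresholdDetector:
    """Sketch (our own naming, not Data Activator's API) of a Detect-card
    rule: fire when the monitored value exceeds `threshold` at least
    `times` times within the last `window` samples."""

    def __init__(self, threshold, times=1, window=1):
        self.threshold = threshold
        self.times = times
        self.hits = deque(maxlen=window)   # rolling record of breaches

    def observe(self, value):
        self.hits.append(value > self.threshold)
        return sum(self.hits) >= self.times
```

With `times=1, window=1` this degenerates to “fire every time the condition is met”, which is the other option the Detect card offers.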
From the data view, you can quickly assign an event stream to a new or existing object and create multiple properties from one UI.
You need to choose the key column that identifies the individual object instances that you care about (e.g. a package ID, employee ID, location name etc.). Then, in the ‘Assign properties’ dropdown, select the columns from your event stream that you want to use as properties in the object.
You can use the ‘Assign to existing’ option to map a second event stream onto an existing object, combining data from two events.
Data Activator now supports Power BI visuals with a time axis. In the screenshot below, we create an alert if the “occupancy” measure on the visual goes above 60%. Note that Data Activator has detected the presence of the time axis and is highlighting this in the alert pane.
When a data activator trigger fires, you can now trigger a Power Automate flow. This means that you can use Data Activator to drive actions in any system that Power Automate can connect to. You can send an alert in a 3rd party alerting system, log a ticket in a ticketing system, or call a REST API to trigger actions in an operational system. The list is (almost) endless!
To connect Data Activator to Power Automate, you create a custom action. A custom action is a reusable action template that triggers a flow. Once you have made a custom action, you can use it in any trigger, in any Data Activator workspace or item. Here, we create a custom action to send an SMS message via a 3rd party connector:
After creating the custom action you can use it in any trigger, by selecting it in the “Act” card in the trigger designer. The act card prompts you for the input fields that need to go to the flow. Here, we see the action card for our SMS action. It is prompting for the phone number and message:
For the trigger creator, the action works just like the natively supported email and Teams actions. This means that you can have a Power Automate expert in your organization create the custom action, then roll it out to all Data Activator users, even if they don’t have any Power Automate experience.
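One common way to wire up such a flow externally is an HTTP-triggered flow. Purely as an illustration (this is not Data Activator’s internal mechanism, and the URL and field names are hypothetical), the Act-card inputs could be packaged for such a flow like this:

```python
import json
from urllib import request

def build_flow_request(flow_url, phone, message):
    """Hypothetical helper: package the Act-card inputs (phone number and
    message) as the JSON body an HTTP-triggered Power Automate flow would
    receive. `flow_url` stands for the flow's HTTP POST URL; the field
    names are ours, chosen to match the flow's trigger schema."""
    body = json.dumps({"phone": phone, "message": message}).encode("utf-8")
    return request.Request(
        flow_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending the request (e.g. with `request.urlopen`) would start the flow, which in turn calls the SMS connector with the supplied phone number and message.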
That’s all for this month! Please continue sending us your feedback, and if you haven’t already, head over to the Fabric Community to join the conversation!