
Fabric Capacities – Everything you need to know about what’s new and what’s coming

The Fabric Capacities team is excited to share details about the improvements we’re making to the Fabric capacity management platform for Fabric and Power BI users. In this article we’ll cover:

  1. What are capacities?
  2. Great performance and simplified management with bursting and smoothing.
  3. Using Capacity Metrics to monitor usage and spend.
  4. Capacity platform and monitoring improvements.

What are Capacities?

Fabric is a unified data platform that offers shared experiences, architecture, governance, compliance, and billing. Capacities provide the computing power that drives all of these experiences. They offer a simple and unified way to scale resources to meet customer demand and can be easily increased with a SKU upgrade.

Purchase a capacity once and use it to power everything Fabric

Capacities are the foundation of the simplicity and flexibility of Fabric’s licensing model. With Fabric, you only need one capacity to drive all your Fabric experiences, without the hassle of provisioning a different service for every project. If you’re just getting started, capacities can be acquired in three ways: the Fabric trial, Power BI Premium capacities, and Fabric capacities. If you have an existing Power BI Premium capacity, you automatically have access to try out Fabric; you can learn about enabling Fabric here.

The 60-day Fabric free trial allows you to test out Fabric experiences with 64 CUs of throughput. You can use the trial capacity to load-test and choose a SKU size that matches your demand. The capacity size you select determines the amount of capacity throughput, measured in Capacity Units (CUs). With shared compute at one fixed price, organizations get predictable spend along with the flexibility of all the Fabric capabilities.
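
To make the sizing exercise concrete, here is a minimal sketch (in Python) of picking the smallest F SKU that covers a measured average demand. The list of F SKU sizes reflects the published F2 through F2048 tiers; the demand figure and the helper function are purely illustrative.

```python
# Minimal sketch: pick the smallest Fabric F SKU whose throughput (in CUs)
# covers an observed average compute demand. The demand figure here is a
# hypothetical number you might gather during a trial-capacity load test.

F_SKU_CUS = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]  # F2 .. F2048

def recommend_f_sku(avg_cu_demand: float) -> str:
    """Return the smallest F SKU that meets or exceeds the average CU demand."""
    for cus in F_SKU_CUS:
        if cus >= avg_cu_demand:
            return f"F{cus}"
    return "F2048 (consider splitting workloads across capacities)"

# Example: a trial load test that averaged 50 CUs of smoothed demand.
print(recommend_f_sku(50))  # -> F64
```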

On June 1, we announced the availability of Fabric pay-as-you-go capacities for purchase in Azure. Fabric capacities come with a very low starting price point; complete pricing is available here. Additionally, reserved-instance capacities are coming in the near future and will provide even greater discounts when you pre-commit.

Great performance and simplified management with bursting and smoothing

Bursting for blazing performance running Fabric experiences

Bursting allows you to consume extra compute resources beyond what you have purchased to speed up the execution of a workload. For example, instead of running a job on 64 CUs and completing it in 60 seconds, bursting could use 256 CUs to complete the job in 15 seconds.

  • Bursting is a SaaS feature and requires no user management. Behind the scenes, the capacity platform pre-provisions Microsoft-managed virtualized compute resources to optimize for maximum performance.
  • Compute spikes generated from bursting will not cause throttling, thanks to the smoothing policies outlined in the next section.
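
The arithmetic behind the bursting example above is simply that the total work in CU-seconds stays constant while more CUs are applied at once. A minimal sketch, using the same numbers as the example:

```python
# Minimal sketch of the bursting arithmetic: the total work (CU-seconds)
# stays the same, bursting just applies more CUs in parallel so the job
# finishes sooner.

def burst_runtime(total_cu_seconds: float, burst_cus: float) -> float:
    """Runtime in seconds when a fixed amount of work runs at a higher CU rate."""
    return total_cu_seconds / burst_cus

job_work = 64 * 60  # a job that takes 60 s on a 64 CU capacity = 3,840 CU-seconds
print(burst_runtime(job_work, 64))   # 60.0 s without bursting
print(burst_runtime(job_work, 256))  # 15.0 s when bursting to 256 CUs
```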

Smoothing helps streamline management by allowing you to plan for average usage instead of peak

When a capacity is running multiple jobs, a sudden spike in compute demand may be generated that exceeds the limits of the purchased capacity. Smoothing simplifies capacity management by spreading the evaluation of compute demand over time, ensuring that your jobs run smoothly and efficiently.

  1. For interactive jobs run by users: capacity demand is typically smoothed over 5 minutes to reduce short-term temporal spikes.
  2. For scheduled or background jobs: capacity demand is spread over 24 hours, eliminating concerns about job scheduling or contention.

Smoothing does not impact execution time; jobs always run at peak performance. It simply allows you to size your capacity based on average usage rather than peak usage.
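
As a rough illustration of why average-based sizing works, the sketch below spreads hypothetical job sizes over the smoothing windows described above (5 minutes for interactive, 24 hours for background) to show the effective draw on the capacity. The job sizes are made-up numbers, not platform defaults.

```python
# Minimal sketch of how smoothing reduces the per-second draw on a capacity.
# Window lengths follow the article; the job sizes are hypothetical.

INTERACTIVE_WINDOW_S = 5 * 60        # interactive usage smoothed over ~5 minutes
BACKGROUND_WINDOW_S = 24 * 60 * 60   # background usage smoothed over 24 hours

def smoothed_cu_rate(job_cu_seconds: float, window_seconds: int) -> float:
    """Average CUs the job draws against the capacity across the smoothing window."""
    return job_cu_seconds / window_seconds

spark_job = 1_000_000  # a large scheduled job consuming 1,000,000 CU-seconds
report_query = 6_000   # an interactive query consuming 6,000 CU-seconds

print(f"{smoothed_cu_rate(spark_job, BACKGROUND_WINDOW_S):.2f} CUs")     # ~11.57 CUs spread over the day
print(f"{smoothed_cu_rate(report_query, INTERACTIVE_WINDOW_S):.2f} CUs") # 20.00 CUs over 5 minutes
```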

Using Capacity Metrics to monitor usage and spend

As we announced in July, the extended preview period for Fabric will end on September 30th, 2023. Starting on October 1st, workload usage will count against capacity limits and those limits will be enforced. On October 1st, OneLake usage will also be charged on F SKUs. The Fabric Capacity Metrics app, announced in May 2023, makes it easy to:

  • Analyze Fabric experience resource consumption against capacity limits.
  • View consumption metrics to plan for capacity scale-up or optimization.
  • View OneLake consumption by capacity and workspace.

Announcing Data Warehouse usage reporting for Capacity Metrics

On September 18th, Capacity Metrics will support analysis of Fabric Data Warehouse usage. This feature lets users measure the impact of their organization’s Data Warehouse experiences against capacity limits for capacity planning, and better understand the compute spend generated by those experiences. Capacity Metrics will show Data Warehouse usage in the Warehouse tab of the Items table. Learn more in the Data Warehouse post on this update.

Figure 3: Capacity Metrics with Data Warehouse usage

Announcing OneLake storage reporting in Capacity Metrics

We are excited to announce the availability of OneLake usage reporting in Capacity Metrics, starting on September 18th, 2023. With this new feature, you can easily analyze your storage consumption by selecting your capacity, choosing the date range, and viewing usage by workspace. This provides valuable insight into your overall storage spend and lets you monitor daily or hourly trends using drill-through. Click here to learn more about this feature.

Figure 4: Capacity Metrics with OneLake storage analytics

Capacity platform and monitoring improvements

Starting on October 1st, we’re improving the Capacity platform to handle larger and more diverse workloads. Here’s what you can expect:

  • Optimizations for long-running jobs: We’re optimizing the platform for long-running jobs, so if a job exceeds capacity limits, it will run to completion (only subsequent jobs will be evaluated against limits) and the overage will be burned down against future capacity.
  • Reduced throttling: We’re introducing new policies to reduce throttling for customers who experience intermittent spikes in usage.
  • Added overage protection: We’re adding protection for large-scale jobs with automatic queue management to help prevent overloading of the capacity.
  • Improved observability: The latest Capacity Metrics now has a throttling tab to help you monitor the new platform policies.

Evolving the Capacity platform for longer running workloads

We’re introducing a new optimization for long-running jobs. Historically, if a job’s reported usage exceeded capacity limits, the following jobs would be throttled. Now, throttling will not be immediately applied to the jobs that follow. Instead, any overage will be automatically balanced against future capacity when the system has unutilized capacity. This ability to “borrow from the future” works in addition to smoothing, is seamless to customers, and is supported by a new analytics experience in Capacity Metrics.
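
A minimal sketch of the burndown idea, using an invented per-window accounting model rather than the platform’s internal bookkeeping: overage carried forward from one window is paid back whenever later windows leave capacity unused.

```python
# Minimal sketch of "borrow from the future": when reported usage exceeds the
# capacity's per-window budget, the overage is carried forward and burned down
# in later windows that have unused capacity. Numbers and window granularity
# are illustrative only.

def burn_down(usage_per_window, capacity_per_window):
    """Yield the carried-forward overage after each window."""
    carried = 0.0
    for used in usage_per_window:
        carried = max(0.0, carried + used - capacity_per_window)
        yield carried

capacity = 100.0                    # CU-seconds available per window
usage = [250.0, 40.0, 30.0, 90.0]   # a long-running job spikes in the first window

print(list(burn_down(usage, capacity)))  # [150.0, 90.0, 20.0, 10.0]
```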


Updated throttling policies with overage protection

Before the October 1st update, throttling occurs whenever smoothed usage exceeds 100% of the purchased capacity throughput.

After the October 1st platform update, capacity throttling policies will be based on the amount of future capacity consumption that results from smoothing. This offers increased overage protection when future use is less than 10 minutes, and richer queue management to prevent excessive overload when usage exceeds an hour. The four new policies are outlined in Table 1.

| Future Smoothed Consumption (Policy Limit) | Platform Policy | Experience Impact |
| --- | --- | --- |
| Usage <= 10 minutes | Overage protection | Jobs can consume 10 minutes of future capacity use without throttling. |
| 10 minutes < Usage <= 60 minutes | Interactive Delay | User-requested interactive jobs will be throttled. |
| 60 minutes < Usage <= 24 hours | Interactive Rejection | User-requested interactive jobs will be rejected. |
| Usage > 24 hours | Background Rejection | Scheduled background jobs will be rejected from execution. |

Table 1: Updated Capacity Throttling Policies
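
For illustration, the sketch below maps a capacity’s future smoothed consumption to the four policies in Table 1. The thresholds come from the table; the function itself is hypothetical and not part of any Fabric API.

```python
# Minimal sketch mapping future smoothed consumption (how far ahead a capacity
# has already "spent" its throughput) to the policies in Table 1.

def throttling_policy(future_smoothed_minutes: float) -> str:
    """Classify future smoothed consumption against the October 1st policies."""
    if future_smoothed_minutes <= 10:
        return "Overage protection: jobs run without throttling"
    if future_smoothed_minutes <= 60:
        return "Interactive Delay: user-requested interactive jobs are throttled"
    if future_smoothed_minutes <= 24 * 60:
        return "Interactive Rejection: user-requested interactive jobs are rejected"
    return "Background Rejection: scheduled background jobs are rejected"

for minutes in (5, 45, 300, 2000):
    print(f"{minutes:>5} min -> {throttling_policy(minutes)}")
```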

To help you monitor and analyze the new policies, we’ve added a new throttling tab in the utilization section of the Capacity Metrics app. You can now easily observe future usage as a percentage of each limit, and even drill down to the specific workloads that contributed to an overage.

Evolving Power BI Premium from v-cores to capacity units (CU)

In the May ’23 post, we announced the rollout of Capacity Units as the unit of measurement for capacity throughput in Fabric. Capacity Units offer more granularity than the previously used v-cores and let us offer smaller capacities to Fabric customers with a very low entry point for pricing. Starting on October 1st, we will be updating all Power BI Premium SKUs (EM, P, and A) to report in Capacity Units. Key takeaways for this change:

  • This update will not result in any change to the throughput of a capacity.
  • Power BI Premium EM, A, and P SKUs will now report usage in CUs.
  • There will be one version of the Capacity Metrics app that supports all Power BI and Fabric capacity SKUs. See Figure 5 for an overview of capacity evaluation and throughput before and after the change.

Figure 5: Capacity Throughput, Measurement and Evaluation
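
As a rough illustration of the unit change, the sketch below converts Premium P SKU v-cores to the CUs they will report, assuming the commonly cited mapping of about 8 CUs per v-core (for example, P1 with 8 v-cores is comparable to F64). Treat the ratio and the mapping table as assumptions, not an official conversion formula.

```python
# Minimal sketch of the unit change for Premium SKUs, assuming roughly
# 8 CUs per v-core (an assumption based on the P1 ~ F64 comparison,
# not an official formula).

CUS_PER_VCORE = 8  # assumed conversion factor

P_SKU_VCORES = {"P1": 8, "P2": 16, "P3": 32, "P4": 64, "P5": 128}

def vcores_to_cus(vcores: int) -> int:
    """Convert a SKU's v-cores to the capacity units it would report after the change."""
    return vcores * CUS_PER_VCORE

for sku, vcores in P_SKU_VCORES.items():
    print(f"{sku}: {vcores} v-cores -> {vcores_to_cus(vcores)} CUs")
```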

The change to consolidate the metrics units used for capacity analysis sets the stage for new cross-capacity analytics experiences for our customers who manage a large number of capacities.

Next Steps

On September 18th, please update to the latest version of the Capacity Metrics app to get access to the latest OneLake analytics. Capacity administrators can access Capacity Metrics directly from the Capacity settings page of the Admin portal. We’ll also be releasing another update on October 1st that includes analysis for the new platform capabilities outlined above. The team is super excited to share these platform and observability features to simplify management and administration of Fabric capacities. We look forward to your feedback and can’t wait to see all the amazing solutions you’ll create using Fabric experiences!
