Microsoft Fabric Updates Blog

Introducing Optimistic Job Admission for Fabric Spark

We are excited to announce a new feature that has been a long-standing request from Synapse Spark customers: Optimistic Job Admission for Spark in Microsoft Fabric.
This feature adds flexibility to optimize concurrency usage (in some cases a ~12X increase) and aims to reduce job starvation and improve the job admission experience for Fabric users running Data Engineering or Data Science jobs, especially during peak usage hours.

In the current job admission policy, Fabric Spark reserves the maximum number of cores that a job may need during its execution, based on the maximum number of nodes that the job can scale up to. This ensures that the job will always have enough resources to run, but it also limits the number of concurrent jobs that can be admitted in the cluster. 

With Optimistic Job Admission, Fabric Spark only reserves the minimum number of cores that a job needs to start, based on the minimum number of nodes that the job can scale down to. This allows more jobs to be admitted as long as there are enough resources to meet their minimum requirements. If a job needs to scale up later, its scale-up requests are approved or rejected based on the cores available in the capacity.
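To make the difference concrete, here is a minimal sketch of the two admission decisions. This is illustrative logic only, not Fabric's actual implementation; the helper name and the node sizes in the example are assumptions chosen to be consistent with the figures later in this post.

```python
def can_admit(job_min_cores: int, job_max_cores: int,
              reserved_cores: int, capacity_max_cores: int,
              optimistic: bool = True) -> bool:
    """Decide whether a new Spark job fits in the remaining capacity.

    reserved_cores     - cores already reserved by admitted jobs
    capacity_max_cores - SKU limit including the burst multiplier
    optimistic         - True: reserve only the job's minimum cores
                         False: reserve the job's maximum (reservation-based)
    """
    needed = job_min_cores if optimistic else job_max_cores
    return reserved_cores + needed <= capacity_max_cores


# Example: a job that starts on one 8-VCore node but can scale to 64 VCores,
# submitted when 160 of 192 burstable cores are already reserved.
print(can_admit(8, 64, reserved_cores=160, capacity_max_cores=192, optimistic=False))  # False
print(can_admit(8, 64, reserved_cores=160, capacity_max_cores=192, optimistic=True))   # True
```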

Optimistic Job Admission can significantly increase the maximum number of concurrent jobs for our customers, especially those using large SKUs. For example, if a customer is using the F32 SKU, which has 64 Spark VCores (based on 1 CU = 2 Spark VCores) and 192 max burst cores for concurrency (based on the 3X burst multiplier), they can run only 3 jobs concurrently under the compute reservation-based job admission policy, assuming the default starter pool configuration.

Now, with Optimistic Job Admission, they can run up to ~24 jobs concurrently with the same configuration. That is an 8x improvement!
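The arithmetic behind those numbers can be reproduced directly. The sketch below assumes a starter pool job has a maximum footprint of 64 VCores and a minimum footprint of one 8-VCore node; those per-job figures are assumptions consistent with the 3-job and ~24-job results quoted above.

```python
capacity_cu = 32                      # F32 SKU
spark_vcores = capacity_cu * 2        # 1 CU = 2 Spark VCores -> 64
max_burst_cores = spark_vcores * 3    # 3X burst multiplier   -> 192

job_max_cores = 64  # assumed maximum footprint of a default starter pool job
job_min_cores = 8   # assumed minimum footprint (one node)

reservation_based = max_burst_cores // job_max_cores  # reserves the max -> 3 jobs
optimistic = max_burst_cores // job_min_cores         # reserves the min -> 24 jobs

print(reservation_based, optimistic, optimistic // reservation_based)  # 3 24 8
```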

When Spark Autoscale requests additional nodes during runtime, job admission control approves or rejects the scale-up based on the total available cores, allowing the job to scale up to its maximum limits when capacity permits. When an Autoscale request is rejected, the active running job is not impacted and continues to run with its current configuration until cores become available.
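The scale-up path follows the same capacity check as admission. The sketch below is a hedged illustration of that behavior, not the Fabric service code: a request that does not fit is simply rejected, and the job keeps its current allocation rather than failing.

```python
def handle_scale_up(requested_extra_cores: int,
                    reserved_cores: int,
                    capacity_max_cores: int) -> int:
    """Return the number of extra cores granted for an autoscale request.

    If the request does not fit in the remaining capacity, zero cores are
    granted and the job continues running with its current configuration.
    """
    available = capacity_max_cores - reserved_cores
    if requested_extra_cores <= available:
        return requested_extra_cores  # approved: job scales up
    return 0                          # rejected: job runs on unchanged


# A job asking for 32 more VCores when only 24 remain is rejected, not failed.
print(handle_scale_up(32, reserved_cores=168, capacity_max_cores=192))  # 0
print(handle_scale_up(16, reserved_cores=168, capacity_max_cores=192))  # 16
```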

Optimistic Job Admission is enabled by default for all workspaces. For jobs that require compute reservation, users can disable autoscale, which results in the job's total cores being allocated up front.

To learn more about the optimistic job admission experience and the job scale up scenarios on Spark, please refer to our documentation Job admission and management in Fabric Spark – Microsoft Fabric | Microsoft Learn

To learn more about the throttling experience on Fabric Spark based on the Fabric capacity SKU, please refer to our documentation Concurrency limits and queueing in Microsoft Fabric Spark

To learn more about the job queueing experience in Fabric Spark, please refer to our documentation Job queueing for Fabric Spark – Microsoft Fabric | Microsoft Learn
