Microsoft Fabric Updates Blog

Fabric Spark Autotune and Run Series Job Analysis

We are thrilled to announce the public preview of Run Series Analysis, in conjunction with our recent announcement of the Autotune feature at the Fab conference. These two features are designed to give you insight into Spark application executions across recurring runs of your notebooks and Spark job definitions, making it easier to tune and optimize your Spark job performance!

What is the Autotune feature?

Autotune automatically fine-tunes Spark settings to reduce execution time and improve efficiency without manual tuning. It streamlines your workflows by dynamically adjusting configurations for each query. The key settings it adjusts are spark.sql.shuffle.partitions, spark.sql.autoBroadcastJoinThreshold, and spark.sql.files.maxPartitionBytes.
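For context, here is a minimal PySpark sketch of those three settings. The property names come from this post; the values shown are illustrative only, not recommendations, and the snippet assumes a Fabric notebook with an active `spark` session. Autotune's point is that it picks these values for you, per query.

```python
# Read the current values of the settings Autotune adjusts automatically.
print(spark.conf.get("spark.sql.shuffle.partitions"))          # partitions used for shuffles
print(spark.conf.get("spark.sql.autoBroadcastJoinThreshold"))  # max table size for broadcast joins
print(spark.conf.get("spark.sql.files.maxPartitionBytes"))     # max bytes packed into one read partition

# Manual tuning would look like this (illustrative values, not recommendations):
spark.conf.set("spark.sql.shuffle.partitions", "200")
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", str(10 * 1024 * 1024))   # 10 MB
spark.conf.set("spark.sql.files.maxPartitionBytes", str(128 * 1024 * 1024))     # 128 MB
```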

Autotune query tuning examines individual queries and builds a distinct ML model for each query. It specifically targets:

  • Recurrent queries
  • Long-running queries (execution time over 15 seconds)
  • Spark SQL API queries – Autotune optimizes queries regardless of the language used (Scala, PySpark, R, Spark SQL)

This feature is compatible with notebooks, Spark Job Definitions, and pipelines. The benefits vary based on the complexity of the query, the methods used, and the structure. Extensive testing has shown substantial benefits for tasks associated with exploratory data analysis, such as reading data, running joins, aggregations, and sorting.
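As a rough illustration of the kind of exploratory workload described above, the sketch below reads data, joins, aggregates, and sorts through the Spark SQL API. The table names (sales, customers) and columns are hypothetical placeholders.

```python
from pyspark.sql import functions as F

sales = spark.read.table("sales")          # hypothetical table in the attached lakehouse
customers = spark.read.table("customers")  # hypothetical dimension table

top_regions = (
    sales.join(customers, "customer_id")               # join
         .groupBy("region")                            # aggregation
         .agg(F.sum("amount").alias("total_amount"))
         .orderBy(F.desc("total_amount"))              # sort
)
top_regions.show(10)
```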

Autotune also includes a mechanism to monitor performance and identify any performance regressions. For example, if a query processes an unusually large volume of data, Autotune will automatically deactivate to prevent inefficient configurations. It typically requires 20 to 25 iterations to accurately learn and identify the optimal settings.

A practical case highlighted how Autotune significantly reduced the execution time for a customer’s query by optimizing configurations specifically tailored to their usage patterns.

What is the Spark Run Series Analysis feature?

Run Series Analysis automatically classifies your Spark applications from recurring pipeline activities, notebook runs, and Spark job runs, including recurring Autotune-enabled runs from the same notebook or Spark job definition, into their respective run series. The feature auto-scans each run series and detects anomalous Spark application runs. You can use Run Series Analysis to compare and analyze the outcomes of Autotune, view the performance of each run along with its input and output data, examine the execution-time breakdown for each run, and observe the auto-tuned configuration values for Spark SQL queries.

Key benefits: The Run Series Analysis feature offers the following capabilities.

  • Run Series Comparison: You can compare the duration of a Notebook run with that of previous runs and evaluate the input and output data to understand the reasons behind prolonged run durations.  
  • Outlier Detection and Analysis: The system can detect outliers in the run series and analyze them to pinpoint potential contributing factors. 
  • Detailed Run Instance View: Clicking on a specific run instance provides detailed information on time distribution, which can be used to identify opportunities for performance enhancement, as well as the corresponding Spark configurations.

How to enable Autotune?

Autotune is available across all production regions but is disabled by default. You can activate it through the Spark configuration settings of your environment. To enable Autotune, set the Spark property spark.ms.autotune.enabled = true in a new or existing environment. This setting is then inherited by all notebooks and jobs running in that environment, which are tuned automatically.
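Alternatively, a minimal sketch of enabling Autotune for a single notebook session, rather than at the environment level, might look like the following. The property name comes from this post; applying it with spark.conf.set at session level is an assumption shown for illustration.

```python
# Turn Autotune on for the current Spark session only (assumed session-level override).
spark.conf.set("spark.ms.autotune.enabled", "true")

# Verify the setting took effect for this session.
print(spark.conf.get("spark.ms.autotune.enabled"))
```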

How to access the Spark Run Series Analysis?

You can access the Run Series Analysis feature through the Monitoring Hub’s historical view, the recent runs panel of a Notebook or Spark Job Definition, or the Spark application monitoring detail page.

Summary

In summary, Autotune automatically fine-tunes your Spark executions to optimize both performance and efficiency, while the Run Series Analysis feature lets you view the performance trend across Spark applications. By combining the two, you can observe the effects of Autotune directly within Run Series Analysis. Here is an example from a customer case: the initial query, which was complex and included multiple joins, took nearly 8 minutes to execute; after Autotune adjusted the Spark settings, the execution time dropped to 3 minutes and 36 seconds.
