Microsoft Fabric Updates Blog

Announcing: Automatic Log Checkpointing for Fabric Warehouse

We are excited to announce automatic log checkpointing for Data Warehouses!

One of our goals with the Data Warehouse is to automate as much as possible, making it easier and cheaper for you to build and use your warehouses. That way you spend your time adding data and gaining insights from it instead of on tasks like maintenance. As a user, you should also expect great performance, which is where log checkpointing comes in!

What is Log Checkpointing and why is it important?

To understand what log checkpointing is and why it is important, we need to first talk about how tables are stored and how they are queried.

When you create a table and add data to it, the data is stored in parquet files on OneLake. Internally, there is also a log file that keeps track of which parquet files, taken together, make up the data in the table. These log files are internal and cannot be used directly by other engines. Instead, we automatically publish Delta Lake logs so that other engines can directly access the right parquet files.
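To make that layout concrete, here is a minimal sketch of what the published Delta Lake log looks like next to a table's data. The table path is hypothetical; the standard Delta Lake convention is a _delta_log folder of zero-padded JSON commit files sitting alongside the parquet files:

```python
from pathlib import Path

# Hypothetical table location; real OneLake paths will differ.
table_root = Path("Tables/dbo/Sales")
log_dir = table_root / "_delta_log"   # standard Delta Lake log folder

# Each commit is a zero-padded JSON file describing which parquet
# files were added or removed by that transaction.
for commit in sorted(log_dir.glob("*.json")):
    print(commit.name)  # 00000000000000000000.json, 00000000000000000001.json, ...
```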

Now, imagine that you load data into your table every 5 minutes. Over the course of a year, that adds up to 105,120 loads. Each load creates a new log file telling the system that, when reading the table, the new parquet files need to be read as well. That means the system would first have to read all 105,120 log files just to read the table, which is not very performant.
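A quick sanity check of that number:

```python
# One load every 5 minutes, around the clock, for a year.
loads_per_hour = 60 // 5
loads_per_year = loads_per_hour * 24 * 365
print(loads_per_year)  # 105120 loads, i.e. 105,120 log files
```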

This is where log checkpointing comes in! As of this writing, after every 10 transactions we automatically and asynchronously create a new log file called a checkpoint, which summarizes all of the previous log files. Now when you query the table, the system only needs to read the latest checkpoint and any log files created after it. Instead of reading 105,120 log files, it typically needs to read 10 or fewer!
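To illustrate the reader's side of this, here is a minimal sketch of how a Delta-style reader skips old log files once a checkpoint exists. It follows the open Delta Lake convention of a _last_checkpoint pointer file and versioned .checkpoint.parquet files; Fabric's internal log format may differ, and the logic here is an assumption for illustration only:

```python
import json
from pathlib import Path

def files_to_read(log_dir: Path) -> list[Path]:
    """Return the minimal set of log files needed to reconstruct a table."""
    pointer = log_dir / "_last_checkpoint"
    if pointer.exists():
        # The pointer records the version of the latest checkpoint.
        version = json.loads(pointer.read_text())["version"]
        checkpoint = log_dir / f"{version:020d}.checkpoint.parquet"
        # Only commits newer than the checkpoint still need replaying.
        tail = [p for p in sorted(log_dir.glob("*.json"))
                if p.stem.isdigit() and int(p.stem) > version]
        return [checkpoint] + tail
    # No checkpoint yet: every commit file must be replayed.
    return sorted(p for p in log_dir.glob("*.json") if p.stem.isdigit())
```

With a checkpoint written every 10 transactions, the tail of commit files after the latest checkpoint never grows beyond a handful, which is why the read stays cheap no matter how many times the table has been loaded.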

Conclusion

Log checkpointing is one of the ways we help your Data Warehouse deliver great performance, and best of all, it involves no additional work from you! That gives you more time to leverage your Data Warehouse for more value and insights!

Stay tuned for more announcements about automated performance enhancements!
