Microsoft Fabric Updates Blog

Welcome to the August 2024 Update.

Here are a few select highlights of the many updates we have for Fabric this month. Managing V-Order behavior of Fabric Warehouses lets you control V-Order behavior at the warehouse level. Monitor ML Experiments from the Monitor Hub integrates experiment items into the Monitoring Hub. And the modern get data experience of Data pipeline lets you browse and connect to your Azure resources automatically.

There is more to explore; please read on.

European Fabric Community Conference

Join us at Europe’s first Fabric Community Conference, the ultimate Power BI, Fabric, SQL & AI learning event, in Stockholm, Sweden from September 24–27, 2024.

With 120 sessions, daily keynotes, 10 pre-conference workshops, an expo hall with a community lounge, and an “ask the expert” area, the conference offers a rich learning experience you don’t want to miss. This is a unique opportunity to meet the Microsoft teams building these products, customers betting their business on them, and partners at the forefront of deployment and adoption.

Register today using code MSCUST for an exclusive discount on top of early bird pricing!

Attention Power BI users! 

If you are accessing Power BI in a web browser version older than Chrome 94, Edge 94, Safari 16.4, Firefox 93, or equivalent, you need to upgrade your web browser to a newer version by August 31, 2024. Using an outdated browser version after this date may prevent you from accessing features in Power BI.


Copilot and AI

Ask Copilot questions against your semantic model (preview)

You can now ask Copilot for data from your entire semantic model in Desktop! Just tell Copilot what you’re looking for, and Copilot will query your model to answer your question with a visual.

Since the Copilot pane in Desktop is still in preview, you will need to turn on the preview toggle to use this new capability.

To find out more about how this feature works and the types of questions that are supported, check out our previous blog post and documentation page.

Reporting

Visual level format strings (preview)

Visual level format strings are here, providing you with more options to configure formatting. Originally built for visual calculations, visual level format strings give you the ability to format visual calculations directly. Because visual calculations are not in the model, you previously could not format them unless you used them in data labels or in specific parts of the new card and new slicer visuals. With visual level format strings, you can!

Visual level format strings, however, are useful even without using visual calculations.

With the introduction of visual-level format strings, Power BI now has three levels for format strings:

  • Model. You can set a format string for columns and measures in the model. Anywhere you use that column or measure, the format string will be applied, unless it’s overridden by a visual or element level format string.
  • Visual. This is what we’re introducing today. You can set format strings on any column, measure, or visual calculation that is on your visual, even if it already has a format string. In that case, the model level format string is overridden and the visual level format string is used.
  • Element. You can set a format string for data labels and for specific elements of the new card and the new slicer visuals. This level will be expanded to include much more in the future. Any format string you set here will override the format strings set at the visual and model levels.

These levels are hierarchical, with the model level being the lowest and the element level the highest. A format string defined on a column, measure, or visual calculation at a higher level overrides what was defined at a lower level.

Since visual calculations are not in the model, they cannot have a format string set on the model level but can on the visual or element level. Measures and columns can have format strings on all three levels:

Level | Impacts | Available for measures, columns | Available for visual calculations
Element | Selected element of the selected visual | Yes | Yes
Visual | Selected visual | Yes | Yes
Model | All visuals, all pages, all reports on the same model | Yes | No

The table above summarizes this and shows that higher level format strings override lower level format strings.

Take a look at this example using a measure.

There is a Profit measure in the model, which is set to a decimal number format. To do this, you might have set the formatting for this measure using the ribbon, made the same selections in the properties pane for the measure in the model view, or entered a custom format code (for example, #,0.00) directly.

If you put this measure on a visual it now returns a decimal number, as expected:

However, on a particular visual you want that measure to be formatted as a whole number. You can now do that by opening the format pane for that visual and setting the format code in the Data format options found under General:

Now that same measure shows as a whole number, but just on that visual:

On top of that, you might want to use scientific notation for that measure, but only in the data label on a particular visual. No problem: you set the format code on the data label for that measure:

Now the total shows in scientific notation, but only in the data label and not in other places (such as the tooltip as shown below). Notice how the element level format is used in the data label but the visual or model level format string is still used for the other elements in the same visual.

For visual calculations the same principle applies, but of course without the model level. For example, if you have a visual calculation that returns a percentage, you can now format it as such using the Data format options under General in the format pane for that visual:

The ability to set visual level format strings makes it much easier to get the exact formatting you need for your visualizations. However, this is only the first iteration of the visual level format strings. We are planning to add the settings you’re used to for the model level format strings to the visual level soon.

Since visual level format strings are introduced as part of the visual calculations preview, you will need to turn on the visual calculations preview to use them. To do that, go to Options and Settings > Options > Preview features. Select Visual calculations and select OK. Visual calculations and visual level format strings are enabled after Power BI Desktop is restarted.

Please refer to our docs to read more about format strings or visual calculations.

Dynamic per recipient subscriptions (Generally Available)

Dynamic per recipient subscriptions are now generally available for Power BI and paginated reports. Dynamic per recipient subscriptions are designed to simplify distributing a personalized copy of a report to each recipient of an email subscription. You define which view of the report an individual receives by specifying which filters are applied to their version of the report. The feature is now available in sovereign clouds as well.

Create a dynamic per recipient subscription with a simple drag-and-drop experience. First, subscribe to the report by selecting “Subscribe to report”, then “Create Subscriptions”.

Select “Dynamic per recipient” subscription.

Connect to data that contains recipient email addresses, names, or report parameters.

Then, select and filter the data that you want in your subscription. If you only want to send emails conditionally, you can filter the data in the “Filter” pane.

You can select the recipient email addresses and the email subject from the dataset that you connected to by selecting “Get Data”.

Then map your data to the subscription.

Next, schedule the subscription and save it.

The subscriptions will be triggered based on the schedule that you have set up. Personalized reports can be sent to up to a thousand recipients! Learn more about Dynamic per recipient subscriptions for Power BI reports, and paginated reports.

Deliver subscriptions to OneDrive and SharePoint (Generally Available)

Do you have reports that are too large to be delivered by email? Do you have reports that eat up your email storage quota in just a few weeks, or that you need to move to a different location? You can now deliver Power BI and paginated report subscriptions to OneDrive or SharePoint. With this capability, you can schedule and send full report attachments to a OneDrive or SharePoint location. Learn more about how to deliver report subscriptions to OneDrive or SharePoint.

Updated Save and Upload to OneDrive Flow in Power BI

Beginning in the first weeks of August, starting with SU8, Desktop users will see a preview switch to turn on the updated Save and Upload to OneDrive experience in Power BI. To enable it, navigate to the Preview features section of Options in Power BI and select “Saving to OneDrive and SharePoint uploads the file in the background”.

With these updates, we’ve improved the experience of uploading new Power BI files to OneDrive and made it easy to upload new changes in the background.

Preview switch that needs to be selected.

When uploading a new file, after navigating to the correct location in the OneDrive file picker and saving, a dialog box appears while the file is being uploaded. You can cancel the upload if needed. This dialog will only show up the first time a new file is uploaded to OneDrive.

Dialog for saving a new file to OneDrive.

When new changes are saved to a file uploaded to OneDrive, the top of the toolbar indicates that the new changes are also being uploaded to OneDrive.

Additional changes being uploaded in the background to the existing file.

If you click on the title bar flyout in the toolbar, you can also now access more information about the file. Clicking “View your file in OneDrive” will provide a direct link to where the file is stored in OneDrive.

Drop down including the link to the file in OneDrive.

Data limit

We are introducing the data limit capability to help you manage performance issues. This feature allows you to set the maximum amount of data loaded for a single session per visual; by default, the visual displays only that many rows of data, in ascending order.

To use this feature:

  1. Go to the ‘Filters on this visual’ menu in the filter pane.
  2. Click on the data limit menu to open a new filter card.
  3. Set your desired data limit value.

The filter card features include:

  • Removing, locking, or clearing filters.
  • Hiding or showing filters.
  • Expanding or collapsing filter cards.
  • Applying filters.
  • Renaming and reordering filters.

Report consumers can see any data limits applied to a visual in the filter visual header, even if the filter pane is hidden.

Visuals, shapes and line enhancements

Over the past few months, we have been fine-tuning the visual elements of your reports, including columns, bars, ribbons, and lines, giving you the ability to craft these Cartesian visuals with precision. However, we noticed that the legends and tooltips were not quite accurate.

With the latest update, the legend and tooltip icons will now automatically and accurately reflect per-series formatting settings, such as border colors, shapes, and line styles. This makes it easier to match series to their visual representations. Additionally, we have added consistency to how per-series formatting is applied to line charts, column/bar charts, scatter charts, and other Cartesian formatting options for common items like error bars and anomalies.

Modeling

DAX query view in the web

Write DAX queries on your published semantic models with DAX query view in the web. DAX query view, already available in Power BI Desktop, is now also available when you are in the workspace.

Look for Write DAX queries on your published semantic model. There are two ways to launch it:

  1. Right-click on the semantic model and choose Write DAX queries.
  2. Or, click on the semantic model to open the details page, then click Write DAX queries at the top of the page.

This will launch DAX query view in the web, where you can write DAX queries, use quick queries to have DAX queries written for you on tables, columns, or measures, or use Fabric Copilot to not only write DAX queries but also explain DAX queries, functions, or topics. DAX queries work on semantic models in import, DirectQuery, and Direct Lake storage mode.

Write permission, that is, permission to make changes to the semantic model, is currently needed to write DAX queries in the web. In addition, the workspace setting User can edit data models in the Power BI service (preview) needs to be enabled.

DAX query view in the web includes DAX query view’s measure-authoring workflow. Define measures with references, edit any of them, and try out changes across multiple measures by running the DAX query, then update the model with all the changes in a single click of a button. DAX query view in the web brings this functionality for the first time to semantic models in Direct Lake mode!

If you do not have write permission, you can still live connect to the semantic model in Power BI Desktop and run DAX queries there.
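
If you are working in a Fabric notebook instead, semantic link offers a programmatic route to the same models. Below is a minimal sketch using semantic link’s evaluate_dax function, which is preinstalled in Fabric notebooks; the model name “Sales Model” and the table and measure names in the query are hypothetical placeholders:

import sempy.fabric as fabric  # semantic-link package, preinstalled in Fabric notebooks

# Run a DAX query against a published semantic model; returns a pandas-style DataFrame.
df = fabric.evaluate_dax(
    dataset="Sales Model",  # hypothetical model name; replace with your own
    dax_string="""
        EVALUATE
        SUMMARIZECOLUMNS(
            'Date'[Year],
            "Total Profit", [Profit]
        )
    """,
)
print(df.head())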

Try out DAX query view in web today and learn more about how DAX queries can help you in Power BI and Fabric.

Embedded Analytics

Narrative visual with Copilot available in SaaS embed

Narrative visual with Copilot is available for user owns data scenarios (SaaS) and secure embed. When a user embeds a report containing the narrative visual in a solution where users must sign in, they will now be able to refresh the visual with their data. This is the first step on our Copilot embed journey!

Embedding a Power BI report in an application in the “embed for your organization” scenario allows organizations to integrate rich, interactive data visualizations seamlessly into their internal tools and workflows. Now this solution supports the Copilot visual. A sales team might want to embed a Power BI report in their internal CRM application to streamline their workflow. By integrating sales performance dashboards directly into the CRM, team members can easily monitor key metrics like monthly sales targets, pipeline status, and individual performance, without switching between different tools. This integration enables quicker access to actionable insights, helping the team make informed decisions, identify trends, and react swiftly to market changes, all within the secure environment of their organization’s data infrastructure.

Supported scenarios: embedding for your organization (user owns data) and secure embed, where users sign in with their own accounts.

Unsupported scenario: embedding for your customers (app owns data).

To get this set up, there are a few steps to follow, so make sure to check out the documentation.

You will need to edit your Microsoft Entra app permissions to enable the embedded scenario to work.

From here you’ll need to add the MLModel.Execute.All permission.

Once completed, your visual should work in your embedded experiences where users still sign in.

Check out the documentation for additional details.

Other

Paginated Reports: Sharing of reports connecting to Get Data data sources made easy

We announced the ability to create paginated reports in Power BI Report Builder by connecting to over 100 data sources with the Get Data experience. You can learn more in Connect paginated reports to data sources using Power Query (Preview) – Power BI | Microsoft Learn. You no longer need to share the shareable cloud connection; you only need to share the report and ensure that those consuming the report have access to view it. This update will be rolling out in the coming weeks.

Core

Workspace filter improvement to support nested folders

Since folders were introduced in workspaces, it can sometimes be hard to find items because they are nested in the hierarchy. We have upgraded the filter experience to support filtering across the entire workspace or within a specific folder and all its nested folders.

OneLake

OneLake data access role improvements

OneLake data access roles allow for granular security to be defined within a lakehouse. This month, we’ve updated data access roles based on key feedback.

The assign roles page in the user interface has been redesigned to make it easier to understand access. The page now has the “Add people or groups” control front and center, with less emphasis on assigning users via item permissions.

The item permissions control has been rebuilt for ease of use, including a new user count that shows how many users will be added to the role based on the permissions you select. The members list now includes sort options and shows newly added users and groups with an icon to help validate pending changes.

Schema support for lakehouses was announced last month, and OneLake data access roles now allow for defining security on schemas. Schema support follows the same inheritance model as folders, allowing multiple tables to be managed easily by securing the parent schema. To get started with schema-level security, create a new lakehouse, check the “enable schemas” preview option, and then enable OneLake data access roles for that lakehouse.

To learn more or get started, view the documentation on OneLake data access roles.

Synapse

Data Warehouse

Managing V-Order behavior of Fabric Warehouses

V-Order is a write-time optimization for the Parquet file format, enabling Direct Lake mode with Power BI. It applies special sorting and compression to Parquet files, offering query benefits but potentially increasing data ingestion time in Fabric Warehouse.

We are excited to share a new feature in Fabric Warehouse that allows you to manage V-Order behavior at the warehouse level. You can now disable V-Order, providing better control over ETL performance when V-Order is unnecessary. This enhancement is particularly beneficial for scenarios where V-Order optimization may not offer significant advantages, such as in typical ETL processes with write-intensive staging tables.

Disabling V-Order on a warehouse is a non-reversible operation; currently, there is no way to re-enable it. Before disabling V-Order, you should thoroughly test the performance of your warehouse queries and ETL processes to determine whether this option is suitable for your scenario.

For more details and to determine if disabling V-Order is suitable for your scenarios, refer to the product documentation.

Delta Lake Log Publishing Pause and Resume

Fabric Warehouse publishes Delta Lake Logs for every table created in your Warehouses. Any modifications made to a Warehouse table will be reflected in the Delta Lake Log within a minute of the transaction being committed. This enables other analytical engines in Microsoft Fabric to read the latest data on user tables without any data duplication.

This new feature allows you to pause and resume the publishing of Delta Lake Logs for Warehouses. When publishing is paused, Microsoft Fabric engines that read tables outside of the Warehouse will see the data as it was before the pause. This ensures that reports remain stable and consistent, reflecting data from all tables as they existed before any changes were made. This is especially beneficial if you have many frequently changing tables and reports, as it minimizes the risk of data inconsistencies.

Once your ETL/ELT process is complete, you can resume Delta Lake Log publishing to make all recent data changes visible to other analytical engines.

To learn more about Delta Lake Log publishing pause and resume, refer to the product documentation.

Alter Table add Nullable Column

Alter Table add Nullable Columns is now generally available! This feature allows adding new nullable columns to previously created Delta Parquet-backed tables in a warehouse in Microsoft Fabric. In the ever-evolving data landscape, schemas shift and change to keep up with the influx of new data.

Whether your schema modifications are few and far between, or a regular occurrence that constantly needs to adapt to changing requirements, we have you covered. Our goal is to ensure that customers have everything they need for a seamless warehousing experience, and we continue to strive towards ensuring our TSQL surface area meets the needs of our customers.

Note that today, only the following subset of ALTER TABLE operations in Warehouse in Microsoft Fabric is supported:

  • ADD nullable columns of supported column data types.
  • ADD or DROP PRIMARY KEY, UNIQUE, and FOREIGN KEY column constraints, but only if the NOT ENFORCED option has been specified.

All other ALTER TABLE operations, such as dropping or renaming columns or adding non-nullable columns, are currently blocked.

Here is an example to get started with ALTER TABLE functionality:

ALTER TABLE [AdventureWorksSales].[dbo].[Product]
ADD [status] VARCHAR(50); -- any supported data type

Learn more here.

Truncate Table

Truncate Table, enhancing data management capabilities for users in Fabric Data Warehouse, is now generally available.

Truncate removes all rows from any warehouse user table that the user has permission to update, while preserving the table’s metadata. This command is beneficial for situations that demand quick data removal and table maintenance. It is ideal for scenarios where staging tables need regular data clearing without altering the table’s structure.

The truncate operation is only allowed on Parquet-backed Data Warehouse user tables.

There are a few limitations:

  • Truncating a specific table partition is not allowed in Fabric Data Warehouse.
  • Truncating Lakehouse tables, views, tables referenced by materialized views, system tables, and system DMVs is also not allowed.

Syntax for truncate table to remove rows from a Parquet-backed user table in Microsoft Fabric Warehouse:

TRUNCATE TABLE { database_name.schema_name.table_name | schema_name.table_name | table_name } [;]

Learn more here.

Mirroring Azure SQL Database

We’ve heard your requests, and we’re excited to extend our capabilities with support for additional SQL Data Definition Language (DDL) operations on mirrored tables. Now, operations such as Drop Table, Rename Table, and Rename Column can be seamlessly executed while tables are being mirrored.

Click here to watch a demo.

Mirroring integration with modern get data experience

You can now use the more intuitive modern get data experience to connect a data source for Mirroring. On the homepage of the Data Warehouse experience, click any Mirroring module to get started.

Let’s use the “Mirrored Azure SQL Database” to get started.

After entering “Mirrored Azure SQL Database”, you can use the modern get data experience to choose from all the available databases in the OneLake data hub. For more details on how to do this, see Tutorial: Configure Microsoft Fabric mirrored databases from Azure SQL Database (Preview).

Data Engineering

Announcing MsSparkUtils upgrade to NotebookUtils

Beginning in August, the library MsSparkUtils will be rebranded as NotebookUtils. This change reflects our commitment to providing the best tools and utilities for your data processing needs.

While NotebookUtils is backward compatible with MsSparkUtils, new features will only be added to the NotebookUtils namespace, so we strongly recommend that you start replacing the old namespace with the new one in your projects. Also note that NotebookUtils is only supported on runtime version 1.2 and above, so please upgrade your runtime version as well.
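
As a minimal sketch of what the namespace swap looks like in a Fabric notebook (both utilities are built in, so no import is required; the Files/ path assumes a lakehouse is attached):

# Old namespace, kept for backward compatibility:
files = mssparkutils.fs.ls("Files/")

# New namespace, where all new features will land (runtime 1.2 and above):
files = notebookutils.fs.ls("Files/")
for f in files:
    print(f.name, f.size)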

The transition to NotebookUtils is a step forward in our journey to enhance your experience and efficiency. We appreciate your cooperation in this change and are here to support you through the transition process. Thank you for your continued support and happy coding!

Import Notebook UX improvement

The Import Notebook feature has recently been enhanced, making it more accessible and intuitive. With this update, you can effortlessly import notebooks, reports, or paginated reports using the unified entry in the workspace toolbar. This improvement streamlines the process, ensuring a seamless experience for developers.

Fabric Runtime Lifecycle

We recently published the Lifecycle of Apache Spark runtimes in Fabric documentation. Our team diligently delivers new versions, ensuring they are of high quality, well integrated, and supported continuously. Each version includes about 110 components, and as the runtime grows, we make sure it integrates smoothly into Microsoft Fabric and Azure.

We cautiously approach new preview runtime releases, aiming for an experimental preview in roughly 3 months, although the exact timeline varies per case. The latest runtime, Version 1.3 based on Apache Spark 3.5, is currently in Public Preview, and we’re preparing it for General Availability (GA). Runtime 1.2, based on Apache Spark 3.4, is already stable in GA. We have announced the end-of-support date for Runtime 1.1 on Apache Spark 3.3, which will be deprecated and not available after March 31, 2025.

Data Science

Semantic Link Labs is now live!

Recently, Michael Kovalsky released a Python library called ‘fabric_cat_tools‘ for Fabric notebooks. This library includes 120+ additional functions that extend semantic-link’s capabilities, ranging from automating the migration of semantic models to Direct Lake, to analyzing semantic models via a Best Practice Analyzer, showing Vertipaq Analyzer statistics, wrapping the full Tabular Object Model…and much more. You can even automatically translate your entire semantic model’s metadata into any language in seconds! All of this is self-contained inside of the Fabric ecosystem.

Figure 1: Using Semantic Link Labs to analyze semantic models via Vertipaq Analyzer.

We are excited to announce that this library has been renamed to Semantic Link Labs and open-sourced on Microsoft’s official GitHub page. As its name implies, semantic-link-labs is now an official extension of Semantic Link, offering early access to many features not yet available in Semantic Link while having the reassurance of a Microsoft-branded, open-source product. All functions within Semantic Link Labs are fully documented here. We are confident Semantic Link Labs will help Power BI developers and admins easily automate previously complicated tasks and make semantic model optimization tooling more accessible within the Fabric ecosystem. The overwhelmingly positive feedback on this library (in its time as fabric_cat_tools) shows that semantic-link and Semantic Link Labs can offer a great deal to the Power BI community, not just the data science community. We warmly welcome your contributions to our GitHub repository.
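
As a minimal sketch of the kind of automation Semantic Link Labs enables from a Fabric notebook (“Sales Model” is a hypothetical dataset name; see the library’s documentation for the full function list and exact signatures):

# Install the library in the notebook session:
# %pip install semantic-link-labs
import sempy_labs as labs

# Run the Best Practice Analyzer against a published semantic model.
labs.run_model_bpa(dataset="Sales Model")

# Show Vertipaq Analyzer statistics for the same model.
labs.vertipaq_analyzer(dataset="Sales Model")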

Figure 2: Using Semantic Link Labs to analyze semantic models via a Best Practice Analyzer.

Figure 3: Using Semantic Link Labs to automatically translate a semantic model’s metadata.

Apply MLFlow tags on ML experiment runs and model versions

We have released a new feature that allows users to apply MLflow tags directly on their ML experiment runs and ML model versions from the user interface. This enhancement empowers users to add annotations, track changes, and incorporate additional metadata seamlessly. Whether you are fine-tuning a model or running extensive experiments, tagging will enable you to organize and contextualize your results more effectively.

Moreover, these tags are easily accessible from the run or model version details page, providing a comprehensive overview at a glance. Users can also utilize these tags in the compare views within the item or in inline notebook authoring, making it effortless to compare across multiple runs or model versions. This streamlined tagging capability is designed to enhance your workflow, offering deeper insights and improved management of your machine learning projects.
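
Because these are standard MLflow tags, you can also set them in code from a Fabric notebook; the UI simply surfaces them. A minimal sketch using the MLflow API (the experiment and model names below are hypothetical):

import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_experiment("churn-experiment")  # hypothetical experiment name

# Tag a run while it is active.
with mlflow.start_run() as run:
    mlflow.set_tag("team", "data-science")
    mlflow.set_tag("data_version", "2024-08")

# Tag an existing run or a registered model version afterwards.
client = MlflowClient()
client.set_tag(run.info.run_id, "status", "reviewed")
client.set_model_version_tag(name="churn-model", version="1", key="validated", value="true")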

To learn more about tagging, refer to the product documentation.

Track related ML Experiment runs in your Spark Application

We have released a new feature designed to help users track related ML experiment runs within their Spark applications. A Spark application can create one or multiple experiments and related experiment runs, and this feature lets users monitor their progress. By navigating to the Monitoring Hub, users can click on a Spark application and go to item snapshots to find the experiments and runs created within that application.

This enhancement makes it easier to debug Spark applications, especially when they impact specific machine learning workflows. For instance, if a Spark application fails, users can quickly identify which ML training runs were affected, facilitating efficient troubleshooting and resolution. This new capability ensures a smoother and more transparent process for managing and debugging ML workflows within Spark applications.

To learn more about monitoring data science items, refer to the product documentation.

Monitor ML Experiments from the Monitor Hub

Integrate Experiment items into the Monitoring Hub with this new feature! With this enhancement, users can track experiment runs directly from the Monitoring Hub, providing a unified view of all their activities. This integration includes powerful filtering options, enabling users to focus on experiments or runs created within the last 30 days or other specified periods.

This feature offers a comprehensive “single pane of glass” experience, giving users a holistic overview of all activities in their workspace. Whether managing multiple projects or tracking specific experiment runs, this integration simplifies the process, making it easier to stay organized and informed. Enhance your workflow with this seamless monitoring capability, designed to provide clarity and efficiency in managing your machine learning experiments.

To learn more about monitoring data science items, refer to the product documentation.

Use PREDICT with Fabric AutoML models

We’re releasing an update to code-first AutoML in Fabric. With this update, we now automatically log the input and output schema for non-Spark models trained using AutoML. This enhancement allows users to seamlessly move from training with AutoML to making predictions by leveraging the built-in Fabric PREDICT UI and code-first APIs for batch predictions. This streamlined process ensures a smooth transition from model training to making accurate and efficient predictions, enhancing overall productivity and ease of use.
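
For example, once an AutoML-trained model is registered with its logged input and output schema, you can batch-score a Spark DataFrame with Fabric’s built-in PREDICT transformer. A minimal sketch (the model name, version, table, and column names are hypothetical placeholders):

from synapse.ml.predict import MLFlowTransformer

# Load the data to score (hypothetical lakehouse table).
df = spark.read.table("sales_features")

# Wrap the registered AutoML model; input columns must match the logged schema.
model = MLFlowTransformer(
    inputCols=["feature1", "feature2"],
    outputCol="prediction",
    modelName="automl-sales-model",  # hypothetical registered model name
    modelVersion=1,
)

predictions = model.transform(df)
display(predictions)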

To learn more about AutoML in Fabric Data Science, refer to the product documentation.

The AI skill is now in public preview

You can now build your own generative AI experiences over your data in Fabric with the Public Preview of the AI skill! With this new capability, you can build question and answering AI systems over your Lakehouses and Warehouses. You can configure the AI to respond to questions in the way that works for you and your organization by providing instructions and examples.

Many generative AI experiences are generic; they do not understand all the nuances that come along with your data. These systems can output very reasonable-looking answers that are ultimately incorrect. The issue is that this nuance is not expressed in the schema or the data itself. How should the AI count sales across different time zones? Or what should you do if the customer ID is not unique? These kinds of issues arise frequently in real-world data systems, and only you have all the context needed to provide these answers.

With the new AI skill experience, domain experts and data professionals now have the power to configure and guide the AI by providing instructions and examples. Your colleagues can then ask their questions and have the AI generate reliable queries for them. The AI skill parses the generated queries before they are executed, to ensure they do not alter or delete data. Also, the AI skill uses your credentials to execute the query, which ensures that data governance rules are adhered to.

For more details on the AI Skill, read the full update here.

Real-Time Intelligence

Fabric Real-Time Hub Teaching Bubble

The Fabric Real-Time Hub now introduces ‘teaching bubbles’ that provide a step-by-step guide through its major functionalities. These interactive guides allow you to seamlessly navigate each tab of the Real-Time Hub, offering a clear understanding of the value and core functions. By following these teaching bubbles, you can efficiently kick off the GetEvents flow, ensuring a best-in-class streaming data ingestion experience. This feature is designed to enhance your learning curve and maximize the capabilities of the Real-Time Hub.

KQL Queryset REST API support

KQL Queryset is a data analysis tool that enables users to write and execute KQL queries against large datasets. It allows for advanced data exploration, manipulation, and visualization, making it easier to derive insights from structured and semi-structured data.

The new Fabric Queryset REST APIs allow you to create/update/delete KQL Querysets in Fabric, and programmatically manage them without manual intervention.
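
As a rough sketch of calling these APIs from Python (the kqlQuerysets route and payload shown here follow the general Fabric items API pattern and are assumptions for illustration; confirm the exact endpoints in the documentation linked below, and note that a valid Microsoft Entra token with the Fabric API scope is required):

import requests

token = "<bearer-token>"          # placeholder: acquire via Microsoft Entra ID
workspace_id = "<workspace-id>"   # placeholder: your Fabric workspace ID
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# Create a KQL Queryset in the workspace (endpoint assumed for illustration).
resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/kqlQuerysets",
    headers=headers,
    json={"displayName": "My KQL Queryset"},
)
resp.raise_for_status()
print(resp.json())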

Read more here.

Data Factory

Dataflow Gen2

Certified connector updates

Below are the new and updated connectors in this release:

Are you interested in creating your own connector and publishing it for your customers? Learn more about the Power Query SDK and the Connector Certification program.

Data pipeline

New connectors

We are excited to announce the release of two powerful new connectors in Fabric Data Factory data pipeline:

  • Salesforce Connector
  • Vertica Connector

The Salesforce connector using Bulk API 2.0 in Fabric Data Factory offers a high-performance solution for integrating Salesforce data with Microsoft Fabric. Bulk API 2.0 is designed to handle large volumes of data efficiently by processing batches of records asynchronously, making it ideal for large-scale data migrations and synchronizations. This connector enables users to seamlessly extract, transform, and load extensive datasets from Salesforce into various destinations or vice versa.

By utilizing this connector, organizations can accelerate data operations, streamline workflows, and leverage the full potential of their Salesforce data alongside other data sources, ensuring robust and scalable data integration and analytics.

The Vertica connector offers unparalleled ease of connection to Vertica, a high-performance, distributed SQL database designed for big data analytics, through an on-premises data gateway. By leveraging the Vertica connector, organizations can harness the robust analytical capabilities of Vertica alongside the scalability and flexibility of Fabric Data Factory, thereby enhancing their data workflows and accelerating insights across their enterprise data ecosystem.

Elevate your data integration experience with these new connectors in Fabric Data Factory and unlock the full potential of your data ecosystems.

Learn more here.

Data Warehouse Connector Supports TLS 1.3

The introduction of TLS 1.3 support in the Data Warehouse connector marks a significant improvement in data security for data integration processes. TLS 1.3, the latest version of the Transport Layer Security protocol, provides improved encryption, reduced latency, and enhanced privacy over previous versions. By supporting TLS 1.3, the Data Warehouse connector ensures that data transmitted between the data warehouse and other systems is protected with state-of-the-art encryption, mitigating the risk of cyber threats and data breaches.

This update not only strengthens the security posture of data operations but also optimizes performance, enabling faster and more secure data transfers. Organizations can now confidently leverage the Data Warehouse connector in data pipeline to integrate and analyze their data, knowing that their information is safeguarded with the most advanced security protocols available.

Learn more here.

Easily Connect to your Azure Resources with the Modern Get Data Experience in Data Pipeline

You can easily browse and connect to your Azure resources automatically with the modern get data experience of Data pipeline. With the Browse Azure experience, you can quickly connect to Azure resources without manually filling in information such as the endpoint or URL.

Currently, the Browse Azure experience supports connecting to Azure Blob Storage, Azure Data Lake Storage Gen2, Azure Cosmos DB, and Synapse. To browse Azure resources, you need at least the Reader role on the data source. To connect to the resource with OAuth, you need at least the Storage Blob Data Reader role on the data source. For more information, see the documentation on access roles.

Learn more here.

Let’s get started with the Get Data experience in Fabric Pipeline. After you navigate to the Azure module, you will land in the Browse Azure module.

In the Browse Azure module, you can select the subscription, resource group, and resource type to filter the resources you want. In this example, the account is Azure Blob Storage, so I will connect to the resource with the Azure Blob Storage type.

After connecting to Azure Blob storage, you can easily select the file to preview data.

You can also use a similar easy way to connect with other supported Azure resources. Learn more here.

Get certified on Fabric!

We’d like to thank the thousands of you who completed the Fabric AI and 30 Days to Learn It Skills Challenge and earned a discount voucher for Exam DP-600, which leads to the Fabric Analytics Engineer Associate certification.

If you earned a discount voucher, you can find redemption instructions in your email. We recommend that you schedule your exam promptly, before your discount voucher expires. 

If you need a little more help with exam prep, visit the Fabric Career Hub which has expert-led training, exam crams, practice tests and more.

Fabric Sticker Challenge Winners Announced!

The Fabric Community Sticker Challenge ran August 1–23, and the winners are in! All Fabric Community members were invited to create unique stickers showcasing their enthusiasm and creativity under the following categories: Community Enthusiasm, Inspirational, “Inside Joke” for developers and data, and Super Users. To see the winning designs, check out our Community News. Thank you to all who participated in this challenge; it was great to see so much involvement!

Fabric Influencers Spotlight

Check out our latest initiative, the Fabric Influencers Spotlight. Each month, we’ll be highlighting some of the great blogs, videos, presentations, and other contributions submitted by members of the Microsoft MVP & Fabric Super User communities that cover the Fabric platform, Data Engineering & Data Science in Fabric, Data Warehousing, Power BI, Real-Time Intelligence, Data Integration, Fabric Administration & Governance, Databases, and Learning.

 
