Microsoft Fabric Updates Blog

Fabric workloads are now generally available!

Microsoft Fabric is now generally available! Fabric brings together the best of Microsoft Power BI, Azure Synapse Analytics, and Azure Data Factory into a single software as a service (SaaS) platform. Fabric provides multiple workloads purpose-built for specific personas and specific tasks. Keep reading to learn about all of Fabric’s workloads!

Synapse

Data Warehouse

We are thrilled to announce the General Availability of Synapse Data Warehouse in Microsoft Fabric! Synapse Data Warehouse is the next generation of data warehousing in Microsoft Fabric and natively supports an open data format. The data warehouse unifies capabilities from Synapse dedicated and serverless SQL pools and is modernized with key improvements that were outlined in our public preview blog. These capabilities help enterprises achieve analytics scenarios that were never possible before. Since announcing public preview several months ago, we have heard your ideas and feedback and added many new capabilities, including the ability to share warehouses, clone tables as of the current point in time, compute statistics automatically for better query performance, and support SQL projects and deployment pipelines, and we have supplemented SQL security constructs with Column-Level Security and Row-Level Security. For the full list of capabilities, please refer to the Fabric Updates Blog or the Fabric Data Warehousing documentation.

Today, in addition to announcing GA, we are excited to bring additional capabilities that will help you develop, deploy, secure, and manage your data warehousing solutions in Fabric while meeting your scale, data security, and governance needs:

  • We have made several Warehouse user experience improvements, such as easy cloning of tables, saving results as a table, and saving a query as a view, enabling more citizen developers to be productive without writing code.
  • For the professional developer, we are adding experiences to develop and deploy data warehouse applications via SqlPackage and REST APIs. SqlPackage is a command-line utility that automates several database development tasks and can be incorporated into CI/CD pipelines. Warehouse REST APIs enable customers to automate and scale their solutions easily and to build custom applications that interact with Fabric Warehouse. In the upcoming weeks, you can expect Warehouse to support Git as well.
  • Solutions can now be monitored using Query Insights. Whether you are looking for query history or trying to understand your long-running or frequently executed queries, performance tracking and monitoring are at your fingertips.
  • Secure your applications with SQL Dynamic Data Masking (DDM). DDM allows you to define masking rules for specific columns, ensuring that sensitive information is viewed only by authorized users (see the first sketch after this list).
  • Building on our vision of no-knobs performance, we have made quite a few improvements to query performance so you don’t have to worry about it. You no longer need to think about indexing or distributions, as we manage resources elastically to give you the best possible performance. We automatically compact the Parquet files of Warehouse-managed tables, a new parser speeds up CSV file ingestion, metadata is now cached in addition to data, compute resources are now assigned in milliseconds, and multi-TB result sets are streamed to the client, so you don’t have to worry about query results of any size!
  • T-SQL language support now includes SP_RENAME, TRIM and GENERATE_SERIES.
  • We have improved our data recovery capabilities with support for cloning tables as of a previous point in time (see the second sketch after this list). In addition, within the next month you can expect the ability to restore your entire Warehouse to a previous point in time via system-generated recovery points or your own restore points.
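
As a minimal sketch of DDM in practice, the Python snippet below applies masking rules to a Warehouse table over its SQL endpoint using pyodbc. The server, database, table, and principal names are placeholders, and the connection pattern (ODBC Driver 18 with Microsoft Entra interactive authentication) is one common way to reach a Fabric Warehouse, not the only one.

```python
# Sketch: apply Dynamic Data Masking rules to a Fabric Warehouse table.
# Placeholders: server, database, table, and user names.
import pyodbc  # pip install pyodbc; requires ODBC Driver 18 for SQL Server

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<endpoint>.datawarehouse.fabric.microsoft.com;"  # placeholder
    "Database=<warehouse>;"                                  # placeholder
    "Authentication=ActiveDirectoryInteractive;",
    autocommit=True,
)
cur = conn.cursor()

# Unauthorized users see emails masked in the 'aXX@XXXX.com' style.
cur.execute("ALTER TABLE dbo.Customers ALTER COLUMN Email "
            "ADD MASKED WITH (FUNCTION = 'email()');")

# Show only the last four digits of the phone number.
cur.execute("ALTER TABLE dbo.Customers ALTER COLUMN Phone "
            "ADD MASKED WITH (FUNCTION = 'partial(0, \"XXX-XXX-\", 4)');")

# Authorized principals can be granted the right to see unmasked data.
cur.execute("GRANT UNMASK TO [analyst@contoso.com];")
```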
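
And, continuing with the same cursor, here is a quick tour of the new T-SQL surface plus a point-in-time table clone; the table names and timestamp are hypothetical.

```python
# Continues the pyodbc cursor from the previous sketch.
cur.execute("EXEC sp_rename 'dbo.StagingOrders', 'Orders';")  # rename a table
cur.execute("SELECT TRIM('   fabric   ');")                   # returns 'fabric'
cur.execute("SELECT value FROM GENERATE_SERIES(1, 7);")       # one row per value 1..7

# Zero-copy clone of a table as of an earlier (UTC) point in time.
cur.execute("CREATE TABLE dbo.Orders_EOD AS CLONE OF dbo.Orders "
            "AT '2023-11-15T00:00:00.000';")
conn.close()
```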

For the full list of updates, please refer to the November Fabric blog.

Lastly, as Arun announced in his blog post, we are thrilled to announce that Mirroring in Fabric is coming soon. Mirroring provides a modern way of accessing and ingesting data continuously and seamlessly from any database or data warehouse into the Data Warehousing experience in Fabric. Any database can be accessed and managed centrally from within Fabric without having to switch database clients. There is no complex setup or ETL. By just providing connection details, your database is instantly available in Fabric. Data is replicated reliably in real time and lands as Delta tables for consumption in any Fabric workload. Data Science experiences and Power BI Direct Lake mode open the possibility of limitless analytics scenarios. For the full announcement and details on how to sign up as an early adopter, read the full blog post, Introducing Mirroring in Microsoft Fabric.

Data Engineering & Data Science

We are thrilled to announce the GA of the Data Engineering (DE) & Data Science (DS) workloads in Microsoft Fabric! Over the last few months, we have added many exciting new capabilities to the DE & DS experience, including lakehouse sharing, high concurrency mode, notebook resource folders, and semantic data science.

Today’s GA announcement covers all core DE & DS items, including the lakehouse, notebooks, Spark job definitions, models, and experiments. Data engineers will also be able to work with lakehouses, notebooks, and Spark job definitions using the Synapse VS Code desktop extension on Fabric.

We are also excited to announce the GA of the Spark engine. This includes starter pools as well as custom pools for running Spark applications. Users can also leverage the brand-new Fabric runtime, which includes Spark 3.4, Delta 2.4, Java 11, and Python 3.10. Finally, users can monitor all their Spark applications, including job details, logs, and related items, in the Fabric monitoring hub.

In addition to the GA, we are announcing many exciting public preview updates to the features. Data engineers can now enjoy CI/CD integration (Git + Deployment pipelines) for notebooks and the lakehouse. They can also start working with these items programmatically thanks to the new REST API support. We are also excited to announce the public preview of the Environment item, enabling customers to configure & manage all their settings and libraries in a centralized place that can be attached to notebooks and Spark jobs. Data engineers will also be able to work with their lakehouses, notebooks and Spark job definitions using the new fully remote Synapse VS Code extension.
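
To give a feel for the new REST API support, here is a hypothetical Python sketch that lists the lakehouses in a workspace and creates a new one. The endpoint paths and payload shape are assumptions modeled on Fabric REST conventions, and token acquisition is omitted.

```python
# Hypothetical sketch of the item REST APIs; ids and token are placeholders.
import requests

BASE = "https://api.fabric.microsoft.com/v1"
WORKSPACE_ID = "<workspace-guid>"                           # placeholder
HEADERS = {"Authorization": "Bearer <entra-access-token>"}  # placeholder

# List the lakehouses that already exist in the workspace.
resp = requests.get(f"{BASE}/workspaces/{WORKSPACE_ID}/lakehouses",
                    headers=HEADERS)
for item in resp.json().get("value", []):
    print(item["displayName"], item["id"])

# Create a new lakehouse programmatically.
requests.post(f"{BASE}/workspaces/{WORKSPACE_ID}/lakehouses",
              headers=HEADERS,
              json={"displayName": "sales_lakehouse"}).raise_for_status()
```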

With the Fabric GA, data scientists are also getting many exciting new updates! You can now leverage an MLflow widget directly inside the notebook, letting you compare your runs inline without leaving the notebook experience. We are also excited to announce the GA of SynapseML 1.0, our open-source ML library for Spark that simplifies the application of machine learning at scale. Noteworthy updates in this release of SynapseML include vector search integration and APIs for easy data transformation using natural language and LLMs. We also have various new public preview additions, including a built-in AI endpoint for programming with Azure OpenAI models, out-of-the-box integration with prebuilt AI models from AI Services, and Data Wrangler support for Spark DataFrames.
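
Since Fabric notebooks come with MLflow tracking preconfigured, logging a run that the new inline widget can then visualize takes only a few lines; the experiment name, parameter, and metric below are illustrative.

```python
# Minimal sketch: track a run against a Fabric experiment item from a
# notebook. MLflow is preconfigured in Fabric notebooks; names are examples.
import mlflow

mlflow.set_experiment("churn-experiment")   # creates the item if missing

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("auc", 0.87)

# The MLflow inline widget can now compare this run with others
# without leaving the notebook.
```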

Last but not least, we are improving the productivity of data engineers and data scientists by introducing a public preview of Copilot in Fabric notebooks. You can open the Copilot pane and interact with your data and code using natural language, or leverage magic commands directly inside notebook cells. All the Copilot experiences are Fabric-aware, meaning Copilot has context about your lakehouse data and more.

For the full list of Data Engineering & Data Science updates, please refer to the November Fabric blog.

Real-Time Analytics

Fabric Real-Time Analytics is a robust platform tailored to deliver real-time data insights and observability analytics for a wide range of data types, including time-based observability data such as logs, events, and telemetry. It’s the true streaming experience in Fabric! The long-awaited moment is here: Real-Time Analytics in Microsoft Fabric has reached general availability (GA), unveiling a wide range of transformative features and capabilities to empower data-driven professionals across diverse domains.

Key Features of Real-Time Analytics

Real-Time Analytics offers countless features, all aimed at making your data analysis more efficient and effective. Here are some key features:

  1. Rapid Deployment: Create a database, ingest data, run queries, and generate a Power BI report all in less than 5 minutes. Real-Time Analytics prioritizes efficiency and speed, enabling you to get to the heart of data analysis without delay.
  2. Low-Latency Data Streaming and Query: Streaming is on by default, providing high-performance, low-latency, high-freshness data analysis. Go from data to complex business insights in mere seconds.
  3. Query Versatility: Whether you’re a fan of Kusto Query Language (KQL) or prefer traditional SQL, Real-Time Analytics has you covered. The service allows you to generate quick KQL or SQL queries, ensuring you can work in your preferred language and get results within seconds (see the sketch after this list).
  4. OneLake Integration: Your data doesn’t live in isolation. Real-Time Analytics seamlessly integrates with OneLake and Azure Storage, making it easier to access and analyze your data from multiple sources.
  5. Autoscaling for Peak Efficiency: Say goodbye to the hassle of infrastructure management. Real-Time Analytics offers built-in autoscaling based on workload factors like hot cache, memory, CPU usage, and ingestion. This feature ensures the seamless operation of your analytics solution with minimal cost, allowing you to concentrate on your data analysis.
  6. Sample Data Gallery: Kickstart your data analytics journey with a rich selection of sample data from diverse sources and use cases. Experiment and learn how to run sample queries without the need to create your own dataset.
  7. Graph Query Semantics: Visualize a scenario where complex data relationships become clear as day. Real-Time Analytics introduces Graph Query Semantics, enabling users to perform graph analytics with a Cypher-like query language within Kusto Query Language (KQL). Dive deep into interconnected data structures and extract invaluable insights, transforming the way you interpret complex datasets.
  8. In-Place Data Sharing: Real-Time Analytics supports in-place, real-time data sharing across tenants, fostering seamless teamwork and knowledge exchange.
  9. Inline Python in KQL Database: Users can now enable the Python plugin within their KQL database and query every Delta table in OneLake or Azure Storage, making data analysis more versatile and accessible for professionals of varying skill levels.
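
To make the query experience concrete, here is a hedged Python sketch that runs a KQL query using the standard azure-kusto-data SDK; the cluster URI, database, and table names are placeholders rather than Fabric-specific guidance.

```python
# Sketch: query a KQL database from Python (pip install azure-kusto-data).
# The cluster URI, database, and table names are placeholders.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://<your-cluster>.kusto.fabric.microsoft.com"  # placeholder
kcsb = KustoConnectionStringBuilder.with_interactive_login(cluster)
client = KustoClient(kcsb)

# Count recent telemetry events per device over the last five minutes.
query = """
Telemetry
| where ingestion_time() > ago(5m)
| summarize events = count() by DeviceId
| top 10 by events
"""
for row in client.execute("<your-kql-database>", query).primary_results[0]:
    print(row["DeviceId"], row["events"])
```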

Industry Scenarios: Real-Time Analytics in Action

Real-Time Analytics is revolutionizing industries. In e-commerce, it optimizes deliveries worldwide and enhances the shopping experience. For marketing, it offers real-time insights into campaign impact. In education, it provides instant access to student data for tailored strategies. Automotive benefits from supply chain optimization and vehicle performance enhancement. Energy sees improved resource management and sustainability. The healthcare sector gains real-time patient insights. These scenarios showcase the potential of Real-Time Analytics to revolutionize data-driven decision-making across industries.

The Future of Real-Time Analytics

Our journey has just begun. Real-Time Analytics is committed to ongoing improvement and innovation. Expect exciting developments, including Natural Language to KQL, Real-Time Dashboards, and Copilot integration.

Real-Time Analytics in Microsoft Fabric is your ticket to unlocking the potential of real-time data insights. Whether you’re charting new data horizons, seeking to optimize your analytics solutions, or simply looking for a more efficient and user-friendly data analysis experience, this service is your trusted partner. Stay ahead of the data game and embark on your journey with Real-Time Analytics today. For more information on Real-Time Analytics, see Real-Time Analytics – Microsoft Fabric or Get started with Real-Time Analytics in Microsoft Fabric – Training.

Sign up for the free trial. For more information, read the Fabric trial docs.

Event Streams

With Microsoft Fabric event streams, you can seamlessly ingest, capture, transform, and route real-time events to various destinations in Microsoft Fabric with a no-code experience. Event streams let customers ingest real-time event data from external sources into Fabric data stores and transform events into the native formats required by target destinations. For example, an event stream can convert events into Delta Lake format for the Lakehouse, or transform and filter events before routing them to a KQL table.

  1. Centralized place for real-time events: It provides the capability to capture, modify, and direct your streaming data in real time using a fully managed and scalable infrastructure.
  2. Multiple source connectors: It enables you to ingest real-time streaming data from four source types today: Azure Event Hubs, Azure IoT Hub, sample data, or custom applications using the Event Hubs SDK, AMQP, or Kafka APIs (such as Kafka clients, RabbitMQ, Logic Apps, Functions, etc.).
  3. Multiple destinations: It enables you to transform, capture, and route real-time streaming data to four destination types today: KQL database, Lakehouse, Reflex, or custom applications (such as Kafka clients, RabbitMQ, Logic Apps, Functions, etc.).
  4. No-code experience: It provides an intuitive, easy-to-use drag-and-drop experience with end-to-end data visibility and monitoring.

As part of GA, we are excited to share the following product updates:

  • IoT Hub as a source – Event Streams now supports Azure IoT Hub as a source. Using this source, you can seamlessly ingest data from Azure IoT Hub into Fabric. For more details, follow the doc here: Build a real-time dashboard by streaming events from Azure IoT Hub to Microsoft Fabric – Microsoft Fabric | Microsoft Learn
  • Stream transformation before sending to KQL DB as a destination – Customers can now transform incoming events in Event Streams before sending them to a KQL database, which allows you to route, validate, filter, transform, or aggregate the events. To transform and route real-time events to a KQL database, simply add a KQL Database destination in your event stream. For a detailed guide on how to create this destination, follow the doc here: Add and manage Event Streams destinations – Microsoft Fabric | Microsoft Learn
  • Data Activator/Reflex as a destination – Event Streams now supports Data Activator as a destination. To create a Data Activator alert on your event stream, configure a “Reflex” destination. Triggers can then be set up to fire when the data hits certain thresholds or matches other patterns.
  • AMQP & Kafka format connection string in custom source and destination – Event Streams now offers support for AMQP- and Kafka-format connection strings in both Custom App sources and destinations, allowing for seamless integration with external message brokers like RabbitMQ with Shovel and other Kafka applications. Once you’ve added a Custom App to your event stream, you can choose from the Event Hub, AMQP, and Kafka protocols for streaming data into or out of Event Streams. The sample Java code provided shows you how to send or receive events using these different protocols; you can copy and paste the code into your application to start streaming data into Event Streams (a Python Kafka sketch also follows this section). For more details on how to add a custom app to your Event Streams, follow the doc here: Add and manage Event Streams sources – Microsoft Fabric | Microsoft Learn
  • Lakehouse destination improvements
    1. Two ingestion modes are now available in the Event Streams Lakehouse destination. You can select one of these modes to optimize how Event Streams writes to Lakehouse based on your scenario.
      1. Rows per file – You can now specify the minimum number of rows that Lakehouse ingests in a single file. The smaller the minimum number of rows, the more files Lakehouse will create during ingestion.
      2. Duration – You can now specify the maximum duration that Lakehouse takes to ingest a single file. The longer the duration, the more rows will be ingested in a file.
    2. Table optimization shortcut is now available inside the Event Streams Lakehouse destination – Customers have asked about compacting the numerous small streaming files generated in a Lakehouse table, and we have a solution: a shortcut that launches a Spark job within a notebook to consolidate these small streaming files in the target Lakehouse table.

For more details, follow the doc here: Add and manage Event Streams destinations – Microsoft Fabric | Microsoft Learn

  • ‘Get data from Event Streams’ in multiple Fabric items – Customers can now get event data from Event Streams inside multiple Fabric items, including KQL Database and Lakehouse. This enables you to seamlessly pull event data from your event streams into various Fabric items, offering both flexibility and convenience.
  • Enhanced user experience in Event Processor – The Event Processor within Event Streams is a powerful no-code editor that enables you to process and manage your real-time event data efficiently. You can easily aggregate and filter event data using temporal functions before it reaches your Lakehouse or KQL database. The recent UX improvements introduce a full-screen mode, and inserting and deleting event data operations is now more intuitive, making it easier to drag, drop, and connect your data transformations.
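
As promised above, here is a hedged Python sketch of the Kafka-protocol path into a Custom App source. The bootstrap server, topic (event hub name), and connection string are placeholders you would copy from the Custom App details page, and kafka-python is just one of several clients that speak this protocol.

```python
# Sketch: publish JSON events to an Eventstream Custom App source over its
# Kafka-compatible endpoint (pip install kafka-python). Values in angle
# brackets are placeholders from the Custom App details page.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="<namespace>.servicebus.windows.net:9093",  # placeholder
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="$ConnectionString",    # literal, per the Kafka-on-Event Hubs convention
    sasl_plain_password="<connection-string>",  # placeholder: copied from the portal
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("<eventhub-name>", {"deviceId": "sensor-42", "temperature": 21.7})
producer.flush()
```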

Data Factory

Data Factory in Microsoft Fabric brings Power Query and Azure Data Factory together into a modern data integration experience, with capabilities to ingest and transform data and orchestrate data workflows. This empowers data and business professionals with the data integration capabilities they need to do their best work.

Today, we are thrilled to announce the GA of Data Factory in Microsoft Fabric! We are grateful for all the customer and community feedback and Fabric Ideas we have received, and we have been continuously improving Data Factory to address it. As part of GA, we are excited to share the following product updates:

  • Modern Get Data experience with OneLake data hub – Being able to discover data sources and get your data where you need it is important, and it is a key first step before you start working on your data engineering or analytics tasks. With Data Factory in Fabric, you can now use the Modern Get Data experience to access 170+ connectors. As you work with Data Factory and build your dataflows and data pipelines, you will see this Modern Get Data experience. Learn more about the Modern Get Data experience here.
  • Get Data and Transformation with Dataflows – Over the past months, we have been continuously working on performance improvements to dataflows in Fabric. Powered by Fabric compute, dataflows handle simple and complex transformations on data of all sizes. In addition, we have been continuously delivering performance improvements to many of the data connectors.
  • Data Orchestration with data pipeline activities – We are excited to announce that we have brought all the activities that you love in Azure Data Factory to Data Factory in Microsoft Fabric, and more. New Teams and Outlook notification activities let you be notified of the status of data pipelines whenever pipeline runs complete. See here for a list of activities that are available in Data Factory in Fabric.
  • Enterprise-ready Data Movement – From petabyte-scale data to small data, Data Factory provides a serverless and intelligent data movement platform that enables you to move data between diverse data sources and data destinations reliably. With support for 170+ connectors, Data Factory in Fabric enables you to move data across clouds, from on-premises data sources, and within virtual networks (VNets). Intelligent throughput optimization enables the data movement platform to automatically detect the size of compute needed for efficient data movement.
  • Improvement in monitoring experience across dataflows and data pipelines – We have worked relentlessly to make the monitoring information for dataflows and data pipelines useful and insightful. We have improved pipeline output monitoring (and how run status is represented), and dataflow refresh history now provides a workspace-level view of error details.

In addition to the many product capabilities we announced for GA, we are excited to share the following capabilities for public preview:

  • Connecting to Microsoft 365 using Microsoft Graph Data Connect (MGDC) – Using the Copy activity in data pipelines, you can now connect to your organization’s Microsoft 365 data and bring it into Microsoft Fabric for data analytics.
  • OneLake/Lakehouse connector in Azure Data Factory – Azure Data Factory customers can now integrate with Microsoft Fabric and bring data into the Fabric OneLake.
  • VNet data gateway (public preview) – The VNet data gateway helps customers connect from Fabric Dataflows Gen2 to their Azure data services within a virtual network (VNet) without the need for an enterprise data gateway. The VNet data gateway securely communicates with the data source, executes queries, and transmits results back to Fabric.

As we listen to customers on how they work and build their dataflows and data pipelines, we learn how they leverage documentation on Microsoft Learn and community forums to get started. We are grateful to the many expert developers who share their data integration best practices with the community. We asked ourselves: what if we could have an AI assistant that helps with the creation of dataflows and data pipelines and helps all Data Factory developers complete their data integration tasks? To achieve this, we are excited to announce the public preview of Copilot for Data Factory in Microsoft Fabric.

For the full list of Data Factory updates, please refer to the November Fabric blog.

Fabric Platform

Over the past few years, organizations have seen a massive increase in their digital footprint, leading to data fragmentation, growth, and blind spots across their data estate. We are thrilled to announce the GA of Fabric with Unified Security, Administration and Governance, a Unified Compute Model, OneLake, and a Unified SaaS Experience. Thank you for all your feedback and ideas!

Unified Security, Administration and Governance:

When we launched Microsoft Fabric, we announced an array of admin, governance, and security capabilities in Fabric to help provide visibility across your tenant, insights into usage and adoption, and tools to secure and govern your data end-to-end. We are now announcing an expansion of these governance and security capabilities through the general availability of the following:

  • Centralized Administration: The Fabric admin portal equips administrators with a standard, centralized medium to manage, view, and apply configurations for their tenant and capacities. This admin portal, which was previously specific to Power BI, has been extended to Fabric and is now generally available. As a tenant admin, you can now set configurations for your entire tenant so that individual data engineers and data scientists need not worry about them. In addition, as a capacity admin, you can manage all the capacities you administer, including the Fabric trial capacity.
  • Enabling Microsoft Fabric in your tenant: We introduced a switch to control the availability of Fabric preview workloads for users within your tenant using the Fabric tenant setting in the admin portal. We will continue to support this configuration now that Fabric is GA. As a tenant admin, you can choose the default for the entire tenant, including enabling it for specific users and security groups. Capacity admins, as owners of the capacity, can choose to enable or disable this configuration for their capacities, likewise including specific users and security groups.
  • Enterprise promises: As you know, securing your data is a key priority for us. We introduced data residency, where data never leaves the region boundary, and end-to-end auditability for all user and system operations. These capabilities are now GA.

In addition, we are announcing disaster recovery for Fabric. With this feature, your data in OneLake will be replicated across regions, ensuring availability of the data in case of regional outages. You will be able to choose which capacities need to be replicated via capacity-level configurations. BCDR for Power BI will be available by default, as it is today, and isn’t impacted by this disaster recovery capability.

  • Lineage and impact analysis capabilities are essential for projects that span multiple items and workspaces. With the lineage view you can see and understand how data moves from source to destination and perform troubleshooting and impact analysis tasks. Learn more about lineage and impact analysis here.
  • Endorsement is a way to make it easier for users to find the trustworthy and authoritative items they need. This becomes crucial for organizations that deal with large amounts of data being shared between users and departments. Setting an item as endorsed (certified or promoted) can be done on all Fabric items.
  • Metadata scanner APIs facilitate governance over your organization’s data by making it possible to catalog all the metadata of your organization’s Fabric item inventory. They accomplish this using a set of admin REST APIs that are collectively known as the scanner APIs (sketched below). Learn more about scanner APIs here.
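
For a sense of the scanner flow, here is a minimal Python sketch: request a scan, poll its status, then fetch the result. The endpoints shown are the documented Power BI admin scanner APIs that Fabric extends; the workspace GUID and token are placeholders, and error handling is omitted for brevity.

```python
# Sketch of the scanner (admin) API flow; placeholders in angle brackets.
import time
import requests

ADMIN = "https://api.powerbi.com/v1.0/myorg/admin"
HEADERS = {"Authorization": "Bearer <entra-access-token>"}  # placeholder

# 1) Kick off a metadata scan of one or more workspaces (up to 100 per call).
scan = requests.post(
    f"{ADMIN}/workspaces/getInfo?lineage=True&datasourceDetails=True",
    headers=HEADERS,
    json={"workspaces": ["<workspace-guid>"]},
).json()

# 2) Poll until the scan has finished (a real client would also handle "Failed").
while requests.get(f"{ADMIN}/workspaces/scanStatus/{scan['id']}",
                   headers=HEADERS).json()["status"] != "Succeeded":
    time.sleep(5)

# 3) Download the catalog of item metadata for the scanned workspaces.
result = requests.get(f"{ADMIN}/workspaces/scanResult/{scan['id']}",
                      headers=HEADERS).json()
print(len(result["workspaces"]), "workspaces scanned")
```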

Security, Compliance and Governance with Microsoft Purview

Fabric has integrated Microsoft Purview solutions to provide you with the enterprise-scale security, compliance, and governance capabilities that enterprises use today for Office, Microsoft 365, Azure, and multi-cloud platforms. By bringing Microsoft Purview into Fabric, central teams can easily manage the Fabric data estate, and users can leverage a native, user-friendly experience to maintain security and compliance without impacting their productivity.

  • Microsoft Purview Information Protection sensitivity labels – These bring the well-known concept of sensitivity from Office, where you can see whether a document or email is confidential and may not be authorized to export sensitive data. The very same Information Protection sensitivity labels are integrated into Fabric.
    • Manual sensitivity labeling – Data owners can apply a sensitivity label to a lakehouse or any other Fabric item.
    • Label inheritance – The label applied to a Fabric item flows with the data to all downstream items, all the way to the business user viewing the data in Power BI reports. When data is exported to Office files, the label and its protection are applied automatically.
    • Sensitivity label policies – Purview admins can configure a default label to be set automatically when users create new items, and can also require users to set a label on the items they create.
  • Purview hub (public preview) – For easy access to all these Purview capabilities, we’ve created a centralized page called the Purview hub, currently in public preview, which serves as a gateway to Purview and contains insights about item inventory, sensitive data, and endorsement. The Purview hub is available both for Fabric admins and Fabric data owners.
  • Purview audit – Finally, Fabric is also integrated with Microsoft Purview Audit, which provides Fabric and compliance admins with end-to-end auditability of Fabric activities. All user and system operations are captured in the audit logs and made available in the Microsoft Purview compliance portal.
  • Integration with Purview Data Catalog – We are thrilled to announce that the Microsoft Purview Data Map is automatically provisioned and attached to every Fabric instance by default with no set-up required. You can browse and search your Fabric and other assets across your data estate in the Microsoft Purview Data Catalog.

OneLake

We’ve heard your feedback and are introducing multi-select for shortcuts.

  • Shortcuts – Multi-Select
    Microsoft OneLake provides a single unified storage location for all your data analytics needs. Whether your data is stored directly in OneLake or through other storage accounts, all your data is accessible through OneLake. Microsoft OneLake makes this possible through a virtualization layer called Shortcuts. Shortcuts in OneLake allow you to reference different storage locations rather than copy the data. Data products can now be virtualized, so you have a single source of truth for all your data. Eliminating copies of data reduces overall latency in reported data and ensures that you always know the source of your data.

    Today, we are excited to announce that creating multiple OneLake shortcuts just got easier. Rather than creating shortcuts one at a time, you can now browse to your desired location and select multiple targets at once. All your selected targets then get created as new shortcuts in a single operation (for a programmatic analogue, see the sketch after this list).

    Click here to see how it works.

  • OneLake data hub is your one-stop solution to access and manage the OneLake. The OneLake data hub is available across more than 15 experiences in Fabric and makes it easy and efficient to reuse data across the various Fabric workloads. With OneLake data hub, you can discover data in multiple ways, such as searching, looking at recent, endorsed or favorite items, or browsing your items by their workspace-hierarchy using the OneLake data hub explorer pane. Once the data is discovered, you can either manage it or use it with the OneLake data hub to achieve your goals.
  • Learn more about OneLake data hub here.
  • Domains in OneLake – With your data in OneLake, you can use domains, subdomains (public preview), and workspaces to organize your data into a logical data mesh, allowing federated governance and optimizing for business needs. Domain admins can assign workspaces and customize their domain, control specific delegated settings for granular control, create subdomains (public preview), define contributors or defaults, and more. All of these allow business-optimized control and empower users to find their relevant business data using an intuitive, personalized data hub. Learn more about how you can optimize your data organization in OneLake here.
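
As referenced in the shortcuts section above, here is a hypothetical Python sketch that creates several shortcuts in a loop, the programmatic analogue of multi-select. The endpoint path and payload shape are assumptions rather than a documented contract, and all ids and paths are placeholders.

```python
# Hypothetical sketch of a OneLake shortcuts REST call; the endpoint shape
# is assumed, and every id and path below is a placeholder.
import requests

BASE = "https://api.fabric.microsoft.com/v1"
HEADERS = {"Authorization": "Bearer <entra-access-token>"}  # placeholder
ws, lakehouse = "<workspace-guid>", "<lakehouse-guid>"      # placeholders

# Each entry points at a table in another OneLake item we want to reference.
targets = [
    {"name": "sales_2022", "path": "Tables/sales_2022"},
    {"name": "sales_2023", "path": "Tables/sales_2023"},
]

for t in targets:
    requests.post(
        f"{BASE}/workspaces/{ws}/items/{lakehouse}/shortcuts",
        headers=HEADERS,
        json={
            "path": "Tables",          # where the shortcut appears locally
            "name": t["name"],
            "target": {"oneLake": {
                "workspaceId": "<source-workspace-guid>",
                "itemId": "<source-item-guid>",
                "path": t["path"],
            }},
        },
    ).raise_for_status()
```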

Introducing new capabilities for Fabric capacities: We’re excited to announce optimizations for our capacities so that you experience fewer disruptions and can focus on the task at hand.

  • Evolving the Capacity platform for longer-running workloads: We’re introducing a new optimization for long-running jobs. Historically, if a job’s reported usage exceeded capacity limits, the following jobs would be throttled. Now, throttling is not immediately applied to subsequent jobs; instead, any overage is automatically balanced against future capacity when the system has unutilized capacity. This ability to “borrow from the future” is in addition to smoothing, is seamless to customers, and is supported by a new analytics experience in Capacity Metrics.
  • Updated throttling policies with overage protection: Before the October 1st update, throttling occurred when smoothed usage exceeded 100% of the purchased capacity throughput. After the October 1st platform update, capacity throttling policies are based on the amount of future capacity consumption that resulted from smoothing policies. This offers increased overage protection when future use is less than 10 minutes, and richer queue management features to prevent excessive overload when usage exceeds an hour.
  • Click here to learn more about these and other updates in a recent detailed blog.
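
To illustrate the throttling arithmetic, here is a deliberately simplified Python sketch. Only the sub-10-minute overage protection and the one-hour queueing threshold come from the text above; the intermediate behavior is an assumption, and real Fabric accounting is more sophisticated.

```python
# Toy model of the updated throttling policy. The <10-minute protection and
# >60-minute queueing thresholds come from the blog text; the middle tier
# is an assumption for illustration only.
def throttle_decision(overage_cu_seconds: float, capacity_cu: float) -> str:
    # Express the overage as minutes of future capacity it will consume.
    future_minutes = overage_cu_seconds / (capacity_cu * 60)
    if future_minutes < 10:
        return "no throttling: overage burns down against idle capacity"
    elif future_minutes <= 60:
        return "requests may be delayed while the overage burns down (assumed)"
    return "queue management kicks in to prevent excessive overload"

# Example: a 64-CU capacity that overshot by 30,000 CU-seconds (~7.8 minutes).
print(throttle_decision(30_000, capacity_cu=64))  # -> no throttling
```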

Improving Fabric Workspace navigation:

We’ve heard feedback from many users that the item icons across Fabric are hard to distinguish, making it difficult to navigate a workspace and find the item you are looking for. We are releasing a new color system for item icons to address this issue. Click here to learn more about the thinking behind it.

Enriching Monitoring capabilities

The Monitoring hub enables users to monitor various Fabric activities, such as dataset refreshes, Spark job runs, and many others, from a central location. During the public preview, users have been telling us that they find it extremely valuable to see all activities in one place, but they also want to be able to customize the UI for the task at hand. We are releasing multiple features to address this feedback.

  • Column customization: With more items being added to Fabric, the main view of the Monitoring hub can no longer display all activity attributes at the same time. We are adding a column options control in the upper right corner of the screen so users can decide which columns to display in the view. Users can also rearrange the column display order according to their needs.
  • Connection between Recent runs and the Monitoring hub: Recent runs serve as a way for users to quickly find historical information about a specific item’s activity. Users can open the Recent runs panel from the context menu of an item on the workspace page, but due to space limits they cannot further filter or customize columns for the shown activities. To improve this, we are adding a ‘View in Monitoring Hub’ link in the Recent runs panel that lets users leverage the rich controls and capabilities of the Monitoring hub for further investigation.

Related blog posts

APIs for Managed Private Endpoint are now available

October 29, 2024, by Dandan Zhang

Managed private endpoints allow Fabric experiences to securely access data sources without exposing them to the public network or requiring complex network configurations. We announced General Availability for Managed Private Endpoint in Fabric in May of this year. Learn more here: Announcing General Availability of Fabric Private Links, Trusted Workspace Access, and Managed Private Endpoints. … Continue reading “APIs for Managed Private Endpoint are now available”

October 28, 2024, by Gali Reznick

The Data Activator team has rolled out usage reporting to help you better understand your capacity consumption and future charges. When you look at the Capacity Metrics App you’ll now see operations for the reflex items included. Our usage reporting is based on the following four meters: Rule uptime per hour: This is a flat … Continue reading “Usage reporting for Data Activator is now live”