Integrating On-Premises Data into Microsoft Fabric Using Data Pipelines in Data Factory

We are thrilled to announce the public preview of on-premises connectivity for Data pipelines in Microsoft Fabric.

Using the on-premises data gateway, customers can connect to on-premises data sources from dataflows and data pipelines in Data Factory in Microsoft Fabric. This enhancement significantly broadens the scope of Fabric's data integration capabilities: by using an on-premises data gateway, organizations can keep databases and other data sources on their on-premises networks while securely integrating them with Microsoft Fabric in the cloud.

Thank you for all the product feedback from customers and the Microsoft Fabric community, who have worked closely with us to deliver this new capability!

Let’s help you get started!

Create an on-premises data gateway

  1. An on-premises data gateway is software that you install on a machine within your local network; it acts as a secure bridge between your on-premises data sources and Microsoft Fabric in the cloud. For detailed instructions on how to download and install it, refer to Install an on-premises data gateway.
  2. Sign in with your user account to access the on-premises data gateway; once you are signed in, the gateway is ready to use (a quick way to verify the registration programmatically is sketched after this list).
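
If you'd like to confirm that the newly installed gateway is registered to your tenant without opening the portal, the Power BI REST API lists the gateways you administer, which include on-premises data gateways. The following is a minimal sketch in Python using the requests library; the access token is a placeholder and is assumed to have been acquired separately (for example, via MSAL) with the Power BI service scope.

```python
import requests

# Placeholder: an Azure AD access token with the Power BI service scope,
# acquired separately (e.g. via MSAL). Not a real credential.
ACCESS_TOKEN = "<your-access-token>"

# The Power BI REST API returns the gateways the caller administers,
# including on-premises data gateways registered for the tenant.
resp = requests.get(
    "https://api.powerbi.com/v1.0/myorg/gateways",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for gw in resp.json().get("value", []):
    # Each entry carries the gateway id and display name, among other metadata.
    print(gw["id"], gw["name"])
```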

Create a connection for your on-premises data source

  1. Navigate to the admin portal and select the settings button (the gear icon) at the top right of the page, then choose Manage connections and gateways from the dropdown menu.

  2. In the New connection dialog that appears, select On-premises, and then provide your gateway cluster along with the associated resource type and the relevant connection information. A way to confirm the connection programmatically is sketched after this list.
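
Once the connection is saved, you can optionally check which data source connections are bound to your gateway. The sketch below is again a minimal Python example against the Power BI REST API; it assumes the same kind of access token and a gateway id taken from the previous listing, and whether a given Fabric connection is surfaced through this endpoint depends on how it was created, so treat it as a convenience check rather than a definitive inventory.

```python
import requests

# Placeholders: an Azure AD access token with the Power BI service scope,
# and the gateway id returned by the earlier gateway listing.
ACCESS_TOKEN = "<your-access-token>"
GATEWAY_ID = "<your-gateway-id>"

# Lists the data source connections configured on that gateway, so you can
# confirm the on-premises connection created in the admin portal is present.
resp = requests.get(
    f"https://api.powerbi.com/v1.0/myorg/gateways/{GATEWAY_ID}/datasources",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

for ds in resp.json().get("value", []):
    # datasourceType reflects the resource type you chose (SQL, Oracle, File, ...);
    # connectionDetails holds the server/database information you supplied.
    print(ds["id"], ds["datasourceType"], ds["connectionDetails"])
```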

Using on-premises data in a pipeline

  1. Go to your workspace and create a Data Pipeline.

  2. Add a new source to the pipeline's copy activity and select the connection established in the previous step.

  3. Select a destination for the data coming from your on-premises source.

  4. Run the pipeline (a way to trigger the run through the REST API is sketched below).

You have now created a pipeline to load data from an on-premises data source into a cloud destination.
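
If you prefer to trigger the run outside the pipeline editor, the Fabric REST API's job scheduler can start an item's job on demand. The sketch below is a hedged Python example: the token, workspace id, and pipeline item id are placeholders, and the jobType value of Pipeline is an assumption you should verify against the current Fabric REST API documentation.

```python
import requests

# Placeholders: an access token for the Fabric REST API, the workspace id,
# and the item id of the pipeline created above. The jobType of "Pipeline"
# is an assumption; check the Fabric REST API docs for the exact value.
ACCESS_TOKEN = "<your-access-token>"
WORKSPACE_ID = "<your-workspace-id>"
PIPELINE_ID = "<your-pipeline-item-id>"

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{PIPELINE_ID}/jobs/instances?jobType=Pipeline",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# The request is accepted asynchronously; the Location header points at the
# job instance you can poll to track the run's status.
print(resp.status_code, resp.headers.get("Location"))
```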

Have any questions or feedback? Leave a comment below!
