Microsoft Fabric Updates Blog

Easily Move Your Data Across Workspaces Using Modern Get Data of Fabric Data Pipeline

We are excited to share that the new modern get data experience in Data pipeline now supports copying to a Lakehouse and Data Warehouse across different workspaces, with an intuitive experience.

When you are building a medallion architecture, you can easily leverage Data pipeline to copy your data into a Bronze Lakehouse or Warehouse in a different workspace. This feature was developed in response to valuable customer feedback, and we’re eager to hear your thoughts on it.

Let’s open the Copy Assistant inside a Data Pipeline to get started.

How to start the Pipeline Copy Assistant

When you choose your source or destination, the “OneLake data hub” tab offers a user-friendly way to locate your data across various workspaces by letting you search by workspace name and filter on connection types. This screenshot shows how easily users can find another workspace by searching keywords and filtering on connection types to continue.
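The same cross-workspace discovery can also be done programmatically against the Fabric REST API, which exposes a list-Lakehouses endpoint per workspace. Below is a minimal sketch, not an official sample: the workspace ID and Microsoft Entra bearer token are placeholders you would supply yourself, and `filter_by_keyword` is a hypothetical helper that mimics the OneLake data hub keyword search on the client side.

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"


def lakehouses_url(workspace_id: str) -> str:
    # List Lakehouses endpoint for a given workspace (Fabric REST API)
    return f"{FABRIC_API}/workspaces/{workspace_id}/lakehouses"


def filter_by_keyword(items: list, keyword: str) -> list:
    # Hypothetical helper: case-insensitive match on displayName,
    # mimicking the OneLake data hub search box
    kw = keyword.lower()
    return [i for i in items if kw in i.get("displayName", "").lower()]


def list_lakehouses(workspace_id: str, token: str) -> list:
    # token: a Microsoft Entra access token with Fabric API scope (placeholder)
    req = urllib.request.Request(
        lakehouses_url(workspace_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("value", [])
```

This only illustrates the shape of the call; in the pipeline UI the same lookup happens for you through the “OneLake data hub” tab.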

When choosing your destination, you can also easily create new Fabric artifacts in other workspaces by choosing a different workspace name under the “Workspace” dropdown.
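Creating a destination Lakehouse in another workspace can likewise be sketched against the Fabric REST API’s create-Lakehouse endpoint. Again this is a hedged illustration, not the pipeline’s internal mechanism: the workspace ID, display name, and token are placeholders, and `create_lakehouse_request` is a hypothetical helper that just builds the POST request.

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"


def create_lakehouse_request(workspace_id: str, display_name: str):
    # Hypothetical helper: build the POST for the create-Lakehouse
    # endpoint in the chosen (possibly different) workspace
    body = json.dumps({"displayName": display_name}).encode("utf-8")
    return urllib.request.Request(
        f"{FABRIC_API}/workspaces/{workspace_id}/lakehouses",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )


def create_lakehouse(workspace_id: str, display_name: str, token: str) -> dict:
    # token: a Microsoft Entra access token with Fabric API scope (placeholder)
    req = create_lakehouse_request(workspace_id, display_name)
    req.add_header("Authorization", f"Bearer {token}")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

In the Copy Assistant, picking a different workspace in the “Workspace” dropdown and choosing “New Lakehouse” achieves the same result without any code.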

Create a Lakehouse in another workspace

Have any questions or feedback? Leave a comment below!
