Microsoft Fabric Updates Blog

Use Semantic Kernel with Lakehouse in Microsoft Fabric



Microsoft Fabric allows enterprises to bind different data sources through OneLake, so data engineers can call a unified API across different business scenarios to complete data analysis and data science. This article describes how data scientists can use Semantic Kernel with a Lakehouse in Microsoft Fabric.

At Microsoft Build 2023, Microsoft introduced the concept of the Copilot Stack and clarified the key steps and methods for building applications with LLMs. We can use Microsoft’s open-source Semantic Kernel to implement applications based on the Copilot Stack concept. Semantic Kernel maintains the connection between prompts and traditional code, so we can quickly build Copilot apps with it.

Enterprises often store unstructured documents in Azure Blob Storage. Through the Lakehouse in Microsoft Fabric, we can connect the documents on Azure Blob Storage that require QA with the related prompts. Microsoft Fabric’s Lakehouse is a data architecture platform for storing, managing, and analyzing structured and unstructured data in a single location. We store the documents and related prompts through the Lakehouse, then use Semantic Kernel to build a prototype Copilot application.

First, let’s link the documents in Azure Blob Storage with the Lakehouse.

PS: Please upload your prompts and documents to Azure Blob Storage before running this sample.

  1. Go to the Microsoft Fabric portal (https://app.fabric.microsoft.com/) and select ‘Data Engineering’

  2. Create a new Lakehouse and name it ‘SKQALakeHouse’

  3. In the Lakehouse portal, select ‘New Data Pipeline’


    Name it ‘SKAzureBlobPipeline’

  4. Copy the data from Azure Blob Storage to the Lakehouse

    Choose Azure Blob Storage as your data source


    Get your Azure Blob Storage account name from the Azure Portal


    Select ‘Create new connection’ and enter a URL of the form https://{Your blob storage account name}.blob.core.windows.net/{Your blob storage container}


    If the URL is correct, the connection is created successfully



    Choose ‘msfabricblob’ (your container name) and select binary copy




    For the data destination, choose ‘Files’



    Save, and the pipeline setup is done



  5. Open ‘SKQALakeHouse’ to confirm that the Azure Blob Storage data was imported into the Lakehouse

Once the Lakehouse is successfully built, we can write the related Notebooks in Microsoft Fabric using the Data Science experience.

  1. Choose Data Science


  2. Create New Notebook



    Add ‘SKQALakeHouse’ to the Notebook


  3. Go to the Notebook and select ‘PySpark (Python)’


  4. Edit your notebook. We use Semantic Kernel to perform knowledge extraction

    # Install Semantic Kernel SDK
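The install cell only needs the SDK package. In a Fabric notebook, the %pip magic installs it into the current Spark session:

```
%pip install semantic-kernel
```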



    # Import Library



    # Init Your Azure OpenAI Settings

    PS: To run this example, use gpt-3.5-turbo-16k on Azure OpenAI Service
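One way to keep secrets out of the notebook is to read them from environment variables. The variable names below are assumptions (use Azure Key Vault or workspace settings in production), and the deployment name should match your own gpt-3.5-turbo-16k deployment:

```python
import os

# Assumed variable names -- adapt to wherever you store your secrets.
azure_openai_endpoint = os.environ.get(
    "AZURE_OPENAI_ENDPOINT", "https://<your-resource>.openai.azure.com/"
)
azure_openai_api_key = os.environ.get("AZURE_OPENAI_API_KEY", "")
azure_openai_deployment = os.environ.get("AZURE_OPENAI_DEPLOYMENT", "gpt-35-turbo-16k")
```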



    # Set Semantic Kernel Configuration

    Use Semantic Kernel to read from the Lakehouse path, for example the Skills folder at ‘/lakehouse/default/Files/skills’


    # Read docs with Semantic Kernel
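The docs cell reads the copied files back out of the Lakehouse Files area before handing them to a prompt function; the ‘docs’ folder name, the plugin and function names, and plain .txt files are all assumptions for illustration:

```python
from pathlib import Path

def read_docs(folder: str) -> dict[str, str]:
    """Read every .txt document in a folder into a {filename: text} dict."""
    return {
        path.name: path.read_text(encoding="utf-8")
        for path in sorted(Path(folder).glob("*.txt"))
    }

# docs = read_docs("/lakehouse/default/Files/docs")   # assumed folder name
# Each document could then be passed to a loaded prompt function, e.g. (hypothetical names):
# result = await kernel.invoke(plugins["qa"]["extract"], input=doc_text)
```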


    # Get the result in JSON
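Finally, the extraction result can be serialized to JSON. The record shape below is illustrative; the real fields depend on your extraction prompt:

```python
import json

# Illustrative shape -- replace with the structure your prompt actually returns.
extraction = [
    {
        "question": "What is a Lakehouse?",
        "answer": "A platform for storing, managing, and analyzing structured "
                  "and unstructured data in a single location.",
    },
]
result_json = json.dumps(extraction, indent=2, ensure_ascii=False)
print(result_json)
```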



    You have taken the first step toward integrating Semantic Kernel into Microsoft Fabric scenarios. Combining Copilot applications with your enterprise big-data scenarios enables more intelligent work.
