Step-by-Step Guide to Building a Powerful AI Monitoring Dashboard with Semantic Kernel and Azure Monitor | Semantic Kernel (2024)

Semantic Kernel is one of the best AI frameworks for enterprise applications. It provides useful out-of-the-box features. In this blog we will look at how you can integrate Semantic Kernel with Azure Monitor to meter your token usage. Semantic Kernel provides default metrics for token usage.

Additionally, we will explore Filters in Semantic Kernel and how they can be used to emit custom metrics.

First, let's understand the different types of token usage.

  • 📄 Prompt Token Usage: This refers to the number of tokens that make up the input prompt you provide to the model. For example, if you input the sentence “What is the capital of France?”, this sentence will be broken down into a series of tokens. The total number of these tokens is the prompt token usage.
  • 💬 Completion Token Usage: This refers to the number of tokens generated by the model in response to your prompt. If the model responds with “The capital of France is Paris,” this response will also be broken down into tokens, and their total is the completion token usage.
  • 🔢 Total Token Usage (Prompt + Completion): This is simply the sum of the prompt and completion token usage. It represents the total number of tokens used in the interaction, which is important for tracking and managing costs, as many LLMs charge based on the number of tokens processed.
  • ⚙️ Optimization: If you’re building an application that uses an LLM, minimizing token usage can reduce costs and improve performance.
  • 💡 Insights: Understanding the relationship between different prompts and token usage helps in refining the prompts for better, more concise outputs.
  • 📊 Monitoring: Keeping track of token usage is crucial, especially when working within token limits imposed by the LLM service or when dealing with large volumes of requests.
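To make the cost side concrete, here is a minimal sketch of the arithmetic. The token counts match the France example above; the per-1K-token prices are hypothetical placeholders, so check your model's actual pricing:

```csharp
using System;

// Hypothetical prices per 1K tokens -- replace with your model's real rates.
const decimal promptPricePer1K = 0.0005m;
const decimal completionPricePer1K = 0.0015m;

int promptTokens = 7;      // e.g. "What is the capital of France?"
int completionTokens = 7;  // e.g. "The capital of France is Paris."
int totalTokens = promptTokens + completionTokens;

// Cost = (tokens / 1000) * price-per-1K, summed over prompt and completion.
decimal cost = promptTokens / 1000m * promptPricePer1K
             + completionTokens / 1000m * completionPricePer1K;

Console.WriteLine($"Total tokens: {totalTokens}, estimated cost: ${cost}");
```

Note that prompt and completion tokens are often priced differently, which is why the metrics distinguish the two.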

Step 1: Let's create an Azure Monitor Application Insights resource. You can use this link — https://portal.azure.com/#create/Microsoft.AppInsights


Step 2: Make your Azure OpenAI deployment ready. You can also use OpenAI. Want to learn the difference between Azure OpenAI and OpenAI, or how to create deployments in Azure OpenAI? Check out my previous blog: https://medium.com/gopenai/step-by-step-guide-to-creating-and-securing-azure-openai-instances-with-content-filters-061293f0a042


Step 3: In your Semantic Kernel .NET application, implement metering using the Meter class from the System.Diagnostics.Metrics namespace. Add the connection string from your Application Insights resource, as created in Step 1.

```csharp
var meterProvider = Sdk.CreateMeterProviderBuilder()
    .AddMeter("Microsoft.SemanticKernel*")
    .AddAzureMonitorMetricExporter(options => options.ConnectionString = "InstrumentationKey=<COPY IT FROM YOUR RESOURCE>")
    .Build();
```
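If you want to see what the Meter plumbing does before wiring up the Azure exporter, here is a self-contained sketch using only the standard library. The MeterListener plays the role the Azure Monitor exporter plays in the real setup; the meter and counter names are illustrative, not the actual Semantic Kernel metric names:

```csharp
using System;
using System.Diagnostics.Metrics;

long observed = 0;

// A listener stands in for the exporter: it subscribes to instruments and
// receives every measurement they record.
using var listener = new MeterListener();
listener.InstrumentPublished = (instrument, l) =>
{
    // Mirrors AddMeter("Microsoft.SemanticKernel*"): subscribe by name prefix.
    if (instrument.Meter.Name.StartsWith("Microsoft.SemanticKernel"))
        l.EnableMeasurementEvents(instrument);
};
listener.SetMeasurementEventCallback<int>((instrument, value, tags, state) =>
{
    observed += value;
});
listener.Start();

// An illustrative counter, similar in shape to the token-usage metrics SK emits.
using var meter = new Meter("Microsoft.SemanticKernel.Demo");
var counter = meter.CreateCounter<int>("demo.token_usage");
counter.Add(14);

Console.WriteLine(observed);
```

In production you don't write a listener yourself; the AddAzureMonitorMetricExporter call above does this subscription for you and ships the measurements to Application Insights.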

Step 4: Build a Kernel and invoke your first prompt.

```csharp
var builder = Kernel.CreateBuilder().AddAzureOpenAIChatCompletion(modelId, endpoint, apiKey);
Kernel kernel = builder.Build();
var response = await kernel.InvokePromptAsync("hello");
```

Step 5: Let's try a summarisation prompt.


Step 6: Go to Application Insights -> Metrics to view token usage. If everything was done right, you should be able to see the graph.


Now you can pin these metrics to a dashboard. You can create a new one or use an existing dashboard. Additionally, you can pin the metrics to an Azure Managed Grafana dashboard, the enterprise version of the open-source Grafana dashboard. Based on my experience, Azure Managed Grafana is very easy to use.


In recent updates, Semantic Kernel introduced filters. Filters allow developers to add custom logic that is executed before, during, or after a function is invoked. They provide a way to manage how the application behaves dynamically, based on certain conditions, ensuring security, efficiency, and proper error handling. Filters help in preventing undesired actions, such as blocking malicious prompts to Large Language Models (LLMs) or restricting unnecessary information exposure to users.

Semantic Kernel offers three types of filters:

  1. 🛡️ Prompt Render Filters: These filters allow developers to modify prompts before they are sent to an LLM, ensuring that the content is safe or properly formatted.
  2. 🔄 Auto Function Invocation Filters: A new type of filter designed for scenarios where LLMs automatically invoke multiple functions. This filter provides more context and control over the sequence and execution of these functions.
  3. ⚙️ Function Invocation Filters: These filters allow developers to override default behavior or add additional logic during function execution. We can export metrics for token usage after every function execution.
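Conceptually, all three filter types follow the same middleware pattern: your code runs, then you call next(...) to continue the pipeline, then more of your code runs. Here is a self-contained sketch of that pattern using plain delegates (not the actual Semantic Kernel interfaces, which we implement below):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

var log = new List<string>();

// A filter is just middleware: run code, call next(...), run more code.
Func<Func<Task>, Task> filter = async next =>
{
    log.Add("before");  // pre-invocation logic (e.g. validate the rendered prompt)
    await next();       // continue the pipeline / invoke the function
    log.Add("after");   // post-invocation logic (e.g. emit token-usage metrics)
};

await filter(() =>
{
    log.Add("function"); // the actual function invocation happens here
    return Task.CompletedTask;
});

Console.WriteLine(string.Join(",", log)); // before,function,after
```

The "after" position is where we will export our custom RAG-usage metric once the real IFunctionInvocationFilter is in place.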

Now, let’s start implementing! 🚀

Step 1: Let's create a simple plugin that processes the user prompt. If the Large Language Model (LLM) knowledge is insufficient to answer a user's question, Retrieval-Augmented Generation (RAG) is employed.

```csharp
using System.ComponentModel;

public class Plugins
{
    private readonly Kernel Kernel;

    private const string resume = @"# John Doe
Email: john.doe@example.com
Contact: (123) 456-7890
# Experience
## Software Engineer at TechCorp
- Developed and maintained web applications using JavaScript, React, and Node.js.
- Collaborated with cross-functional teams to define, design, and ship new features.
- Implemented RESTful APIs and integrated third-party services.
- Improved application performance, reducing load time by 30%.
## Junior Developer at CodeWorks
- Assisted in the development of web applications using HTML, CSS, and JavaScript.
- Participated in code reviews and contributed to team discussions on best practices.
- Wrote unit tests to ensure code quality and reliability.
- Provided technical support and troubleshooting for clients.
# Skills
- Programming Languages: JavaScript, Python, Java, C++
- Web Technologies: HTML, CSS, React, Node.js, Express.js
- Databases: MySQL, MongoDB
- Tools and Platforms: Git, Docker, AWS, Jenkins
- Other: Agile Methodologies, Test-Driven Development (TDD), Continuous Integration/Continuous Deployment (CI/CD)
# Certifications
- Certified Kubernetes Administrator (CKA)
- AWS Certified Solutions Architect – Associate
- Microsoft Certified: Azure Fundamentals
# Education
## Bachelor of Science in Computer Science
- University of ABC, 2016-2020
- Relevant coursework: Data Structures, Algorithms, Web Development, Database Systems
## High School Diploma
- Example High School, 2012-2016
- Graduated with honors";

    public Plugins(Kernel kernel)
    {
        this.Kernel = kernel;
    }

    [KernelFunction, Description("Use this plugin function as entry point to any user question")]
    public async Task<string> ProcessPrompt(string userprompt)
    {
        var classifyFunc = Kernel.CreateFunctionFromPrompt(
            $@"Do you have knowledge to answer this question {userprompt}? If yes, then reply with only ""true"" else reply ""false"".
Don't add any text before or after bool value",
            functionName: "classifyFunc");
        var response = await classifyFunc.InvokeAsync(Kernel);
        Console.WriteLine(response.ToString());

        KernelArguments arg = new KernelArguments();
        var renderedPrompt = userprompt;
        if (response.ToString().Equals("false"))
        {
            var document = resume; // AI Search/VectorDB can be used for fetching the relevant document
            arg.Add("document", document);
            renderedPrompt = "Using {{$document}} answer the following question: " + userprompt;
        }

        var answerTheQuestionFunc = Kernel.CreateFunctionFromPrompt(renderedPrompt, functionName: "answerTheQuestionFunc");
        response = await answerTheQuestionFunc.InvokeAsync(Kernel, arg);
        return response.ToString();
    }
}
```

Don't forget to register your plugin with the kernel:

```csharp
kernel.Plugins.AddFromObject(new Plugins(kernel));
```

So the flow will be something like this:


Step 2: Add a filter using IFunctionInvocationFilter. This filter will emit a metric called “RagUsage”. If the context has an argument named “document”, the RAG technique was used (UsedRag = true); if it is not there, RAG was not used (UsedRag = false).

```csharp
public class RAGUsageFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(FunctionInvocationContext context, Func<FunctionInvocationContext, Task> next)
    {
        var functionName = context.Function.Name;
        var meter = new Meter("Microsoft.SemanticKernel", "1.0.0");
        var ragCounter = meter.CreateCounter<int>("semantic_kernel.ragUsage");
        if (context.Arguments.ContainsName("document"))
        {
            ragCounter.Add(1, KeyValuePair.Create<string, object>("UsedRag", true));
        }
        else
        {
            ragCounter.Add(1, KeyValuePair.Create<string, object>("UsedRag", false));
        }
        await next(context);
    }
}
```

Then update the kernel builder to register this as a singleton IFunctionInvocationFilter:

```csharp
var builder = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(modelId, endpoint, apiKey);
builder.Services.AddSingleton<IFunctionInvocationFilter, RAGUsageFilter>();
Kernel kernel = builder.Build();
```

Step 3: Enable auto function calling. I am going to use OpenAI function calling to automatically invoke the plugin function.

```csharp
OpenAIPromptExecutionSettings openAIPromptExecutionSettings = new()
{
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};
KernelArguments args = new KernelArguments();
args.ExecutionSettings = new Dictionary<string, PromptExecutionSettings>()
{
    { PromptExecutionSettings.DefaultServiceId, openAIPromptExecutionSettings }
};
```

Now let's invoke the prompt and record whether the user's question requires RAG or not.

```csharp
// RAG needed, as the question is about my resume
var response = await kernel.InvokePromptAsync("Tell me more about my resume", args);
response.ToString()
```

```csharp
// RAG not needed, as the question is a general question
var response = await kernel.InvokePromptAsync("Tell me story about lion", args);
response.ToString()
```

Note: I am using a .NET Interactive notebook, but the above code blocks can be placed inside a loop that takes input from users to give a real chat-like experience.


Finally, we can get this dashboard:
