Azure Monitor Container Insights cost presets (preview)
Microsoft has recently added a new preview feature in the Azure portal to perform some cost optimization on the Container Insights data collected to a Log Analytics workspace. This is helpful because the default collection settings can leave you with unnecessary cost: you may not care about particular namespaces, or you may have a non-production cluster where collecting data every minute is overkill, but you still want to monitor your cluster and workloads.
Here’s how you can quickly set this up.
From the Azure Portal, open the blade for your Kubernetes resource (this works for both AKS and Azure Arc-enabled Kubernetes clusters).
Navigate to Monitoring / Insights and then Configure azure monitor (if you've not previously enabled it).
If you have configured Container Insights previously, click on Monitor Settings:
From the blade that opens, you'll need to make sure that Use Managed Identity (preview) is enabled; this option appears for Azure Arc-enabled Kubernetes clusters. If it isn't checked, you'll find the Cost presets (preview) list box is grayed out.
From the list box, you can choose from the following profiles:
Standard
Cost-optimized
Custom
None
Standard profile has the following settings:
1 minute collection frequency
No namespace filtering
Syslog collection disabled
Cost-optimized profile has the following settings:
5 minute collection frequency
3 namespaces excluded [kube-system, gatekeeper-system, azure-arc]
Syslog collection disabled
You can use the predefined profiles as the basis for your own custom collection profile.
Select a profile from the drop down and click on Edit collection settings.
You can now modify the configuration to meet your requirements. For ContainerLog filtering, you can use the link here for more information. The profile will be saved as Custom within the Cost presets list box.
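If you'd rather apply these settings outside the portal, Container Insights also accepts a data collection settings file when you enable the monitoring add-on from the Azure CLI (for example, az aks enable-addons -a monitoring --data-collection-settings dataCollectionSettings.json on AKS). The sketch below mirrors the Cost-optimized profile described above; the exact field names are my assumption based on the Container Insights documentation, so verify them against the current schema.

{
  "interval": "5m",
  "namespaceFilteringMode": "Exclude",
  "namespaces": ["kube-system", "gatekeeper-system", "azure-arc"]
}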
You can also enable Syslog collection for security events on your nodes, but that will increase your Log Analytics costs somewhat, depending on how busy your cluster is!
Azure Service Bus Monitoring and Alerting using Azure Function and Application Insights
Having designed and architected solutions for our clients on the Azure cloud for many years, we know that Service Bus plays an integral part in most application architectures that involve a messaging layer. At the same time, we also know that there is no straight answer when customers ask us about the native monitoring and alerting capabilities of Service Bus. For visual dashboards, you need to drill down to the overview section of the queue blade.
For diagnostics, only operational logs are available natively.
There are a few 3rd-party products on the market that have built a good story around monitoring and alerting for Azure Service Bus, but they come at an additional cost.
In the quest to answer our customers' question of how to get monitoring and alerting capabilities for Azure Service Bus, I figured out that the answer lies within Azure itself. This blog post illustrates a proof-of-concept solution built as part of one of our customer engagements. The PoC solution uses native Azure services, including:
- Service Bus
- Functions
- Application Insights
- Application Insights Analytics
- Application Insights Alerts
- Azure Dashboard
The only service that would add cost to your monthly Azure bill is Functions (assuming Application Insights is already part of your application architecture). You would need to weigh the cost of purchasing a 3rd-party monitoring product against the Functions cost.
Let's dive into the actual solution.
Step 1: Create an Azure Service Bus Queue
This is of course a prerequisite, since we will be monitoring and alerting around this queue. For the PoC, I created a queue (named queue2) under a Service Bus namespace with the root manage shared access key. I also filled up the queue using one of my favorite tools, Service Bus Explorer.
Step 2: Create an Azure Function
The next step is to create a function. The function's logic is to:
- Query the Service Bus namespace to fetch all the queues and topics under it
- Get the count of active and dead-letter messages for each entity
- Create custom telemetry metrics
- And finally log the metrics to Application Insights
I chose to use C#, but other languages are available. I also configured the function to trigger every 5 seconds, so it's almost real time (see the timer binding sketch below).
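For reference, an every-5-seconds schedule in the portal script (.csx) function model this PoC uses would look something like the function.json sketch below; the binding name matches the myTimer parameter in the sample function in Step 3. Treat this as a minimal sketch rather than the exact file from the PoC.

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "*/5 * * * * *"
    }
  ]
}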
Step 3: Add Application Insight to Function
Application Insights will be used by the function to log the Service Bus telemetry. Create or reuse an Application Insights instance and use its instrumentation key in the C# code. I have pasted the function code used in my PoC below. The logging part of the code relies on the custom metrics concept of Application Insights. For the PoC, I created 2 custom metrics: "Active Message Count" and "Dead Letter Count".
Sample Function:
#r "Microsoft.ServiceBus" using System; using Microsoft.ServiceBus; using Microsoft.ServiceBus.Messaging; using System.Text.RegularExpressions; using System.Net.Http; using static System.Environment; using Microsoft.ApplicationInsights; using Microsoft.ApplicationInsights.DataContracts; public static async Task Run(TimerInfo myTimer, TraceWriter log) { var namespaceManager = NamespaceManager.CreateFromConnectionString( Env("ServiceBusConnectionString")); foreach(var topic in await namespaceManager.GetTopicsAsync()) { foreach(var subscription in await namespaceManager.GetSubscriptionsAsync(topic.Path)) { await LogMessageCountsAsync( $"{Escape(topic.Path)}.{Escape(subscription.Name)}", subscription.MessageCountDetails, log); } } foreach(var queue in await namespaceManager.GetQueuesAsync()) { await LogMessageCountsAsync(Escape(queue.Path), queue.MessageCountDetails, log); } } private static async Task LogMessageCountsAsync(string entityName, MessageCountDetails details, TraceWriter log) { var telemetryClient = new TelemetryClient(); telemetryClient.InstrumentationKey = "YOUR INSTRUMENTATION KEY"; var telemetry = new TraceTelemetry(entityName); telemetry.Properties.Add("Active Message Count", details.ActiveMessageCount.ToString()); telemetry.Properties.Add("Dead Letter Count", details.DeadLetterMessageCount.ToString()); telemetryClient.TrackMetric(new MetricTelemetry("Active Message Count", details.ActiveMessageCount)); telemetryClient.TrackMetric(new MetricTelemetry("Dead Letter Count", details.DeadLetterMessageCount)); telemetryClient.TrackTrace(telemetry); } private static string Escape(string input) => Regex.Replace(input, @"[^A-Za-z0-9]+", "_"); private static string Env(string name) => GetEnvironmentVariable(name, EnvironmentVariableTarget.Process);
Step 4: Test your function
The next step is to test your function by running it. If everything is set up correctly, you should start seeing the telemetry in Application Insights. When you select one of the traces, you should be able to view the "Active Message Count" and "Dead Letter Count" under custom data. In the screenshot below, my queue2 has 17 active messages and 0 dead-letter messages.
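Before building charts, you can also confirm the traces are arriving with a quick query in Application Insights Analytics (introduced in the next step); queue2 here is simply the entity name from my PoC:

traces
| where message has 'queue2'
| order by timestamp desc
| take 10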
Step 5: Add an Application Insight Analytics Query
The next step is to use AI Analytics to render a Service Bus chart for monitoring. From the AI blade, click on the Analytics icon. AI Analytics is a separate portal with a query window. You need to write a query that renders a time chart for a queue based on those custom metrics. You can use the sample query below as a start.
Sample Query:
traces
| where message has 'queue2'
| extend activemessagecount = todouble(customDimensions["Active Message Count"])
| summarize avg(activemessagecount) by bin(timestamp, 5m)
| order by timestamp asc
| render timechart
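A similar query, assuming the same trace shape emitted by the sample function, charts the dead-letter metric:

traces
| where message has 'queue2'
| extend deadlettercount = todouble(customDimensions["Dead Letter Count"])
| summarize avg(deadlettercount) by bin(timestamp, 5m)
| order by timestamp asc
| render timechart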
Step 6: Publish the Chart to a Dashboard
The AI Analytics chart can be published (via the pin icon) to an Azure Dashboard, which enables monitoring users to actively monitor the Service Bus metrics when they log in to the Azure portal. This removes the need to drill down to the Service Bus blade.
Refer to this to learn more about creating and publishing charts to dashboards.
Step 7: Add Alerts on the custom metrics
The last step is to create Application Insights alerts. For the PoC, I created 2 alerts, on "Active Message Count" and "Dead Letter Count", each with a threshold. These alert monitoring users by email if the message count exceeds the threshold. You can also send these alerts to external monitoring tools via a webhook.
Attached is a sample email from an Azure AI alert:
I hope these steps at least give you an idea that the above custom solution, built with native Azure services, can provide basic monitoring and alerting capabilities for Service Bus, and for that matter other Azure services as well. The key is to define the custom metrics you would like to monitor against and then set up the solution.