Setting Up and Accessing Azure Cognitive Services with PowerShell

Alright folks – we’re going to dive into how you can leverage Azure Cognitive Services with PowerShell to not only set up AI services but also to interact with them. Let’s go!

Prerequisites

Before we begin, ensure you have the following:

  • An Azure subscription.
  • PowerShell 7.x or higher installed on your system.
  • Azure PowerShell module. Install it by running Install-Module -Name Az -AllowClobber in your PowerShell session.

Use Connect-AzAccount to sign in to your subscription, then run this to create a new resource group and Cognitive Services resource:

$resourceGroupName = "<YourResourceGroupName>"
$location = "EastUS"
$cognitiveServicesName = "<YourCognitiveServicesName>"

# Create a resource group if you haven't already
New-AzResourceGroup -Name $resourceGroupName -Location $location

# Create Cognitive Services account
New-AzCognitiveServicesAccount -Name $cognitiveServicesName -ResourceGroupName $resourceGroupName -Type "CognitiveServices" -Location $location -SkuName "S0"

It’s that simple!

To interact with Cognitive Services, you’ll need the access keys. Retrieve them with:

$key = (Get-AzCognitiveServicesAccountKey -ResourceGroupName $resourceGroupName -Name $cognitiveServicesName).Key1

With your Cognitive Services resource set up and your access keys in hand, you can now interact with various cognitive services. Let’s explore a couple of examples:

Text Analytics

To analyze text for sentiment, language, or key phrases, you’ll use the Text Analytics API. Here’s a basic example to detect the language of a given text:

$text = "Hello, world!"
$uri = "https://<YourCognitiveServicesName>.cognitiveservices.azure.com/text/analytics/v3.1/languages"

$body = @{
    documents = @(
        @{
            id = "1"
            text = $text
        }
    )
} | ConvertTo-Json

$response = Invoke-RestMethod -Uri $uri -Method Post -Body $body -Headers @{
    "Ocp-Apim-Subscription-Key" = $key
    "Content-Type" = "application/json"
}

$response.documents.detectedLanguage | Format-Table -Property name, confidenceScore

This code attempts to determine the language of the submitted text. The output might look like this:

Name           ConfidenceScore
----           ---------------
English        0.99

Let’s try computer vision now:

Computer Vision

Azure’s Computer Vision service can analyze images and extract information about visual content. Here’s how you can use PowerShell to send an image to the Computer Vision API for analysis:

$imageUrl = "<YourImageUrl>"
$uri = "https://<YourCognitiveServicesName>.cognitiveservices.azure.com/vision/v3.1/analyze?visualFeatures=Description"

$body = @{
    url = $imageUrl
} | ConvertTo-Json

$response = Invoke-RestMethod -Uri $uri -Method Post -Body $body -Headers @{
    "Ocp-Apim-Subscription-Key" = $key
    "Content-Type" = "application/json"
}

$response.description.captions | Format-Table -Property text, confidence

This code asks the service to describe the image, so the output might look like this:

Text                                                      Confidence
----                                                      ----------
A scenic view of a mountain range under a clear blue sky        0.98

To learn more about Cognitive Services – check out the Docs!

Azure and AI Event Automation – #2

Assessing Your Current Azure Setup

Before integrating AI into your Azure automation processes, it’s crucial to assess your current Azure environment. This assessment will help identify the strengths, limitations, and potential areas for improvement.

  1. Evaluate Existing Resources and Capabilities
    • Take an inventory of your current Azure resources. This includes virtual machines, databases, storage accounts, and any other services in use (a quick PowerShell sketch for this follows the list).
    • Assess the performance and scalability of these resources. Are they meeting your current needs? How might they handle increased loads with AI integration?
    • Use Azure’s built-in tools like Azure Advisor for recommendations on optimizing resource utilization.
  2. Review Current Automation Configurations
    • Examine your existing automation scripts and workflows. How are they configured and managed? Are there opportunities for optimization or enhancement?
    • Consider the use of Azure Automation to streamline these processes.
  3. Identify Data Sources and Workflows
    • Identify the data sources that your automation processes use. How is this data stored, accessed, and managed?
    • Map out the workflows that are currently automated. Understanding these workflows is crucial for integrating AI effectively.
  4. Check Compliance and Security Measures
    • Ensure that your setup complies with relevant data protection regulations and security standards. This is particularly important when handling sensitive data with AI.
    • Use tools like Azure Security Center to review and enhance your security posture.
  5. Assess Integration Points for AI
    • Pinpoint where in your current setup AI can be integrated for maximum benefit. Look for processes that are repetitive, data-intensive, or could significantly benefit from predictive insights.
    • Consider the potential of Azure AI services like Azure Machine Learning and Azure Cognitive Services in these areas.
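
As a quick helper for step 1 above, here's a minimal sketch that summarizes every resource in the current subscription by type. It assumes the Az module and a connected session (Connect-AzAccount); the Advisor line additionally assumes the Az.Advisor module is installed:

# Summarize all resources in the subscription by resource type
Get-AzResource |
    Group-Object -Property ResourceType |
    Sort-Object -Property Count -Descending |
    Format-Table -Property Count, Name -AutoSize

# Pull Azure Advisor recommendations for optimization opportunities
Get-AzAdvisorRecommendation | Format-Table -Property Category, Impact, ImpactedValue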

Setting Up Essential Azure Services

After assessing your Azure environment, the next step is to set up and configure the essential services that form the backbone of AI-driven automation. Here’s how you can approach the setup of these key services:

  1. Azure Machine Learning (AML)
    • Log into Azure Portal: Access your account at https://portal.azure.com.
    • Navigate to Machine Learning: Find “Machine Learning” under “AI + Machine Learning” in the ‘All services’ section.
    • Create a New Workspace: Click “Create” and choose your Azure subscription and resource group.
    • Configure Workspace: Provide a unique name, select a region, and optionally choose or allow Azure to create a storage account, key vault, and application insights resource.
    • Review and Create: Verify all details are correct, then click “Review + create” followed by “Create” to finalize.
    • Access the Workspace: After creation, visit your resource group, select the new workspace, and note the key details like subscription ID and resource group.
    • Explore Azure Machine Learning Studio: Use the provided URL to access the studio at https://ml.azure.com and familiarize yourself with its features.
    • Set Up Additional Resources: If not auto-created, manually set up a storage account, key vault, and application insights resource in the same region as your workspace.
  2. Azure Cognitive Services
    • Navigate to Cognitive Services: Search for “Cognitive Services” in the portal’s search bar.
    • Create a Resource: Click “Create” to start setting up a new Cognitive Services resource.
    • Fill in Details: Choose your subscription, create or select an existing resource group, and name your resource.
    • Select the Region: Choose a region near you or your users for better performance.
    • Review Pricing Tiers: Select an appropriate pricing tier based on your expected usage.
    • Review and Create: Confirm all details are correct, then click “Review + create”, followed by “Create”.
    • Access Resource Keys: Once deployed, go to the resource, and find the “Keys and Endpoint” section to get your API keys and endpoint URL.
    • Integrate with Applications: Use the retrieved keys and endpoint to integrate cognitive services into your applications.
  3. Azure Logic Apps
    • Search for Logic Apps: In the portal, find “Logic Apps” via the search bar.
    • Initiate Logic App Creation: Click “Add” or “Create” to start a new Logic App.
    • Configure Basic Settings: Select your subscription, resource group, and enter a name for your Logic App. Choose a region.
    • Create the Logic App: After configuring, click “Create” to deploy your Logic App.
    • Open Logic App Designer: Once deployed, open the Logic App and navigate to the designer.
    • Design the Workflow: We will go over this later! This is where the fun begins!!

Setting up these essential Azure services is a foundational step in creating an environment ready for AI-driven automation. Each service plays a specific role, and together, they provide a powerful toolkit for automating complex and intelligent workflows.
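
If you prefer scripting to clicking, the Cognitive Services and Logic App pieces can also be created with the Az module. Here's a minimal sketch – the names, region, and definition file path are placeholders, and the Logic App line assumes the Az.LogicApp module:

# Cognitive Services account – same pattern as shown earlier
New-AzCognitiveServicesAccount -Name "<YourCognitiveServicesName>" -ResourceGroupName "<YourResourceGroupName>" -Type "CognitiveServices" -Location "EastUS" -SkuName "S0"

# Logic App shell created from a workflow definition JSON file
New-AzLogicApp -ResourceGroupName "<YourResourceGroupName>" -Name "<YourLogicAppName>" -Location "EastUS" -DefinitionFilePath ".\workflow-definition.json"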

Leveraging Azure Machine Learning

  1. Create a Machine Learning Model:
    • Navigate to Azure Machine Learning Studio.
    • Create a new experiment and select a dataset or import your own.
    • Choose an algorithm and train your machine learning model.
  2. Deploy the Model:
    • Once your model is trained and evaluated, navigate to the “Models” section.
    • Select your model and click “Deploy”. Choose a deployment option (e.g., Azure Container Instance).
    • Configure deployment settings like name, description, and compute type.
  3. Consume the Model:
    • After deployment, get the REST endpoint and primary key from the deployment details.
    • Use these details to integrate the model into your applications or services – a minimal sketch follows this list.
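
Calling a deployed model is a plain REST POST. This is a hedged sketch – the scoring URI and key placeholders come from your own deployment details, and the input schema depends entirely on your model:

# Hypothetical scoring endpoint and key from the deployment details
$scoringUri = "<YourScoringEndpointUrl>"
$apiKey = "<YourPrimaryKey>"

# Input payload – adjust the structure to match your model's expected schema
$payload = @{ data = @( @{ feature1 = 1.0; feature2 = 2.0 } ) } | ConvertTo-Json -Depth 5

$result = Invoke-RestMethod -Uri $scoringUri -Method Post -Body $payload -Headers @{
    "Authorization" = "Bearer $apiKey"
    "Content-Type" = "application/json"
}
$result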

Utilizing Azure Cognitive Services

  1. Select a Cognitive Service:
    • Determine which Cognitive Service (e.g., Text Analytics, Computer Vision) fits your needs.
    • In Azure Portal, navigate to “Cognitive Services” and create a resource for the selected service.
  2. Configure and Retrieve Keys:
    • Once the resource is created, go to the “Keys and Endpoint” section.
    • Copy the key and endpoint URL for use in your application.
  3. Integrate with Your Application:
    • Use the provided SDK or REST API to integrate the Cognitive Service into your application.
    • Pass the key and endpoint URL in your code to authenticate the service.

Automating with Azure Logic Apps

  1. Create a New Logic App:
    • In Azure Portal, go to “Logic Apps” and create a new app.
    • Select your subscription, resource group, and choose a name and region for the app.
  2. Design the Workflow:
    • Open the Logic App Designer.
    • Add a trigger (e.g., HTTP request, schedule) to start the workflow.
    • Add new steps by searching for connectors (e.g., Azure Functions, Machine Learning).
  3. Integrate AI Services:
    • Add steps that call Azure Machine Learning models or Cognitive Services.
    • Configure these steps by providing necessary details like API keys, endpoints, and parameters.
  4. Save and Test the Logic App:
    • Save your changes and use the “Run” button to test the Logic App.
    • Check the run history to verify that the workflow executed as expected – an HTTP-trigger sketch follows this list.
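
If the workflow starts with an HTTP request trigger, you can also kick it off from PowerShell. The callback URL below is a placeholder – the designer generates the real one when you save the trigger:

# Callback URL copied from the 'When a HTTP request is received' trigger
$logicAppUrl = "<YourLogicAppCallbackUrl>"

$payload = @{ source = "test"; message = "Hello from PowerShell" } | ConvertTo-Json

Invoke-RestMethod -Uri $logicAppUrl -Method Post -Body $payload -ContentType "application/json"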

Azure and AI Event Automation – #1

Introduction

Welcome to my new blog series on “AI-Enhanced Event Automation in Azure,” where I will delve into the integration of AI with Azure’s amazing automation capabilities. This series will be more than just a conceptual overview; it will be a practical guide to applying AI in Azure.

Through this series I will explore the role of AI in enhancing Azure’s event monitoring and automation processes. This journey will be tailored for those with a foundational understanding of Azure, aiming to leverage AI to unlock unprecedented potential in cloud computing.

We will begin with the basics of setting up your Azure environment for AI integration, where we’ll reference Azure’s comprehensive Learn documentation.

Moreover, I’ll explore advanced AI techniques and their applications in real-world scenarios, utilizing resources from the Azure AI Gallery to illustrate these concepts.

Let’s dig in!

Key Concepts and Terminologies

To ensure we’re all on the same page, let’s clarify some key concepts and terminologies that will frequently appear throughout this series.

  1. Artificial Intelligence (AI): AI involves creating computer systems that can perform tasks typically requiring human intelligence. This includes learning, decision-making, and problem-solving. Azure provides various AI tools and services, which we will explore. Learn more about AI in Azure.
  2. Azure Automation: This refers to the process of automating the creation, deployment, and management of Azure resources. Azure Automation can streamline complex tasks and improve operational efficiencies. Azure Automation documentation offers a comprehensive guide.
  3. Azure Logic Apps: These are part of Azure’s app service, providing a way to automate and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organizations. Explore Azure Logic Apps.
  4. Machine Learning: A subset of AI, machine learning involves training a computer system to learn from data, identify patterns, and make decisions with minimal human intervention. Azure’s machine learning services are pivotal in AI-enhanced automation. Azure Machine Learning documentation provides detailed information.
  5. Event-Driven Architecture: This is a design pattern used in software architecture where the flow of the program is determined by events. In Azure, this concept is crucial for automating responses to specific events within your infrastructure. Understanding Event-Driven Architecture in Azure can give you more insights.

Understanding these terms will be key to the concepts we will discuss in this series. They form the building blocks of our exploration into AI-enhanced automation in Azure.

The Role of AI in Azure Automation

AI is not just an add-on but a transformative resource. AI in Azure Automation opens up new avenues for efficiency, intelligence, and sophistication in automated processes.

  1. Enhancing Efficiency and Accuracy: AI algorithms are adept at handling large volumes of data and complex decision-making processes much faster than traditional methods. In Azure, AI can be used to analyze operational data, predict trends, and automate responses with high precision. This leads to a significant increase in the efficiency and accuracy of automated tasks. AI and Efficiency in Azure provides further insights.
  2. Predictive Analytics: One of the most significant roles of AI in Azure Automation is predictive analytics. By analyzing historical data, AI models can predict future trends and behaviors, enabling Azure services to proactively manage resources, anticipate system failures, and automatically adjust to changing demands. The Predictive Analytics in Azure guide is a valuable resource for understanding this aspect.
  3. Intelligent Decision Making: AI enhances Azure automation by enabling systems to make smart decisions based on real-time data and learned patterns. This capability is particularly useful in scenarios where immediate and accurate decision-making is critical, such as in load balancing or threat detection. Azure’s Decision-Making Capabilities further explores this topic.
  4. Automating Complex Workflows: With AI, Azure can automate more complex, multi-step workflows that would be too intricate or time-consuming to handle manually. This includes tasks like data extraction, transformation, loading (ETL), and sophisticated orchestration across various services and applications. Complex Workflow Automation in Azure provides a deeper dive into this functionality.
  5. Continuous Learning and Adaptation: A unique aspect of AI in automation is its ability to continuously learn and adapt. Azure’s AI-enhanced automation systems can evolve based on new data, leading to constant improvement in performance and efficiency over time.

By integrating AI into Azure Automation, we unlock a realm where automation is not just about executing predefined tasks but about creating systems that can learn, adapt, and make intelligent decisions. This marks a significant leap from traditional automation, propelling businesses towards more dynamic and responsive operational models.

Examples of AI Enhancements in Automation

Understanding the practical impact of AI in Azure automation is easier with real-world examples.

  1. Automated Scaling Based on Predictive Analytics: Utilizing AI for predictive analysis, Azure can dynamically adjust resources for an e-commerce platform based on traffic and shopping trends, optimizing performance and cost. Learn more about Azure Autoscale.
  2. Intelligent Data Processing and Insights: Azure AI can analyze large datasets, like customer feedback or sales data, automating the extraction of valuable insights for quick, data-driven decision-making. Explore Azure Cognitive Services.
  3. Proactive Threat Detection and Response: AI-driven monitoring in Azure can identify and respond to security threats in real-time, enhancing network and data protection. Read about Azure Security Center.
  4. Custom Workflow Automation for Complex Tasks: In complex sectors like healthcare or finance, AI can automate intricate workflows, analyzing data for risk assessments or health predictions, improving accuracy and efficiency. Discover Azure Logic Apps.
  5. Adaptive Resource Management for IoT Devices: For IoT environments, Azure’s AI automation can intelligently manage devices, predict maintenance needs, and optimize resource allocation. See Azure IoT Hub capabilities.

These examples highlight AI’s ability to revolutionize Azure automation across various applications, demonstrating efficiency, insight, and enhanced security.

Challenges and Considerations

While integrating AI into Azure automation offers numerous benefits, it also comes with its own set of challenges and considerations.

  1. Complexity of AI Models: AI models can be complex and require a deep understanding of machine learning algorithms and data science principles. Ensuring that these models are accurately trained and tuned is crucial for their effectiveness. Understanding AI Model Complexity provides more insights.
  2. Data Privacy and Security: When dealing with AI, especially in automation, you often handle sensitive data. Ensuring data privacy and complying with regulations like GDPR is paramount. Azure’s Data Privacy Guide offers guidelines on this aspect.
  3. Integration and Compatibility Issues: Integrating AI into existing automation processes might involve compatibility challenges with current systems and workflows. Careful planning and testing are essential to ensure seamless integration. Azure Integration Services can help understand these complexities.
  4. Scalability and Resource Management: As your AI-driven automation scales, managing resources efficiently becomes critical. Balancing performance and cost, especially in cloud environments, requires continuous monitoring and adjustment. Azure Scalability Best Practices provides valuable insights.
  5. Keeping up with Technological Advancements: The field of AI is rapidly evolving. Staying updated with the latest advancements and understanding how they can be applied to Azure automation is crucial for maintaining an edge. Azure Updates is a useful resource for keeping up with new developments.

By understanding and addressing these challenges, you can more effectively harness the power of AI in Azure automation, leading to more robust and efficient solutions.

That’s all for now! In the next post we will dig into Azure and actually start to get our hands dirty!

MQTT Data to Azure Log Analytics

Setting up solar on our farm in North Texas has been a treasure trove of data to analyze – everything from inverter load, to PV watts in, to state of charge for the batteries – it’s a massive amount of useful information.

So naturally I want that data in Azure Log Analytics. It’s the perfect platform to handle this amount of data: small messages, lots of entries, just ready for reporting and analytics.

To do this, we are going to use IoT Hub as a go-between. To get MQTT data into Azure Log Analytics via IoT Hub, you will need to perform the following high-level steps:

  1. Set up an IoT hub in your Azure account.
  2. Set up a device in your IoT hub.
  3. Configure the device to send data to your IoT hub using the MQTT protocol.
  4. Set up a log analytics workspace in your Azure account.
  5. Connect your IoT hub to your log analytics workspace.
  6. Start sending MQTT data from your device to your IoT hub.

And now some details (a PowerShell sketch of the Azure-side steps follows the list):

  1. Set up an IoT hub in your Azure account:
  • In the Azure portal, click on “Create a resource” in the top left corner, then search for “IoT Hub” and select it.
  • Follow the prompts to create a new IoT hub, including selecting the subscription, resource group, and region you want to use.
  • Make note of the IoT hub’s name and connection string, as you will need these later.
  2. Set up a device in your IoT hub:
  • In the Azure portal, go to the IoT hub blade and select “IoT Devices” from the left menu.
  • Click the “Add” button to create a new device.
  • Follow the prompts to set up the device, including giving it a unique device ID and generating a device key or certificate.
  • Make note of the device ID and key/certificate, as you will need these later.
  3. Configure the device to send data to your IoT hub using MQTT:
  • The specific steps for this will depend on the device you are using, but generally you will need to specify the MQTT endpoint and authentication information for your IoT hub.
  • The MQTT endpoint will be in the format “YOUR-IOT-HUB-NAME.azure-devices.net”, and you will need to use the device ID and key/certificate you obtained in step 2 to authenticate.
  • Specify the MQTT topic you want to publish data to.
  4. Set up a log analytics workspace:
  • In the Azure portal, click on “Create a resource” in the top left corner, then search for “Log Analytics” and select it.
  • Follow the prompts to create a new log analytics workspace, including selecting the subscription, resource group, and region you want to use.
  • Make note of the workspace ID and primary key, as you will need these later.
  5. Connect your IoT hub to your log analytics workspace:
  • In the Azure portal, go to the IoT hub blade and select “Diagnostic settings” from the left menu.
  • Click the “Add diagnostic setting” button to create a new setting.
  • Select the log analytics workspace you created in step 4 as the destination for the data.
  • Select the data types you want to collect.
  6. Start sending MQTT data from your device to your IoT hub:
  • Using the device ID, key/certificate, MQTT endpoint, and topic you obtained in steps 2 and 3, publish data to your IoT hub using the MQTT protocol.
  • The data should be automatically forwarded to your log analytics workspace and be available for analysis.
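
For reference, here is a hedged PowerShell sketch of the Azure-side pieces (steps 1, 2, 4, and 5). It assumes the Az.IotHub, Az.OperationalInsights, and Az.Monitor modules are installed; all names and the region are placeholders:

$rg = "MyIoTResourceGroup"
$location = "EastUS"

# Step 1: create the IoT hub
New-AzIotHub -ResourceGroupName $rg -Name "MyFarmIotHub" -SkuName "S1" -Units 1 -Location $location

# Step 2: register a device that authenticates with a symmetric key
Add-AzIotHubDevice -ResourceGroupName $rg -IotHubName "MyFarmIotHub" -DeviceId "solar-inverter-01" -AuthMethod "shared_private_key"

# Step 4: create the Log Analytics workspace
$workspace = New-AzOperationalInsightsWorkspace -ResourceGroupName $rg -Name "MyFarmWorkspace" -Location $location

# Step 5: route the hub's diagnostic data to the workspace
$hub = Get-AzIotHub -ResourceGroupName $rg -Name "MyFarmIotHub"
Set-AzDiagnosticSetting -ResourceId $hub.Id -WorkspaceId $workspace.ResourceId -Enabled $true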

    Adding Azure Alert Rules with PowerShell 7

    Here is a quick and dirty post to create both Metric and Scheduled Query alert rule types in Azure:

    To create an Azure alert rule using PowerShell 7 and the Az module, you will need to install both.

    PowerShell 7: https://docs.microsoft.com/en-us/powershell/scripting/install/installing-powershell?view=powershell-7.1

    To install the Az module, run the following command in PowerShell:

    Install-Module -Name Az

    Once both PowerShell 7 and the Az module are installed, you can use the following commands to create an Azure alert rule.

    To create a metric alert rule – note that in the Az module this is Add-AzMetricAlertRuleV2, with the criteria built separately by New-AzMetricAlertRuleV2Criteria:

    $criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
        -TimeAggregation Average -Operator GreaterThan -Threshold 90

    Add-AzMetricAlertRuleV2 -Name "MyMetricAlertRule" `
        -ResourceGroupName "MyResourceGroup" `
        -TargetResourceId "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Compute/virtualMachines/{vm-name}" `
        -Condition $criteria `
        -WindowSize (New-TimeSpan -Minutes 30) `
        -Frequency (New-TimeSpan -Minutes 5) `
        -Severity 3
    

    And to create a Scheduled Query rule – in the Az module this is New-AzScheduledQueryRule, assembled from a source, schedule, trigger condition, and alerting action (the Log Analytics workspace is the data source):

    $source = New-AzScheduledQueryRuleSource -Query "AzureMetrics | where ResourceProvider == 'Microsoft.Compute' and ResourceType == 'virtualMachines' and MetricName == 'Percentage CPU' | summarize AggregateValue=avg(Average) by bin(TimeGenerated, 1h), ResourceId" -DataSourceId "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.OperationalInsights/workspaces/{workspace-name}"
    $schedule = New-AzScheduledQueryRuleSchedule -FrequencyInMinutes 60 -TimeWindowInMinutes 120
    $trigger = New-AzScheduledQueryRuleTriggerCondition -ThresholdOperator "GreaterThan" -Threshold 90
    $aznsAction = New-AzScheduledQueryRuleAznsActionGroup -ActionGroup "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Insights/actionGroups/{action-group-name}"
    $action = New-AzScheduledQueryRuleAlertingAction -AznsAction $aznsAction -Severity "2" -Trigger $trigger
    New-AzScheduledQueryRule -ResourceGroupName "MyResourceGroup" -Location "East US" -Name "MyScheduledQueryAlertRule" -Enabled $true -Description "Average CPU over 90 percent" -Schedule $schedule -Source $source -Action $action
    

    You will have to replace the main bits you would expect – resource group name, subscription ID, workspace name, action group name, location, etc.

    Hope this helps!

    Azure Monitor Action Group Source IP Addresses

    One of the hurdles a company might run into when moving to Azure, especially with Azure Monitor and Log Analytics, is integration with Action Groups. Rarely are the actions of SMS or Email good enough to integrate with an internal Service Management system, so that leaves webhook as the simplest way to get data back into the datacenter. But that leaves one problem – Firewalls.

    The Azure backend can change at any time, so it’s important to know what IP addresses the Action Group can originate from – and we can do that with Get-AzNetworkServiceTag.

    To start, let’s install the Az.Network module. I am specifying AllowClobber and Force, to both update dependencies and upgrade existing modules. Make sure you are running your pwsh window or VSCode as an administrator.

    install-module az.network -allowclobber -force

    If you aren’t already connected, do a quick Connect-AzAccount. It might prompt you for a username/password, and will default to a subscription, so specify an alternative if necessary.

    Once connected, we can run the Get-AzNetworkServiceTag cmdlet. We will specify what region we want to get the information for – in this case, EastUS2.

    Get-AzNetworkServiceTag -Location eastus2

    The ‘values’ property is what we are after. It contains a massive amount of resource types, regions, and address ranges. In our case, let’s just get the Action Groups, specifically for EastUS2.

    (Get-AzNetworkServiceTag -Location eastus2).values|where-object {$_.name -eq 'ActionGroup.EastUS2'}

    And there we go! A list of 3 ranges. We can now work on our Firewall rules and make sure the webhook can get back into the datacenter.

    Automation with Azure Event Grid

    This is another post in a series of posts by fellow MVP Billy York and myself on migration from on-prem automation to Azure. Check out his post here to see the full list.

    One of the challenges that needs to be addressed when moving off an on-prem tool such as Orchestrator is how to trigger your automations. Almost all of the tools individually have a method to call them remotely – things such as webhooks, watchers, API endpoints, etc… In this post I would like to highlight how another Azure service – Azure Event Grid – can prove a useful tool for correlation and centralization. If you need a quick refresher on Event Grid – check out this post. I’m not going to dig into the concepts of Event Grid, but I will walk through how to do a quick setup of Event Grid and get it ready to trigger your workflows.

    The first thing we are going to do is create an Event Grid Topic – go to the appropriate resource group, and create a new resource – pick Event Grid Topic, and click ‘Create’.

    Specify the event topic name, subscription, resource group, location, etc… The deployment will take a minute or two.

    When it’s created, you should see something like this. Take note of the “+ Event Subscription” and the Topic Endpoint.

    The topic endpoint is important – this is where you can forward events from your on-prem resources in order for Event Grid to pick them up. This URL takes a JSON payload (under 64KB in the generally available version, with 64KB increments billed separately – a 1MB version is in public preview now). Because I am who I am, I wrote a quick PowerShell script that could be used by your resources as a quick and dirty integration.

    # Sign in, then build a sample event – the subject is what we will filter on later
    Connect-AzAccount -Credential (Get-Credential)
    $body = @{
        id= 123743
        eventType="recordInserted"
        subject="App01/Database/TLogFull"
        eventTime= (get-date).ToUniversalTime()
        data= @{
            database="master"
            version="2019"
            percent="93.5"
        }
        dataVersion="1.0"
    }
    # Event Grid expects an array of events, hence the wrapping brackets
    $body = "["+(ConvertTo-Json $body)+"]"

    $topicname="EventGridTopic01"
    try{
        # Look up the topic endpoint and its access key, then post the event
        $endpoint = (Get-AzEventGridTopic -ResourceGroupName AzureAutomationOptions -Name $topicname).Endpoint
        $keys = Get-AzEventGridTopicKey -ResourceGroupName AzureAutomationOptions -Name $topicname
        Invoke-WebRequest -Uri $endpoint -Method POST -Body $body -Headers @{"aeg-sas-key" = $keys.Key1}
    }
    catch{
        write-output $_
    }

    The important bits:

    • Subject – What the event subject would be, which is what we will key off of later for subscriptions – this is a personal preference. You can trigger from the event type in the basic editor, but I prefer subject.
    • eventTime – Required, in UTC
    • id – Important, but only if you want unique event identifiers.
    • data – The important bits from your on-prem resources – the bits we will want to pass to the Azure Automation resources.

    After sending a couple of events, you can look at the ‘Published Events’ metric to ensure they are coming in as expected:

    Now it’s time to give the event grid something to do when the events arrive. There are several ways to accomplish this – all done with a ‘subscription’ to the events. Some of the ‘automation’ flavored options include:

    • Azure Functions
    • WebHooks
    • Sending to Event Hubs
    • Service Bus Queues and Topics

    WebHooks are pretty self-explanatory, but also one of the most powerful. For example, you can create a Logic App that is triggered by an HTTP request, and from there break out and perform any type of automation that you want. I will go over that in another post. For this post, I will show a slightly more difficult to set up, but equally powerful method to start automation – the Azure Function EventGrid triggered function. Head over to your Azure Function, and let’s add a new function:

    Select the “Azure Event Grid trigger” and give it a name – in my case I gave it the descriptive name “EventGridTrigger1”. After it’s created, you should see something like this:

    You can see the parameters – eventGridEvent and TriggerMetadata. Keep those in mind. Now head back over to the Event Grid Topic, and let’s add a new subscription. When you select the endpoint in the new subscription, you should see your EventGridTrigger function:

    Great – since the function is created already, the endpoint is something we can select. If the function wasn’t created, there would be no option to create a new one.

    Now we can dig into the actual subscription properties – notice the 3 tabs: Basic, Filters, and Advanced Features. Filters is where we will do most filtering for automation, although some can be done in the Basic tab via the event type. For now, just set the Event Type to ‘recordInserted’, since that is what we put in the PowerShell code, then we can switch over to the Filters tab and do the rest of the work.

    On the Filters tab, the first thing we want to do in this example is check the box marked “Enable subject filtering”. If you remember, in the PowerShell we set the subject to “App01/Database/TLogFull”. You could imagine this being the unique identifier to trigger the appropriate automation – almost something like a unique monitor ID or an automation trigger ID. In this example, let’s set that as our filter. We will do this simple one in this post, but branch out and look at advanced features in a future one:

    When you are ready to create your new subscription, head back to the ‘Basic’ tab and give it a name – numbers, letters, and dashes only. Do it right, and you will be greeted with something like this:

    Now that we have the subscription created, let’s send some events and see if they trigger the subscription! If you run that PowerShell snippet a couple of times, wait a minute or two, and check out the ‘monitor’ tab of the EventGridTrigger function, you will see something like this:

    And clicking on the details:


    From here, we can see the data fields that were passed, and which could be used in our PowerShell function.

    Hopefully you can see how using an Event Grid can help trigger your automations, especially when dealing with IoT and monitoring situations, or when a simple webhook is all you have to work with. They offer an easy way to take those PowerShell workflows from Orchestrator and directly import them into Azure with little modification, while at the same time providing a centralized method of tracking.

    Automation Tools in Azure – Q1 2020 Edition

    Whether you are well into your automation journey or just starting out, it’s important to know what options are available. Moving a manual workload to the wrong automation engine can be just as disruptive as automating a bad workload. Luckily Microsoft has a plethora of tools, so you can be sure to pick the right tool for the right job.

    Azure Automation – Process Automation

    It’s tough to start an article about automation tools in Azure without starting with Azure Automation – so I won’t try. Azure Automation is going to be the first place you look if you are migrating things like:

    • PowerShell scripts
    • Python scripts
    • System Center Orchestrator runbooks
    • Simple commands called repeatedly (restarting services, for example)

    Azure Automation uses runbooks and jobs, which will immediately be familiar to Orchestrator admins. Azure Automation supports PowerShell and Python 2 scripts, along with PowerShell workflows. The automation jobs can be run either on-prem via Hybrid Workers, or in the cloud. A little known secret about Azure Automation – it runs a lot of the backend processes that power Azure!

    There is another piece to Azure Automation worth calling out – it’s CHEAP. Azure gives you 500 run-time minutes for free each month, with each additional minute costing only $0.002. Watcher tasks are even cheaper – I will go over those in another blog post.

    Azure Functions

    The serverless powerhouse of the automation options in Azure – Functions are designed for scale, speed, and complete extensibility. Deploy code or Docker containers for your function and then build your functions with .NET Core, Node.js, Python, Java, and even PowerShell Core.

    With the language options available, moving on-prem workloads should be a breeze. Access your functions from anywhere via API or schedule them to run automatically. Customize your compute stack, secure the functions with multiple keys, and monitor your runs with Log Analytics and App Insights.

    You can build your functions in VSCode, any other code editor you choose, or edit and test your function right in the Azure portal. Each Function App can have multiple functions, and scaling can occur manually or automatically. There are so many options available for Azure Functions, it deserves its own blog series.

    As with Azure Process Automation, Functions are priced really competitively. Check out the pricing list here.

    Azure Logic Apps

    Anyone coming from a tool like System Center Orchestrator, or other automation tools like MicroFocus Operations Orchestration, will tell you one thing those tools have that the tools I have previously mentioned don’t – a UI that shows logic flow. Microsoft’s answer to that – Logic Apps. Logic Apps are a personal favorite of mine, and I use them extensively.

    Building a Logic App couldn’t be simpler. You can start with a blank app, or choose from a LARGE selection of templates that are pre-built. Once in the Logic App Editor, it’s practically drag and drop automation creation. Logic Apps are started with ‘Triggers’, which lead to ‘Actions’. The apps can access services via ‘connections’, of which there are hundreds. If you do happen to find a 3rd party service that doesn’t have a built-in connector, build a custom one!

    Logic Apps make it easy to build complex automations by helping you with things like automatically creating loops when arrays are detected, allowing you to control parallelism, offering you hundreds of ways to call your app, and more. You can add conditions, switches, do-until loops, etc… There isn’t much they can’t do.

    Of course you get the enterprise controls you would expect – version controls, IP access restrictions, full metrics and logging, diagnostics, etc. We run a massive Systems Management and Monitoring conference almost entirely with Logic Apps.

    If you are considering migrating from Orchestrator (or other 3rd party automation tool), then look no further. This should be one of the first Azure tools you do a proof of concept with.

    Power Apps/Power BI/Power Automate

    The tools I have talked about so far are focused on you – the enterprise system admin. But Power Apps gives your organization an exciting and probably under-utilized automation opportunity – your business users! Even the biggest automation organizations don’t have the resources to automate everything their users want, so why not let them handle some of that automation on their own?

    Power Apps let you or your users create new desktop or mobile business applications in a matter of minutes or hours. These can be self-contained, or reach out to tools like Azure Functions to extend these simple-to-make apps into something truly enterprise-worthy.

    Power BI gives world class data visualizations and business intelligence to the average business user. Using Power BI you can allow your users to create their own dashboards or become their own data scientists directly from their desktop.

    Power Automate is the tool formerly known as Flow. If you are familiar with Logic Apps, then Power Automate will look almost identical – and for good reason! Flow was originally built from Logic App code. There are a couple of big differences, though:

    • Power Automate has an amazing mobile app. Start a flow, or even create one from your phone.
    • Power Automate can now simulate screen clicks – remember AutoIt?

    Configuration and Update Management

    I am going to lump these two into one description, mainly because each is slightly meta. Configuration management is like PowerShell DSC for your Azure and on-prem resources. Describe what your environment should look like, and determine if you want auto-remediation or not. Expect more information on this in future blog posts.

    Update management is patching for all of your resources – on-prem or in Azure. Group your servers logically and schedule OS and app updates, or trigger update management from Log Analytics, runbooks, or functions.

    The great thing about Configuration and Update management? The cost. Update and configuration management is practically free – only pay for the data ingestion used by Log Analytics. Update management is even ‘free’ for on-prem resources, including Linux! Configuration management does have a cost for on-prem resources, but the cost is still low.

    Event Grid and Hub

    Although not automation in the strictest sense of the term, Event Grids and Hubs are prime examples of triggers for automation. For most use cases, Event Grid is going to be the best trigger – Event Hub and even Service Bus are more for telemetry and high-value data, but Event Grid is designed to handle reactionary data. Filter events as they come into the Grid, and create event subscriptions based off the filtered events. Subscriptions can trigger Azure Functions, generic webhooks, Automation runbooks, Logic Apps, and more! Send your events to an endpoint API, and you are set to start your automation flows automatically!

    Meta – ARM

    What’s the first thing you need to automate if you are moving to Azure? The automation workflows themselves, of course! Whether it’s configuration or full deployments, ARM is your best friend.

    Disaster leads to Azure

    How a sinister act leads to great things – in about 15 minutes

    This Sunday, as the wife and I traveled back from Dallas to Austin after a weekend away, I got a text from an automated website monitor. My WordPress blog – this blog – was either offline or not responding correctly. Happens occasionally. When I got home I popped up the site, and immediately got a message about PHP being a really old version – like 5.x – when it normally runs a 7.x version. I decided to log into my provider and check it out. I was not prepared for what I saw.

    It was obvious that I had been hacked, big time. Redirects to shady pharma sites in Russia, CSS injection on every post, random hacked php files in practically every directory in not just this domain but in 8 sub-domains as well. I was well and truly up the creek. To make it even better, the hosting service I use seems to think that UI enhancements are forbidden – hence my inability to download a current backup.

    I had to do something, and quick. This blog contains some of my contributions to the community, and does (shockingly) show up in search results. I needed to get it back up and running fast. The only thing I had was an XML export from the blog from a couple of weeks back. I immediately decided to use Azure to get this running quickly.

    For those who don’t know, the easiest way to export from a WordPress blog is to go to Tools – Export – All content. I suggest you do it often.

    I jumped over to Azure, and provisioned a new resource. Since this was going to be an actual production thing, and not just some testing resource for a conference or for a post, I decided to use a new resource group.

    Note – I chose to do a MySQL database in the app – mainly because I needed this up and running quickly, and I don’t have traffic substantial enough to warrant scaling backend MySQL instances. For large instances, I would recommend using ‘Azure Database for MySQL’ – that allows options like scaling, larger instances, etc…

    The deployment of the WordPress instance was honestly the longest part of the process – it took between 7 and 10 minutes, but once it was up and running you are presented with a brand new WordPress instance:

    Click on the URL, and you are presented with the interface for your brand new WordPress instance. Now we continue with the WordPress setup.

    Now update your WordPress – immediately. After updating, log out and back in, just in case the WordPress database needs to update as well. Once everything is updated, it’s time to import the WordPress XML.

    When you install the importer, then the ‘Run Importer’ option will appear. Upload your XML file and let the importer run. In my case it took about 5 minutes. A great thing about this import – it takes all of your settings – preferences, link post IDs, media, etc… This was in many ways much better than doing a restore on my normal hosting provider – I have my blog back up and running (minus some theme customization), but I have the entire power of Azure behind it! I get Azure Monitor – Azure Sentinel, App Insights, Log Analytics, and more!

    Next, it was time to redirect my old-and-busted blog to the new blog website. This is going to differ by hosting provider, but in my case it was fairly simple.

    What are the next steps? In my case, it is going to be adding a custom domain to my App service, so that I no longer have to rely on my old hosting provider, and use them solely as domain registrar. That will be in an upcoming post.

    In my case, something as horrible as a hack has led to a great outcome – hosting in Azure, really cheaply, with an absolute glut of new features at my disposal. This hack might have been the best thing to happen to this blog in a while. Who knows – maybe I will just continue to add new Azure features and see how it turns out.

    PoshAzurelab – starting azure machines in order

    aka – please be careful of asking me something

    Recently a friend sent me an email asking how to start a series of machines in Azure in a certain order.  These particular machines were all used for labs and demos – some of them needed to be started and running before others.  This is probably something that a lot of people can relate to; imagine needing to start domain controllers before bringing up SQL servers before bringing up ConfigMgr servers before starting demo client machines..  You get the idea.

    So, even though we got something for my friend up and running, I decided to go ahead and build a repo for this.  The idea right now is that this repo will  start machines in order, but I will expand it later to also stop them.  Eventually I will expand it to other resources that have a start/stop action.  Thanks a lot, Shaun.

    The basics of the current repo are 2 files – start-azurelab.ps1 and config.json.  Start-AzureLab performs the actual heavy lifting, but the important piece is really config.json.  Let’s examine the basic one:

    {
        "TenantId": "fffffff-5555-4444-fake-tenantid",
        "subscriptions": [
            {
                "subscription_name": "Pay-As-You-Go",
                "resource_groups": [
                    {
                        "resource_group_name": "AzureLabStartup001",
                        "data": {
                            "virtual_machines": [
                                {
                                    "vm_name": "Server001",
                                    "wait":false,
                                    "delay_after_start": "1"
                                },
                                {
                                    "vm_name": "Server002",
                                    "wait":false,
                                    "delay_after_start": "2"
                                }
                            ]
                        }
                    }
                ]
            }
        ]
    }

    The first thing you see is the ‘tenantid’.  You will want to replace this with your personal tenant id from Azure.  To find your tenant id, click help and show diagnostics:

    Next, you can see the subscription name.  If you are familiar with JSON, you can also see that you can enter multiple subscriptions – more on this later.  Enter your subscription name (not ID). 

    Next, you can see the Resource_Group_Name.  Again, you can have multiple resource groups, but in this simple example there is only one.  Put in your specific resource group name.  Now we get down to the meat of the config.

    "data": {
             "virtual_machines": [
              {
                    "vm_name": "Server001",
                    "wait":false,
                    "delay_after_start": "1"
               },
               {
                     "vm_name": "Server002",
                     "wait":false,
                     "delay_after_start": "2"
                }
         ]
    }

    This is where we put in the VM names.  Place them in the order you want them to start.  The two other properties have special meaning – “wait” and “delay_after_start”.  Let’s look at “wait”.

    When you start an Azure VM with Start-AzVM, there is a switch (-NoWait) that tells the cmdlet to either start the VM and keep checking until the machine is running, or start the VM and immediately return back to the terminal. If you set the “wait” property in this config to ‘false’, then when that VM is started the script will immediately return back to the terminal and process the next instruction. This is important when dealing with machines that take a bit to start – i.e. large Windows servers.

    Now – combine the “wait” property with the “delay_after_start” property, and you have the capability to really customize your lab start-ups. For example, maybe you have a domain controller, a DNS server, and a SQL server in your lab, plus a bunch of client machines. You want the DC to come up first, and probably want to wait 20-30 seconds after it’s running to make sure your DC services are all up and running. Same with the DNS and SQL boxes, but maybe you don’t need to wait so long after the box is running. The client machines, though – you don’t need to wait at all, and you might set the delay to 0 or 1. Just get them up and running and be done with it.
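
    To make that concrete, here is a minimal sketch of the core start-up loop – not the actual repo code, just the idea, assuming a connected Az session and the config format shown above:

    $config = Get-Content .\config.json -Raw | ConvertFrom-Json
    foreach ($sub in $config.subscriptions) {
        # Switch to the right subscription before touching its resource groups
        Set-AzContext -Subscription $sub.subscription_name | Out-Null
        foreach ($rg in $sub.resource_groups) {
            foreach ($vm in $rg.data.virtual_machines) {
                if ($vm.wait) {
                    # Block until Azure reports the VM as running
                    Start-AzVM -ResourceGroupName $rg.resource_group_name -Name $vm.vm_name
                }
                else {
                    # Fire and forget – control returns to the script immediately
                    Start-AzVM -ResourceGroupName $rg.resource_group_name -Name $vm.vm_name -NoWait
                }
                # Give the machine time to settle before starting the next one
                Start-Sleep -Seconds ([int]$vm.delay_after_start)
            }
        }
    }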

    So now we can completely control how our lab starts, but say you have multiple resource groups or multiple subscriptions to deal with. The JSON can be configured to allow for both! In the GitHub repo, there are 2 additional examples – config-multigroup.json and config-multisub.json. A fully baked JSON might look like this:

    {
        "TenantId": "ff744de5-59a0-4b5b-a181-54d5efbb088b",
        "subscriptions": [
            {
                "subscription_name": "Pay-As-You-Go",
                "resource_groups": [
                    {
                        "resource_group_name": "AzureLabStartup001",
                        "data": {
                            "virtual_machines": [
                                {
                                    "vm_name": "Server001",
                                    "wait":true,
                                    "delay_after_start": "1"
                                },
                                {
                                    "vm_name": "Server002",
                                    "wait":false,
                                    "delay_after_start": "2"
                                }
                            ]
                        }
                    },
                    {
                        "resource_group_name": "AzureLabStartup002",
                        "data": {
                            "virtual_machines": [
                                {
                                    "vm_name": "Server001",
                                    "wait":true,
                                    "delay_after_start": "1"
                                },
                                {
                                    "vm_name": "Server002",
                                    "wait":true,
                                    "delay_after_start": "2"
                                }
                            ]
                        }
                    }
                ]
            },
            {
                "subscription_name": "SubID2",
                "resource_groups": [
                    {
                        "resource_group_name": "AzureLabStartup001",
                        "data": {
                            "virtual_machines": [
                                {
                                    "vm_name": "Server001",
                                    "wait":true,
                                    "delay_after_start": "1"
                                },
                                {
                                    "vm_name": "Server002",
                                    "wait":true,
                                    "delay_after_start": "2"
                                }
                            ]
                        }
                    },
                    {
                        "resource_group_name": "AzureLabStartup002",
                        "data": {
                            "virtual_machines": [
                                {
                                    "vm_name": "Server001",
                                    "wait":true,
                                    "delay_after_start": "1"
                                },
                                {
                                    "vm_name": "Server002",
                                    "wait":true,
                                    "delay_after_start": "2"
                                }
                            ]
                        }
                    }
                ]
            }
        ]
    }

    So there you have it – hope you enjoy the repo, and keep checking back because I will be updating it to add more features and improve the actual code.  Hope this helps!