New PowerShell Module – PoshMTLogging

At the end of this post, I promised a simple logging module that can make use of a mutex and has some other neat features. Well, here it is!

This simple module writes to a log file and has a couple of unique features (there’s a quick usage sketch after the list):

– An optional ‘UseMutex’ switch that helps avoid resource contention, so multiple threads can write to the log at the same time
– Entry severity levels that let log readers like CMTrace color-code entries automatically
– Standard line entries with automatic timestamping
– Automatic log rolling at 5 MB
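Here is a quick usage sketch – the parameter names follow the Write-Log function covered in the next post, and the log path and message are just examples; check the module’s help for the exact names it exposes:

Import-Module PoshMTLogging
Write-Log -text 'Starting the nightly job' -level INFO -log 'C:\temp\app.log' -UseMutex $true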

So check it out! Let me know what you think, and feel free to fork it or send a pull request with any ideas!

PowerShell Logging – What about file locking?

Today I was working on one of my large-scale enterprise automation scripts when a common annoyance reared its ugly head. When dealing with multi-threaded applications – in this case runspaces – it’s common to have one runspace trying to access a resource another runspace is currently using, for example when writing some output data to a log file. If two or more runspaces attempt to access the same log at the same time, you will receive an error message. With log files this isn’t a complete stoppage, but if a critical resource is locked, your script could certainly fail.

So what is the easiest way to get around this? Enter the humble Mutex. There are several other posts that deal with Mutex locking, so I won’t go over the basics. Here I want to share some simple code that makes use of the mutex for writing to a log file.

The code below is a sample Write-Log function that takes four parameters. Three of them are mandatory – the text for the log entry, the name of the log file to write to, and the ‘level’ of the entry. Level is really only used to help with color coding and readability in something like CMTrace. The last of the four parameters is ‘UseMutex’, which tells our function whether or not to lock the resource being accessed.

<#
    .SYNOPSIS
        Simple function to write to a log file
    
    .DESCRIPTION
        This function will write to a log file, pre-pending the date/time, level of detail, and supplied information
    
    .PARAMETER text
        This is the main text to log
    
    .PARAMETER Level
        INFO,WARN,ERROR,DEBUG
    
    .PARAMETER Log
        Name of the log file to send the data to.
    
    .PARAMETER UseMutex
        Specifies whether to grab a mutex before writing, so multiple threads or processes can safely write to the same log.
    
    .EXAMPLE
        write-log -text "This is the main problem." -level ERROR -log c:\test.log
    
    .NOTES
        Created by Donnie Taylor.
        Version 1.0     Date 4/5/2016
#>
function Write-Log
{
    param
    (
        [Parameter(Mandatory = $true,
                   Position = 0)]
        [ValidateNotNull()]
        [string]$text,
        [Parameter(Mandatory = $true,
                   Position = 1)]
        [ValidateSet('INFO', 'WARN', 'ERROR', 'DEBUG')]
        [string]$level,
        [Parameter(Mandatory = $true,
                   Position = 2)]
        [string]$log,
        [Parameter(Position = 3)]
        [boolean]$UseMutex
    )
    
    Write-Verbose "Log:  $log"
    $date = (get-date).ToString()
    if (Test-Path $log)
    {
        if ((Get-Item $log).length -gt 5mb)
        {
            $filenamedate = get-date -Format 'MM-dd-yy hh.mm.ss'
            $archivelog = ($log + '.' + $filenamedate + '.archive').Replace('/', '-')
            copy-item $log -Destination $archivelog
            Remove-Item $log -force
            Write-Verbose "Rolled the log."
        }
    }
    $line = $date + '---' + $level + '---' + $text
    if ($UseMutex)
    {
        $LogMutex = New-Object System.Threading.Mutex($false, "LogMutex")
        $LogMutex.WaitOne()|out-null
        $line | out-file -FilePath $log -Append
        $LogMutex.ReleaseMutex()|out-null
    }
    else
    {
        $line | out-file -FilePath $log -Append
    }
}

To write to the log, you use the function like so:

write-log -log c:\temp\output.txt -level INFO -text "This is a log entry" -UseMutex $true

Let’s test this. First, let’s make a script that will fail due to the file being in use. For this example, I am going to use PoshRSJob, which is a personal favorite of mine. I have saved the above function as a module to make sure I can access it from inside the Runspace. Run this script:

import-module C:\blog\PoshRSJob\PoshRSJob.psd1
$I = 1..3000
$I|start-rsjob -Throttle 250 -ScriptBlock{
    param($param)
    import-module C:\blog\write-log.psm1
    Write-Log -log c:\temp\mutex.txt  -level INFO -text $param
}|out-null 

Assuming you saved your files in the same locations as mine and this runs, you should see something like this when you do a get-rsjob|receive-rsjob:

So what exactly happened here? Well, we told 3000 runspaces to write their number ($param) to the log file… 250 at a time (the throttle). That’s obviously going to cause some contention. In fact, if we examine the output file (c:\temp\mutex.txt) and count the actual lines written to it, we will have missed a TON of log entries. On my PC, out of the 3000 entries that should have been written, we ended up with only 2813. Missing that many log entries is totally unacceptable. I’ve exaggerated the issue by using these large numbers, but it happens all the time with smaller sets as well. To fix this, we are going to run the same bit of code, but this time use the ‘UseMutex’ option in the Write-Log function. This tells each runspace to grab the mutex and attempt to write to the log; if it can’t grab the mutex, it will wait until it can (in this case forever – $LogMutex.WaitOne()|out-null). Run this code:

import-module C:\blog\PoshRSJob\PoshRSJob.psd1
$I = 1..3000
$I|start-rsjob -Throttle 250 -ScriptBlock{
    param($param)
    import-module C:\blog\write-log.psm1
    Write-Log -log c:\temp\mutex.txt  -level INFO -text $param -UseMutex $true
}|out-null 

See the ‘-UseMutex’ parameter? That should fix our problem. A get-rsjob|receive-rsjob now returns this:

Success! If we examine our output file, we find that all 3000 lines have been written. Using our new Write-Log function with a mutex, we have solved our locking problem. Coming soon, I will publish the actual code on GitHub – stay tuned!
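If you want to double-check the counts from either run yourself, a quick line count against the output file does the trick (the path matches the examples above):

(Get-Content 'C:\temp\mutex.txt').Count

Without the mutex you should come up short; with it, you should see exactly 3000.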

Custom Log Analytics logging with Logic Apps!

Here is a quick demo on sending data from a simple API call to Log Analytics using Logic Apps. It used to be quite a pain to get data into Log Analytics – calling the API directly, or sending it via something like an Azure Function, an Azure Automation runbook, a PowerShell script, etc. Now you can do it in about 3 minutes with no code!

First – some assumptions:
1: You have a Log Analytics workspace already set up in Azure. See this article if you need help with that.
2: Actually, there is no 2. Just make sure you have a Log Analytics workspace.

Create a Logic Apps….App. That naming convention seems wrong.

It will take a few seconds for the app to be created. When it is, enter the designer. In this example, we are going to retrieve a simple piece of data via a web API – a current stock quote from IEXTrading.com. For those who don’t know, IEXTrading has an AMAZING web API for pulling stock data. Seriously – check it out: https://iextrading.com/developer/docs/

In this example we will set up a simple 15-minute timer, pull the data from IEXTrading, take the JSON payload from the API call, and send it to Log Analytics. It’s actually really easy.
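If you want a peek at the JSON payload before wiring up the Logic App, you can hit the same endpoint from PowerShell – the URL follows the endpoint format IEXTrading documented at the time, so adjust it for whatever symbol you like:

# Preview the quote payload that the Logic App will forward to Log Analytics
Invoke-RestMethod -Uri 'https://api.iextrading.com/1.0/stock/msft/quote'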

If you haven’t set up a Log Analytics connection in Logic Apps, then there are a couple of pieces of information from Log Analytics you are going to need. Go into your Log Analytics workspace, click on the ‘Advanced Settings’ section, and copy down the “Workspace ID” and either the “Primary” or “Secondary” key. Enter those into the connection information for the Logic Apps connector. I’ve shown you mine here:

Now – just let it run! It might fail for the first couple of runs – I believe this has something to do with the creation of the custom log in Log Analytics. After a (short) period of time, you can query for your custom log in Log Analytics. The one thing you should know is that the log name you specified in Logic Apps will be appended with “_CL”. That stands for Custom Log, and it will show up whether you want it to or not. You can search for your log like this:

search *
| where Type == "Stock_Prices_CL"

Logic Apps or Flow – String to Array

Ran across an interesting problem today – how to take a string in Flow or Logic Apps and create a data array from it. In this particular instance, the data was being emailed to an Outlook.com inbox, and we wanted to parse it and only work with 3 pieces of the string. It sounds easy, and actually is, but it takes a small bit of knowledge ahead of time. Hopefully this helps you avoid some web searches.

So let’s say we have some data formatted like this:

["f8c8e13b-48be-4a91-818e-c6fasdfd001","OM.DEMO","Processor","% Processor Time","1",45.9060745239258,"2018-07-17T19:46:55.577Z","OpsManager12","\\\\OM.DEMO\\Processor(1)\\% Processor Time","d88f095c-8bf8-4e88-d827-663fe6asdf01",null,null,null,null,null,null,"Perf"]

This data is a sample performance metric – something you might receive from Log Analytics, for example. It is obviously meant to be an array, but right now it’s just a long string as far as Flow or Logic Apps is concerned. So, let’s get it into a usable array and access only the bits we want.

Here is the email:

Now, let’s create a trigger and add a “Compose” action. There is literally no configuration needed for the “Compose” action.

Next, initialize a variable and set its type to Array. Set the value to the output of the “Compose” action.

From here, you can reference the individual items of the array in a very straightforward way. You simply reference the index of the item you want, like this: variables(‘StringArray’)[1] (this returns the second item in the array, since array numbering starts at 0). In this example, I pull out three pieces of data and (for no particular reason) email them back to myself.
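If it helps to see the same idea outside of Logic Apps, here is the equivalent in PowerShell (purely illustrative – the string is shortened from the sample above):

$raw = '["f8c8e13b-48be-4a91-818e-c6fasdfd001","OM.DEMO","Processor","% Processor Time","1",45.9]'
$array = $raw | ConvertFrom-Json
$array[1]    # returns OM.DEMO - indexing starts at 0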

And the email:

PowerShell – Get is optional

Here is something I learned a while back from Mr. Snover himself, and it was something I just couldn’t believe. Sure enough it’s true, and it’s still true in PowerShell Core 6. The “Get-” part of almost all “Get-” commands is completely optional. Yeah – you heard that right. “Get” is optional for almost all of them. Get-Process is one exception that doesn’t run correctly, mainly because “Process” expects some arguments. Otherwise, give it a try!

PS  C:\Blog (9:24:23 PM) > timezone


Id                         : Central Standard Time
DisplayName                : (UTC-06:00) Central Time (US & Canada)
StandardName               : Central Standard Time
DaylightName               : Central Daylight Time
BaseUtcOffset              : -06:00:00
SupportsDaylightSavingTime : True
PS  C:\Blog (9:24:27 PM) > random
1240576702
PS  C:\Blog (9:25:18 PM) > computerinfo


WindowsBuildLabEx                                       : 16299.15.amd64fre.rs3_release.170928-1534
WindowsCurrentVersion                                   : 6.3
WindowsEditionId                                        : EnterpriseN
WindowsInstallationType                                 : Client
WindowsInstallDateFromRegistry                          : 12/7/2017 10:29:37 AM
WindowsProductId                                        : 00330-00180-00000-AA567
WindowsProductName                                      : Windows 10 Enterprise N
WindowsRegisteredOrganization                           :
WindowsRegisteredOwner                                  : Draith
WindowsSystemRoot                                       : C:\WINDOWS
WindowsVersion                                          : 1709
WindowsUBR                                              : 192
BiosCharacteristics                                     : {7, 11, 12, 15...}
BiosBIOSVersion                                         : {ALASKA - 1072009, V1.12, American Megatrends - 4028F}
BiosBuildNumber                                         :
BiosCaption                                             : V1.12
BiosCodeSet                                             :
BiosCurrentLanguage                                     : en|US|iso8859-1
BiosDescription                                         : V1.12
BiosEmbeddedControllerMajorVersion                      : 255
BiosEmbeddedControllerMinorVersion                      : 255
BiosFirmwareType                                        : Uefi
BiosIdentificationCode                                  :
BiosInstallableLanguages                                : 1
BiosInstallDate                                         :
BiosLanguageEdition                                     :
BiosListOfLanguages                                     : {en|US|iso8859-1}
BiosManufacturer                                        : American Megatrends Inc.
BiosName                                                : V1.12
BiosOtherTargetOS                                       :
BiosPrimaryBIOS                                         : True
BiosReleaseDate                                         : 8/10/2015 7:00:00 PM
BiosSerialNumber                                        : To be filled by O.E.M.
BiosSMBIOSBIOSVersion                                   : V1.12
BiosSMBIOSMajorVersion                                  : 2
BiosSMBIOSMinorVersion                                  : 8
BiosSMBIOSPresent                                       : True
BiosSoftwareElementState                                : Running
BiosStatus                                              : OK
BiosSystemBiosMajorVersion                              : 4
BiosSystemBiosMinorVersion                              : 6
BiosTargetOperatingSystem                               : 0
BiosVersion                                             : ALASKA - 1072009
CsAdminPasswordStatus                                   : Disabled
CsAutomaticManagedPagefile                              : True
CsAutomaticResetBootOption                              : True
CsAutomaticResetCapability                              : True
CsBootOptionOnLimit                                     :
CsBootOptionOnWatchDog                                  :
CsBootROMSupported                                      : True
CsBootStatus                                            : {0, 0, 0, 0...}
CsBootupState                                           : Normal boot
CsCaption                                               : DESKTOP-KL1CDTP
CsChassisBootupState                                    : Safe
CsChassisSKUNumber                                      : To be filled by O.E.M.
CsCurrentTimeZone                                       : -360
CsDaylightInEffect                                      : False
CsDescription                                           : AT/AT COMPATIBLE
CsDNSHostName                                           : <>
CsDomain                                                : WORKGROUP
CsDomainRole                                            : StandaloneWorkstation
CsEnableDaylightSavingsTime                             : True
CsFrontPanelResetStatus                                 : Disabled
CsHypervisorPresent                                     : True
CsInfraredSupported                                     : False
CsInitialLoadInfo                                       :
CsInstallDate                                           :
CsKeyboardPasswordStatus                                : Disabled
CsLastLoadInfo                                          :
CsManufacturer                                          : MSI
CsModel                                                 : MS-7917
CsName                                                  : <>
CsNetworkAdapters                                       : {Ethernet 2, Ethernet, vEthernet (Default Switch), vEthernet
                                                          (ExternalSwitch)...}
CsNetworkServerModeEnabled                              : True
CsNumberOfLogicalProcessors                             : 8
CsNumberOfProcessors                                    : 1
CsProcessors                                            : {Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz}
CsOEMStringArray                                        : {To Be Filled By O.E.M.}
CsPartOfDomain                                          : False
CsPauseAfterReset                                       : -1
CsPCSystemType                                          : Desktop
CsPCSystemTypeEx                                        : Desktop
CsPowerManagementCapabilities                           :
CsPowerManagementSupported                              :
CsPowerOnPasswordStatus                                 : Disabled
CsPowerState                                            : Unknown
CsPowerSupplyState                                      : Safe
CsPrimaryOwnerContact                                   :
CsPrimaryOwnerName                                      : Draith
CsResetCapability                                       : Other
CsResetCount                                            : -1
CsResetLimit                                            : -1
CsRoles                                                 : {LM_Workstation, LM_Server, NT}
CsStatus                                                : OK
CsSupportContactDescription                             :
CsSystemFamily                                          : To be filled by O.E.M.
CsSystemSKUNumber                                       : To be filled by O.E.M.
CsSystemType                                            : x64-based PC
CsThermalState                                          : Safe
CsTotalPhysicalMemory                                   : 34305863680
CsPhysicallyInstalledMemory                             : 33554432
CsUserName                                              : <>\Draith
CsWakeUpType                                            : PowerSwitch
CsWorkgroup                                             : WORKGROUP
OsName                                                  : Microsoft Windows 10 Enterprise N
OsType                                                  : WINNT
OsOperatingSystemSKU                                    : WindowsEnterprise
OsVersion                                               : 10.0.16299
OsCSDVersion                                            :
OsBuildNumber                                           : 16299
OsHotFixes                                              : {KB4048951, KB4053577, KB4054022, KB4055237...}
OsBootDevice                                            : \Device\HarddiskVolume2
OsSystemDevice                                          : \Device\HarddiskVolume4
OsSystemDirectory                                       : C:\WINDOWS\system32
OsSystemDrive                                           : C:
OsWindowsDirectory                                      : C:\WINDOWS
OsCountryCode                                           : 1
OsCurrentTimeZone                                       : -360
OsLocaleID                                              : 0409
OsLocale                                                : en-US
OsLocalDateTime                                         : 1/24/2018 9:25:52 PM
OsLastBootUpTime                                        : 1/20/2018 4:14:27 PM
OsUptime                                                : 4.05:11:25.3101843
OsBuildType                                             : Multiprocessor Free
OsCodeSet                                               : 1252
OsDataExecutionPreventionAvailable                      : True
OsDataExecutionPrevention32BitApplications              : True
OsDataExecutionPreventionDrivers                        : True
OsDataExecutionPreventionSupportPolicy                  : OptIn
OsDebug                                                 : False
OsDistributed                                           : False
OsEncryptionLevel                                       : 256
OsForegroundApplicationBoost                            : Maximum
OsTotalVisibleMemorySize                                : 33501820
OsFreePhysicalMemory                                    : 22288576
OsTotalVirtualMemorySize                                : 38482556
OsFreeVirtualMemory                                     : 25010532
OsInUseVirtualMemory                                    : 13472024
OsTotalSwapSpaceSize                                    :
OsSizeStoredInPagingFiles                               : 4980736
OsFreeSpaceInPagingFiles                                : 4980736
OsPagingFiles                                           : {C:\pagefile.sys}
OsHardwareAbstractionLayer                              : 10.0.16299.192
OsInstallDate                                           : 12/7/2017 4:29:37 AM
OsManufacturer                                          : Microsoft Corporation
OsMaxNumberOfProcesses                                  : 4294967295
OsMaxProcessMemorySize                                  : 137438953344
OsMuiLanguages                                          : {en-US}
OsNumberOfLicensedUsers                                 :
OsNumberOfProcesses                                     : 194
OsNumberOfUsers                                         : 2
OsOrganization                                          :
OsArchitecture                                          : 64-bit
OsLanguage                                              : en-US
OsProductSuites                                         : {TerminalServicesSingleSession}
OsOtherTypeDescription                                  :
OsPAEEnabled                                            :
OsPortableOperatingSystem                               : False
OsPrimary                                               : True
OsProductType                                           : WorkStation
OsRegisteredUser                                        : Draith
OsSerialNumber                                          : 00330-00180-00000-AA567
OsServicePackMajorVersion                               : 0
OsServicePackMinorVersion                               : 0
OsStatus                                                : OK
OsSuites                                                : {TerminalServices, TerminalServicesSingleSession}
OsServerLevel                                           :
KeyboardLayout                                          : en-US
TimeZone                                                : (UTC-06:00) Central Time (US & Canada)
LogonServer                                             : \\<>
PowerPlatformRole                                       : Desktop
HyperVisorPresent                                       : True
HyperVRequirementDataExecutionPreventionAvailable       :
HyperVRequirementSecondLevelAddressTranslation          :
HyperVRequirementVirtualizationFirmwareEnabled          :
HyperVRequirementVMMonitorModeExtensions                :
DeviceGuardSmartStatus                                  : Running
DeviceGuardRequiredSecurityProperties                   : {0}
DeviceGuardAvailableSecurityProperties                  : {BaseVirtualizationSupport, DMAProtection}
DeviceGuardSecurityServicesConfigured                   : {0}
DeviceGuardSecurityServicesRunning                      : {0}
DeviceGuardCodeIntegrityPolicyEnforcementStatus         : Off
DeviceGuardUserModeCodeIntegrityPolicyEnforcementStatus : Off

500 Error when setting up Windows Azure Pack

I am putting this out there so the next person doesn’t have to spend the DAYS I wasted trying to fix this. Here is the story:

You want to install SMA on your brand new Windows Server 2016 box, but you obviously need Windows Azure Pack first. You grab the web installer, fire it up, verify that you have the right pre-reqs, and pick the Windows Azure Pack Express and Admin API selection. The install goes fine, and a web page launches so you can configure Azure Pack. You enter your database info, your user and passphrase info, and ‘next’ your way to the end. You press that last checkmark and expect all of the little circles to come back green… but then you see that the Admin Authentication Site has come back with an error:

“500 Internal Server Error – Failed to configure databases and services: Some or all identity references could not be translated.”

Long story short – go to your inetpub directory and pull up the properties on the folder named “MgmtSvc-WindowsAuthSite”. Un-check the Read-Only box at the bottom, apply, and when prompted tell it to apply to all sub-items.
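If you would rather do that from PowerShell than the folder properties dialog, something like this clears the flag on every file under the site (the path assumes the default inetpub location):

# Clear the read-only attribute on everything under the WindowsAuthSite folder
Get-ChildItem 'C:\inetpub\MgmtSvc-WindowsAuthSite' -Recurse -File -Force |
    ForEach-Object { $_.IsReadOnly = $false }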

From what I can tell, the connection strings and users you entered during the config are written into the web.config for this site. My guess is that the setup program attempts to decrypt the web.config and create additional files (an unencrypted version of the web.config?) but was failing due to the read-only attribute. I can’t verify that (I refuse to go through that setup program again), but I do know that unchecking that box finally let the config complete successfully, and I was able to launch the Service Management Portal and get my SMA web service registered.

The literal days I wasted on this – I hope none of you have to go through that.

Super-fast mass update of management servers for OpsMgr

Here’s a quick one – you want to update the failover management servers on your agents en masse, and you don’t want to wait 12 years for it to complete. Why would you want to set this? Maybe you only want certain agents talking to certain data centers, or specific management servers have very limited resources. Regardless of the reason, if you do need to update the agent config, it can be a bit slow. Here is a quick little script that can make those updates a LOT quicker.

First things first – download PoshRSJob from Boe Prox. It’s about the best thing since sliced bread, and I use it constantly. Download the module and place it in one of your module directories (C:\Windows\System32\WindowsPowerShell\v1.0\Modules, for example). Next, create a CSV called FailoverPairs.csv. It should have 2 columns – Primary and Failover. For example:

Primary,Failover
MS01,MS02
MS03,MS04
MS05,MS06

You will want that header line – mainly because it saves us a couple of lines of code in PowerShell. Next, save that CSV in the same directory as the script below – it will be used to look up the appropriate failover partner for each agent. Save the script in that same directory, and you are good to go! Here is the script:

Import-Module PoshRSJob -Force
Import-Module OperationsManager -Force
# Capture the paths of the currently loaded modules (minus PoshRSJob, the ISE, etc.) so each runspace can import them
$modules = (Get-Module | Where-Object{ $_.Name -notlike 'Microsoft.*' -and $_.Name -ne 'PoshRSJob' -and $_.Name -ne 'ISE' }).path
try
{
    $agents = Get-SCOMAgent
}
catch
{
    write-verbose "Cannot load agent list"
}
# The CSV already has a Primary,Failover header row, so Import-Csv needs no -Header parameter
$Pairs = Import-Csv -Path "$PSScriptRoot\FailoverPairs.csv"
$agents|Start-RSJob -Name { $_.DisplayName } -Throttle 20 -ModulesToImport $modules -ScriptBlock {
    param($agent)
    $Pairs = $using:Pairs
    $primary = $agent.PrimaryManagementServerName
    $CurrentFailover = ($agent.GetFailoverManagementServers().DisplayName)
    foreach ($Pair in $Pairs)
    {
        if ($Pair.Primary -eq $primary){$secondary = $Pair.Failover}
        if ($Pair.Failover -eq $primary){$secondary = $Pair.primary}
    }
    if ($secondary -ne $CurrentFailover)
    {
        $AgentName = $agent.DisplayName
        write-verbose "$AgentName Secondary wrong. Primary $Primary, Current Secondary $CurrentFailover, Discovered Secondary $secondary"
        try
        {
            # Gateways have to be retrieved as gateway objects before they can be set as a failover server
            $Failover = Get-SCOMManagementServer | Where-Object {$_.Name -eq $secondary}
            if ($Failover.IsGateway -eq $true)
            {
                $FailOverServerObject = Get-SCOMGatewayManagementServer | Where-Object {$_.Name -eq $secondary}
            }
            else 
            {
                $FailOverServerObject = $Failover
            }
            Set-SCOMParentManagementServer -Agent $agent -FailoverServer $FailOverServerObject
            write-verbose "$AgentName $secondary set."
        }
        catch
        {
            $ErrorText = $error[0]
            write-verbose "$AgentName Failed to set failover. Current Failover $CurrentFailover, Discovered Failover $secondary.$ErrorText"
        }
    }
}|Out-Null
get-rsjob|Wait-RSJob|Remove-RSJob -force|Out-Null

Let’s examine some of this – the imports are obvious. If you have any issues with unblocking files or execution policy, leave a comment and I will help you through the import. The next line is different:

$modules = (Get-Module | Where-Object{ $_.Name -notlike 'Microsoft.*' -and $_.Name -ne 'PoshRSJob' -and $_.Name -ne 'ISE' }).path

What we are doing here is getting a list of the loaded modules and then excluding some of them. We do this because when we run this script we are creating a ton of runspaces, and by default each runspace needs to know which modules to load. We don’t need them to load PoshRSJob, and we don’t need things like the ISE, because the runspaces are ephemeral – they go away after they have completed their processing. This line can be modified if you need to include or exclude other modules. It will pick up the OperationsManager module, which is the heavy lifter of this script.

Next, we get all of the agents from the management group, and then import the CSV that contains our failover pairs. This script needs to be run from a SCOM server, but you could easily modify it to run from a non-SCOM system by adding the ‘-ComputerName’ parameter to the Get-SCOMAgent command – something like this:
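The server name here is made up – point it at one of your own management servers:

# Connect to a specific management server instead of assuming a local SDK connection
$agents = Get-SCOMAgent -ComputerName 'SCOM-MS01.contoso.local'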

Now the fun starts – this line starts the magic:

$agents|Start-RSJob -Name { $_.DisplayName } -Throttle 20 -ModulesToImport $modules -ScriptBlock {

We are feeding the list of SCOM agents (via the pipeline) to the Start-RSJob cmdlet. The “-Name” parameter tells the runspaces to use the agent name as the job name, and the “-Throttle” parameter controls how many runspaces we want running at once. I typically find there isn’t a lot of benefit to going much over 2 or 3 times the number of logical cores. If you have very long-running remote processes it might be beneficial to go up to 5-10 times the number of cores, but for this I found 2-3 to be the sweet spot. You will also see that we are telling Start-RSJob which modules to import (see above).

The rest of the script is the scriptblock we want PoshRSJob to run. This is actually pretty straightforward – we set some variables (some of which we have to pull in with “$using:”), find the current primary and failover servers, see if they match our pairs, and correct them if they don’t. This isn’t a fast process, but if you are doing 20 of them at a time, it goes by a lot faster!

At the end of the script, we are simply waiting for the jobs to finish. In fact, if you want to track the progress, comment out this line:

get-rsjob|Wait-RSJob|Remove-RSJob -force|Out-Null

If you comment that line out, you can track how fast your jobs are completing by using this:

get-rsjob|group -property state

We’ve been able to check several thousand systems daily in very little time to make sure our primary and failover pairs are set correctly. I hope you guys get some use from this, and go give Boe some love for his awesome module! Leave a comment if you have any questions, or hit me up on Twitter.

PowerShell Core – Get it now!!

PowerShell Core 6.0 was released yesterday! This is a huge step forward for PowerShell as it becomes an even stronger cross-platform powerhouse. In addition to the public announcement, Jeffrey Snover and the PowerShell team did an AMA!
Here’s a quick primer:
Get it here.

Windows PowerShell and PowerShell Core are different products, and no, Windows PowerShell isn’t going away! There is a caveat, though – no new features for Windows PowerShell. Core is the future!
Windows PowerShell is built on the full .NET Framework, whereas Core is built on .NET Core. In order to work on .NET Core and consequently be cross-platform, some things are either gone or being deprecated. Workflows and the WMI cmdlets are a couple of features that will not work with Core.

Check out this OS List!:
Windows 7, 8.1, and 10
Windows Server 2008 R2, 2012 R2, 2016
Windows Server Semi-Annual Channel
Ubuntu 14.04, 16.04, and 17.04
Debian 8.7+, and 9
CentOS 7
Red Hat Enterprise Linux 7
OpenSUSE 42.2
Fedora 25, 26
macOS 10.12+
Arch Linux*
Kali Linux*
AppImage*
Windows on ARM32/ARM64**
Raspbian (Stretch)**
* = Community Support Only; ** = Experimental

What does this new PowerShell look like? I grabbed the MSI (I will do posts on Linux and macOS later!) and took the defaults:

Super easy. Now you can launch a console window from the Start menu, but if you browse to your install directory from a command line, you get your first surprise!

That’s right – no powershell.exe! PowerShell.exe has been renamed to pwsh.exe. Start it and check out your version with $PSVersionTable:
Name                           Value
----                           -----
PSVersion                      6.0.0
PSEdition                      Core
GitCommitId                    v6.0.0
OS                             Microsoft Windows 10.0.16299
Platform                       Win32NT
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
PSRemotingProtocolVersion      2.3
SerializationVersion           1.1.0.1
WSManStackVersion              3.0

If you have read my blog before, you know I like my customized prompt and profile. Core has a profile just like Windows PowerShell – on a typical Windows machine it lives at \Documents\PowerShell\Microsoft.PowerShell_profile.ps1. Time to make Core feel like home again:
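Here is the sort of thing that goes in mine – a minimal sketch of a custom prompt similar to the one you can see in the console output earlier in this post (my real profile has a bit more in it):

# Minimal custom prompt: current path plus a timestamp
function prompt
{
    "PS  $(Get-Location) ($(Get-Date -Format 'h:mm:ss tt')) > "
}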

Yep – that’s better. There is a ton of fun stuff to do now – cross-platform remote scripting is going to be the focus of some upcoming posts. Stay tuned!

Need some Demo boxes? Azure Burstable VMs

Back in September, Microsoft announced the new B-series of Azure virtual machines. In a nutshell, these are cheap VMs that are great for workloads that normally run at little to no CPU utilization but occasionally have “burst” workloads that consume more CPU. While the VM is running idle, it accumulates a bank of credits. When the VM really needs to use the CPUs, credits are consumed. Here is a quick glance at the sizes:

While getting my demos ready for an upcoming presentation at ExpertsLiveUS, I had a thought: why on Earth am I concerning myself with the CPU/RAM on my demo laptop when I can have a fully set-up demo environment running in Azure? The B-series lets me run at very little cost, since the machines sit idle most of the time. When I am actually preparing or delivering the demo, I get all the CPU I need! Combine that with auto-shutdown, and my costs are amazingly low. Setting up is really easy – simply start to create your VM as normal, selecting one of the following B-series sizes:

Once set up, your VM will start to either bank or consume CPU credits. You can see these credits in the Azure portal and watch how your machines are trending. First, let’s see how a typical demo environment might look – here is a snapshot of a ConfigMgr server with about 6 clients (and remote SQL). As you can see, the CPU is normally low for a demo environment:

See that spike in CPU consumption at the end of the graph? I started an update cycle, which is obviously going to consume some CPU as the updates are downloaded, processed, and made ready for deployment/installation. Let’s find out how my burstable credits are stacking up. To see how your machine is banking, click on “Metrics” under the “Monitoring” section.
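If you prefer pulling the numbers with PowerShell instead of the portal charts, something along these lines should work – a sketch that assumes the AzureRM modules (check the parameter names for your module version) and uses a made-up resource group and VM name:

# Pull the burstable credit metric for a VM
$vm = Get-AzureRmVM -ResourceGroupName 'DemoRG' -Name 'CM01'
Get-AzureRmMetric -ResourceId $vm.Id -MetricName 'CPU Credits Remaining'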

There are two metrics we are concerned with – [Host] CPU Credits Consumed and [Host] CPU Credits Remaining. These show how our credits are banking (or being consumed). Here is a snapshot of that same ConfigMgr box consuming those credits:

And the remaining credits:

To give you an idea – this is what the accumulation of those credits looked like:

It doesn’t take long to max out your credit bank, and you can keep those credits in your back pocket for when you really need them – doing presentations or prepping for them. Of course you can use the B-series for normal workloads, but these boxes make an excellent demo environment. No more worrying about whether my laptop can handle the workload!

MVP

I got a nice surprise this Sunday: I have been awarded MVP status in Cloud and Datacenter Management!! I am beyond happy – this is something I have worked towards for a while, and it feels great! I am humbled to bear the same moniker as some of the legends in our field… I don’t feel like I deserve to be in the same room as some of them, but I will strive to do my best and continue to contribute to the community as best I can.

I was asked what my journey to MVP was like – my experiences, the steps I took, and what I wanted to get out of it. Here’s how it all played out.

It started a couple of years ago – I had (and still have) several MVP friends that I always looked up to. They are pillars of the tech community who willingly share their hard-fought knowledge, so I naturally wanted to join their ranks. I was already on the board of the Central Texas Systems Management User Group, so I was involved in the community to a certain extent, but I knew that I needed to up my game if I was going to qualify.

I started speaking more at user groups – CTSMUG, Austin PowerShell, DFWSMUG, etc. In a previous job at Dell I had engaged in a lot of customer briefings (200+ in one year!) and had spoken at old-school MMS several times, so I was comfortable speaking to large groups. Speaking at the user groups really helped me adjust the tone and detail, though. I found out what the users wanted to hear, how much detail to put in my presentations, and how to make sure that every presentation ended with the users walking away with actionable knowledge. I went from speaking once a year to speaking to groups once a month or more. The more I spoke to these groups, the more I wanted to do it again.

At the same time I began to blog more – I always had a WordPress blog, but I was horrible about putting fingers-to-keyboard and actually writing articles. Just as with the speaking engagements, I found that the more I wrote, the more I wanted to write. I would be in the middle of writing an article when suddenly an idea for _another_ article would pop into my head. OneNote became my best friend – handy place to store those ideas/drafts and access them anywhere! I still don’t write as much as I want, but it’s definitely a lot more than it used to be!

MMSMOA. I cannot say enough about this conference. I submitted four session ideas, two of which were accepted. Both of my co-presenters were (still are) MVPs, and it was a wonderful experience. The feel was much closer to old-school MMS – the sessions are small and extremely technical. Managing to get through this conference, and getting excellent speaker evaluations back, is an extreme ego boost. It was at this conference when I heard, for the first time – “Wait, you aren’t an MVP?”. I knew at that point I needed to find out more.

I sent out some emails – to MVPs who would give me solid feedback. The email was simple: “If you think I could be an MVP, please nominate me. If not, let me know what I need to improve!” I specifically sent it to people I knew would smack me down if I deserved it. This was extremely valuable – I got honest responses back (along with some good-natured jabs) that helped me improve my blog posts and speaking engagements, and showed me how to tune my message to a specific area of expertise. In addition, I got 3 nominations. I was elated!

The process from there on was straightforward – I was asked to register on the MVP site and then record my community activities. Keeping a list of those as I did them would have been a life-saver here, so if you are interested in pursuing an MVP award, keep track of what you are doing! Specifically: date, location, type of activity (blog post, speaking at a user group, speaking at a conference, etc.), a description of the activity, and the reach (number of user group members present, page views, etc.). I did this at the start of June, and on October 1st I got the email that I had been waiting for!