Thursday, June 16, 2016

Send OMS Search Results to Azure Automation: The Easy Way

A few weeks ago the Operations Management Suite (OMS) product team announced that you can include search results in webhook payloads. Article here. This is really useful if you are into automation, and specifically Azure Automation. In my opinion, it is now much easier to get pertinent data from OMS to Azure Automation when you include search results in the OMS Alert.

First let’s look at a simple query. We'll use the default query for finding when users were added to Domain Admins, which is already stored in OMS.

Type=SecurityEvent EventID=4728 OR EventID=4732 OR EventID=4756

As you can see, there is a fair amount of information contained in this alert. Let’s add a webhook that calls an Azure Automation runbook to the Alert I already created in OMS, so we can see what gets sent to Azure Automation.

Here is the PowerShell in my Azure Automation runbook.

In the first three lines we're passing the parameter $WebhookData into the runbook. This parameter is automatically populated when a webhook triggers the runbook, and it can be null (or omitted entirely) if you just want to call the runbook without passing it any data. I have set the runbook to run on my Hybrid Worker and drop the variables into text files on the local disk.

param (
    [object]$WebhookData
)

if ($WebhookData -ne $null) {
    # Collect properties of WebhookData.
    $WebhookName    = $WebhookData.WebhookName
    $WebhookHeaders = $WebhookData.RequestHeader
    $WebhookBody    = $WebhookData.RequestBody
    # Dump each property to a text file on the Hybrid Worker.
    $WebhookName    | Out-File c:\temp\WebhookName.txt
    $WebhookHeaders | Out-File c:\temp\WebhookHeaders.txt
    $WebhookBody    | Out-File c:\temp\WebhookBody.txt
}

This is what I get when I open the WebhookBody text file.

There is a lot of information in there, but when you try to parse it and assign pertinent pieces to PowerShell variables, you get empty results. To format it so that we can assign data to variables, we need to do two things: modify the alert in OMS, and add a couple of lines of PowerShell to the runbook.

First, changing the Alert: we'll check "Include custom JSON payload" and add "IncludeSearchResults": true to the payload.


Save the alert, and then we add the PowerShell.

Second, we'll add the following to the runbook:

$SearchResults = (ConvertFrom-JSON $WebhookBody).SearchResults
$SearchResultsValue = $SearchResults.value

I added some more Out-File commands so we can look at the data.

Now I'll generate the alert again and see what we get.

When I open SearchResultsValue.txt, this is what I get.

This is nicely formatted and easy to understand. More importantly, I can now assign any one of the search result items on the left-hand side to a PowerShell variable.

Let’s say in this example we want to get the MemberName and TargetAccount. These would be logical choices given the context of the alert. All we need to do now is add a ForEach loop and assign the variables.

Foreach ($item in $SearchResultsValue) {
    $MemberName = $item.MemberName
    $TargetAccount = $item.TargetAccount
}


The complete runbook now looks like this: 
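As a sketch, pulling the earlier snippets together (file paths and property names follow the examples above; treat it as illustrative, not production-ready):

```powershell
param (
    [object]$WebhookData
)

if ($WebhookData -ne $null) {
    # Raw JSON body delivered by the OMS webhook
    $WebhookBody = $WebhookData.RequestBody

    # Convert the JSON and drill down to the search result rows
    $SearchResults      = (ConvertFrom-Json $WebhookBody).SearchResults
    $SearchResultsValue = $SearchResults.value

    # Assign the fields we care about and dump them for inspection
    Foreach ($item in $SearchResultsValue) {
        $MemberName    = $item.MemberName
        $TargetAccount = $item.TargetAccount
        $MemberName    | Out-File c:\temp\MemberName.txt
        $TargetAccount | Out-File c:\temp\TargetAccount.txt
    }
}
```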

I'll trigger the alert again and we'll see the results.

From here it would be relatively easy to do some alert remediation in our on-prem Active Directory, now that we have the search result data available in PowerShell variables.

Friday, June 10, 2016

Set Up an On-Prem Run As Account for Azure Automation Hybrid Worker

When using a Hybrid Worker, by default Azure Automation runs in the machine context. This is fine until you start trying to do things like make changes in Active Directory or other services that require a login other than the machine account. This is fairly easy to set up, and assumes that you've already created a Credential Asset in Azure Automation; if you have not, there are plenty of resources out there, and it's very straightforward.

In the Azure Automation portal select Hybrid Worker Groups. Then select the Hybrid Worker you want to add the credential to.

Then select Hybrid Workers under Details, then Hybrid worker group settings. And finally select Custom and select your Credentials.

Now any runbooks that you select to run on your Hybrid Worker will run as that credential. Event 5532 in Event Viewer -> Microsoft-SMA -> Operational will show you which account a runbook ran as.
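If you want to check this from PowerShell on the Hybrid Worker rather than through the Event Viewer UI, a sketch (the log name follows the path above):

```powershell
# List recent SMA runbook events (ID 5532) and the account they ran under
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-SMA/Operational'; Id = 5532 } -MaxEvents 10 |
    Select-Object TimeCreated, Message
```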

Tuesday, June 7, 2016

Operations Management Suite (OMS) Log Search is Case Sensitive

This is more of a note-to-self blog post, but Operations Management Suite (OMS) log search is case sensitive. If you are new to OMS, like I am, this may save you some time.


If you type "Type=perf" into log search, you get the following result: Unknown type 'perf'
You actually get a different error with "type=Perf".
The correct way, with each word capitalized, is "Type=Perf", and you can see I get 61k results from my 5-server lab.

If you move on to actual search queries, the first character of every term before the pipe, including "select", needs to be upper case. If you miss one, the whole command will fail. On the plus side, while the first character of every field after "select" also has to be upper case, missing one there doesn't kill the whole command. For instance, see below, where the results are missing "computer."

But "computer" is back when I fix it.

It's not a big deal, but for someone like me who already makes enough typos, pressing the Shift key that often makes me even more prone to them. I'm sure there's a technical reason for this that I am not aware of, but if this post can save someone some time, then it was worth it.

Thursday, March 24, 2016

Triggering Automation from Event Logs with Orchestrator and OMS Azure Automation

I recently hooked my home lab into Microsoft Operations Management Suite (MSOMS or OMS) and have been dabbling in Azure Automation. I wanted to put together some quick examples of triggering automation through event logs in Orchestrator and compare it to OMS with Azure Automation.

This is not a super technical deep dive into automation; the automation here is just a simple PowerShell script that finds all running virtual machines in my lab and puts them into a saved state. The point of the post is to show how easy it is to trigger the automation. The automation itself can be as simple or complicated as you want.
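That script can be sketched in a couple of lines (assuming the Hyper-V PowerShell module is available where the runbook executes):

```powershell
# Find all running VMs on this Hyper-V host and put them into a saved state
Get-VM | Where-Object { $_.State -eq 'Running' } | Save-VM
```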

To get started in OMS you can follow this resource.
You'll also need to set up your Hybrid Worker in Azure Automation.

First we need a log to capture to trigger the automation. Using PowerShell, I created a new event log called "SCORCHAzure."

New-EventLog -LogName SCORCHAzure -Source scripts

And then for generating the log:

Write-EventLog -LogName SCORCHAzure -Source scripts -Message "shutdown lab" -EventId 0 -EntryType Information

In Orchestrator we can use the "Monitor Event Log" activity to start the automation based on this specific event. Note: I know you can do this and much more through Operations Manager (SCOM) with Orchestrator integration, but that is not the point of this post.

We'll point to our machine, and when we select the ellipsis for the event log, our custom event log will be available. Note: the Orchestrator service account needs remote access to the machine's event log for the activity to work.

Under description I put "lab." I did this because maybe in the future we can add branch logic where if the event description says "shutdown" it will perform the shutdown, and conversely if the description contains "start" we can start the lab.

Next we have our "automation," as previously noted this is just a simple example.

Next we'll need to create our runbook in Azure Automation. In this case I dropped the exact same PowerShell into the runbook and tested it with the test pane feature; it worked, connecting to my Hyper-V lab and saving all running machines. Save it and publish it; if it is not published, you will not see it in OMS.

We also need to capture our custom event log in OMS. Under Settings -> Data we add "SCORCHAzure", and OMS will now collect the custom event log.

And finally, set up the log search in OMS to trigger the Azure Automation runbook. This is where OMS log analysis shines. Under Log Search, use the following query:

Type=Event EventLog="SCORCHAzure" "shutdown lab"

The only caveat here is that I couldn't figure out how to get OMS to match on the RenderedDescription field, which contained "shutdown lab". I was trying queries like:

Type=Event EventLog="SCORCHAzure" renderedDescription="shutdown lab"

This returned no results. Searching for just "shutdown lab" found it, though.

The best part is that there are tons of examples provided, so you are not stuck trying to find what you need; you can most likely piece together what you are looking for just from the examples.

To link it to Azure Automation, click on Alert and a new fly-out will appear on the right. Set your criteria for how many times this event must be generated before taking action. In my case I selected greater than 0 over 15 minutes.

Enable remediation, select our runbook, and also select Hybrid Worker, since that is what we are testing on. Save it, generate a log, and in about 15 minutes the runbook will run.

So, which was easier to set up? Orchestrator, without a doubt. However, this is because we had to go through OMS for Azure Automation. If we were using SCOM in conjunction with Orchestrator, the setup would be more complicated.

Orchestrator is also slightly less complicated if you want to do special logic with the description field. With Azure Automation, you have to add a webhook in OMS to parse that data if you want to build logic from it; there's a good blog here that shows how to do that. Or you have to add a PowerShell module in Azure Automation that will live-query OMS for that data.

There are benefits and drawbacks to both automation tools. Running Orchestrator means you are responsible for your own environment, including the database(s) to run it. Whereas with OMS and Azure Automation you don't have to support the environment; while the free tier provides some options, in a bigger environment you will quickly outgrow the free tier.

I think OMS with Azure Automation is definitely something everyone should check out, especially if you are running SCOM.

Orchestrator and Azure Automation are both very powerful tools we should all be using to automate self-service tasks and server tasks.

Wednesday, December 9, 2015

Windows 10 Needs Work

This is a bit off topic for my blog, but nonetheless I want to put it out there.

I have a Surface Pro 3, and I put Windows 10 on it when RTM was released. It is my personal opinion that that build of Windows 10 was the best build, and the quality of Windows 10 has been going downhill since then. Sure, there could be something wrong with my Surface, but given what I am about to show below, I do not understand how that would be a hardware problem.

Let's start with the time issues. Multiple times I have tweeted that my system clock was woefully wrong. Example:

I can guarantee you I was not awake at 3:00 am to take this screenshot. I have woken up to this at least ten times since getting on Windows 10; it's happened on every build. When the problem first started, toggling "set time automatically" would not fix the issue, and I would have to set it manually. Now, toggling it will reset the time; that's an improvement.

The problem appears to be related to the Surface not going to sleep and just staying on, draining the battery. Here are my power settings:

As you can see, whether I am on battery or plugged in, the Surface should go to sleep at some point. However, multiple times a week I come back to my Surface to find its battery completely drained, and it won't turn on.

Let's move on to Edge. I am a Firefox guy, or let's say, I was. When I first got on Windows 10 RTM, Edge was awesome: amazingly fast, responsive, and it never froze, no matter how many tabs I had open. I am a tab hoarder, I freely admit this; as I write this I currently have 13 tabs open, which is not a lot for me. However, despite that, and despite the fact that I've done enough searches to reach 500 on the medal system, I have exactly zero items in my history. Puzzling, to say the least.

This screenshot is from a few weeks ago; I still have no history in my Edge browser.

I have also seen Edge using 50-75% of CPU with two tabs open, just regular websites without streaming video or anything like that.

Another fantastic problem: I have Edge set to reopen all the tabs I previously had open. This works, except that if you open a PDF or another document that opens in Edge, it will lose all the tabs you had open, which is especially annoying when you have no history to go back to.

Maybe I need to do a complete reinstall of Windows, or maybe something is wrong with my Surface; I'm not sure, but these problems are annoying to say the least. With each progressive build my experience has gotten worse. I would really rather not do a full restore of Windows, as I have a lot of apps like Lightroom, Photoshop, and Adobe Bridge set up just how I like them. And wasn't that the whole point of Windows 10 anyway? Being able to migrate installed applications without having to re-install them on the new build? To be fair, this process is amazing; I was pleasantly surprised when Lightroom successfully migrated from Windows 8.1 to Windows 10 on my Surface.

Tuesday, November 10, 2015

Installing and Basic Configuration of Microsoft's HTML 5 Portal for Service Manager 2012

Microsoft today released their own HTML 5 self-service portal for Service Manager 2012. This is big news, as the old portal was, to be honest, just awful. It was awful to install, awful to use, and it required its own server. This led to companies like Cireson and Sylliance creating their own HTML 5 web portals. I have personally used the Sylliance portal in a production environment and it was pretty great. But neither of those options is free, and they aren't cheap either. This new portal from Microsoft is completely free.

Note: I am installing the portal on a brand new install of Service Manager 2012 R2.


Prerequisites that may not be installed already:
IIS Services
- Basic Authentication
- Windows Authentication
- Microsoft IIS-feature ASP 

These are easy enough to install through PowerShell or Server Manager.
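For example, with the ServerManager module on Windows Server (the feature names below are my assumption; verify them with Get-WindowsFeature):

```powershell
# Install IIS plus the authentication and classic ASP features the portal needs
Install-WindowsFeature Web-Server, Web-Basic-Auth, Web-Windows-Auth, Web-ASP
```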

You can download the portal here:

After downloading, extract the setup folder on your Service Manager management server. Start the setup, but make sure you right-click and run as administrator; it won't be able to communicate with IIS services if you don't.

Run through the defaults, changing anything you want for your environment.

Install the above prerequisites if you haven't already. You should get the below:

Name your portal and tell it what port to use. For my purposes this is a lab so I don't need SSL.

Give it a service account to use to run the service.
Click next, and in no time at all you should have a successful installation.

So how is it? It's fast, very fast. Going from the old portal to this one is like getting out of a run-down golf cart and into an Audi R8.

The new portal being HTML 5 and CSS means it's highly customizable.

Here are the basic changes just about everyone wants to make.

The company logo is under \inetpub\wwwroot\SelfServicePortal\Content\Images

The main configuration file is under \inetpub\wwwroot\SelfServicePortal\Main.config

This file allows you to change the portal name at the top, the contact phone number, the email address, the default language, etc. There is a whole block of customizable settings.

Let's change a few of them.

Reload the portal and voilà, all the changes are there.
It shows announcements from Service Manager just fine as well; click the message icon in the top right.

Here is the generic incident request form. This can be modified in Service Manager, or changed to be a Service Request by default.

Installation and configuration took maybe 30 minutes in total. I'm very excited about the new portal and hope to show off more as I find it. The Sylliance portal installation was easier; it was one PowerShell script, as I recall. I have heard that the new Sylliance portal has dynamic forms, but I have not used that version yet. I don't think the new portal from Microsoft offers that, but let's hope they add that feature later. After all, the portal itself was a result of Microsoft listening to the community through the User Voice forums.

Tuesday, September 29, 2015

Replicate Spiceworks Hashtags in Service Manager with Orchestrator

I'm not sure if it's laziness, a product of being "too busy," or brilliance, but people seem to love Spiceworks' hashtag system. For those of you not aware, Spiceworks allows your helpdesk analysts to email the ticketing system with a # and a word or words behind it to automatically close a ticket, assign it to a specific person, and even categorize it. At my last job they had migrated from Spiceworks to Service Manager a few months prior to my starting, and I was asked if I could come up with a similar system in Service Manager. I was also recently at a client considering a change from Spiceworks to Service Manager, and I was told it would have been a complete show-stopper if that functionality was not in Service Manager. One of the guys I was working with even acted like this was basic functionality all ticketing systems have. I have worked with Helpstar, ServiceNow, and Service Manager, and to my knowledge none of them has this capability.

Well, Service Manager may not be capable of this by default, but never fear: we have Orchestrator, which really can do almost anything but make me coffee. (I don't drink coffee, so that's fine by me.)

Here are our runbooks, which I will step through. If you are going to follow along, I would recommend creating each of these runbooks as blank runbooks with just the Initialize Data Activity in them and the IR GUID variable. These runbooks are also available for download at the end of the post.

1.1 Monitor Incident Creation from Email - Monitors Service Manager for new incidents created from email, filtering for descriptions containing #Flag, and passes off to the 1.2 #Flag Master runbook.
1.2 #Flag Master - Our master runbook, which invokes the other runbooks.
1.3 Assign Category - Assigns the category of the ticket should the analyst specify one. Looks for #category.
1.4 Assign User - Assigns the incident to a user should one be specified. Looks for #assignto.
1.5 Resolve Incident - Sets the incident to Resolved should it contain #close.

1.1 Monitor Incident Creation from Email
First we need to grab the Monitor Object Activity from the Service Manager Integration Pack. In its properties, we select our Service Manager connection, select Incident for the class, select New for the trigger, and add a filter of Source Equals Email.

Next, bring down the Invoke Runbook Activity and connect the Monitor Email Activity to it. Once connected, right-click on the link and select Properties. Select Exclude and then Add. Select the title of the first activity and replace it with Description, change "equals" to "does not contain", and then enter #Flag. This filters out every ticket except those with #Flag in the description. The hashtag is not case sensitive.

Next, on the Invoke Runbook Activity, select the #Flag Master runbook if you have it created; if not, add it later. Subscribe to the SC Object GUID from the Monitor Email Activity.

1.2 #Flag Master
For our #Flag Master runbook we'll drag over an Initialize Data Activity and add IR GUID (or SC Object GUID, if you prefer).

Next, drag over a Get Object Activity from the Service Manager Integration Pack. We will select Incident, add a filter of SC Object GUID, and subscribe to the GUID from Initialize Data.

Now drag over an Invoke Runbook Activity and connect it from the Initialize Data Activity. Once connected, click on the link and select Exclude. Add more filters, as seen below, for #close, #assignto, and #category (these tags can be anything you want; I am just going with what the client wanted). These are here in case some thoughtless analyst sends in an email with #Flag in the body but doesn't add any other flags; we wouldn't want our runbooks running for no reason and throwing an error when they can't find the correct data.

Now, on our Invoke Runbook Activity, select the ellipsis, select the Assign Category runbook, and subscribe the IR GUID to SC Object GUID from Get Incident.

And do the same with Assign User

and Resolve Incident.

Let's talk about the theory behind the next runbooks for a second. The client wanted quite a lot of categories and users to assign incidents to. We can't use the Map Published Data Activity, because the source has to match exactly what is put into the source field for Map Published Data; even if nothing else was in the email except the flags, it still would never work, because we have multiple flags between #Flag, #assignto, #close, and #category.

The second option would be to branch out and do link filters. This would work if you only had a handful of options, but it quickly becomes ridiculous.
The only obvious solution was PowerShell's -match operator, which I will show in the next two runbooks.

1.3 Assign Category

If you haven't already, create your Assign Category runbook and drag the Initialize Data Activity onto it. Create the IR GUID published data variable.

Now drag over the Get Object Activity from the Service Manager Integration Pack and connect it to the Initialize Data Activity. Select Incident for the class, add a filter for SC Object GUID, and subscribe to SC Object GUID from the Initialize Data Activity.

Now drag over a Run .Net Script Activity and connect it to Get Incident from the previous step. This is where you need to have planned out what you want your helpdesk analysts to be able to hashtag. The cool part is that in Service Manager, Applications is its own category with subcategories such as Citrix; if an analyst hashtags #citrix, the incident is placed under Applications\Citrix automatically.

So we will create a variable in PowerShell called $flag and set it equal to the description field of our incident, as such:

$flag = "subscribe to description from incident"

We will then do an if with -match; if PowerShell finds the match, it will set the result to that specific category.

if($flag -match "#category application"){$result = "applications"} 

Note: these categories should match your categories in Service Manager exactly, otherwise the next step will fail.

Don't forget to go into Published Data and create your Result variable.
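Putting those pieces together, the body of the script activity might look like this (the description text and category names are made up for illustration; in the real runbook, $flag subscribes to the incident description):

```powershell
# Hypothetical incident description (subscribed from Get Incident in the real runbook)
$flag = "User cannot open Citrix apps #Flag #category citrix"

# One explicit match per category you support
$result = $null
if ($flag -match "#category application") { $result = "Applications" }
if ($flag -match "#category citrix")      { $result = "Applications\Citrix" }
if ($flag -match "#category network")     { $result = "Networking" }
```

Note that -match is case insensitive by default, which lines up with the hashtags not being case sensitive.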

For the next step, we drag over the Update Object Activity from the Service Manager Integration Pack. Again we select Incident for our class and subscribe to the SC Object GUID from Get Incident. In the Fields pane, select Optional Fields, select Classification Category, and then subscribe to Result from the Get Category Run .Net Script Activity.

We then return the data back to the master runbook.

1.4 Assign User
For Assign User we're going to follow the same steps as before: read the ticket description for a hashtag, then do a match in PowerShell. The trick here is that $result needs to be the SamAccountName of the user you want to assign the ticket to. The flag can be anything you want, as long as you have the SamAccountName.
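A sketch of that matching step, with a made-up description and made-up SamAccountNames:

```powershell
# Hypothetical incident description
$flag = "Printer is down again #Flag #assignto jdoe"

# One match per assignable user; $result must be a valid SamAccountName
$result = $null
if ($flag -match "#assignto jdoe")   { $result = "jdoe" }
if ($flag -match "#assignto asmith") { $result = "asmith" }
```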

We then take the result and get that Active Directory user in Service Manager.

We then assign the user to the ticket.

Return Data and we're done.

1.5 Resolve Incident

This is the simplest of the hashtags. Again we get the incident.

Get the UTC time with PowerShell.
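A one-liner covers it; the exact format string Service Manager expects is my assumption, so adjust to taste:

```powershell
# Current time in UTC, formatted for the Update Object Activity
$ResolvedDate = (Get-Date).ToUniversalTime().ToString("MM/dd/yyyy HH:mm:ss", [System.Globalization.CultureInfo]::InvariantCulture)
```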

And resolve the incident.

This runbook is provided as an example and is not production ready; please test in your own environment. The runbook is provided as-is and without warranty.

The runbook can be downloaded from here.