Using Azure Automation and Hybrid Worker to automate SCVMM tasks


Azure Automation and Hybrid Runbook Workers are fun to play with, and today I wanted to try automating some System Center VMM tasks.

I read Markus Lassfolk's post about changing VMs' network adapter MAC addresses from dynamic to static, which is the preferred setting for your Hyper-V VMs. So how could I do this with an Azure Automation runbook instead of a script that runs on the VMM server?

In my runbook I have a VMM automation account stored as a credential asset, and I connect to the VMM server with it to be able to reconfigure the VMs. If I do not pass a -PSCredential to the InlineScript block, the Hybrid Runbook Worker tries to use the system account of the worker server, and that does not work so well for the connection to the VMM server.

Screen Shot 2015-09-09 at 15.24.12
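
A minimal sketch of what such a runbook could look like is below; the credential asset name 'VMMAdmin', the VMM server name 'vmm01' and the remoting into the VMM server via -PSComputerName are my assumptions, so adapt them to your environment:

workflow Set-VMStaticMACAddress
{
    param (
        [Parameter(Mandatory=$true)]
        [string]$VMName
    )

    # Credential asset holding the VMM automation account (asset name is an assumption)
    $cred = Get-AutomationPSCredential -Name 'VMMAdmin'

    InlineScript {
        Import-Module virtualmachinemanager
        Get-SCVMMServer -ComputerName 'vmm01' | Out-Null

        # One explicit VM, or all of them
        if ($Using:VMName -eq 'All') { $vms = Get-SCVirtualMachine }
        else { $vms = Get-SCVirtualMachine -Name $Using:VMName }

        foreach ($vm in $vms) {
            # Lock in the current MAC as static on every NIC that is still dynamic
            foreach ($nic in (Get-SCVirtualNetworkAdapter -VM $vm)) {
                if ($nic.MACAddressType -eq 'Dynamic') {
                    Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $nic -MACAddressType Static -MACAddress $nic.MACAddress | Out-Null
                }
            }
        }
    } -PSCredential $cred -PSComputerName 'vmm01'
}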

You can start the runbook either from a PowerShell console, if you have the Azure PowerShell module installed, or from the portal GUI. I used only one input parameter, which takes either an explicit VM name or “All” for all VMs.

Screen Shot 2015-09-09 at 15.04.50

As you can see in the VMM log, the VMs' NICs have been changed, and if a VM has two or more NICs, all of them get a static MAC configured:

Screen Shot 2015-09-09 at 15.03.07

I hope you see the potential in Azure Automation. Happy automating!


Hyper-V Cluster Heartbeat for MDT ref VM goes bananas or?

I have been helping a customer with their environment and we had a problem that took me a while to figure out.

They were baking reference images for their SCCM environment, and the best and easiest way to do that is of course to use VMs. The problem was that while the image was being transferred back to the MDT server, the VM rebooted after about half of the image had been uploaded…

So what was causing this crazy behavior? It took me a little while to realize that it had to do with the Hyper-V cluster platform and its resilience and heartbeat functionality!

At first the build VM boots from the MDT boot image, with no integration services running yet. It then restarts to install applications and other things within the OS, and since the customer builds a Windows 7 image, you can see that it starts to send heartbeats to the host.


As you might know, client and server editions since Windows Vista and Windows Server 2008 ship with integration services by default, although best practice is to upgrade them as soon as possible if the VM will continue to reside on Hyper-V.

The interesting part in this case was that when sysprep finished, the OS rebooted into the MDT boot image to transfer the WIM back to the MDT server. Hyper-V and the cluster did not know that this reboot was intentional; all they saw was that the heartbeat stopped.


And since the VM was a cluster resource, the heartbeat loss was handled according to the default policy, and guess what: the VM was rebooted!

So which settings in the cluster resource cause this madness? First of all, the Heartbeat setting in the cluster VM resource properties.



This is what can be read on the TechNet site about the Heartbeat setting for Hyper-V clusters:

Screen Shot 2015-06-04 at 10.35.22

And then there is the policy for what the cluster should do once it thinks the VM has become unresponsive:

Screen Shot 2015-06-04 at 14.14.56 1



There are different ways to stop the cluster from rebooting the machine: one is to disable the heartbeat check, and another is to set the response to resource failure to do nothing.

The customer mostly uses the VMM console, so when they build a new VM for MDT reference builds they can disable the heartbeat check under integration services and thus not get their work delayed by unwanted reboots.
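
If you would rather script it than click through the console, something along these lines should work; the VM name is a placeholder and I have only verified the GUI route myself:

# Placeholder VM name; run on a cluster node with the Hyper-V and FailoverClusters modules
$vmName = 'REF-W7-BUILD'

# Option 1: turn off the Heartbeat integration service for the reference build VM
Disable-VMIntegrationService -VMName $vmName -Name 'Heartbeat'

# Option 2: keep the heartbeat but tell the cluster not to restart the resource on failure
$res = Get-ClusterResource -Name "Virtual Machine $vmName"
$res.RestartAction = 0   # 0 = if the resource fails, do not restart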


While searching for the cause I checked the host NIC drivers, as I thought it might have something to do with a transfer error, but I could not find anything; on the positive side, the hosts ended up on the latest NIC firmware and drivers 😉. My suspicion that it had to be the cluster awoke after I had spun up a test VM that was not part of the cluster, and that one succeeded with the build and transfer.

This is a rare case, and I would say that in 99% of cases you want the default behaviour, since a VM really can become unresponsive and the cluster can then try a reboot to bring it back into operation.

Clarification: if you spin up a VM with an OS or PXE image that does not have integration services, the cluster will not reboot the VM after the timeout. The OS has to start sending heartbeats to the Hyper-V host first; from then on it is under surveillance and managed by the cluster until it is properly shut down!

I hope this helps someone out there wondering what is going on…



Taking the SCVMM 2012 R2 UR6 for a test drive

I noticed this evening that Microsoft has released UR6 for System Center. My interest is in Virtual Machine Manager, so I wanted to test-install it and also connect an Azure IaaS subscription, as this is one of the newly added features, besides all the fixes and of course other additions such as Generation 2 VM support in Service templates.

Screen Shot 2015-04-28 at 20.15.14

Here you can read more about the fixes, and you can also download the files there if you do not use Microsoft Update.

As I had my environment connected to the Internet, I could just press install:

Screen Shot 2015-04-28 at 20.18.03

Once it was finished, the server had to be rebooted, and then I could start adding Azure subscriptions to VMM. Here you have to use a management certificate, and that is easily created with makecert if you do not have any other CA available!
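
If you have no CA handy, the classic makecert one-liner for a self-signed management certificate looks roughly like this (the certificate name is just an example); you then upload the .cer file as a management certificate on the subscription:

# Creates a self-signed certificate in the local Personal store and exports the public .cer file
makecert -sky exchange -r -n "CN=VMMAzureMgmt" -pe -a sha1 -len 2048 -ss My "VMMAzureMgmt.cer"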

Screen Shot 2015-04-28 at 20.59.47

And when that is complete, you can see my VMs in Azure on that subscription and the commands that I can use on them:

Screen Shot 2015-04-28 at 21.03.15

Good luck in your tests of this nice new feature.


Exclude VMs from dynamic optimization in SC VMM

We had a case with a VM running an application that is a bit sensitive and does not like the ping loss during live migration between hosts in a cluster, so we wanted to exclude the VM from automatic dynamic optimization. It is not totally clear where you find this setting for the VMs you want to exclude from this load balancing act.

First of all, do you know where you set up automatic dynamic optimization in System Center VMM 2012 R2? For some reason you set it up on the host group folder and not on the cluster object in VMM:

Screen Shot 2015-04-16 at 13.00.13


So where do you exclude VMs from this optimization? If you look under a VM's properties, under Actions, you will see the magic checkbox:

Screen Shot 2015-04-16 at 12.55.35

And now when the automatic job runs my sensitive VM will stay on the host.
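
If you would rather flip the checkbox with PowerShell, my understanding is that it maps to the ExcludeFromPRO flag on the VM object, but verify that against Get-SCVirtualMachine in your own environment before relying on it:

# Assumed mapping: the "Exclude virtual machine from optimization actions" checkbox
# appears to correspond to the ExcludeFromPRO property - check the current value first
Get-SCVirtualMachine -Name 'SensitiveVM01' | Select-Object Name, ExcludeFromPRO

Set-SCVirtualMachine -VM (Get-SCVirtualMachine -Name 'SensitiveVM01') -ExcludeFromPRO $true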

SC VMM Error 803 after restoring a duplicate VM with an alternative name

I was at a customer today testing some backup/restore scenarios with their backup provider's software, and we got an interesting error: 803 “Virtual Machine restore-Test5_gen1 already exists on the virtual host other cluster nodes”, with the recommended action: “Specify a new name for the virtual machine and then try the operation again”.



In the backup/restore console we wanted to restore to an alternative location with an alternative name, as we had not deleted the original VM (useful if you need some files or just want to verify some state). In the configuration of the restore job we checked that the restore process would create a new VM ID, which was the first thing we suspected as the reason why VMM complained.

The thing was that this error only appeared when we restored to another Hyper-V host; if we restored to the same host where the original VM was residing, there was no error.

As you can see, after the restore both the original VM and the alternate had the same “#CLUSTER-INVARIANT#” ID but different VMIds, and when we tried to refresh the VMs we got the error above.

Screen Shot 2015-03-30 at 20.56.59

The solution was not so far-fetched and can be read about in KB2974441 (although that case is about RDS VDI, it still applies). There you can also read why the ID is in the notes field in the first place: “VMM adds a #CLUSTER-INVARIANT#:{<guid> } entry to the description of the VM in Hyper-V. This GUID is the VM ID SCVMM uses to track the VM.”

For the VM not showing up in the VMM console, we simply went into the notes field in Hyper-V Manager and removed that specific “#CLUSTER-INVARIANT#” entry; after that, VMM generated a new one for the VM and it appeared in the VM list on the VMM server.
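
If you prefer PowerShell over Hyper-V Manager for that step, the notes field can be read and cleared on the host like this; just be careful to only touch the restored duplicate, and note that this wipes the whole notes field, not just the #CLUSTER-INVARIANT# entry:

# Run on the Hyper-V host where the restored duplicate lives
Get-VM -Name 'restore-Test5_gen1' | Select-Object Name, Notes

# Clear the notes so VMM generates a fresh #CLUSTER-INVARIANT# entry on the next refresh
Set-VM -Name 'restore-Test5_gen1' -Notes ''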

So why was there no problem when we restored to the same host? For some reason VMM managed to detect the duplicate residing on the same host and generated a new ID in the notes field for it, so it appeared in the VM list without any massaging.


Hyper-V VMs' BIN files, to be or not to be in clusters

If you create lots of VMs with large amounts of RAM assigned to them and start to wonder where some of the storage on your volumes has gone, this is why: if you set up a VM without changing anything, you get a BIN file in the VM folder that corresponds to the size of the allocated RAM. This file is used to save the VM's RAM to disk when the VM goes into saved state!

Screen Shot 2015-02-19 at 12.23.41

In an environment where all VMs are clustered resources you do not need the saved state when shutting down a host, as you will live migrate the VMs before doing anything with the hardware.

The setting is easily found in Hyper-V Manager for an already deployed VM:

Screen Shot 2015-02-19 at 12.22.06

It is not so easily found in System Center VMM when checking the VM properties, but when deploying a new VM you find it in the wizard:

Screen Shot 2015-02-19 at 12.29.26

If you want to change the setting for VMs already running in a cluster via VMM, you will have to use PowerShell, and it is quite easy to do with a one-liner: first you check the setting with Get-SCVirtualMachine, and then you configure it with Set-SCVirtualMachine:

Screen Shot 2015-02-19 at 12.40.18
Screen Shot 2015-02-19 at 13.29.46
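
For reference, the one-liners look roughly like this; the VM name is a placeholder, and on a cluster you would of course pipe in all the clustered VMs instead of a single one:

# Check the current stop action (SaveVM is what reserves the RAM-sized BIN file)
Get-SCVirtualMachine -Name 'MyClusteredVM01' | Select-Object Name, StopAction

# Change it so the guest shuts down instead of saving state, which frees the BIN file space
Get-SCVirtualMachine -Name 'MyClusteredVM01' | Set-SCVirtualMachine -StopAction ShutdownGuestOS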

And now, when checking the VM's folder, the BIN file has magically shrunk to 4 KB :-)

Screen Shot 2015-02-19 at 13.44.03

Error dialog when opening VMM console after update to VMM 2012 R2 UR5

I have updated a VMM server with the latest UR5, and now the following error appears when opening the console:

Screen Shot 2015-02-18 at 11.17.31

This was quite easily handled. As you can see, the folder is there, but looking a bit further I found that Authenticated Users had no NTFS rights on that particular folder.

Screen Shot 2015-02-18 at 11.21.33

After adding Authenticated Users as described in the UR5 release notes and giving the group access to the HostSideAdapters folder, I did not get any error messages when starting the console.
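
If you want to script the permission fix instead of clicking through the folder properties, icacls can do it; the path below assumes a default VMM 2012 R2 console installation, so adjust it to your environment:

# Path assumes a default install location - adjust to where your VMM console lives
$folder = 'C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\bin\AddInPipeline\HostSideAdapters'

# Grant Authenticated Users (SID S-1-5-11) read & execute, inherited by subfolders and files
icacls $folder /grant '*S-1-5-11:(OI)(CI)RX'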

VM Storage Migration in VMM 2012 R2 leaves unwanted leftovers

I have been working on a case where we upgraded to a new Windows Server 2012 R2 Hyper-V cluster and added CSV volumes to it. When the first volume became full, we started to storage migrate the VMs to another volume, but for some reason the files were left behind, so I created my own PowerShell function to handle that, as the built-in cmdlet does not have such a parameter and for some reason leaves leftovers?!

As you can see, when I do a live storage migration within a Hyper-V host from the VMM GUI, it leaves both the VHDX and the XML of the VM behind. That can be troublesome if someone tries to import that VM while the other copy is already running, and you also do not get back the space you thought would be reclaimed by the live storage migration.

Screen Shot 2015-02-16 at 15.38.54

The move itself is no problem, but look at the volume that I migrated from:

Screen Shot 2015-02-16 at 15.06.32

And the volume that I migrated to:

Screen Shot 2015-02-16 at 15.06.55

I have reproduced the migration several times, and it leaves either just folders or folders together with the VHDX/XML files at the source.

When I run my function, it cleans up the source if I use the -DeleteSource parameter:

Screen Shot 2015-02-17 at 15.51.08

Here is the PowerShell function for you to try:
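
In essence the function wraps Move-SCVirtualMachine and then removes whatever was left behind at the source; a stripped-down sketch of that idea could look like the one below (the function name, the use of the VM's Location property and the Invoke-Command cleanup are assumptions, and a real version needs more error handling):

function Move-SCVMWithCleanup {
    # Stripped-down sketch: storage migrate a VM via VMM and optionally remove the
    # leftover source folder on the Hyper-V host afterwards
    param (
        [Parameter(Mandatory=$true)][string]$VMName,
        [Parameter(Mandatory=$true)][string]$Path,
        [switch]$DeleteSource
    )

    $vm         = Get-SCVirtualMachine -Name $VMName
    $sourcePath = $vm.Location      # folder the VM files lived in before the move
    $vmHost     = $vm.VMHost

    # Live storage migration within the same host
    Move-SCVirtualMachine -VM $vm -VMHost $vmHost -Path $Path | Out-Null

    if ($DeleteSource -and $sourcePath -ne $Path) {
        # Clean up whatever was left behind at the source (runs on the Hyper-V host)
        Invoke-Command -ComputerName $vmHost.Name -ScriptBlock {
            param($folder)
            if (Test-Path $folder) { Remove-Item -Path $folder -Recurse -Force }
        } -ArgumentList $sourcePath
    }
}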

Good luck in your automation :-)

Handy way to use PowerShell with VMM 2012 R2

After working with a customer and showing them the PowerShell scripts and functions I had made for automating their VMM 2012 R2 environment, I realized that I needed a way to let them easily get those functions loaded and ready for use.

As you might know, you can store scripts in your VMM library and also run them from the same place! So I decided to save the functions there and make an initiator script that loads the functions I had created, so they can be used right away.

It is a really simple script that looks in the functions folder and imports all functions; as it is dynamic, it loads whatever functions are available in the folder at the time it is executed:
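
In rough terms it just enumerates the .psm1 files in the functions folder and imports them; a minimal sketch, where the library path is an example you need to replace with your own share:

# Example path - point this at the Functions folder in your own VMM library share
$functionPath = '\\vmmserver\MSSCVMMLibrary\SCVMM_Library\Functions'

# Import every .psm1 module found in the folder and list the functions it brings in
Get-ChildItem -Path $functionPath -Filter *.psm1 | ForEach-Object {
    Import-Module -Name $_.FullName -Force -Global
    Write-Host "Loaded $($_.BaseName):"
    Get-Command -Module $_.BaseName | Select-Object -ExpandProperty Name
}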

And when you put it in your VMM library it looks like this; I have added a description to make it clearer what it does 😉

Screen Shot 2015-02-06 at 13.24.40

You can then run it from the console with the Run button, and once the PowerShell console has loaded you can see which functions have been imported and what their names are :-)

Screen Shot 2015-02-06 at 13.50.07

In the folder I have added the files containing the functions I made, with a .psm1 extension (I am converting some of my earlier scripts to functions and will add them later, which is why there are quite few so far). You will also have to check and edit the permissions on the share and the SCVMM_Library folder so that the user running the script has access.

Screen Shot 2015-02-06 at 14.53.50

Also, I added the server to Trusted Sites, because otherwise I got this digital signing error, and I do not at the moment have a certificate to sign the scripts with:


So to get around that, you add the server in Internet Explorer Trusted Sites: *:// (or of course whatever your VMM server FQDN is)

Screen Shot 2015-02-06 at 14.25.07

Last of all, to be able to run the functions that need elevation, you can start the VMM GUI console with “Run As Administrator” while still using your Windows credentials:

Screen Shot 2015-02-06 at 14.21.05

Happy automating within VMM :-)

And yes, SMA has been considered, but right now the customer does not need the extra complexity of WAP, SMA, SPF and runbook workers…

Hyper-V local storage available for placement in SCVMM

I have been working with a customer, preparing an upgrade of one of their Hyper-V clusters to 2012 R2. While looking at the hosts in question, I found several VMs residing on local storage on the hosts and not on the cluster storage.

There were two reasons for this: first, it was allowed to place VMs on local disks, and second, whoever created the VMs forgot to use the appropriate hardware template that makes them highly available by default. If you create a new VM with a new hardware profile, make sure it is configured correctly under the Availability tab.

The Hyper-V hosts were deployed with bare-metal deployment from VMM, and that is why they have a D:\ volume.

Looking at the properties of a host, you can see which storage is available for placement:

Screen Shot 2015-01-28 at 14.26.28

and as you can see, the VM is not configured as highly available and has its virtual disk on local storage:

Screen Shot 2015-01-28 at 14.33.41
Screen Shot 2015-01-28 at 14.20.24

I have made a simple script that goes through all hosts within a cluster and marks all storage that is not cluster shared as not available for placement.
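
The gist of it is to walk the hosts in a given cluster and flip every volume that is not a CSV to not available for placement; a rough sketch is below, where the cluster name is a placeholder and the Get-SCStorageVolume/Set-SCStorageVolume cmdlet names should be verified against your VMM module before running anything:

# Cluster name is a placeholder; verify the cmdlet names in your VMM module first
$cluster = Get-SCVMHostCluster -Name 'HVCLUSTER01'

foreach ($vmHost in (Get-SCVMHost -VMHostCluster $cluster)) {
    # Treat everything not mounted under ClusterStorage as local storage
    Get-SCStorageVolume -VMHost $vmHost |
        Where-Object { $_.Name -notlike '*ClusterStorage*' } |
        Set-SCStorageVolume -AvailableForPlacement $false | Out-Null
}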

And now, when trying to deploy a VM with a new hardware profile that is not set to highly available, I cannot deploy it, as the local disks have been unchecked as available for placement.

Screen Shot 2015-01-28 at 13.57.04
Screen Shot 2015-01-28 at 13.58.09

The reason for only configuring this on Hyper-V nodes that belong to a cluster is that there might be standalone Hyper-V hosts that actually should be able to provision VMs to local disks.