During the day I have been digging into the Microsoft Operations Management Suite, which is a collection of cloud services that you can get for quite a reasonable price.
The services included are Log Analytics, Security, Automation, and Availability.
I have registered my on-premises Windows Servers in Log Analytics and started uploading logs, getting a nice overview with several out-of-the-box solutions that give you a heads-up on areas in your environment that need attention.
So how about the automation? I have already been using Automation for different services within Azure, but in this case I wanted to see how I could utilize a Hybrid Worker together with the VMware environment residing on-premises.
With the release of VMware PowerCLI 6, parts of the stack have been remade as PowerShell modules.
So if I configure a Hybrid Worker on-premises with PowerCLI installed, I can utilize it in a runbook that, as an example, takes an input variable VMName and restarts that VM (in this case without being nice and asking for a guest shutdown, just pulling the plug).
And here is the runbook:
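In case the picture is hard to read, a minimal sketch of such a runbook could look like this. The credential asset name `vCenterCred` and the vCenter server name `vcenter01` are my placeholders, not from the original runbook:

```powershell
param(
    [Parameter(Mandatory = $true)]
    [string]$VMName
)

# Assumes VMware PowerCLI 6 is installed on the Hybrid Worker
Import-Module VMware.VimAutomation.Core

# 'vCenterCred' is an assumed name for an Azure Automation credential asset
$cred = Get-AutomationPSCredential -Name 'vCenterCred'

# 'vcenter01' is a placeholder for your vCenter server
Connect-VIServer -Server 'vcenter01' -Credential $cred

# Restart-VM does a hard reset of the VM, i.e. "pulling the plug",
# as opposed to Restart-VMGuest which asks the guest OS nicely
Get-VM -Name $VMName | Restart-VM -Confirm:$false

Disconnect-VIServer -Confirm:$false
```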
And here I start the runbook with the variable:
And as you can see in the vSphere Client, my VM winrecover restarts.
This can of course be made a bit more complex and, as you can see in the Azure Automation view, also scheduled. So if you have something that needs to be automated at 11 PM every night within your VMware vSphere environment, it can be done with Azure Automation and Hybrid Workers.
Earlier this week Microsoft released Windows Server Technical Preview 3 and System Center Technical Preview 3 to the masses.
There is also a way to easily test the System Center bits without installing everything with prereqs and stuff, and that is to use the preinstalled VHDs that Microsoft and the System Center team provide.
You can find the eval VHDs here on the download site:
Or you can use my PowerShell script to download them, import them into your Hyper-V server, and start playing once they are downloaded :-D. If there are issues during the download you can just start the script again, as I check whether each file has already been downloaded, so every file is only downloaded once. And as I use the BITS engine, a file only appears in the download folder once it is completely downloaded.
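The full script is linked above; the core download pattern it relies on is roughly this (the URL and folder below are placeholders, not the real eval VHD links):

```powershell
# Placeholder list of eval VHD URLs and a local download folder
$urls   = @('http://download.microsoft.com/example-eval.vhd')
$folder = 'C:\EvalVHD'

if (-not (Test-Path $folder)) {
    New-Item -Path $folder -ItemType Directory | Out-Null
}

foreach ($url in $urls) {
    $file = Join-Path $folder (Split-Path $url -Leaf)

    # Skip files that already exist; BITS only places the file in the
    # destination folder once the transfer has completed, so a present
    # file means a complete download
    if (-not (Test-Path $file)) {
        Start-BitsTransfer -Source $url -Destination $file
    }
}
```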
As I described in an earlier post, Technical Preview 3 was released today and I wanted to test things. Apparently there were quite a few others thinking the same way, as the containers VHD seems to take forever to download, but that gave me a reason to look more at the new Nano Server version.
In the TP3 media there is a NanoServer folder, and to help with deployment Microsoft and the Nano team have added PS scripts to that folder, making it ridiculously easy to get things up and running (even if my title says monkey, I doubt that a chimpanzee would pull it off?!)
So I copied the folder to my server and ran the PowerShell for creating a virtual hard disk and a virtual machine with that disk. As you can see, I added some parameters such as -Compute (Hyper-V role) and -Clustering (failover clustering role), plus the name of the Nano Server. In this case I wanted it to be a Hyper-V VM, so I also added the Hyper-V integration components, but for a physical box you could instead add other drivers, for NICs etc.
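For reference, my invocation looked roughly like this. The script and parameter names are from memory of the TP3 media, and all paths and names below are placeholders, so verify against the script in your own NanoServer folder:

```powershell
# Dot-source the deployment script shipped in the NanoServer folder on the TP3 media
. .\new-nanoserverimage.ps1

# -Compute adds the Hyper-V role, -Clustering the failover clustering role,
# and -GuestDrivers the Hyper-V integration components (since this one runs as a VM;
# on a physical box you would add NIC drivers etc. instead)
New-NanoServerImage -MediaPath 'D:\' -BasePath 'C:\Nano\Base' `
    -TargetPath 'C:\Nano\NanoVM01' -ComputerName 'NanoVM01' `
    -Compute -Clustering -GuestDrivers
```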
So the day has come when Windows 10 became available.
I did not have the patience to wait for Windows Update, so I downloaded an ISO and started to upgrade, and on my Surface Pro 3 it was a matter of about 30 minutes before I had a working new version! It was a really smooth upgrade process!
This is some awesome news for companies that want an easy way to create a DR plan and site for their most important systems, and not only for those lucky ones that already run Hyper-V and can utilize Hyper-V Replica.
I did some evaluations for a presentation about ASR for VMware VMs when it was in preview, and it requires some additional VMs for management of the replication, such as the process server and, on the Azure side, the master target and configuration server. If you, like me, evaluate this with an MSDN Azure subscription, be sure to shut down the servers on the Azure side when not using them, as they will otherwise drain your money; that, of course, should not be done in production. When protecting Windows workloads, it uses the built-in VSS to create consistent replicas.
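To actually stop the compute billing, the VMs need to be deallocated, not just shut down from within the guest. With the classic Azure PowerShell module that is a one-liner per VM (the service and VM names below are placeholders):

```powershell
# Stopping without -StayProvisioned deallocates the VM so compute billing stops;
# 'asr-mgmt' and 'mastertarget01' are placeholder names for your environment
Stop-AzureVM -ServiceName 'asr-mgmt' -Name 'mastertarget01' -Force
```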
And the supported operating systems are the ones supported in Azure:
Windows 2008 R2 SP1
Windows 2012 R2
So if you still use Windows 2003 or an earlier OS, you need to upgrade before utilizing this.
This session will focus on how Chef, a systems and cloud infrastructure automation framework, can easily manage both Windows and Linux workloads on Azure or in any physical or virtual location, no matter the size of the infrastructure.
We will also look at how Chef can interact with PowerShell Desired State Configuration to deliver a consistent and compliant infrastructure. In this session you will learn the basic paradigms of Chef, launching VM instances and deploying applications to those instances. These are DevOps times, with a faster and more agile world where the IT dinosaurs will have to watch out!
I am very proud to tell you that I have a new sponsor on my blog and that is VirtualMetric.
If you have not heard about them, it is time to go check out their website at http://www.virtualmetric.com to learn more, because they have an awesome monitoring and reporting platform for your Hyper-V environment! It is agentless, and it also reports hardware status for the hosts via IPMI.
There will be a more thorough post about how to get the platform up and running and how to use it later on!
I have been evaluating the Nano Server that was released and wanted to see if I could get it to work as a VM in Azure IaaS. And as you can see, it works!
I had created a VHDX with the packages that were described in "Getting started with Nano Server". First of all, as only VHD is supported, I had to convert the disk, and then I used Azure PowerShell to upload it to Azure storage:
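The conversion and upload can be done along these lines with Hyper-V and classic Azure PowerShell cmdlets (the paths, storage account, and container names are placeholders):

```powershell
# Azure does not accept VHDX, so convert the disk to a fixed-size VHD first
Convert-VHD -Path 'C:\Nano\NanoVM01.vhdx' `
    -DestinationPath 'C:\Nano\NanoVM01.vhd' -VHDType Fixed

# Upload the VHD to an Azure storage account (classic Azure PowerShell);
# 'mystorage' and the 'vhds' container are placeholder names
Add-AzureVhd -LocalFilePath 'C:\Nano\NanoVM01.vhd' `
    -Destination 'https://mystorage.blob.core.windows.net/vhds/NanoVM01.vhd'
```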
After creating a VM I tried to connect to it remotely over the Internet, but that did not work; probably something needs to be configured in the WinRM setup on the Nano Server, or I just missed something. I created a VM with Windows Server TP2 in the same Azure network and tried to connect to the Nano Server from there, which succeeded:
And I can also change the name, and it is reflected in the Azure portal:
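From the TP2 VM, the connection and rename can be done roughly like this (the IP address and new name are placeholders for my environment):

```powershell
# Trust the Nano Server, since the VMs are not domain joined
# ('10.0.0.5' is a placeholder for the Nano Server's internal IP)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value '10.0.0.5' -Force

# Connect with the local administrator credentials of the Nano Server
Enter-PSSession -ComputerName '10.0.0.5' -Credential (Get-Credential)

# Inside the remote session: rename the server, which is then
# reflected in the Azure portal after the restart
Rename-Computer -NewName 'nano01' -Restart
```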
I have been helping a customer with their environment and we had a problem that took me a while to figure out.
They were baking reference images for their SCCM environment, and the best and easiest way is of course to use VMs. The problem that occurred was that when the image was being transferred back to the MDT server, the VM rebooted after half of the image had been uploaded.
So what was causing this crazy behavior? It took me a little while before I realized what it was all about: the Hyper-V cluster platform and its resilience and heartbeat functionality!
So at first the build VM boots from the MDT image, with no integration tools yet, but then it restarts to install applications and stuff within the OS, and as the customer works on a Windows 7 image you can see it starts to send heartbeats to the host.
As you might know, clients and servers since Windows Vista and Windows Server 2008 have the integration services in them by default, although best practice is to upgrade them as soon as possible if the VM will continue to reside on Hyper-V.
The interesting part in this case was that the OS rebooted itself when it was finished with Sysprep, in order to start the MDT image for transferring the WIM back to the MDT server, and the cluster/Hyper-V did not notice this and thus thought that the heartbeat had stopped.
And as it was a cluster resource, this heartbeat loss was handled by the default policy, and guess what: a reboot!
So what settings in the cluster resource cause this madness? First of all, the heartbeat setting in the cluster VM resource properties.
This can be read on the TechNet site about the heartbeat setting for Hyper-V clusters:
And then you have the policy for what the cluster should do after it thinks the VM has become unresponsive:
There are different ways to stop the cluster from rebooting the machine: one is to disable the heartbeat check, and another is to set the response to failure to do nothing.
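Disabling the heartbeat check can also be done from PowerShell on the Hyper-V host; a small sketch, where 'RefBuild01' is a placeholder VM name:

```powershell
# Check which integration services are currently enabled for the VM
Get-VMIntegrationService -VMName 'RefBuild01'

# Disable the heartbeat integration service so the cluster no longer
# reacts to the guest stopping its heartbeats during the image capture
Disable-VMIntegrationService -VMName 'RefBuild01' -Name 'Heartbeat'
```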
The customer mostly uses the VMM console, and when building a new VM for MDT reference builds they can disable the heartbeat check in the integration services settings and thus not get their work delayed by unwanted reboots.
During the search for the cause I checked the host NIC drivers, as I thought it might have something to do with a transfer error, but I could not find anything; on the positive side, the hosts got up on the latest NIC firmware and drivers 😉 . My suspicion that it had to be the cluster was awakened after I had spun up a test VM that was not part of the cluster, and that one succeeded in the build and transfer.
This is a rare case, and I would say that in 99% of cases you want the default behaviour to happen, as a VM can become unresponsive and the cluster can then try a reboot to bring it back into operation.
Clarification: if you spin up a VM with an OS or PXE image that does not have the integration services, the cluster will not reboot the VM after the timeout; the OS has to start sending heartbeats to the Hyper-V host first, and only then is it under surveillance and managed by the cluster until properly shut down!
Hope this helps someone out there wondering what is happening…