I have been evaluating the recently released Nano Server and wanted to see whether I could get it to work as a VM in Azure IaaS. And as you can see, it works!!
I had created a VHDX with the packages described in the “Getting started with Nano Server” guide. Since only the VHD format is supported in Azure, I first had to convert the disk, and then I used Azure PowerShell to upload it to Azure storage:
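A minimal sketch of those two steps could look like this; the file paths and the storage account/container names are placeholders, not the ones I actually used:

```powershell
# Convert the VHDX to a fixed-size VHD, which is what Azure expects.
Convert-VHD -Path "D:\VHD\NanoServer.vhdx" `
            -DestinationPath "D:\VHD\NanoServer.vhd" `
            -VHDType Fixed

# Upload the VHD to an Azure storage account with the Azure PowerShell module.
Add-AzureVhd -LocalFilePath "D:\VHD\NanoServer.vhd" `
             -Destination "https://mystorageaccount.blob.core.windows.net/vhds/NanoServer.vhd"
```

Add-AzureVhd uploads only the written blocks of the fixed VHD, so the transfer is usually faster than the file size suggests.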
After creating a VM I tried to connect to it remotely over the Internet, but that did not work; probably something needs to be configured in the WinRM setup on the Nano Server, or I just missed something. I then created a VM with Windows Server TP2 in the same Azure network and tried to connect to the Nano Server from there, which succeeded:
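From the TP2 VM inside the same virtual network, the connection is a plain PowerShell remoting session. A sketch, where the IP address and account are placeholders:

```powershell
# Trust the Nano Server host for WinRM (it is not domain joined yet).
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "10.0.0.5" -Force

# Open a remote session with the local administrator credentials.
$cred = Get-Credential
Enter-PSSession -ComputerName "10.0.0.5" -Credential $cred
```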
And I can also change the name, and the change is reflected in the Azure portal:
I have been helping a customer with their environment and we had a problem that took me a while to figure out.
They were building reference images for their SCCM environment, and the best and easiest way to do that is of course with VMs. The problem was that while the image was being transferred back to the MDT server, the VM rebooted after about half of the image had been uploaded…
So what was causing this crazy behavior? It took me a little while to realize what it was all about: the Hyper-V cluster platform and its resilience and heartbeat functionality!
At first the build VM boots from the MDT image, with no integration services running yet. It then restarts to install applications and other components within the OS, and since the customer works with a Windows 7 image, you can see it start sending heartbeats to the host.
As you might know, client and server editions since Windows Vista and Server 2008 ship with integration services by default, although best practice is to upgrade them as soon as possible if the VM is going to continue residing in Hyper-V.
The interesting part in this case was that the OS rebooted itself when sysprep finished, in order to start the MDT image that transfers the WIM back to the MDT server. The cluster and Hyper-V did not notice this internal reboot, and thus concluded that the heartbeat had stopped.
And because the VM was a cluster resource, this heartbeat loss was handled according to the default policy, and guess what that is: a reboot!
So which settings in the cluster resource cause this madness? First of all, the Heartbeat setting in the cluster VM resource properties:
This can be read on the TechNet site about the Heartbeat setting for Hyper-V clusters:
And then there is the policy for what the cluster should do once it thinks the VM has become unresponsive:
There are different ways to stop the cluster from rebooting the machine: one is to disable the heartbeat check, and another is to set the response to failure to do nothing.
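The first option can also be done from PowerShell on the host by turning off the Heartbeat integration service for the build VM; this is what surfaces as the heartbeat check in the VM settings. A sketch, where the VM name is a placeholder:

```powershell
# Stop the VM from reporting heartbeats to the host while the
# reference build runs, so the cluster has nothing to monitor.
Disable-VMIntegrationService -VMName "REF-Build01" -Name "Heartbeat"

# Turn it back on once the build and WIM transfer are done.
Enable-VMIntegrationService -VMName "REF-Build01" -Name "Heartbeat"
```

Remember to re-enable it afterwards if the VM is going to stay on the cluster, so it gets normal health monitoring again.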
The customer mostly uses the VMM console, and when building a new VM for MDT reference builds they can configure the integration services to disable the heartbeat check, and thus not get their work interrupted by unwanted reboots.
While searching for the cause I checked the hosts’ NIC drivers, as I thought it might have something to do with a transfer error, but could not find anything; on the positive side, the hosts ended up on the latest NIC firmware and drivers 😉 . My suspicion that it had to be the cluster awoke after I had spun up a test VM that was not part of the cluster, and that one succeeded with the build and transfer.
This is a rare case, and I would say that in 99% of cases you want the default behaviour, as a VM can genuinely become unresponsive and the cluster can then try a reboot to bring it back into operation.
Clarification: if you spin up a VM with an OS or PXE image that does not have integration services, the cluster will not reboot the VM after the timeout. The OS has to start sending heartbeats to the Hyper-V host first; from then on it is under surveillance and managed by the cluster until it is properly shut down!
Hope this helps someone out there wondering what happens…
I have been quite busy since Ignite but wanted to share the sessions I attended along with their recordings so you can watch them. Some sessions were of better quality than others, and I will try to guide you to those. My interests were of course Hyper-V, Nano Server, Docker, Azure, automation, System Center, storage…
Then I went to the foundation session “Bring Azure to your datacenter” with Jeffrey Snover and Mark Russinovich https://channel9.msdn.com/Events/Ignite/2015/FND1451. In this session they were not allowed to show any demos, but cheated a bit anyway 😉
In the afternoon I went to my MVP friend and Hyper-V colleague Aidan Finn´s “Hidden treasures in Hyper-V 2012 R2” https://channel9.msdn.com/Events/Ignite/2015/BRK3506, which is a keeper, as it gives you hints on what to think about to make your hosts run better, and with the version available for production today!
Wednesday
After a great party hosted by Veeam I was a bit late to the morning sessions, but got a nice breakfast and a chat with a Microsoft guy. After that I went to the Nano Server session with Jeffrey Snover, always a pleasure to listen to someone who really can do presentations https://channel9.msdn.com/Events/Ignite/2015/BRK2461
After that it was lunchtime, and then I went to watch my friend Fredrik Nilsson´s Community Theater session, “Getting started with Chef”.
I guess that most of the Hyper-V MVPs present at Ignite went to the “What´s new with Hyper-V in Windows Server 2016” session, given by Ben Armstrong and Sarah Cooley from the Hyper-V team https://channel9.msdn.com/Events/Ignite/2015/BRK3461
During lunch I went into the Expo area and watched my friend Jakob G Svendsen´s Community Theater session on device management, with a bit of LEGO robot stuff… hilarious!
There were a bunch more sessions that I could not attend in person but that were still really valuable, and I will do a follow-up post with links to their recordings, so stay tuned 🙂