So I am working with a customer on their path of upgrading to the 2016 versions. The first step was to make sure that the VMM 2012 R2 server was updated to the latest UR and that I could deploy guest VMs with Windows Server 2016.
After updating VMM to UR11 I checked the list of operating systems:
To be able to see Windows Server 2016 as a guest OS I had to add a hotfix, and that took some time. Whatever you do, do not cancel; just wait and wait and wait, and the never-ending progress bar will eventually go away 😉. And yes, you have to add one hotfix for the console and one for the VMM server!
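A quick way to verify that the hotfixes took effect is to check from the VMM shell that the 2016 editions now show up; a minimal example:

```powershell
# List the guest OS entries VMM knows about and look for the 2016 editions
Get-SCOperatingSystem | Where-Object { $_.Name -like '*2016*' } | Select-Object Name
```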
I am a firm believer that servers should not run anything they do not need to, and thus I have now installed the new System Center VMM 2016 on a Windows Server 2016 Core installation.
In my home lab I do not have that many hosts, so I took the opportunity to install SQL Server 2016 on the same Core instance.
As I was installing SQL Server on the same machine I had to enable the .NET 3.5/2.0 feature on this server. And yes, I know and could not agree more: please remove this requirement, dear SQL team, and move into the future!
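On Server Core that feature is enabled from PowerShell; a minimal example, assuming the installation media is mounted as D: (the .NET 3.5 payload is not on disk by default, so -Source has to point at the media):

```powershell
# NET-Framework-Core is the .NET 3.5 feature; -Source points at the media's side-by-side store
Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs
```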
Although installing SQL Server with the wizard is not supported on Core, it does show some progress through a graphical dialog…
Once that was up and running I installed the ADK for Windows 10, and I used the one for Windows 10 1607.
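On Core the ADK setup runs fine from the command line; a hedged example of a silent install with just the features VMM needs (install path is an example):

```powershell
# adksetup.exe is the web installer for the Windows ADK 1607;
# VMM only needs the Deployment Tools and WinPE features
.\adksetup.exe /quiet /installpath C:\ADK `
    /features OptionId.DeploymentTools OptionId.WindowsPreinstallationEnvironment
```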
Then I could start the VMM install. And yes, there is a command-line way of installing VMM, but this time I wanted to see if I could use the wizard on Core!
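For reference, the unattended route looks roughly like this, a sketch based on the documented setup.exe switches (the service account names and the .ini path are examples; the .ini file carries settings such as database, library and ports):

```powershell
# Run from the root of the VMM installation media;
# VMServer.ini contains the unattended settings
.\setup.exe /server /i /f C:\Temp\VMServer.ini `
    /VmmServiceDomain contoso /VmmServiceUserName svc-vmm /VmmServiceUserPassword P@ssw0rd `
    /IACCEPTSCEULA
```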
During the installation the wizard complained about the amount of memory I had assigned to the VM I was installing on, and with the super-duper runtime memory resize feature in 2016 I could add more to the running VM without any stop and start!
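That runtime resize works even for static memory on a running 2016 VM; a minimal example from the host (the VM name is an example):

```powershell
# Bump the static memory of a running VM; no stop/start needed on Hyper-V 2016
Set-VMMemory -VMName 'VMM01' -StartupBytes 8GB
```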
After that I had no more issues and the installation completed successfully!
Once installed I had to do some patching, because at the same time VMM 2016 was released Microsoft also announced the availability of CU1 🙂. Trying to use the shortcut from the installation dialog fails on Server Core, as those GUI parts are not present! I can, though, use Sconfig and the “Download and install updates” option to get the updates I want…
Revised: Based on the SQL requirements page, which has been updated, it is now supported to run on SQL Server Standard, from SQL Server 2012 SP2 and up. The following link on the VMM page still says 2014 Enterprise, but that will be updated. My MVP friend Anders Asp has got info that I share here:
“Official MSFT statement: That is likely a carry over from earlier TP content when we had a bug that installation would fail on Std SQL(TP3?). Standard should work.”
//As you can see, System Center VMM 2016 GA will require SQL Server 2014 Enterprise or later, so you will not be able to use a Standard edition SQL Server and be supported. So if you are upgrading from VMM 2012 R2 you will also have to upgrade your SQL Server to the Enterprise level.//
A SQL Server instance used solely for System Center is included in the System Center licensing.
During the last week I was working on some bare-metal deployment of Hyper-V hosts with System Center VMM. We had deployed them before using legacy boot, but now we had updated the BIOS to the latest version and got into some trouble… Maybe it was because of the HPE instead of the HP 😉
During the deployment WinPE got an error and could not connect to the VMM server:
We tried to update NIC drivers and such in the WinPE image, but that did not help. During the testing we started the server and configured it to boot with EFI instead of legacy boot, and voilà, it could connect to the VMM server. But as the Hyper-V VHD was MBR, we then got the following error:
The easiest way I could think of right there and then was to create a new GPT-based VHD to boot the Hyper-V host from, using a PowerShell convert script against the original MBR VHD. The script had to be run on a Hyper-V host, so I connected to one of the Hyper-V nodes in the test cluster and ran the script against a patched VM that had been sysprepped.
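The script itself is not reproduced here, but a minimal sketch of the approach could look like this. It assumes the Hyper-V and Storage modules on the host; the paths, disk size and robocopy options are examples, and a real conversion needs more error handling:

```powershell
$srcVhd = 'C:\VHDs\HyperVHost-MBR.vhdx'   # example path to the sysprepped MBR disk
$dstVhd = 'C:\VHDs\HyperVHost-GPT.vhdx'   # example path for the new GPT disk

# Create the new disk and initialize it as GPT
New-VHD -Path $dstVhd -SizeBytes 60GB -Dynamic | Out-Null
$dst = Mount-VHD -Path $dstVhd -Passthru | Initialize-Disk -PartitionStyle GPT -PassThru

# EFI system partition (formatted first, then retyped), MSR and the OS partition
$efi = New-Partition -DiskNumber $dst.Number -Size 200MB -AssignDriveLetter
$efi | Format-Volume -FileSystem FAT32 -Confirm:$false | Out-Null
$efi | Set-Partition -GptType '{c12a7328-f81f-11d2-ba4b-00a0c93ec93b}'
New-Partition -DiskNumber $dst.Number -Size 128MB `
    -GptType '{e3c9e316-0b5c-4db8-817d-f92df00215ae}' | Out-Null
$os = New-Partition -DiskNumber $dst.Number -UseMaximumSize -AssignDriveLetter
$os | Format-Volume -FileSystem NTFS -Confirm:$false | Out-Null

# Mount the MBR source and copy the Windows volume across
$src     = Mount-VHD -Path $srcVhd -Passthru | Get-Disk
$srcVol  = $src | Get-Partition | Get-Volume | Sort-Object Size -Descending | Select-Object -First 1
$srcRoot = "$($srcVol.DriveLetter):\"
$dstRoot = "$($os.DriveLetter):\"
robocopy $srcRoot $dstRoot /E /COPYALL /XJ /R:1 /W:1

# Write the UEFI boot files to the EFI partition
bcdboot "$($os.DriveLetter):\Windows" /s "$($efi.DriveLetter):" /f UEFI

Dismount-VHD -Path $srcVhd
Dismount-VHD -Path $dstVhd
```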
And once that was done I had to update the physical computer profile to set the disk to GPT instead of MBR:
Trying to update the WinPE image then got us into some more trouble, as you can see in the following screen dump. This was a new one that I had not seen before; checking the folders I could not find anything strange, but…
I tried restarting the VMM service just in case, but the same error appeared. So I thought it might go away if I removed and added the PXE server again, and yes, that worked!
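A hedged example of that remove/add dance from the VMM shell (the server name and Run As account are examples):

```powershell
# Remove the misbehaving PXE server from VMM and add it back again
Get-SCPXEServer -ComputerName 'wds01.contoso.local' | Remove-SCPXEServer
Add-SCPXEServer -ComputerName 'wds01.contoso.local' `
    -Credential (Get-SCRunAsAccount -Name 'HostManagement')
```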
And after this we were able to deploy the physical servers as Hyper-V hosts!
I have been using and publishing my script for easily downloading all the VHDs for evaluating and testing the technical previews of System Center 2016, and now it is time for TP4! These pre-installed VHDs make it quite easy to spin up parts of the System Center suite as VMs on your Hyper-V box within minutes once downloaded.
Here is the script for your convenience, so you can start playing with the new release!
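The full script with the current download links lives in the original post; at its core it is just a list of URLs pushed through BITS, roughly like this sketch (the URLs below are placeholders, not real links):

```powershell
# Destination folder for the evaluation VHDs
$target = 'D:\EvalVHDs'
New-Item -Path $target -ItemType Directory -Force | Out-Null

# Placeholder URLs; replace with the current TP4 links from the Evaluation Center
$urls = @(
    'https://download.microsoft.com/<...>/SCVMM_TP4.vhd',
    'https://download.microsoft.com/<...>/SCOM_TP4.vhd'
)

foreach ($url in $urls) {
    Start-BitsTransfer -Source $url -Destination $target
}
```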
Today I helped a customer that had issues with their new VMs when doing backups using Veeam and also when trying to do checkpoints within VMM in their Hyper-V 2012 R2 environment.
Looking at the error message from VMM, it showed quite clearly that just one of the VHDXs was having the issue, and looking at where that file resided showed the reason:
The file was residing on its own in a CSV volume, directly in the root folder, and as stated in the blog post from the Core Team the VM worker process does not have the relevant permissions at that level and thus gets an access-denied error when trying to create a checkpoint.
So how do you solve it? By moving the disk into a subfolder, either manually or with live storage migration. That gives the folder the right ACLs and thus gives the VM worker process the rights to create an AVHDX file there.
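A hedged example of moving just that disk with live storage migration from the host (VM name and paths are examples):

```powershell
# Move only the offending VHDX from the CSV root into a folder named after the VM
Move-VMStorage -VMName 'SQLVM01' -VHDs @(
    @{ SourceFilePath      = 'C:\ClusterStorage\Volume1\SQLVM01_Data.vhdx'
       DestinationFilePath = 'C:\ClusterStorage\Volume1\SQLVM01\SQLVM01_Data.vhdx' }
)
```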
So watch out when you create new VMs: make sure you actually put all of the virtual disks that belong to them inside folders on those CSVs!
I have helped two customers move their System Center VMM 2012 R2 servers to a Hyper-V VM. Instead of carrying over legacy stuff, we installed a new Generation 2 VM in Hyper-V with Windows Server 2012 R2.
Easier said than done… or is it?
So what went wrong at both customers and how did I solve it?
We copied the library and the database backup from the old server, shut that one down, then started the new one, added it to the domain, and installed the VMM server.
We patched it to UR7 with Windows Update, and after that we did a restore of the database from the old system with the binary SCVMMRecover.exe -Path <db-backupfile>.
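For reference, the backup on the old server and the restore on the new one look roughly like this (the paths and backup file name are examples; SCVMMRecover.exe lives in the bin folder of the VMM installation):

```powershell
# On the old VMM server: connect and dump the database to a backup file
Get-SCVMMServer -ComputerName localhost | Out-Null
Backup-SCVMMServer -Path C:\Backup

# On the new server, after installing VMM and patching to the same UR level:
& 'C:\Program Files\Microsoft System Center 2012 R2\Virtual Machine Manager\bin\SCVMMRecover.exe' `
    -Path C:\Backup\VMM_DB.bak -Confirm
```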
After that I started the console, and when trying to check things in properties and such, the console crashed and the service got a dump:
Looking at the dump, I could see that not everything was great with the database (the old VMM server was patched to UR7 before I did the database backup). Based on the log file, something was missing in the restored database…
So how did I solve it? I uninstalled UR7 on the VMM server and then reinstalled it, and voilà, no more crashes!
Azure Automation and Hybrid Runbook Workers are fun to play with, and today I wanted to try automating some System Center VMM tasks.
I read Markus Lassfolk's post about changing VMs' network adapter MACs from dynamic to static, which is the preferred setting for your Hyper-V VMs. So how could I do this with Azure Automation runbooks instead of a script that is run on the VMM server?
In my runbook I have a VMM automation account declared as a credential asset, and I connect to the VMM server with that to be able to reconfigure the VMs. If I do not use -PSCredential for the InlineScript, the Runbook Worker will try to use the system account of the Worker server, and that does not work so well for the connection to the VMM server.
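A minimal sketch of such a runbook, assuming a credential asset named VMMAutomation and a VMM server called vmm01.contoso.local (verify the VMM cmdlet parameter and property names against your module version; a real script should also skip powered-off VMs whose dynamic MAC is still all zeroes):

```powershell
workflow Set-StaticMacOnVMs
{
    param([string]$VMName = 'All')

    # 'VMMAutomation' is the name of the credential asset in Azure Automation
    $cred = Get-AutomationPSCredential -Name 'VMMAutomation'

    InlineScript {
        Import-Module virtualmachinemanager
        Get-SCVMMServer -ComputerName 'vmm01.contoso.local' | Out-Null

        $vms = if ($Using:VMName -eq 'All') { Get-SCVirtualMachine }
               else { Get-SCVirtualMachine -Name $Using:VMName }

        foreach ($vm in $vms) {
            foreach ($nic in Get-SCVirtualNetworkAdapter -VM $vm) {
                # Keep the MAC the VM already has, just make it static
                # (changing the type may require the VM to be powered off)
                Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $nic `
                    -MACAddressType Static -MACAddress $nic.MACAddress | Out-Null
            }
        }
    } -PSComputerName 'vmm01.contoso.local' -PSCredential $cred
}
```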
You can either start it from a PowerShell console, if you have the Azure PowerShell module installed, or from the portal GUI. I used only one input parameter, which can be an explicit VM name or “All” for all VMs.
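From the console it can be kicked off like this, using the Azure Service Management cmdlets of the time (the account and runbook names are examples):

```powershell
Start-AzureAutomationRunbook -AutomationAccountName 'MyAutomation' `
    -Name 'Set-StaticMacOnVMs' -Parameters @{ VMName = 'All' }
```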
And as you can see in the VMM log, the VMs' NICs have been changed; if a VM has two or more NICs, all of them get a static MAC configured:
I hope you see the potential in Azure Automation. Happy automating!
I have been helping a customer with their environment and we had a problem that took me a while to figure out.
They were baking reference images for their SCCM environment, and the best and easiest way to do that is of course to use VMs. The problem was that when the image was being transferred back to the MDT server, the VM rebooted after half of the image had been uploaded…
So what was causing this crazy behavior? It took me a little while before I realized what it was all about: the Hyper-V cluster platform and its resilience and heartbeat functionality!
At first the build VM boots from the MDT image, with no integration services yet. But then it restarts to install applications and such within the OS, and as the customer works on a Windows 7 image you can see that it starts to send heartbeats to the host.
As you might know, client and server operating systems since Windows Vista and Server 2008 have integration services in them by default, although best practice is to upgrade them as soon as possible if the VM is to continue residing in Hyper-V.
The interesting part in this case was that the OS rebooted into the MDT image when sysprep had finished, to transfer the WIM back to the MDT server, and the cluster/Hyper-V did not notice this reboot; it only saw that the heartbeat stopped.
And as the VM was a cluster resource, this heartbeat loss was handled by the default policy, and guess what: a reboot!
So what settings in the cluster resource cause this madness? First of all, the heartbeat setting in the cluster VM resource properties.
This can be read on the TechNet site about the heartbeat setting for Hyper-V clusters:
And then you have the policy for what the cluster should do once it thinks the VM has become unresponsive:
There are different ways to stop the cluster from rebooting the machine: one is to disable the heartbeat check, and another is to set the response to resource failure to do nothing, as sketched below.
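Both can be set from PowerShell as well; a hedged sketch (the resource and VM names are examples, and the RestartAction value follows the cluster resource common properties):

```powershell
# Option 1: tell the cluster to do nothing when the VM resource fails
$res = Get-ClusterResource -Name 'Virtual Machine REF-BUILD01'
$res.RestartAction = 0   # 0 = do not restart on failure

# Option 2: disable the Heartbeat integration service on the build VM,
# so the cluster never gets a heartbeat to start monitoring in the first place
Disable-VMIntegrationService -VMName 'REF-BUILD01' -Name 'Heartbeat'
```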
The customer mostly uses the VMM console, and when building a new VM for MDT reference builds they can disable the heartbeat check in the integration services settings and thus not get their work postponed by unwanted reboots.
During the search for the cause I checked the host NIC drivers, as I thought it might have something to do with a transfer error, but I could not find anything; on the positive side, the hosts got updated to the latest NIC firmware and drivers 😉. My suspicion that it had to be the cluster was awakened after I spun up a test VM that was not part of the cluster, and that one succeeded in the build and transfer.
This is a rare case, and I would say that in 99% of cases you want the default behaviour, as a VM can genuinely become unresponsive and the cluster can then try a reboot to bring it back into operation…
Clarification: if you spin up a VM with an OS or PXE image that does not have integration services, the cluster will not reboot the VM after the timeout. The OS has to start sending heartbeats to the Hyper-V host first; then it will be under surveillance and managed by the cluster until it is properly shut down!
Hope this helps someone out there wondering what is happening…