Several backup vendors now support Hyper-V and host-level backup. I have been testing some of them, but I also wanted to see how the Altaro Hyper-V Backup solution works.
I cannot complain about the ease of installing and getting started with the Altaro solution. I like that the install wizard directly recognises whether I am installing on a cluster node or a standalone Hyper-V instance.
It is fully supported on both Core and Full installations of Hyper-V 2012 and 2012 R2.
When I installed it on my one-node Hyper-V 2012 R2 cluster, it found the node and promoted it to master controller.
It was really easy to configure which VMs should be included in the backup and then schedule a backup job.
I observed a question on the TechNet forums asking whether it is possible to change the name of the virtual hard disk when deploying a virtual machine or template from System Center Virtual Machine Manager, and the answer is yes.
With a few hundred VMs all having a vhdx with the same name as the template, things might not look so good.
So how do you solve this? First of all, when deploying new VMs you can change the name, but only in the VMM GUI when deploying to a host, not to a cloud. Notice that I create the VM from the template in the Library view of the console.
When you use this approach, the console automatically fills in the guest OS name. When creating a VM from a template in the VMs and Services view, that has to be filled in manually.
So how do you get the name of the VM onto the vhdx that holds the VM's operating system? As described above, you have to select deploy to a host, not to a cloud (of course, once the VM has been created it can be assigned to a cloud).
Then, when you reach the Configure Settings page of the wizard, the option to change the vhdx name appears.
Surely a vhdx with the same name as the VM is more suitable than the generic library name win2012-std.vhdx?
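For scripted deployments the same rename can be done from the VMM PowerShell module. The following is a minimal sketch based on the VMM 2012 VM-configuration cmdlets; the names Win2012-Std, HV01 and VM001 are placeholders, and you should verify the cmdlet and property names against your VMM version:

```powershell
# Build a VM configuration from the template (placeholder names throughout)
$template = Get-SCVMTemplate -Name "Win2012-Std"
$vmConfig = New-SCVMConfiguration -VMTemplate $template -Name "VM001"
$vmHost   = Get-SCVMHost -ComputerName "HV01"
Set-SCVMConfiguration -VMConfiguration $vmConfig -VMHost $vmHost
Update-SCVMConfiguration -VMConfiguration $vmConfig

# Rename the OS disk so it matches the VM instead of the generic library vhdx name
$vhdConfig = $vmConfig.VirtualHardDiskConfigurations[0]
Set-SCVirtualHardDiskConfiguration -VHDConfiguration $vhdConfig -FileName "VM001.vhdx"

# Create the VM on the host (deploying to a cloud does not expose the rename)
New-SCVirtualMachine -Name "VM001" -VMConfiguration $vmConfig
```

As in the GUI, the rename only works when the configuration targets a host, which is why the sketch places the VM with Set-SCVMConfiguration before touching the disk.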
In the next post I will show you how to change the names of the disks on already deployed VMs.
While upgrading a VMM 2012 SP1 installation to R2, I wanted to run the Configuration Analyzer. Be observant when clicking the link from the VMM R2 GA media, though, as it points you to the wrong place; as you can see in the following screendump, there is a link on the splash screen.
Pressing that link takes you to the following site, which is wrong! So do not start downloading anything from there, as those files are for VMM 2012.
Today I was at a customer site, and they had an interesting error in their virtualization environment with Hyper-V 2012 and SC VMM 2012 SP1 (yes, I know, it is not R2, but we are working on it). One of their Hyper-V hosts had a hardware memory failure, which led to the host crashing and the VMs restarting on other hosts.
During this, the VMs that were on that host ended up in an error state in the VMM database, even though viewing and managing them from Hyper-V Manager or Failover Cluster Manager was no problem. Every time we tried to refresh them, the VMs showed the following error and the job did not succeed.
And when we tried to look at the properties of such a VM from VMM, the console simply crashed, every time.
So how could we find the VMs suffering from this? PowerShell could be used: with the following we could get the number of affected VMs and also easily get a list of their names.
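Something along these lines works as a starting point. This is a sketch, and the status string to filter on is an assumption; replace it with whatever your broken VMs actually report in the VMM console:

```powershell
# Find VMs whose VMM status indicates the failed refresh
# (the "*Unsupported*" filter is an assumption - adjust to your error state)
$broken = Get-SCVirtualMachine | Where-Object { $_.StatusString -like "*Unsupported*" }

$broken.Count                                  # how many VMs are affected
$broken | Select-Object -ExpandProperty Name   # and a list of their names
```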
There is a cmdlet in VMM with a parameter that can be used, Remove-SCVirtualMachine -Force. This will remove the VM from VMM but not the virtual hard disk, as it cannot find it.
But because we want to keep the VM, we will do a bit of a workaround:
1. Stop the VM in VMM with Stop-SCVirtualMachine own01 (even though the vhd(x) is not seen by VMM, it will shut down the actual VM).
2. Remove the VM from the cluster in Failover Cluster Manager or with PowerShell (this only removes the cluster role, not the actual VM).
3. Start the VM in Hyper-V Manager or with PowerShell: Get-VM own01 -ComputerName HV01 | Start-VM
4. Remove the VM from VMM with PowerShell using -Force: Get-SCVirtualMachine own01 | Remove-SCVirtualMachine -Force (as the VM has been removed from the cluster, VMM cannot find it and delete the VM's XML file etc.).
5. Add the running VM to the cluster again with the Hyper-V and Failover Clustering PowerShell modules: Get-VM own01 -ComputerName HV01 | Add-VMToCluster (Get-Cluster HVCL30)
6. Refresh the VMs in VMM and verify that the own01 VM can now open its properties.
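Put together, the whole workaround looks roughly like this in PowerShell. It is a sketch using the example names own01, HV01 and HVCL30, run from a machine with the VMM, Hyper-V and FailoverClusters modules available; verify each cmdlet against your module versions:

```powershell
# 1. Stop the VM via VMM so VMM records it as stopped
Get-SCVirtualMachine -Name "own01" | Stop-SCVirtualMachine

# 2. Remove only the cluster role, leaving the VM itself intact on the host
Remove-ClusterGroup -Cluster "HVCL30" -Name "own01" -RemoveResources -Force

# 3. Start the VM again in Hyper-V (VMM still thinks it is stopped)
Get-VM -Name "own01" -ComputerName "HV01" | Start-VM

# 4. Remove the stale record from VMM; the vhdx is untouched since VMM cannot find it
Get-SCVirtualMachine -Name "own01" | Remove-SCVirtualMachine -Force

# 5. Make the running VM highly available again
Add-ClusterVirtualMachineRole -VMName "own01" -Cluster "HVCL30"

# 6. Refresh the host so VMM rediscovers the VM with a clean record
Get-SCVMHost -ComputerName "HV01" | Read-SCVMHost
```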
The reason I start the VM after removing it from the failover cluster is that VMM should think it is in a stopped state, because that makes it easier to remove! And as Hyper-V 2012 lets me add a running VM to a cluster, I do not have to keep the VM shut down during the whole process, only long enough to get it into the right state in VMM 🙂
Not the easiest way, but hey, who ever said VMM was self-healing 😛 And yes, we are in the process of upgrading to R2, and hopefully this error will not reemerge in that version…
I got a question about an error that occurred when creating a Hyper-V 2012 R2 cluster from VMM 2012 R2, where the error log stated the following:
“Error (25325) The cluster creation failed because of the following error: An error occurred while performing the operation.. “
During troubleshooting I found that the VMM 2012 R2 server was running on Windows Server 2012 Standard (which is fully supported). But as VMM uses the Failover Clustering cmdlets from the OS it is installed on, it fails to create the R2 cluster, because managing Windows Server 2012 R2 from Windows Server 2012 is not supported.
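A quick way to spot this mismatch is to compare, on the VMM server itself, the OS version with the version of the FailoverClusters module that VMM will load. A small sketch:

```powershell
# OS the VMM server runs on (Windows Server 2012 = 6.2, 2012 R2 = 6.3)
Get-CimInstance Win32_OperatingSystem | Select-Object Caption, Version

# Version of the Failover Clustering cmdlets available to VMM on this machine
Get-Module -ListAvailable FailoverClusters | Select-Object Name, Version
```

If the module version belongs to Windows Server 2012 while the hosts being clustered run 2012 R2, the cluster creation from VMM will fail as described above.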
I have had the pleasure of working a bit with Vision Solutions and their product Double-Take Move, which can be used when migrating between hypervisors or to a public IaaS cloud like Azure. I have also tested migrating from a generation 1 to a generation 2 VM in Hyper-V R2. Last year I gave a presentation at the Nordic System Center Summit about migration alternatives and tools, where I showed the System Center integration of Double-Take Move.
Maybe you have seen the blog post from Migration Mark where he describes the extension of their MAT PowerShell toolkit, which now also supports Double-Take Move. As you can see, if end users and the company demand that systems stay up during the migration, then MAT4Move is the tool.
TYPE: Streaming disk conversion
PROS: More uptime than any other solution, migrate directly to Azure
CONS: Has a cost per VM, Requires an agent
Recently a service pack update was released for this brilliant software, which among other things adds support for 2012 R2. One important update is that it is now possible to select a synthetic NIC when migrating directly to a Hyper-V host; in the earlier version you only got a legacy NIC, and that is not what we want! There is also the possibility to migrate to a replica VM residing on an SMB share. Here you can read more about some of the improvements:
Common improvements—The following improvements apply to both Double-Take Availability and Double-Take Move.
Windows 2012 R2 support—The following job types now support Windows 2012 R2.
Files and folders
Full server to Hyper-V
Full server to ESX
V to ESX
V to Hyper-V
Full server migration
Full server to Hyper-V migration
Adapter type on replica—This release allows you to select the type of adapter that will be used on a replica virtual machine for full server to ESX, full server to Hyper-V, full server to ESX migration, and full server to Hyper-V migration jobs.
Double-Take Move improvements—The following improvements apply to Double-Take Move migration jobs.
Alternate volume staging—With this release, you can stage the source’s system state data to an alternate volume on the target, if you do not have enough space on the target’s system state volume.
SMB share storage—This release allows you to store the replica virtual machine for a full server to Hyper-V migration job on a local volume or an SMB share.
Now that Integration Services 3.5 has been released by Microsoft, I wanted to try installing them in a CentOS VM.
Interestingly, support is only for CentOS up to 6.3, and the README file in the CentOS distribution says that version is deprecated. According to the forums, CentOS 6.4 and 6.5 ship with their own Integration Services in the distribution, and I could confirm this when testing: after trying to install 3.5, modinfo still said 3.1. The reason I pursued the install was that the NIC in Hyper-V said degraded.
So I tested some different cases, all ending with the same result: the IS reported 3.1 and the network adapter always showed as Degraded.
In the 6.3 version of CentOS there are no Integration Services by default, so installing gives the following result.
And when rebooting and checking the version, I can see that I successfully installed IC 3.5.
The exciting part comes here: I then wanted to upgrade my CentOS 6.3 to 6.5 with “yum update”, and as you can see in the next screendump, I got LIS version 3.1.
But my 3.5 RPMs are still installed, as you can see.
When checking from the PowerShell side, you can see that it first reports version 3.5, and then, after I have upgraded, it says 3.1.
One interesting reason why I would like to do the LIS 3.5 upgrade on CentOS 6.4/6.5 is that only after the upgrade do I get the Integration Services version on the host side, as this is part of a new feature in 3.5: Key-Value Pair (KVP) Exchange, which now works with Linux as well. That is not yet part of the built-in LIS 3.1 that CentOS/Red Hat distributes.
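The version the guest reports through KVP can be read from the Hyper-V host with WMI. This is a hedged sketch: the VM name centos01 is a placeholder, and the class and property names follow the common Msvm_KvpExchangeComponent examples for the root\virtualization\v2 namespace on 2012/2012 R2:

```powershell
# Read the guest-reported Integration Services version from the host via KVP
$vmName = "centos01"   # placeholder - your Linux VM's name
$vm = Get-WmiObject -Namespace root\virtualization\v2 `
        -Class Msvm_ComputerSystem -Filter "ElementName='$vmName'"
$kvp = $vm.GetRelated("Msvm_KvpExchangeComponent") | Select-Object -First 1

# Each exchange item is a small XML fragment with Name/Data property pairs
foreach ($item in $kvp.GuestIntrinsicExchangeItems) {
    $xml  = [xml]$item
    $name = ($xml.INSTANCE.PROPERTY | Where-Object { $_.NAME -eq "Name" }).VALUE
    if ($name -eq "IntegrationServicesVersion") {
        $data = ($xml.INSTANCE.PROPERTY | Where-Object { $_.NAME -eq "Data" }).VALUE
        "$vmName : Integration Services $data"
    }
}
```

On a guest running the built-in LIS 3.1 this query returns nothing, since the guest does not push the version key; after the 3.5 upgrade it does.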
Like everyone else with some IT pro community engagement, there is one date on the first day of every quarter that makes the heart beat a bit faster and raises the hopes of becoming one of the few privileged to receive the Most Valuable Professional Award from Microsoft.
Today I got the award for my contributions in the Virtual Machine community 🙂
For quite some time I have been trying to share my work in virtualization and automation with the community, helping out so that others can benefit and do not have to reinvent the wheel.
Thank you to everyone who has been following me on this blog, on Twitter and on other social networks; I hope you benefit somewhat from my scavenging in the wonderful world of IT!
I am looking forward to this year, when I, together with my fellow MVPs, will explore and work together to help out in the deployment of the Cloud OS.