Hyper-V local storage available for placement in SCVMM

I have been working with a customer and was preparing to upgrade one of their Hyper-V clusters to 2012 R2. During my preparations, while looking at the individual hosts, I found several VMs that were residing on local storage on the hosts rather than on the cluster storage.

There were two reasons for this: first, it was allowed to put VMs on local disks, and second, whoever created the VMs forgot to use the appropriate hardware template that makes them highly available by default. If you create a new VM with a new hardware profile, make sure it is configured correctly under the Availability tab.

The Hyper-V hosts were deployed with bare-metal deployment from VMM, which is why they have a local D:\ volume.

Looking at the properties of a host, you can see which storage is available for placement:

[Screenshot: host properties showing storage available for placement]

As you can see, the VM is not configured as highly available and has its virtual disk on local storage:

[Screenshots: VM not configured as highly available, with its virtual disk on local storage]

I have made a simple script that runs through all hosts within a cluster and marks all storage that is not cluster shared as not available for placement.
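A minimal sketch of how such a script can look, using the Get-SCStorageVolume/Set-SCStorageVolume cmdlets from the VMM module and assuming the volume name holds the mount point, so the standard C:\ClusterStorage path identifies CSV storage:

Import-Module virtualmachinemanager

foreach ($cluster in Get-SCVMHostCluster) {
    foreach ($vmhost in (Get-SCVMHost -VMHostCluster $cluster)) {
        # Turn off placement for every volume that is not under the CSV mount point
        Get-SCStorageVolume -VMHost $vmhost |
            Where-Object { $_.Name -notlike 'C:\ClusterStorage*' } |
            ForEach-Object { Set-SCStorageVolume -StorageVolume $_ -AvailableForPlacement $false }
    }
}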

Now, when trying to deploy a VM with a new hardware profile that is not set to highly available, the deployment fails because the local disks have been unchecked as available for placement.

[Screenshots: VM placement blocked because no local storage is available for placement]

The reason for configuring this only for Hyper-V nodes that belong to a cluster is that there might be standalone Hyper-V hosts that actually should be able to provision VMs to local disks.

Configuring a VMM logical switch with a bandwidth-limited virtual port

I got a question from a customer about how to limit a VM's bandwidth from VMM, as one VM was too noisy and devoured the host's bandwidth at the expense of the other VMs. Hyper-V 2012 and later offers ways to set both priority and bandwidth.

In Hyper-V Manager you can find the setting in the VM's configuration, under the virtual network adapter:

[Screenshot: VM settings in Hyper-V Manager showing the virtual network adapter]

Here I can enable bandwidth management and set both a minimum and a maximum; in this case I just want a limit.

[Screenshot: bandwidth management with minimum and maximum bandwidth settings]
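For reference, the same limit can be set with the Hyper-V PowerShell module (a minimal sketch; the VM name is an example, and -MaximumBandwidth is given in bits per second):

# Cap the vNIC of the VM at 100 Mbps ("NoisyVM" is an example name)
Set-VMNetworkAdapter -VMName "NoisyVM" -MaximumBandwidth 100000000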

But how do I accomplish the same thing in VMM? As you might have noticed, there is no way to edit this on the virtual NIC in the VM's settings. Instead, this is a setting I configure with a port profile and apply to the selected VM or VMs. By doing it this way I can easily apply the same profile to several VMs instead of configuring each VM individually. Some are configured by default, and I can also add new ones with the particular settings I need.

First there is the port classification:

[Screenshot: port classifications in VMM]

And then the actual port profiles:

[Screenshot: virtual network adapter port profiles in VMM]

These two combined are used in the logical switch as a virtual port:

[Screenshot: logical switch virtual port using the classification and port profile]

The easiest way is to use PowerShell to create a new port classification and profile and then update the logical switch so that it can be used for the VMs that need it. I have made a function that takes care of all the steps, including adding it to the logical switch as a virtual port:

[Screenshot: PowerShell function creating the classification, port profile, and virtual port]
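The function was published as an image, so here is a minimal sketch of the same steps. The names, the 100 Mbps default, and the exact bandwidth parameter name are my assumptions against the VMM 2012 R2 cmdlets:

function New-SCBandwidthLimitedVirtualPort {
    param(
        [string]$Name = "Limited100Mbps",          # example classification/profile name
        [string]$LogicalSwitchName = "LSwitch01",  # example logical switch
        [int]$MaximumBandwidthMbps = 100
    )

    # The classification is the label you pick when connecting a vNIC
    $classification = New-SCPortClassification -Name $Name

    # The native port profile carries the actual bandwidth setting
    $portProfile = New-SCVirtualNetworkAdapterNativePortProfile -Name $Name `
        -MaximumBandwidthAbsoluteMbps $MaximumBandwidthMbps

    # Tie the classification and profile together on the logical switch as a virtual port
    $switch = Get-SCLogicalSwitch -Name $LogicalSwitchName
    New-SCVirtualNetworkAdapterPortProfileSet -Name $Name -LogicalSwitch $switch `
        -PortClassification $classification `
        -VirtualNetworkAdapterNativePortProfile $portProfile
}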

And also a function for removal; in this one I check the connected VMs and move them to the default port profile before removing it:

[Screenshot: PowerShell function removing the virtual port]
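The removal function was likewise posted as an image; below is a sketch of the logic, assuming a fallback classification named "Default" exists and that the cmdlet and property names are as I recall them from the VMM module:

function Remove-SCBandwidthLimitedVirtualPort {
    param(
        [string]$Name = "Limited100Mbps",
        [string]$DefaultClassificationName = "Default"   # example fallback classification
    )

    $classification = Get-SCPortClassification -Name $Name
    $default = Get-SCPortClassification -Name $DefaultClassificationName

    # Move every vNIC that uses the classification over to the default one
    foreach ($vm in Get-SCVirtualMachine) {
        Get-SCVirtualNetworkAdapter -VM $vm |
            Where-Object { $_.PortClassification -eq $classification } |
            ForEach-Object {
                Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $_ -PortClassification $default
            }
    }

    # Remove the virtual port from the logical switch, then the profile and classification
    Get-SCVirtualNetworkAdapterPortProfileSet |
        Where-Object { $_.PortClassification -eq $classification } |
        ForEach-Object { Remove-SCVirtualNetworkAdapterPortProfileSet -VirtualNetworkAdapterPortProfileSet $_ }
    Remove-SCVirtualNetworkAdapterNativePortProfile -VirtualNetworkAdapterNativePortProfile (Get-SCVirtualNetworkAdapterNativePortProfile -Name $Name)
    Remove-SCPortClassification -PortClassification $classification
}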

There will probably be some updates to this in the future, but you can test it for your own needs 🙂 I will now start testing with some bandwidth flooding to verify that it actually limits the VMs.

All VMs reporting “Unsupported Cluster Configuration” in VMM

Today I was contacted by a customer regarding an issue they had with all the VMs in their main cluster reporting “Unsupported Cluster Configuration”.

The reason was that two of the nodes in the cluster had, in VMM 2012 R2, lost their virtual switch and vNICs. As a result, VMM reported that the cluster did not have a highly available virtual switch, and thus the VMs had network connections that were not available on the cluster…

[Screenshot: VMs with Unsupported Cluster Configuration status in VMM]

After searching a bit and checking the hosts and VMs outside of VMM, I found no issues on them: the logical switch and vNICs were still there and the VMs were able to reach the network, so this was clearly a VMM issue.

I tried refreshing the cluster and nodes and also restarting the VMM agent on the hosts, but that did not help. Looking at the host properties in VMM showed nothing where both the switch and the management vNICs should have been:

[Screenshot: host properties in VMM missing the virtual switch and vNICs]

What did help, though, was using Failover Cluster Manager to live migrate the VMs and then restarting the host.

After the reboot I refreshed the cluster and the network appeared again. I have been searching for a reason for the issue but have not found anything yet in the logs on either the VMM server or the hosts…

To refresh the VMs on the cluster and clear the “Unsupported Cluster Configuration” status after I had gotten the virtual switch back, I used PowerShell:

[Screenshot: PowerShell refreshing the VMs on the cluster]
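The code was posted as an image; in essence it refreshes every VM on the cluster so VMM re-reads its configuration. A sketch of that idea (the cluster name is an example):

$cluster = Get-SCVMHostCluster -Name "Cluster01"
Get-SCVirtualMachine |
    Where-Object { $_.VMHost.HostCluster -eq $cluster } |
    ForEach-Object { Read-SCVirtualMachine -VM $_ }   # forces VMM to re-read each VM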

If you have had this issue or something similar, I would like to know, so please comment on the post 🙂

I will search some more and see if I can find the reason for this. It should be said that the customer has two 2012 Hyper-V clusters and one 2012 R2 cluster, and I have not seen this happen on the R2 cluster yet, so maybe it is a 2012 issue… And yes, we are working eagerly to move the VMs to the R2 cluster 😛. The VMM 2012 R2 server is also updated with the latest UR4.

Error 25122 when doing a refresh in VMM 2012 R2 UR3 and later

There is no better way to start the year than to enlighten you with an error I encountered this autumn at a customer…

After I updated the customer's VMM server to UR3, I noticed that it started giving a warning (Completed w/ Info) during refresh of the cluster. Digging into it, you can see the reason…

[Screenshot: cluster refresh job with error 25122]

When I set up the Hyper-V cluster I did it outside of VMM. I had, however, set up the hosts with logical switches and host vNICs in the VMM network fabric, and the different cluster networks were configured with IP pools. For the record, the VMM server has been upgraded from 2012 SP1.

So what was the reason for this? Well, when I did the cluster setup I used an IP from the management IP pool range for the cluster management address and had not reserved it. Yes, I know I should have taken care of it then, but I did not…

It turns out that in UR3 the VMM team fixed a bug, so VMM now detects and reports this when doing a refresh on the cluster object 😛

So how did I mitigate this issue? I searched and found a German blog post by Michel Luescher where he solves it with PowerShell. As you can see when reading the lines, we find the IP address object that is allocated with the wrong type and change it to a HostCluster allocation instead:
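A sketch of that approach, with cmdlet usage as I recall it from the VMM module (the IP address, subnet, and cluster name are examples): revoke the wrongly typed allocation and grant the address back to the host cluster object.

# Find the allocated address, the pool it came from, and the cluster it should belong to
$ip      = Get-SCIPAddress -IPAddress "192.168.1.10"
$pool    = Get-SCStaticIPAddressPool -Subnet "192.168.1.0/24"
$cluster = Get-SCVMHostCluster -Name "Cluster01"

# Return the address to the pool, then allocate it to the HostCluster object
Revoke-SCIPAddress -AllocatedIPAddress $ip
Grant-SCIPAddress -GrantToObjectType "HostCluster" -GrantToObjectID $cluster.ID `
    -IPAddress $ip.Address -StaticIPAddressPool $pool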

Luckily PowerShell is in English, so I could understand that part 😉 as my knowledge of the German language is a bit limited…

Renewed as a Hyper-V MVP for the first time

One year passes so fast, and today was my renewal date for the Microsoft Most Valuable Professional award. This is the second time I have received the award!

I have been working quite hard in the communities during the year with live presentations, webcasts, blog posts, and more, and was hoping it was enough. In the afternoon I got a bit nervous and started thinking that maybe someone else had made a few more contributions and taken my place, but…

At the magic time of 04:29 PM CET I got the long-awaited mail:

[Screenshot: the MVP award notification email]

I am in good company with about 50 other Hyper-V MVPs, and I learn things every day from my friends and expert colleagues. 2015 will ROCK, so see you around, and most certainly at Ignite in Chicago in May!

Keep following my blog and Twitter, and I will try to keep posting information and help when I encounter problems or smart solutions that you can benefit from 🙂