VMware vSphere 5 and the licensing

I have now tested the script that Hugo Peeters has made for checking what licensing is needed for a vSphere platform when upgrading to vSphere 5.

Of course this is a small platform and we do not have that many machines running, but the point is that it is a cool script that gives you a hint of where you are and how much licensing your platform needs.

One thing my colleagues have missed, and that I wanted to touch on and highlight, is vRAM and the pooling; I think it is well documented in the vSphere licensing, pricing and packaging document.

The new licensing model is as follows:

  • No more restrictions on cores
  • No max physical RAM limit
  • You still need one license/pCPU
  • You are not allowed to mix different vSphere editions in the same vRAM pool; if more than one edition is managed by vCenter, separate vRAM pools are created

For each license edition there is a vRAM entitlement: 24 GB for Standard, 36 GB for Enterprise and 48 GB for Enterprise Plus. These entitlements are pooled across all hosts connected to a vCenter. For example, take a host with 2 pCPUs and 192 GB physical RAM (with Enterprise Plus that host contributes 96 GB vRAM) running a VM configured with 128 GB vRAM. If the vSphere cluster that this host resides in has 3 other hosts with the same setup, the pool holds 384 GB vRAM, and 384 minus 128 leaves 256 GB to use for other VMs before buying more licenses. If you have a linked vCenter, the vRAM entitlement of its hosts is also included in the pool. What I am trying to say is that even though you have used more vRAM than one host's entitlement, you are still compliant because it is part of a pool.
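Hugo Peeters' script does this much more thoroughly, but as a rough self-check of your own pool, a minimal PowerCLI sketch could look like the one below. It assumes every host runs Enterprise Plus (48 GB per licensed pCPU) and only counts the configured vRAM of powered-on VMs, so treat it as an illustration rather than an official compliance report.

# Rough vRAM pool check, assuming Enterprise Plus (48 GB per licensed pCPU)
$vRamPerCpuGB = 48
$licensedCpus = (Get-VMHost | ForEach-Object { $_.ExtensionData.Hardware.CpuInfo.NumCpuPackages } | Measure-Object -Sum).Sum
$poolGB = $licensedCpus * $vRamPerCpuGB
$usedGB = [math]::Round(((Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } | Measure-Object -Property MemoryMB -Sum).Sum / 1024), 2)
"vRAM pool: $poolGB GB, configured vRAM: $usedGB GB, left in pool: $($poolGB - $usedGB) GB"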

As in all virtualization design you must plan for host failures; the vRAM entitlement of a host that is down for maintenance or because of a failure is still part of the pool and can be used.

In the above example you can add more licenses to get more vRAM, and those licenses can later be used when adding a new host, covering its physical CPUs.

Hope this sheds some more light in the licensing jungle.

VMware vSphere and Microsoft Clustering

I have been investigating some things that need to be considered when deploying a Microsoft Cluster on a VMware platform.

As you can see in the graph, there are several different supported configurations. KB 1037959 has more information; I will try to highlight some things below.

The reason I started looking at this was that I was reviewing the multipathing policies for a customer, and we discussed in the office that we preferred the Round Robin policy. As you can also read in the KB, Round Robin is not supported for the shared RDM disk, so if you already have Round Robin as the default on your hosts, you have to set another policy on that specific LUN.

This can be done with PowerCLI or in the vSphere Client GUI, and as I am a big fan of PowerCLI I will show you the command for it:

Get-VMHost hostname.test.local | Get-ScsiLun -CanonicalName "naa.60054242555" | `
Set-ScsiLun -MultipathPolicy "Fixed"

If your default multipathing policy is set to one you do not want, you can change the default both with the VMware CLI and with PowerCLI; for the latter there is a script that Stephen has made, which can be found on the VMware Communities forum. Otherwise you will have to change the policy manually on every new datastore you add.
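If you would rather leave the host default untouched, a hedged PowerCLI sketch like the one below can bulk-correct the existing LUNs instead; it reuses the canonical name of the shared MSCS RDM from the example above as the one LUN that must keep another policy.

# Set Round Robin on all disk LUNs except the shared MSCS RDM (example canonical name)
Get-VMHost | Get-ScsiLun -LunType disk |
  Where-Object { $_.CanonicalName -ne "naa.60054242555" -and $_.MultipathPolicy -ne "RoundRobin" } |
  Set-ScsiLun -MultipathPolicy "RoundRobin"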

If you have an iSCSI SAN you will use an in-guest iSCSI connection to the shared storage, and then there is no need to change multipathing policies. What I do not understand, and have not found any good information about, is why VMware limits the supported configuration to two cluster nodes when using in-guest shared storage. As I see it the limit should be what MSCS has as a limit, which is 16 nodes; then again, maybe there is no need for such a big cluster as you already have HA in your virtualization platform. If the setup, as you can see in the graph, is a cluster without shared storage, there is no limit on the number of nodes.

You will also need to edit your VMs' SCSI controller; for Windows 2008 you must have the LSI SAS controller installed, and KB 1002149 outlines the steps for that. The shared disk must reside on a dedicated SCSI controller.
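A hedged PowerCLI sketch of that last step could look like the following; it assumes the New-ScsiController cmdlet is available in your PowerCLI version, and the VM and disk names are just examples.

# Move the shared disk to its own LSI Logic SAS controller with physical bus sharing
$sharedDisk = Get-HardDisk -VM "mscs-node1" -Name "Hard disk 2"
New-ScsiController -HardDisk $sharedDisk -Type VirtualLsiLogicSAS -BusSharingMode Physical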

You will also have to set up anti-affinity rules in DRS to keep your cluster nodes apart; if you for some reason have decided to set up a CIB (Cluster in a Box), you will instead need an affinity rule to keep them together on the same host. For the VMs used for clustering you should set DRS to partially automated.
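For the normal cluster-across-boxes case, a hedged PowerCLI sketch of those two settings could be the following, with the cluster and VM names as placeholders:

# Anti-affinity rule that keeps the two cluster nodes on separate hosts
New-DrsRule -Cluster (Get-Cluster "Cluster01") -Name "MSCS-KeepApart" -KeepTogether $false -VM (Get-VM "mscs-node1","mscs-node2")
# Limit DRS for the cluster nodes to partially automated
Get-VM "mscs-node1","mscs-node2" | Set-VM -DrsAutomationLevel PartiallyAutomated -Confirm:$false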

For more information on how to set this up, look at the VMware documentation PDF on setup for failover clustering.

 

DELL Compellent – Auto Storage Tiering

Over the last two days I have had the opportunity to test the Compellent SAN solution, and I find it really cool and smooth. When integrating it with your vCenter you get a single pane of glass to handle the whole solution. You have to install the Compellent plug-in on every vSphere Client to be able to administer the Compellent and get the plug-in-specific tabs; you will also have to add the Compellent management address along with a user and password that of course has the rights to do SAN stuff.

After adding the Storage Center you get the following tabs:

As you can see in the Compellent view, you can add and remove datastores; if this is done here, the Compellent plug-in will also format the datastore with VMFS and rescan the datastores on the hosts. As you also see in this screenshot, you can see for each datastore where your data is and how much of it resides where; Tier 1 is the fastest, and if your setup has more than two different disk types you will also get a Tier 2 and a Tier 3. Active blocks are moved to faster or slower disks depending on their demands.

With the Replay function you get SAN backup functionality, and for fully consistent backups you can install an agent in the VMs to have them prepare a VSS snapshot before the replays run. Really cool to be able to take SAN snapshots this way.

Another thing, as you can also see in the screenshot, is that the datastore is 2 TB but only 9.32 GB is used! Of course you have to size your solution to cope with the amount of data that you are actually going to store.

When deleting a datastore you get the following choices. If you put it in the Recycle Bin you can get it back as long as the space is not needed by some other data; as long as there is space in the Compellent you can restore the datastore, and that can be handy if you had some VMs that you need later and did not migrate to another datastore.

The Compellent solution is very powerful and has many more functions than I have highlighted here; I hope that I will get some more time to test and use it in the near future!

Disaster recovery of vSphere after disk array failure

Yesterday I had the opportunity to help a customer that had a disk array failure where both their PDUs died and one disk broke; after the hardware supplier had replaced the parts, some virtual machines had disks and databases that were corrupt.

To be able to assist the customer I used the TeamViewer program, which works excellently and is fast to get up and running; no need to get the firewall guy to open new ports or get a new VPN account! For personal use there is a free version that works for two hours at a time, but the enterprise license is not too expensive and should be considered as the tool is so powerful!

 

This leads to the most important lesson of all: you have to be very thorough in your design and not put all your eggs in one basket. In this particular case the vRanger backup server, the domain controller, the SQL server and the vCenter server were all on this datastore. Luckily the vRanger data was not corrupt and we could restore the whole vCenter server, and once vCenter was restored we could continue and restore the mailbox server, SQL and the domain controller. I restored vCenter to another VM, and when we saw that the restore was OK we deleted the old vCenter server. But as the customer had vSphere Essentials we will have to manually rename the vCenter-Temp VM, and how to do that you can read on this link; if you instead have Enterprise or higher as your license you can rename it and then do a Storage vMotion, and the files will get the right names. It is kind of difficult to do a storage migration when connecting directly to the ESXi host.
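For reference, the rename-and-Storage-vMotion trick on an Enterprise or higher license is only two cmdlets; this is a hedged sketch and the target datastore name is made up.

# Rename the restored VM first, then Storage vMotion it so the files on disk follow the new name
Set-VM -VM "vCenter-Temp" -Name "vCenter" -Confirm:$false
Move-VM -VM "vCenter" -Datastore (Get-Datastore "Datastore02")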

The following points should be considered:

  • Control the placement of the vCenter server with datastore and host affinity so that you know where it is in a disaster recovery scenario
  • Make sure that the backup server can be used in case of a total datastore blackout
  • After implementing a backup solution, check that you actually can do a restore as well!
  • Do not put all domain controllers and DNS servers in the same vSphere cluster and datastore
  • Make sure that you do not use thin disks and overcommit the datastore without the appropriate alarms set

Using a backup solution that backs up at the virtualization platform level, rather than inside the VM, is the most effective way to recover quickly and easily in the case of a total datastore failure. Most backup solution providers offer this feature, and those not using it today should definitely consider it. For example, Quest vRanger can do file-level restores from a VM backup.

Funny Hyper-V ad on a SlideShare vSphere storage presentation

I had to screendump the following presentation from SlideShare. I found it via another blog and thought it was interesting, as much of virtualization success depends on the storage and how it is performing. I am wondering whether this ad is intentional and Microsoft actually pays Google, and also pays for placing it on VMware-related content…

 

If someone is interested in this presentation, follow this link. I really recommend that you get to know which esxtop values to look at, and also, if you have not checked it before, align the VM disks! This is done automatically when using Windows 2008 and later, but check your Windows 2003 servers. If your storage supports the new VAAI, I recommend that you enable it in your vSphere environment as long as it is at least version 4.1. This can of course be set with PowerCLI, and it gives performance gains when doing provisioning, cloning or Storage vMotion, as well as better cluster locking and metadata operations…
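A hedged PowerCLI sketch for turning the VAAI primitives on across all 4.1 hosts could look like this (1 means enabled, and your storage must of course support the primitives):

# Enable the three VAAI primitives on every host in the inventory
foreach ($esx in Get-VMHost) {
  Set-VMHostAdvancedConfiguration -VMHost $esx -Name "DataMover.HardwareAcceleratedMove" -Value 1
  Set-VMHostAdvancedConfiguration -VMHost $esx -Name "DataMover.HardwareAcceleratedInit" -Value 1
  Set-VMHostAdvancedConfiguration -VMHost $esx -Name "VMFS3.HardwareAcceleratedLocking" -Value 1
}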


Pimp my MacBook Pro with PowerCLI

Probably some Mac fanatics will go crazy when they see my MacBook Pro and what I have done to it, but I think it is quite funny, and if I can get more attention by giving my Mac a fine VMwarish look, I think it's cool.

I was a bit too ambitious when I wanted to find a VM, and in what datacenter it was located, at a customer today with PowerCLI; at first I thought it would not be as easy as it really turned out to be 😉

I started by thinking, after looking at Get-Datacenter | Get-View and Get-Datacenter | Get-Member, that I needed something like Get-Datacenter | where {$_.vmfolder -eq (get-vm partVM | Select-Object VMfolder)} (that command gave me nothing).

I then read the excellent VMware PowerCLI cmdlet reference and got happy as I realized that I only needed the following command to get the datacenter where the VM resided:


Get-Datacenter -VM partVM

Or I could also pipe the VM object to the Get-Datacenter cmdlet:


Get-VM partVM | Get-Datacenter

If I want to know what cluster it is in, I use the Get-Cluster cmdlet instead:


Get-VM partVM | Get-Cluster

Second day of VMware vSphere PowerCLI Automation Course

Friday was the second and last day of the automation course, and I would say that I have now learned some more about PowerCLI and also about PowerShell. As the material was written for vSphere 4.0 and PowerCLI 4.0 U1, some of the lab commands came up as deprecated (which means that there is a newer cmdlet that could be used instead), because the lab kit we used had ESXi 4.1 U1 and PowerCLI 4.1.1. I would also like the lab material updated in some tasks that could be shortened (which I showed my lab partner), instead of setting a variable that is only used once. In parts of the lab material an alias is sometimes used and sometimes the cmdlet, which could cause confusion, but it is also good that it shows different ways of getting the same result, as long as the student thinks and does not only type what it says. I don't know if just our group was faster than others, but we completed all labs quite fast. I would have liked either the instructor or the material to include some more examples, scripts and other stuff that a VMware automation admin uses! I showed a fellow student Mr Renouf's vCheck script, and he really liked it 🙂

 

On Twitter I got a question to blog about my tweet that mentioned deployment of servers from a CSV file. As there is more than one way to do it, I found a shorter way, as I said above, than the one in the labs; this is an edited command from a presentation by Alan Renouf and Luc Dekens that also uses a template and an OS customization spec for the deployment.

Import-CSV C:\Scripts\Servers.csv | Foreach { New-VM -Name $_.Name -VMHost `
(Get-VMHost bimini02.rtsvl.local) `
-Datastore (Get-Datastore SharedVMs) `
-Template $_.Template `
-OsCustomizationSpec WinSpec02 -RunAsync }
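For completeness, the CSV in that one-liner only needs the columns the script block actually references; a hypothetical Servers.csv could look like this:

Name,Template
Br-Web01,W2K8R2-Template
Br-Sql01,W2K8R2-Template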

And when you are done playing around, you run the following command:

Get-VM Br* | Remove-VM -Confirm:$false

This of course assumes that your test machines are named something starting with Br… and that no production machines are named starting with that 😉

First day of VMware vSphere PowerCLI Automation Course

Today I attended the VMware vSphere Automation course for PowerCLI. I don't know what my fellow students think, but I think the course could benefit from a little update and some more content! There are some good resources on the internet with much more information, and also the VMware PowerCLI web with the PowerCLI poster etc. The VMware PowerCLI Reference book is going to be mine as soon as it comes to Kindle!

One thing I learned on my own, with some help from the internet and a PowerShell guru, is how to recalculate a result and show it in another unit, for example GB instead of MB. The cool part here is that you include the math library to be able to limit the number of decimals; this can of course be used on any result you want to reshape.


Connect-VIServer localhost

Get-Datastore | Select-Object -Property Name, @{Name="FreeSpaceGB";Expression={[math]::Round(($_.FreeSpaceMB/1024),2)}}

So instead of the screendump below

I get the following:

Another thing that can be done with PowerCLI is to set the MTU for a VMkernel port to enable jumbo frame support when using IP storage. As I understand it, this has to be done in some kind of script/CLI because it cannot be done in the GUI. Just as important, you cannot edit an existing VMkernel port and change the MTU, so if you have set it up with the default MTU you will have to remove and recreate it. Do not forget to edit the vSwitch as well, because if the MTU is not set on the vSwitch and the physical switches you will not get the benefit of using the larger frames.


$VMHost = Get-VMHost -Name esxi02.test.local
$pnic = (Get-VMhostNetwork -VMHost $VMHost).PhysicalNic[3]
$vSwitch = New-VirtualSwitch -VMhost $VMHost -Nic $pnic.DeviceName -NumPorts 64 -Name vSwitch3 -Mtu 9000

$PortGroup = New-VirtualPortGroup -Name iSCSI -VirtualSwitch $vSwitch
New-VMHostNetworkAdapter -VMHost $VMHost -Portgroup $PortGroup -VirtualSwitch $vSwitch -IP 192.168.20.68 -SubnetMask 255.255.255.0 -Mtu 9000

If I find something cool on my PowerCLI journey tomorrow, I will give an update 🙂

VMware Network Performance

I found this performance study from VMware today; it was actually released yesterday, but I want to write a small post about it anyway because I found the results very interesting!

The study shows that virtual machines on vSphere 4.1 can saturate physical 10 Gbit network cards, and, which was also cool, VM-to-VM traffic inside one host could reach speeds up to 27 Gbit. In the graph that I have copied from the report you can see that with only one vCPU you can get almost 10 Gbit both in transmit and receive. This is really cool because it shows that virtual machines running in vSphere can be loaded with a serious amount of network traffic and still deliver!

This report, among others, shows that there are no longer any arguments for using a physical installation for your critical applications! And with the features that come with virtualization you also get more security and availability.