TEC 2011: Successfully Implement and Transition into Hyper-V Session

A summary of the session I held at The Experts Conference 2011 in Frankfurt yesterday. I think there were about 40 people in the crowd; TEC has about 350 attendees in total.

Once I have checked with Quest whether I can publish my whole presentation here I will do an updated posting. For now, here are the points I think are crucial when setting up a new virtualization platform:

  • Assessment and Consolidation Planning
  • Design and Testing
  • Migration and Optimization
  • Capacity Planning and Performance follow up

When deciding on a new virtualization platform, whether it is your first or you are replacing an existing one, there are some steps that need to be considered. First you have to know what you are running in your datacenter: what kind of operating systems and what kind of applications. You must also get a workload profile for those servers so you know what their demands are. If you do not do your homework and plan for the load, you will surely take a beating from your organization when the virtualized servers run like crap. As tools you can use the Microsoft Assessment and Planning Toolkit, or if you already use SCCM/SCOM you have both inventory and performance data. Another thing to consider when planning is licensing; in a big consolidation you can save quite a lot of money by using Datacenter licensing on the hosts.
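If you have neither MAP nor SCCM/SCOM in place, even a simple counter baseline is better than nothing. A minimal sketch (the server list file, counters and sampling window are my placeholders; a real assessment should sample over days or weeks):

# Collect a rough workload baseline from a list of servers
$servers  = Get-Content .\servers.txt    # one hostname per line (hypothetical file)
$counters = "\Processor(_Total)\% Processor Time",
            "\Memory\Available MBytes",
            "\PhysicalDisk(_Total)\Disk Transfers/sec"

foreach ($server in $servers){
    # 12 samples, 5 seconds apart, saved as a .blg you can open in Perfmon or feed to PAL
    Get-Counter -ComputerName $server -Counter $counters -SampleInterval 5 -MaxSamples 12 |
        Export-Counter -Path ".\$server-baseline.blg" -FileFormat BLG
}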

Design your platform to be modular and easily expanded. Make serious decisions about what your boundaries are, both technically and financially; this should be done in a workshop with application owners and management, and then documented. Do not forget about management and monitoring software. Another thing to think through is how to take backups on your platform: with the Hyper-V integration tools you can take consistent backups with VSS snapshot support inside the VMs, and we recommend our customers take backups at the host level for a quicker RTO. When you have decided on a platform you can do a PoC to test your decisions and see that everything works as expected. Many hardware manufacturers will lend out hardware for a limited time for you to evaluate.

When the platform is set up and correctly configured you want to do some hardware and load testing; there are several tools for this: Memtest, IOmeter, SQLIOSim, Exchange Jetstress, Exchange Load Generator and others. The most important thing here is to verify that your new platform can handle the load you measured and predicted in the analysis and design. Also test failover functionality, so that all hardware and software behaves as expected when a PSU or a network cable breaks. When all testing has completed successfully you want to document it, so you have a signed validation document for later.

After you have a platform set up and tested you want to start migrating and optimizing workloads onto it. There are some tools that can be used for this: SCVMM, Disk2vhd and Quest vConverter, for example. One thing to consider when doing the migration is to look back at the workload analysis to set the right amount of resources, both virtual processors/VM RAM and VHD disk files with the partitions inside (for best performance we prefer fixed or pass-through disks). When optimizing after migration you want to clean out hidden devices and services/software that served the machine in the physical world but no longer have a purpose!
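One well-known trick for the hidden-device cleanup after a P2V is to make Device Manager show non-present devices, so the stale physical hardware can be uninstalled. A small sketch, run in an elevated PowerShell prompt on the migrated VM:

# Make Device Manager list non-present (hidden) devices in processes started from this session
$env:DEVMGR_SHOW_NONPRESENT_DEVICES = 1
devmgmt.msc   # then View -> Show hidden devices, and uninstall the grayed-out hardware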

When all your machines are migrated, you want to continuously check performance and capacity so you can prepare and add host resources before they run out. You can use SCOM/SCVMM if you have them in your environment. Another great performance tool is PAL (Performance Analysis of Logs), which you can use in conjunction with performance counters and logman to schedule data sets on your Hyper-V Core host servers. There is also a product from Quest, vFoglight.
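A sketch of what scheduling such a data set with logman could look like on a Hyper-V Core host (collector name, output path, interval and counter selection are my assumptions; the PAL templates have much more complete counter lists):

# Create and start a circular binary-log counter collector, sampling every 30 seconds
logman create counter HyperV_Baseline -o C:\PerfLogs\HyperV_Baseline -f bincirc -si 00:00:30 -c "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" "\Memory\Available MBytes" "\PhysicalDisk(*)\Avg. Disk sec/Read"
logman start HyperV_Baseline

The resulting .blg file can then be analyzed with PAL.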

On the last slide I had a strip from Dilbert that I find quite funny. One statement though: WE DO NOT LEAVE OUR CUSTOMERS AS DOGBERT DOES AFTER A VIRTUALIZATION PROJECT 😛

Links

  • MAP 6.0
  • Memtest
  • PAL
  • Performance tuning Win 2008 R2 SP1


Configure VM settings and VMDKs with PowerCLI

I want to share my latest automation scripting. I am in a project where we are in-sourcing from a hosting company. We have connected our hosts to the outsourcer's NFS share, of course with PowerCLI; doing it this way I get the datastores onto all servers in our cluster, without the risk of differences between the hosts' datastores.

# Create NFS shares on all hosts
#
# Niklas Åkerlund /RTS
$NFSdatas = Import-Csv -Path "nfsdatastores.csv" -Delimiter ";"
$VIHosts = Get-Cluster -Name Cluster1 | Get-VMHost | where {$_.ConnectionState -eq "Connected"}
foreach ($VIHost in $VIHosts){
    foreach ($NFSdata in $NFSdatas){
        $NFSHost = $NFSdata.Host
        $NFSshare = $NFSdata.Share
        $NFSShareName = $NFSdata.ShareName
        # Only mount the share if the host does not already have it
        if (($VIHost | Get-Datastore | where {$_.Name -eq $NFSShareName -and $_.Type -eq "NFS"} -ErrorAction SilentlyContinue) -eq $null){
            Write-Host "Mounting NFS store $($NFSShareName) on $($VIHost)"
            New-Datastore -Nfs -VMHost $VIHost -Name $NFSShareName -Path $NFSshare -NfsHost $NFSHost
        }
    }
}

Now that we have this in place, during the transitions the hosting company shuts down the VMs on their hosts that we are going to take over, and we add each VM to the inventory on our vCenter. When doing this, the VMDKs get a different datastore ID in the configuration, and some settings should also be updated to the corporate standard for the virtualization platform at the customer.

# Script to update VM with vmdk and right settings
#
# Argument in is VM name
# Niklas Akerlund / RTS AB 2011

$VMname = $args[0]
if ($VMname -ne $null){
    $VM = Get-VM $VMname
    $Datastore = Get-Datastore -VM $VM
    $HDDs = Get-HardDisk -VM $VM

    # Remove incorrect hdd references
    Remove-HardDisk -HardDisk $HDDs -Confirm:$false

    # Re-add each disk with a path on the correct datastore
    foreach ($HDD in $HDDs){
        $HDDname = $HDD.Filename
        $HDDsNames = $HDDname.Split("/")
        $count = $HDDsNames.Count
        $VMdkName = $HDDsNames[$count-1]
        $diskpath = "[" + $Datastore.Name + "] " + $VM.Name + "/" + $VMdkName
        New-HardDisk -VM $VM -DiskPath $diskpath
    }

    # Reconfigure VM settings to the corporate standard
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.MemoryAllocation = New-Object VMware.Vim.ResourceAllocationInfo
    $spec.MemoryAllocation.Limit = -1
    $spec.CpuAllocation = New-Object VMware.Vim.ResourceAllocationInfo
    $spec.CpuAllocation.Limit = -1
    $spec.Tools = New-Object VMware.Vim.ToolsConfigInfo
    $spec.Tools.ToolsUpgradePolicy = "manual"
    $spec.SwapPlacement = "inherit"

    $VMview = $VM | Get-View
    $VMview.ReconfigVM_Task($spec)
}
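Usage is simply the VM name as the first argument (the script file name here is hypothetical):

.\Update-VMConfig.ps1 VM01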

After this we start up the VM, and later we do a Storage vMotion of the VM to the customer's FC SAN:

Get-VM theVM | Move-VM -Datastore fcdatastore1

Host Profiles and VMkernel ports with Jumbo Frames (MTU 9000)

Today I found a limitation in using Host Profiles together with a VMkernel port that has MTU 9000 activated. Maybe it was not a requirement when Host Profiles was designed?

We set up the reference host with four VMkernel ports: one for management, one for vMotion, one for FT and one for NFS. The port we wanted to use jumbo frames on was the vMotion port.

As I wrote in an earlier post, I used PowerCLI to configure the MTU for the VMkernel port in question: Get-VMHostNetworkAdapter -Name vmk1 | Set-VMHostNetworkAdapter -Mtu 9000.

Then I added this host as a reference host in Host Profiles and attached the profile to the cluster. Adding a new host and then applying the profile creates all our VMkernel ports correctly, but when checking what MTU the vMotion VMkernel port got, it was created with the default MTU of 1500. This is not so good, because I do not want to use several different ways of configuring things, and I want to be able to trust the Host Profiles solution. The only VMkernel port that existed before applying the host profile was the management port, so this has nothing to do with editing an existing port. The result is that after applying a host profile, I have to run a PowerCLI command to edit the MTU.

Strangely, no matter whether the MTU is 9000 or 1500, the hosts show as compliant in the GUI.

This applies to vSphere 4.1 U1 (I do not know how it behaves in vSphere 5).

The conclusion is that I have to think a bit more before relying on Host Profiles. If a setting is not fully implemented, the feature is not usable for getting uniform hosts.

Edit VMkernel port MTU on distributed switches – using PowerCLI

According to KB 1038827, “Enabling Jumbo Frames for VMkernel ports in a virtual distributed switch”, VMware says that you have to recreate the VMkernel port to set the MTU for jumbo frames. This is not true if you use PowerCLI. I do not know exactly how it is done beneath the hood, but it is very easy to configure with just a few lines of scripting. By the way, there is no way to edit this in the GUI.

$cred = Get-Credential
Connect-VIServer ESXhost.test.loc -Credential $cred

Get-VMHostNetworkAdapter -Name vmk2 | Set-VMHostNetworkAdapter -Mtu 9000

Get-VMHostNetworkAdapter -Name vmk2 | ft Mtu

Setting the MTU on a VMkernel port is basically no different between a standard vSwitch and a distributed vSwitch.

Of course you can also connect to a vCenter and add a foreach loop to set the MTU on the VMkernel ports of more than one host, as in the sketch below.
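A minimal sketch of that loop (the vCenter name, cluster name and vmk port number are my assumptions; adjust them to your naming):

# Set MTU 9000 on the vMotion vmkernel port (here vmk1) on every connected host in a cluster
Connect-VIServer vcenter.test.loc
Get-Cluster -Name Cluster1 | Get-VMHost | where {$_.ConnectionState -eq "Connected"} | foreach {
    $vmk = Get-VMHostNetworkAdapter -VMHost $_ -Name vmk1
    if ($vmk.Mtu -ne 9000){
        $vmk | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false
    }
}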

VMware distributed switches and PowerCLI/Onyx

I have had the opportunity to do some PowerCLI scripting on an installation where we have a vDS (virtual Distributed Switch). In PowerCLI there are not that many cmdlets for distributed switches, which is kind of awkward given how many cmdlets there are for everything else. Luckily LucD has made some nice functions for me to use when creating the port groups.

I used his function for creating port groups. The customer had about 20 VLANs that needed to be added, so it was a perfect match for PowerCLI, because setting this up manually is boring! I had a CSV file with the name and VLAN ID (Name and VLAN columns, semicolon separated) which I ran through in a foreach loop, and then it was all done.

# Create Distributed virtual portgroups for each VLAN
# Niklas Åkerlund / RTS AB 2011-09-09
#

$Datacenter = "datacenter"
$vDSName = "dvswitch01"
$vDSPortGroupPorts = 128

# Call functions from mother script
. .\Set-vDS-Porgroup-functions2.ps1

$vDS = Get-dvSwitch -DataCenterName $Datacenter -dvSwitchName $vDSName
$vlans = Import-Csv vlan.csv -Delimiter ";"

foreach ($vlan in $vlans){
    $name = $vlan.Name
    $vlanid = $vlan.VLAN
    if ($name -ne ""){
        Write-Host $name
        New-dvSwPortgroup $vDS $name -PgNumberPorts $vDSPortGroupPorts -PgVlanType "VLAN" -PgVlanId $vlanid
    }
}

But then we realized that we needed to change some settings for both security and load balancing, so I had to remove all my port groups and start over. I did not want to remove them manually, and the PowerCLI cmdlet that removes standard port groups cannot be used on a vDS. I did not find code from LucD on his blog to remove a vDS port group, so I came up with the brilliant idea of using Onyx, a tool from VMware Labs that intercepts the traffic between the vSphere Client and vCenter and transforms it into PowerCLI code, .NET, SOAP or JavaScript.

After starting the tool I connected to my vCenter, removed a vDS port group through the vSphere Client, and got the PowerCLI code (which I probably could have figured out without Onyx by being a bit smarter in PowerShell/PowerCLI, but I am not :-P). So I made a small script to find all my vDS port groups and remove them. Note that you cannot remove a vDS port group that is already populated with connected VMs.

# Remove vds port groups
#
# Niklas Åkerlund / Real Time Services AB

$vlans = Import-Csv vlan.csv -Delimiter ";"

$PGs = Get-VirtualPortGroup

foreach ($vlan in $vlans){
    foreach ($PG in $PGs){
        if ($vlan.Name -eq $PG.Name){
            $destroy = $PG.Id
            $pek = Get-View -Id $destroy
            $pek.Destroy_Task()
        }
    }
}
<p>

And now I could run the add script again, with the added parameters for security and load balancing:

New-dvSwPortgroup $vDS $name -PgNumberPorts $vDSPortGroupPorts `
    -PgVlanType "VLAN" -PgVlanId $vlanid -SecPolMacChanges:$false `
    -SecPolForgedTransmits:$false -TeamingPolicy "loadbalance_loadbased"

Windows 8 Server Developer Preview and Hyper-V NIC teaming

There is quite a buzz on Twitter and blogs about the new features that have come to Windows 8 and the new Hyper-V version. I want to give you a little heads-up on how creating a network team with NICs works (yes, it works with NICs from different vendors; in my case one Intel and one Broadcom).

I have now installed the server on my test machine in our office and was eager to test the NIC teaming. At first I did not understand how it worked and tried to bind two NICs together in the Network Connections window in the Control Panel. As I later realized, and read on Aidan Finn's blog, it is done through LBFOAdmin.exe (this is opened when pressing NIC Teaming Enabled/Disabled).

There you have to highlight your server to configure it. The new Server Manager can handle remote servers, and you can configure several workloads at the same time without having to log in to each server to administer it.

I have named my team NET2000 and added the two NICs. I have also set it to switch independent mode (it is actually connected to a simple 5-port switch); you can also choose LACP or Static Teaming. For load distribution mode you can choose Address Hash or Hyper-V Port; since I am sharing the team between management and a Hyper-V switch, I am using Address Hash.

As you can see, I can then add several virtual NICs with different VLAN IDs. I really hope they fix one issue though: as you can see here I have a virtual NIC interface called VMnet, but when I want to add it in Hyper-V Manager it has a different name, as you can see in the next screenshot. It would have been wonderful to be able to see the name in the Virtual Switch Manager as well.
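For reference, the same setup can also be scripted with the NetLbfo PowerShell cmdlets. A sketch (these cmdlets shipped with the final Windows Server 2012; I have not verified that the developer preview build exposes them identically, and the adapter names and VLAN ID are placeholders):

# Create a switch-independent team with Address Hash (TransportPorts) load distribution
New-NetLbfoTeam -Name "NET2000" -TeamMembers "Intel NIC","Broadcom NIC" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Add a team interface with a VLAN id (the virtual NIC from the GUI)
Add-NetLbfoTeamNic -Team "NET2000" -VlanID 20 -Name "VMnet"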

As I previously had to use network cards from the same manufacturer, together with their teaming software, this is a giant step forward with Windows 8 and the built-in teaming functions. One thing to test later, when I get my hands on a NIC that can handle SR-IOV, is how that feature works with a team, but that is another blog post!


Novell Platespin Forge upgrade

For the past two days I have been upgrading a PlateSpin Forge from 2.5 to 3.1 on a Forge 510 appliance, which runs VMware VI 3.5 Update 4.

I think the Forge appliance is a really good product for companies that need a disaster recovery solution. If you want to read more about it, click here.

The customer had bought the appliance two years ago but had not had time to set it up and start replicating workloads.

The appliance is customized Dell hardware with a custom VI 3.5 installation. We could not upgrade it to vSphere; the only update on the Novell site is VI 3.5 U5. We tried to upgrade via the vSphere Client Host Update Utility but got a failure, and we also tried the hostupgrade.sh script, which failed as well. We have opened a support case asking Novell for the right procedure, and I will update the blog when I get it.

The next trouble we ran into was when we tried to upgrade the Forge Management VM software from 2.5 to 3.1. The installation succeeds, but when we checked the GUI we did not have a Protect container, which is kind of vital, because without it we cannot start any protection of workloads. If we checked with the PlateSpin browser executable we could see it there, but not in the web GUI. The not-so-obvious solution was to do the upgrade in two steps: first install all Windows patches and upgrade to version 3.0.2, verify that the protection container was still there and working, and after that proceed with the upgrade to Forge 3.1 (which is, as of today, the latest version). After this the Protect container was there and refreshed correctly. Thank God for the VM snapshots we took after each step, so we could easily roll back after each failed attempt!

Although the upgrade steps in the documentation did not work for us, I can still recommend it, because PlateSpin has always done a good job of writing and explaining things in their product documents.

Some strange issues remain around adding the Management VM to their domain and installing antivirus, but that is another support case.


VMware vCenter and VMware vCenter Update Manager

After the vacation this summer I have had a lot to do and no time for blogging. I will try to behave better and keep you readers updated on my findings.

I just want to clarify something for those of you who run several vCenter installations for different virtualization platforms and use vCenter Update Manager for updating your hosts.

When you install vCenter Update Manager you can only add one vCenter; there is no support for using the same Update Manager with several vCenter instances. From a management point of view it would have been a nice feature to be able to use the same vCenter Update Manager for several vCenter instances in Linked Mode, as you would then have only one to handle.

The Update Manager documentation clearly says: “The Update Manager installation requires a connection with a single vCenter Server instance.” The link to the vSphere 5.0 VUM installation documentation is here. This is not new in 5.0; it is also the case for earlier versions of vCenter and VUM.

Move vSphere vCenter database and update perf stat jobs

Today I helped a customer with their vCenter database and the rollup jobs that were not present.

Yesterday I noticed that they had missed updating the stat jobs when moving their database to another server (I had given them the KB 7960893 link so they could move the DB, although they missed step 5 in that list). This was leading to a growing database and performance stats that were not being updated; ultimately, if the database grows too much and fills the disk, the vCenter server will stop. I showed them KB 1004382, which describes how to update or create new stat rollup scripts for the vCenter database, but this was not successful because they did not select the right database when creating the jobs.

Again I used the wonderful tool TeamViewer, connected to the customer and helped them create the jobs correctly.

One important thing is to select the right database when running the script, or the job will not work when it runs.

As you can see on the screen dump, for someone who is not too familiar with SQL Server Management Studio: you must select the database in the dropdown next to the ! Execute button before executing. Otherwise the script will still run and create a stat rollup job, but the job will not work, because it looks for stored procedures that live in the vCenter database.

If you are not logged on as the owner of the database (your vCenter service account), you should edit the jobs to run as that account!

VMware vSphere V and the licensing

I have now tested the script that Hugo Peeters made for checking what licensing is needed for a vSphere platform when upgrading to version 5.

Of course this is a small platform and we do not have that many machines running, but the point is that it is a cool script that gives you a hint of where you are and how much licensing your platform needs.

One thing my colleagues have missed, and that I want to highlight, is the vRAM and the pooling. I think it is well documented in the vSphere licensing, pricing and packaging document.

The new licensing model is as follows:

  • No more restrictions on cores
  • No max physical RAM limit
  • You still need one license/pCPU
  • You are not allowed to mix different vSphere editions in the same vRAM pool; if more than one edition is managed by a vCenter, it will create separate vRAM pools

For each license edition there is a vRAM entitlement: 24 GB for Standard, 36 GB for Enterprise and 48 GB for Enterprise Plus. These entitlements are pooled across the hosts connected to a vCenter. Take a host with 2 pCPUs and 192 GB of physical RAM: with Enterprise Plus licenses it contributes 96 GB of vRAM. If the vSphere cluster this host resides in has three more hosts with the same setup, the pool holds 384 GB of vRAM. Now run a VM configured with 128 GB of vRAM on the first host; 384 minus 128 leaves 256 GB for other VMs before you have to buy more licenses. Hosts behind a linked vCenter also contribute their vRAM to the pool. What I am trying to say is that even if you use more vRAM than a single host's entitlement, you are still compliant, because it is all part of one pool; the arithmetic is sketched below.
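A minimal sketch of that arithmetic, plus a PowerCLI snippet to sum the vRAM configured on your powered-on VMs (the numbers mirror the hypothetical example above; this is no substitute for Hugo's script or the official license reporting):

# vRAM pool arithmetic for the example above
$hostCount     = 4     # identical hosts behind the vCenter
$pCpuPerHost   = 2     # one license is needed per pCPU
$entitlementGB = 48    # Enterprise Plus vRAM entitlement per license

$poolGB = $hostCount * $pCpuPerHost * $entitlementGB   # 4 * 2 * 48 = 384 GB
$bigVMGB = 128
Write-Host "Pool: $poolGB GB vRAM, left after the big VM: $($poolGB - $bigVMGB) GB"

# Sum the vRAM actually configured on powered-on VMs (only these count against the pool)
$usedGB = (Get-VM | where {$_.PowerState -eq "PoweredOn"} | Measure-Object -Property MemoryMB -Sum).Sum / 1024
Write-Host ("Configured vRAM in use: {0:N0} GB" -f $usedGB)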

As in all virtualization design, you must calculate for host failures; the vRAM entitlement of a host that is down for maintenance or because of a failure can still be used by the pool.

In the example above you can add more licenses to get more vRAM; those licenses can later be used when adding a new host, covering its physical CPUs.

Hope this sheds some more light in the licensing jungle!