Win 8 Server dev preview and Hyper-V NIC team

There is quite a buzz out on Twitter and blogs about the new features that have come to Windows 8 and the new Hyper-V version. I want to give you a little heads-up on how to create a network team with NICs (yes, it works with NICs from different vendors; in my case an Intel and a Broadcom).

I have now installed the server on my test machine in our office and was eager to test the NIC teaming. At first I did not understand how it worked and tried to bind two NICs together in the Network Connections window in the Control Panel. As I later realized, and read on Aidan Finn's blog, it is done through LBFOAdmin.exe (which opens when you click NIC Teaming Enabled/Disabled).

There you have to highlight your server to configure it. The new Server Manager can handle remote servers, so you can configure several workloads at the same time without having to log in to each server to administer it.

I have named my team NET2000 and added the two NICs. I have also set it to be switch independent (it is actually connected to a simple 5-port switch); you can also choose LACP or Static Teaming. For Load Distribution mode you can choose Address Hash or Hyper-V Port (since I am sharing the team between management and a Hyper-V switch, I am using Address Hash).
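For those who prefer PowerShell over the LBFOAdmin GUI, the same team can be created with the NetLbfo cmdlets. This is just a minimal sketch assuming the cmdlet names from the NetLbfo module as it later shipped; the developer preview build may differ, and the adapter names are placeholders:

# Create a switch-independent team from two NICs (adapter names are examples);
# TransportPorts is the algorithm behind the "Address Hash" mode in the GUI
New-NetLbfoTeam -Name "NET2000" -TeamMembers "Ethernet","Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts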

As you can see, I can then add several virtual NICs with different VLAN IDs. I really hope they fix one issue though: as you can see here I have a virtual NIC interface called VMnet, but when I want to add it in the Hyper-V Manager it has a different name, as you can see in the next screenshot. It would have been wonderful to be able to see the name also in the virtual switch manager.
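The extra team interfaces with VLAN IDs can also be added from PowerShell. Again a hedged sketch with the NetLbfo cmdlet as it later shipped; the VLAN ID is just an example:

# Add a team interface tagged with VLAN 20 on top of the NET2000 team;
# it gets a default name along the lines of "NET2000 - VLAN 20"
Add-NetLbfoTeamNic -Team "NET2000" -VlanID 20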

As I previously had to use network cards from the same manufacturer and use their teaming software, this is a giant step forward with Windows 8 and the built-in teaming functions. One thing to test later, when I get my hands on a NIC that can handle SR-IOV, is how that feature works with a team, but that is another blog post!

 

Novell Platespin Forge upgrade

The past two days I have been upgrading a PlateSpin Forge from 2.5 to 3.1 on a Forge 510 Appliance, which runs VMware VI 3.5 Update 4.

I think the Forge appliance is a really good product for companies that need a disaster recovery solution. If you want to read more about it, click here.

The customer had bought the appliance two years ago but had not had any time to set it up and start replicating workloads.

The appliance is customized Dell hardware with a custom VI 3.5 installation. We could not upgrade it to vSphere; the only update on the Novell site is VI 3.5 U5. We tried to upgrade via the vSphere Client Host Upgrade Utility but got a failure, and the hostupgrade.sh script failed as well. We have opened a support case asking Novell how to proceed, and I will update the blog when I get the right procedure.

The next trouble we ran into was when we tried to upgrade the Forge Management VM software from 2.5 to 3.1. The installation succeeds, but when we checked the GUI we did not have a Protect container, which is kind of vital because without it we cannot start any protection of workloads; if we checked with the PlateSpin browser executable we could see it there, but not in the web GUI. The not-so-obvious solution was a two-part upgrade: first install all Windows patches, then upgrade to version 3.0.2 and verify that the protection container was still there and working. After that we could proceed with the upgrade to Forge 3.1 (which is as of today the latest version), and then the Protect container was there and refreshed correctly. Thank God for the VM snapshots we took after each step, so we could easily roll back after each failed step!

Although the upgrade steps in the documentation did not work for us, I can still recommend it, because PlateSpin has always done a good job of writing and explaining in their product documents.

Some strange issues remain regarding adding the Management VM to the domain and installing AV, but that is another support case.

 

VMware vCenter and VMware vCenter Update Manager

After the vacation this summer I have had a lot to do and no time for blogging; I will try to behave better and keep you readers updated on my findings.

I just want to clarify something for those of you who run several vCenter installations for your different virtualization platforms and use vCenter Update Manager to update your hosts.

When you install vCenter Update Manager you can only add one vCenter, and there is no support for using the same Update Manager for several vCenter instances. From a management point of view it would have been a nice feature to be able to use the same vCenter Update Manager for several vCenter instances in Linked Mode, as you would only have one to handle.

In the Update Manager documentation it clearly says: “The Update Manager installation requires a connection with a single vCenter Server instance.” The link to the vSphere 5.0 VUM installation documentation is here. This is not new in 5.0; it was also the case for earlier versions of vCenter and VUM.

Move vSphere vCenter database and update perf stat jobs

Today I have helped a customer with their vCenter database and the rollup jobs that were not present.

Yesterday I noticed that they had missed updating the stat jobs when moving their database to another server (I gave them the KB 7960893 link so they could move the DB, although they missed step 5 in that list). This led to a growing database and performance stats not being updated; ultimately, if the database grows too much and fills the disk, the vCenter server will stop. I showed them KB 1004382, which describes how to update or create new stat rollup scripts for your vCenter database. This was not successful, because they did not select the right database when creating the jobs.

Again I used the wonderful TeamViewer tool, connected to the customer and helped them create the jobs correctly.

One important thing is to select the right database when running the script, or the job will not work when it is scheduled to run.

As you can see in the screenshot, for those not too familiar with SQL Server Management Studio: you must select the database in the dropdown beside the ! Execute button before executing. Otherwise the script will still run and create a stat rollup job, but the job will not work, because it is looking for stored procedures that live in the vCenter database.
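One way to avoid picking the wrong database at all is to run the rollup script from PowerShell and pass the database explicitly. A small sketch, assuming the Invoke-Sqlcmd cmdlet from the SQL Server PowerShell module is available; the instance, database and script file names are placeholders (VIM_VCDB is just the default vCenter database name):

# Run the KB 1004382 rollup script in the context of the vCenter database,
# so the stored procedures it references resolve correctly
Invoke-Sqlcmd -ServerInstance "SQLSERVER\INSTANCE" -Database "VIM_VCDB" `
    -InputFile "C:\Scripts\job_schedule1_mssql.sql"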

If you have not logged on as the owner of the database (your vCenter service account), you should edit the jobs to run as that account!

VMware vSphere V and the licensing

I have now tested the script that Hugo Peeters made for checking what licensing is needed for a vSphere platform when upgrading to version 5.

Of course this is a small platform and we do not have that many machines running, but the point is that it is a cool script that gives you a hint of where you are and how much licensing your platform needs.

One thing my colleagues have missed, and that I wanted to touch on and highlight, is the vRAM and its pooling; I think it is well documented in the vSphere licensing, pricing and packaging document.

The new licensing model is as follows:

  • No more restrictions on cores
  • No maximum physical RAM limit
  • You still need one license per pCPU
  • You are not allowed to mix different vSphere editions in the same vRAM pool; if more than one edition is managed by the same vCenter, separate vRAM pools are created

For each license edition there is a vRAM entitlement: 24 GB for Standard, 36 GB for Enterprise and 48 GB for Enterprise Plus. These entitlements are pooled when the hosts are connected to a vCenter. Say you have a host with 2 pCPUs and 192 GB of physical RAM (with Enterprise Plus that host brings 96 GB of vRAM), and a VM on it configured with 128 GB of vRAM. If the vSphere cluster that this host resides in has 3 other hosts with the same setup, that gives you 384 GB of vRAM in the pool; 384 minus 128 leaves 256 GB for other VMs before you need to buy more licenses. If you have a linked vCenter, the vRAM from its hosts is also included in the pool. What I am trying to say is that although you have used more vRAM than one host is entitled to, you are still compliant, as it is part of a pool.
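To make the arithmetic concrete, here is a rough PowerCLI sketch (not Hugo's script) that compares the vRAM pool of everything the vCenter sees against what the powered-on VMs have allocated. The 48 GB entitlement is the Enterprise Plus figure from above, so adjust it for your edition:

# One license per pCPU, each granting an amount of pooled vRAM
$vRamPerLicenseGB = 48
$licenses = (Get-VMHost | Measure-Object -Property NumCpu -Sum).Sum
$poolGB = $licenses * $vRamPerLicenseGB

# Allocated vRAM = configured memory of all powered-on VMs
$usedGB = (Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } |
    Measure-Object -Property MemoryMB -Sum).Sum / 1KB

"vRAM pool: $poolGB GB, allocated: $usedGB GB, headroom: $($poolGB - $usedGB) GB"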

As in all virtualization designs you must account for host failures; the vRAM entitlement of a host can still be used while that host is down for maintenance or due to a failure.

In the above example you can add more licenses to get more vRAM; these licenses can later be used when adding a new host, covering its physical CPUs.

Hope this sheds some more light in the licensing jungle!

VMware vSphere and Microsoft Clustering

I have been investigating some things that need to be considered when deploying a Microsoft Cluster on a VMware platform.

As you can see in the graph, there are several different supported configurations. KB 1037959 gives more information; I will try to highlight some things below.

The reason I started looking into this was that I was reviewing the multipathing policies for a customer, and we discussed in the office that we preferred the Round Robin policy. As you can also read in the KB, Round Robin is not supported for the shared RDM disk, so if you already have Round Robin as the default on your hosts, you have to set another policy on that specific LUN.

This can be done with PowerCLI or in the vSphere Client GUI, and as I am a big fan of PowerCLI I will show you the command for it:

# Set the MSCS shared RDM LUN to the Fixed policy
# (Round Robin is not supported for the shared RDM disk)
Get-VMHost hostname.test.local | Get-ScsiLun -CanonicalName "naa.60054242555" |
    Set-ScsiLun -MultipathPolicy "Fixed"

If your default multipathing policy is set to one you do not want, you can change the default with both the VMware CLI and PowerCLI; for the latter there is a script that Stephen made, which can be found on the VMware Communities forum. Otherwise you will have to change the policy manually on every new datastore you add.
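If you would rather keep Round Robin and only exempt the MSCS LUN, something like this PowerCLI sketch should do it (the canonical name is the one from my example above):

# Set Round Robin on all disk LUNs of the host except the MSCS shared RDM,
# which stays on Fixed as the KB requires
Get-VMHost hostname.test.local | Get-ScsiLun -LunType disk |
    Where-Object { $_.CanonicalName -ne "naa.60054242555" } |
    Set-ScsiLun -MultipathPolicy "RoundRobin"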

If you have an iSCSI SAN you will use an in-guest iSCSI connection to the shared storage, and then there is no need to change multipathing policies. What I do not understand, and have not found any good information about, is why VMware supports only two cluster nodes when using in-guest shared storage. As I see it, the limit should be whatever MSCS has as a limit, and that is 16 nodes; then again, maybe there is no need for such a big cluster when you already have HA in your virtualization platform. If the setup, as you can see in the graph, is a cluster without shared storage, there is no limit on the number of nodes.

You will also need to edit your VMs' SCSI controller: for Windows 2008 you must use the LSI SAS controller, and KB 1002149 outlines the steps. The shared disk must reside on a dedicated SCSI controller.

You will also have to set up anti-affinity rules in DRS to keep your cluster nodes apart. If you have for some reason decided to set up a CIB (Cluster in a Box), you will instead need an affinity rule to keep them together on the same host. For the VMs that are used for clustering you should set DRS to partially automated, as shown in the sketch below.
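Both of these can be done with PowerCLI as well. A sketch, assuming two cluster nodes called node1 and node2 in a cluster called Prod (all names are hypothetical):

# Keep the two MSCS nodes on separate hosts (anti-affinity rule),
# and stop DRS from migrating them automatically
New-DrsRule -Cluster (Get-Cluster "Prod") -Name "MSCS-AntiAffinity" `
    -KeepTogether:$false -VM (Get-VM "node1","node2")
Get-VM "node1","node2" | Set-VM -DrsAutomationLevel PartiallyAutomated -Confirm:$false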

For more information on how to set this up, see the VMware PDF documentation on setting up failover clusters.

 

DELL Compellent – Auto Storage Tiering

Over the past two days I have had the opportunity to test the Compellent SAN solution, and I find it really cool and smooth. When integrating it with your vCenter you get a single pane of glass to handle the whole solution. You have to install the Compellent plugin on every vSphere Client to be able to administer the Compellent and get the plugin-specific tabs; you will also have to add the Compellent management address, a user with a password and, of course, some rights to do SAN stuff.

After adding the Storage Center you get the following tabs:

As you can see in the Compellent view, you can add and remove datastores; when this is done here, the Compellent plugin also formats the datastore with VMFS and rescans the datastores on the hosts. As this screenshot also shows, you can see for each datastore where your data is and how much of it resides where: Tier 1 is the fastest, and if you have a setup with more than two different disk types you also get a Tier 2 and a Tier 3. The active blocks are moved to faster or slower disks depending on their demands.

With the Replay function you get SAN backup functionality, and for fully consistent backups you can install an agent in the VMs so that they prepare a VSS snapshot before the replays run. Really cool to be able to take SAN snapshots this way.

Another thing you can see in the screenshot is that the datastore is 2 TB but only 9,32 GB is used! Of course you have to size your solution to cope with the amount of data you are going to store.

When deleting a datastore you get the following choices. If you put it in the Recycle Bin you can get it back as long as the space is not yet needed by other data; as long as there is space in the Compellent you can restore the datastore, which can be handy if it held some VMs that you need later and did not migrate to another datastore.

The Compellent solution is very powerful and has many more functions than I have highlighted here; I hope I will get some more time to test and use it in the near future!

Disaster recovery of vSphere after disk array failure

Yesterday I had the opportunity to help a customer that had a disk array failure where both PDUs died and one HDD broke. After the hardware supplier had replaced the parts, some virtual machines had disks and databases that were corrupt.

To be able to assist the customer I used TeamViewer, which works excellently and is fast to get up and running: no need to get the firewall guy to open new ports or to get a new VPN account! For personal use there is a free version that works for two hours at a time, but the enterprise license is not too expensive and should be considered, as the tool is so powerful!

 

This leads to the most important lesson here: you have to be very thorough in your design and not put all your eggs in one basket. In this particular case the vRanger backup server, domain controller, SQL server and vCenter server were all on this datastore. Luckily vRanger did not have any corrupt data and we could restore the whole vCenter server. Once vCenter was restored we could continue and restore the mailbox, SQL and domain controller servers. I restored vCenter to another VM, and when we saw that the restore was OK we deleted the old vCenter server. But as the customer had vSphere Essentials, we will have to manually rename the vCenter-Temp, and how to do that you can read at this link; if you have Enterprise or higher as your license you can rename it and then do a Storage vMotion, and the files will get the right names. It is kind of difficult to do a storage migration when connecting directly to the ESXi host.

The following points should be considered:

  • Ensure the placement of the vCenter server with datastore and host affinity, so you know where it is in a disaster recovery scenario
  • Make sure that the backup server can be used in case of a total datastore blackout
  • After implementing a backup solution, check that you can actually do a restore as well!
  • Do not put all domain controllers and DNS servers in the same vSphere cluster and datastore
  • Make sure that you do not use thin disks and overcommit the datastore without the appropriate alarms set.

Using a backup solution that backs up at the virtualization platform level, and not inside the VM, is the most effective way to recover fast and easily in the case of a total datastore failure. Most backup solution providers offer this feature, and those not using it today should definitely consider it. For example, Quest vRanger can do file-level restore from a VM backup.

Funny Hyper-V ad on SlideShare vSphere Storage presentation

I had to screendump the following presentation from SlideShare. I found it via some other blog and thought it was interesting, as much of the success of virtualization depends on the storage and how it performs. I am wondering whether this is intentional: does MS actually pay Google, and pay to place this ad on VMware-related content?!

 

If someone is interested in the presentation, follow this link. I really recommend that you get to know which esxtop values to look at, and if you have not checked it before, align the VM disks! This is done automatically with Windows 2008 and later, but check your Windows 2003 servers. If your storage supports the new VAAI I recommend that you enable it in your vSphere environment, as long as it is at least version 4.1; this can of course be set with PowerCLI, as in the sketch below, and it gives performance gains when provisioning, cloning or doing Storage vMotion, as well as better cluster locking and metadata operations.
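Enabling VAAI from PowerCLI comes down to three host advanced settings. A sketch using Set-VMHostAdvancedConfiguration as in the PowerCLI of this era (1 = enabled, 0 = disabled):

# Enable the three VAAI primitives on every host:
# full copy, block zeroing and hardware-assisted locking
foreach ($esx in Get-VMHost) {
    Set-VMHostAdvancedConfiguration -VMHost $esx -Name DataMover.HardwareAcceleratedMove -Value 1
    Set-VMHostAdvancedConfiguration -VMHost $esx -Name DataMover.HardwareAcceleratedInit -Value 1
    Set-VMHostAdvancedConfiguration -VMHost $esx -Name VMFS3.HardwareAcceleratedLocking -Value 1
}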


Pimp my Macbook Pro with powerCLI

Probably some Mac fanatics will go crazy when they see my MacBook Pro and what I have done to it, but I think it is quite funny, and if I can get more attention by giving my Mac a fine VMwarish look, I think it is cool.

I had too big ambitions today when I wanted to find a VM, and which datacenter it was located in, at a customer using PowerCLI; at first I thought it was not as easy as it really turned out to be 😉

I started by thinking, after looking at Get-Datacenter | Get-View and Get-Datacenter | Get-Member, that I needed something like Get-Datacenter | where {$_.vmfolder -eq (get-vm partVM | Select-Object VMfolder)} (that command gave me nothing).

I then read the excellent VMware PowerCLI cmdlet reference and got happy as I realized that I only needed the following command to get the datacenter where the VM resided:


Get-Datacenter -VM partVM

Or I could also pass the VM object to the Get-Datacenter cmdlet:


Get-VM partVM | Get-Datacenter

If I want to know which cluster it is in, I use the Get-Cluster cmdlet instead:


Get-VM partVM | Get-Cluster
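
And if you want both pieces of information in one go, you can combine the cmdlets with calculated properties; a small sketch along the same lines:

# Show the VM together with the datacenter and the cluster it lives in
Get-VM partVM | Select-Object Name,
    @{N="Datacenter"; E={ Get-Datacenter -VM $_ }},
    @{N="Cluster"; E={ Get-Cluster -VM $_ }}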