Move vSphere vCenter database and update perf stat jobs

Today I helped a customer with their vCenter database and the rollup jobs that were missing.

Yesterday I noticed that they had forgotten to update the stat rollup jobs when moving their database to another server (I had given them the KB 7960893 link so they could move the database, but they missed step 5 in that list). This led to a growing database and performance stats that were not being updated; ultimately, if the database grows too much and fills the disk, the vCenter server will stop. I pointed them to KB 1004382, which describes how to update or create new stat rollup scripts for your vCenter database, but this was not successful because they did not select the right database when creating the jobs.

Again I used the wonderful tool TeamViewer, connected to the customer and helped them create the jobs correctly.

One important thing is to select the right database when running the script, otherwise the job will fail when it runs.

As you can see on the screendump, for someone not too familiar with SQL Server Management Studio: you must select the database in the drop-down beside the ! Execute button before executing. Otherwise the script will still run and create a stat rollup job, but the job will not work, because it looks for stored procedures that live in the vCenter database.
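If you prefer the command line over the GUI, a rough sketch of the same thing from a PowerShell prompt is shown below; the server name, database name and script path are only placeholders for your own environment, and the rollup script itself is the one you get from the KB article:

# Run the rollup script explicitly against the vCenter database (-d),
# so it cannot accidentally be created against master or another database.
# Server, database and script path below are placeholders.
sqlcmd -S SQLSERVER\INSTANCE -E -d VCDB -i "C:\Temp\rollup_job_script.sql"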

If you have not logged on as the owner of the database (your vCenter service account), you should edit the jobs to run as that account!

VMware vSphere V and the licensing

I have now tested the script that Hugo Peeters has made for checking what licensing a vSphere platform will need when upgrading to V.

Of course this is a small platform and we do not have that many machines running, but the point is that it is a cool script that gives you a hint of where you are and how much licensing your platform needs.

One thing my colleagues had missed, and that I wanted to touch on and highlight, is vRAM and the pooling of it; I think it is well documented in the vSphere licensing, pricing and packaging document.

The new licensing model is as follows:

  • No more restrictions on cores
  • No max physical RAM limit
  • You still need one license/pCPU
  • You are not allowed to mix different vSphere editions in the same vRAM pool; if you use more than one edition managed by the same vCenter, separate vRAM pools will be created

For each license edition there is a vRAM entitlement: 24 GB for Standard, 36 GB for Enterprise and 48 GB for Enterprise Plus. These entitlements are pooled when the hosts are connected to a vCenter. So say you have a host with 2 pCPUs and 192 GB of physical RAM (with Enterprise Plus that host contributes 96 GB of vRAM), and a VM on that host has been configured with 128 GB of vRAM. If the vSphere cluster that this host resides in contains three other hosts with the same setup, the pool holds 384 GB of vRAM, and 384 – 128 leaves 256 GB for other VMs before you need to buy more licenses. If you have a linked vCenter, the vRAM from its hosts is also included in the pool. What I am trying to say is that even though you have used more vRAM than one host's entitlement, you are still compliant, because compliance is measured against the pool.
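As a small sketch of that arithmetic in plain PowerShell (the host count, pCPU count and entitlement values are just the ones from the example above, hard-coded as assumptions):

# vRAM pool example from the text: 4 identical hosts, 2 pCPU each, Enterprise Plus entitlement
$hostCount      = 4
$pCpuPerHost    = 2
$vRamPerLicense = 48                                            # GB per Enterprise Plus license
$poolCapacity   = $hostCount * $pCpuPerHost * $vRamPerLicense   # 4 * 2 * 48 = 384 GB
$bigVmVram      = 128                                           # GB configured on the one large VM
"Pool: $poolCapacity GB vRAM, remaining after the large VM: $($poolCapacity - $bigVmVram) GB"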

As in all virtualization design you must plan for host failures; the vRAM entitlement of a host remains in the pool and can still be used while that host is down for maintenance or because of a failure.

In the above example you can add more licenses to get more vRAM, and those licenses can later be used when adding a new host and covering its physical CPUs.

Hope this sheds some more light in the licensing jungle.

VMware vSphere and Microsoft Clustering

I have been investigating some things that need to be considered when deploying a Microsoft Cluster on a VMware platform.

As you can see in the graph, there are several different supported configurations. KB 1037959 has more information; I will try to highlight some things below.

The reason I started looking at this was that I was reviewing the multipathing policies for a customer, and we had discussed in the office that we preferred the Round Robin policy. As you can also read in the KB, Round Robin is not supported for the shared RDM disk, so if you already have Round Robin as the default on your hosts you have to set another policy on that specific LUN.

This can be done with PowerCLI or with the vSphere Client GUI, and as I am a big fan of PowerCLI I will show you the command for it:

Get-VMHost hostname.test.local | Get-ScsiLun -CanonicalName "naa.60054242555" | `
    Set-ScsiLun -MultipathPolicy "Fixed"

If your default multipathing policy is set to one you do not want, you can change the default both with the VMware CLI and with PowerCLI; for the latter there is a script that Stephen has made which can be found on the VMware Communities forum. Otherwise you will have to change the policy manually on every new datastore you add.
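If you just want to bring the existing LUNs on a host in line with the policy you prefer, a minimal PowerCLI sketch could look like this (the host name and target policy are examples; remember to exclude the shared MSCS RDM LUN mentioned above):

# Set Round Robin on all disk LUNs of a host that do not already use it.
# hostname.test.local and the policy are placeholders for your environment.
Get-VMHost hostname.test.local | Get-ScsiLun -LunType disk |
    Where-Object { $_.MultipathPolicy -ne "RoundRobin" } |
    Set-ScsiLun -MultipathPolicy "RoundRobin"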

If you have an iSCSI SAN you will use in-guest iSCSI connections to the shared storage, and then there is no need to change any multipathing policies. What I do not understand, and have not found any good information about, is why VMware only supports two cluster nodes when using in-guest shared storage. As I see it, the limit should be whatever MSCS itself supports, which is 16 nodes; then again, maybe there is no need for such a big cluster when you already have HA in your virtualization platform. If the setup is, as you can see in the graph, a cluster without shared storage, there is no limit on the number of nodes.

You will also need to edit your VMs' SCSI controllers: for Windows 2008 you must use the LSI Logic SAS controller, and KB 1002149 outlines the steps for that. The shared disk must reside on a dedicated SCSI controller.
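As a hedged PowerCLI sketch of that last point (the VM name, the hard disk and the bus sharing mode below are assumptions you must adapt to your own setup), putting the shared disk on its own LSI Logic SAS controller could look roughly like this:

# Put the shared quorum/data disk on a dedicated LSI Logic SAS controller.
# "mscs-node1" and "Hard disk 2" are placeholders for the clustered VM and its shared disk.
$sharedDisk = Get-VM "mscs-node1" | Get-HardDisk | Where-Object { $_.Name -eq "Hard disk 2" }
New-ScsiController -HardDisk $sharedDisk -Type VirtualLsiLogicSAS -BusSharingMode Physical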

You will also have to set up anti-affinity rules in DRS to keep your cluster nodes apart; if you for some reason have decided to build a CIB (Cluster in a Box), you will instead need an affinity rule to keep them together on the same host. For the VMs used in the cluster you should set the DRS automation level to partially automated. A small example of the anti-affinity rule follows below.
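A minimal PowerCLI sketch of such an anti-affinity rule, assuming a cluster called "Prod" and two node VMs named mscs-node1 and mscs-node2 (all placeholders):

# Keep the two MSCS nodes on different hosts (-KeepTogether:$false = anti-affinity).
New-DrsRule -Cluster (Get-Cluster "Prod") -Name "Separate MSCS nodes" `
    -KeepTogether:$false -VM (Get-VM "mscs-node1","mscs-node2")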

For more information on how to set this up, see the VMware documentation PDF on setting up failover clusters.

 

Warning! Social engineering over the phone has now reached Sweden

Yesterday I heard from a colleague that he had been exposed to a very sophisticated attack that probably would have succeeded had he not worked in IT.

What happened was that he got a call where the other party asked for his wife (which means the attack was targeted in some way, since they knew her name). He said that she was not there and asked if he could be of assistance. The caller informed my colleague that he was calling from Microsoft and that they had noticed his computer was reporting lots of errors, which they could help him fix. As an IT professional he became curious and let the man on the phone continue, which he did: he told my colleague to open the Event Viewer and, using filtering, directed him to some common errors. When they found the errors, the "Microsoft" representative said he could help fix this and directed him to a remote desktop software site (a real website that had been copied, with the URL changed by one character). This evil site installed a Java tunneling trojan which his antivirus software did not catch with its real-time scan. After this my colleague said thank you, hung up, disconnected his laptop and investigated it.

Today he heard of an 80-year-old lady who had been attacked using the same technique.

I can safely say that Microsoft will never ever call you, tell you things about your computer and ask to administer it remotely!! AND FOR GOODNESS' SAKE DO NOT ACCEPT JAVA OR ACTIVEX plugins/programs that do not come from a legitimate site.

Watch this YouTube clip and get scared by how easily anyone can get hold of your computer. Also look at the follow-up clip that shows him setting up an account and running RDP to that session.

MCT certification and training

Last week I attended the MCT Summit in Stockholm and the TTT course that was held there. Several experienced senior MCT instructors (Thomas Lee, Johan Arwidmark and others) gave us great information about how to be successful in the classroom or when presenting. The course was held by MCT Europe with instructors Daniel Sörlöv and Mattias Lind. It was originally a three-day course, but we did it in two days, which meant quite a fast pace and long days. On the final day we each gave an exam presentation, limited to 20 minutes, judged by a jury of up to eight senior MCT instructors. I have taken several certifications, but this was one of the most difficult to pass.

This was my exam presentation.

 

I can honestly say that I still have much to learn about how to present and get the message through, but no presenter is ever perfect and everyone always needs to keep working on their skills.

Bill Chapman said on the first day of the Summit, "Regardless of what happens, keep going". He also shared other important advice, for instance that you should use a video camera and film yourself presenting or teaching.

Below are some points from his presentation:

What makes a great speaker

  • Confidence
  • Passion
  • Knowledge
  • Skills

The show must go on

  • Remember the story and keep it going
  • Be prepared for the unknown
  • Know your time
  • Know your slides/demos
  • Focus on the audience

Now I am going to apply for the MCT program 🙂

DELL Compellent – Auto Storage Tiering

Over the last two days I have had the opportunity to test the Compellent SAN solution, and I find it really cool and smooth. When integrating it with your vCenter you get a single pane of glass to handle the whole solution. You have to install the Compellent plugin on every vSphere Client to be able to administer the Compellent and get the plugin-specific tabs; you also have to add the Compellent management address, a user with a password, and of course the appropriate rights to do SAN tasks.

After adding the Storage Center you get the following tabs:

As you can see in the Compellent view, you can add and remove datastores; if this is done from here, the Compellent plugin will also format the datastore with VMFS and rescan the datastores on the hosts. Also, as you can see in this screenshot, you can see for each datastore where your data is and how much of it resides where. Tier 1 is the fastest, and if your setup has more than two different disk types you will also get a Tier 2 and a Tier 3. Active blocks are moved to faster or slower disks depending on their demands.

With the Replay function you get SAN snapshot backup functionality, and for fully consistent backups you can install an agent in the VMs so they prepare a VSS snapshot before the replays run. Really cool to be able to take SAN snapshots this way.

Another thing you can see in the screenshot is that the datastore is 2 TB but only 9.32 GB is used! Of course you still have to size your solution to cope with the amount of data that you are actually going to store.

When deleting a datastore you get the following choices. If you put it in the recycle bin you can get it back as long as the space is not needed by other data; as long as there is space in the Compellent you can restore the datastore, which can be handy if it held some VMs that you need later and did not migrate to another datastore.

The Compellent solution is very powerful and has many more functions than I have highlighted here; I hope I will get some more time to test and use it in the near future!

SC Orchestrator 2012 Beta installation

Today I used some of my valuable time to install the newly released System Center Orchestrator Beta, which can be downloaded.

The installation process has been quite well worked through, and compared to Opalis it is now really simple and smooth to get it installed and up and running. First I looked at the prerequisites and installed the required features, plus a SQL 2008 R2 instance on the same server.

I got an unexpected error after the install, but I think it was because my service account was not a user of the database; after fixing that and starting the Orchestrator Management service, I could launch the Runbook Designer and start building runbooks without any problems.

As you can see in this screenshot there is still some Opalis left in this beta; it will probably be updated before the product is released. For those who have worked in Opalis before, you will recognize yourself in the GUI, because not that much has changed as far as I have tested so far.

If I find something smashing during further testing I will write a new post 🙂

Disaster recovery of vSphere after disk array failure

Yesterday I had the opportunity to help a customer who had suffered a disk array failure where both PDUs died and one disk broke; after the hardware supplier had replaced the parts, some virtual machines had disks and databases that were corrupt.

To be able to assist the customer I used TeamViewer, which works excellently and is fast to get up and running: no need to get the firewall guy to open new ports or get a new VPN account! For personal use there is a free version that works for two hours at a time, but the enterprise license is not too expensive and should be considered, as the tool is so powerful!

 

The most important lesson from this is that you have to be very thorough in your design and not put all your eggs in one basket. In this particular case the vRanger backup server, a domain controller, the SQL server and the vCenter server were all on this datastore. Luckily the vRanger server did not have any corrupt data, so we could restore the whole vCenter server, and once vCenter was restored we could continue and restore the mailbox server, SQL and the domain controller. I restored vCenter to another VM, and when we saw that the restore was OK we deleted the old vCenter server. Since the customer had vSphere Essentials we will have to rename the vCenter-Temp VM manually (you can read how to do it on this link); if you have Enterprise or higher as your license you can instead rename the VM and then do a Storage vMotion, and the files will get the right names. It is rather difficult to do a storage migration when connecting directly to the ESXi host.
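For the Enterprise-or-higher case, a minimal PowerCLI sketch of that rename-plus-Storage-vMotion step could look like this (the VM names and the target datastore are placeholders):

# Rename the restored VM, then Storage vMotion it so the files on disk pick up the new name.
Get-VM "vCenter-Temp" | Set-VM -Name "vCenter" -Confirm:$false
Get-VM "vCenter" | Move-VM -Datastore (Get-Datastore "Datastore02")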

The following points should be considered:

  • Ensure the placement of the vCenter server by datastore and host affinity to know where it is in a disaster recovery scenario
  • Make sure that the backup server can be used in case of a total datastore blackout
  • After implementing a backup solution, check that you can actually do a restore as well!
  • Do not put all Domain controllers and DNS servers in the same vSphere cluster and datastore
  • Make sure that you do not use thin disks and overcommit the datastore without the appropriate alarms set.

Using a backup solution that backs up at the virtualization platform level rather than inside the VM is the most effective way to recover quickly and easily in the case of a total datastore failure. Most backup solution providers offer this feature, and those not using it today should definitely consider it. For example, Quest vRanger can do file-level restore from a VM backup.

Funny Hyper-V ad on a SlideShare vSphere storage presentation

I had to screendump the following presentation from SlideShare. I found it via some other blog and thought it was interesting, as much of virtualization success depends on the storage and how it is performing. I am wondering whether the placement is intentional and MS actually pays Google, and also pays to put this ad on VMware-related content…

 

If someone is interested in this presentation, follow this link. I really recommend that you learn which esxtop values to look at and, if you have not checked it before, align your VM disks! This is done automatically in Windows 2008 and later, but check your Windows 2003 servers. If your storage supports the new VAAI I recommend you enable it in your vSphere environment, as long as it is at least version 4.1. This can of course be set with PowerCLI, and it gives performance gains when doing provisioning, cloning or Storage vMotion, as well as better cluster locking and metadata operations…
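A rough PowerCLI sketch of turning the VAAI primitives on for a single host (the host name is a placeholder, and the three advanced setting names below are the ones I believe control the block primitives; verify them in your own environment before changing anything):

# Enable the VAAI-related advanced settings (value 1 = enabled) on one host.
$vmhost = Get-VMHost "hostname.test.local"
$vaaiSettings = "DataMover.HardwareAcceleratedMove","DataMover.HardwareAcceleratedInit","VMFS3.HardwareAcceleratedLocking"
foreach ($name in $vaaiSettings) {
    Get-AdvancedSetting -Entity $vmhost -Name $name | Set-AdvancedSetting -Value 1 -Confirm:$false
}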

 

 

 

Pimp my Macbook Pro with powerCLI

Probably some Mac fanatics will go crazy when they see my MacBook Pro and what I have done to it, but I think it is quite funny, and if I can get more attention by giving my Mac a fine VMwareish look I think it's cool.

I had too big ambitions today when I wanted to use PowerCLI to find a VM at a customer and see which datacenter it was located in; at first I thought it would be harder than it actually turned out to be 😉

I started by thinking, after looking at Get-Datacenter | Get-View and Get-Datacenter | Get-Member, that I needed something like Get-Datacenter | where {$_.vmfolder -eq (Get-VM partVM | Select-Object VMfolder)} (that command gave me nothing).

I then read the excellent VMware PowerCLI cmdlet reference and got happy as I realized that I only needed the following command to get the datacenter where the VM resides:


Get-Datacenter -VM partVM

or I could also pass the VM object to the Get-Datacenter cmdlet:


Get-VM partVM | Get-Datacenter

If I want to know which cluster it is in, I use the Get-Cluster cmdlet instead:


Get-VM partVM | Get-Cluster