According to KB 1038827, “Enabling Jumbo Frames for VMkernel ports in a virtual distributed switch”, VMware says that you have to recreate the VMkernel port to set the MTU for jumbo frames. This is not true if you use PowerCLI. I do not know exactly how it is done under the hood, but it is very easy to configure with just a few lines of scripting. By the way, there is no way to edit this in the GUI.
$cred = Get-Credential
Connect-VIServer ESXhost.test.loc -Credential $cred
# Set jumbo frames on the vmk2 VMkernel port
Get-VMHostNetworkAdapter -Name vmk2 | Set-VMHostNetworkAdapter -Mtu 9000
# Verify the new MTU
Get-VMHostNetworkAdapter -Name vmk2 | ft Mtu
Setting the MTU on the VMkernel port works the same way whether it sits on a standard vSwitch or a distributed vSwitch.
Of course you can connect to a vCenter and add a foreach loop to set the MTU on the VMkernel ports of more than one host, as in the sketch below.
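Here is a minimal sketch of what such a loop could look like, assuming every host managed by the vCenter has a vmk2 port that should get jumbo frames (the server name and port name are just examples from my lab):
$cred = Get-Credential
Connect-VIServer vcenter.test.loc -Credential $cred
foreach ($vmhost in Get-VMHost){
    # Set the MTU on vmk2 of every host in the vCenter
    Get-VMHostNetworkAdapter -VMHost $vmhost -Name vmk2 | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false
}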
I have had the opportunity to do some PowerCLI scripting on an installation where we have a vDS (virtual Distributed Switch). In PowerCLI there are not that many cmdlets for distributed switches, which is kind of awkward as there are so many cmdlets for everything else. Luckily LucD has made some nice functions for me to use when creating the port groups.
I used his function for creating port groups. The customer had about 20 VLANs that needed to be added, so it was a perfect match for PowerCLI because setting this up manually is boring! I had a CSV file with the names and VLAN IDs which I ran through in a foreach loop, and then all was done.
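The loop could look something like this sketch, where New-DistributedPortGroup is just a placeholder for LucD's function (his real function name and parameters may differ), and the CSV columns Name and VlanId are my assumption about the file layout:
# Create one vDS port group per row in the CSV file
$vlans = Import-Csv vlan.csv -Delimiter ";"
foreach ($vlan in $vlans){
    # Placeholder for LucD's port group creation function
    New-DistributedPortGroup -Name $vlan.Name -VlanId $vlan.VlanId
}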
But then we realized that we needed to change some settings for both security and load balancing, so I had to remove all my port groups and start over. I did not want to remove them manually, and the PowerCLI cmdlet that removes standard port groups cannot be used on a vDS. I did not find code from LucD on his blog to remove a vDS port group, so I came up with the idea to use Onyx, a tool from VMware Labs that intercepts the traffic between the vSphere Client and the vCenter and translates it into PowerCLI code, .Net, SOAP or Javascript.
After starting the tool I connected to my vCenter and removed a vDS port group through the vSphere Client, and got the PowerCLI code back (which I probably could have figured out without Onyx by being a bit smarter in PowerShell/PowerCLI, but now I´m not :-P). So I made a small script to find all my vDS port groups and remove them. Note that a vDS port group that has already been populated with connected VMs cannot be removed.
# Remove vDS port groups
#
# Niklas Åkerlund / Real Time Services AB

# Read the same CSV file that was used to create the port groups
$vlans = Import-Csv vlan.csv -Delimiter ";"
$PGs = Get-VirtualPortGroup
foreach ($vlan in $vlans){
    foreach ($PG in $PGs){
        if ($vlan.Name -eq $PG.Name){
            # Get the managed object behind the port group and destroy it
            $pek = Get-View -Id $PG.Id
            $pek.Destroy_Task()
        }
    }
}
And now I could run the add script again with the added parameters for security and load balancing.
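As a hedged illustration of what such extra settings could look like through the vSphere API (the port group name, the rejected MAC changes and the load balancing policy here are example values, not our exact settings):
# Sketch: reject MAC address changes and load balance on originating virtual port
$pg = Get-View -ViewType DistributedVirtualPortgroup -Filter @{"Name" = "^VLAN20$"}
$spec = New-Object VMware.Vim.DVPortgroupConfigSpec
$spec.ConfigVersion = $pg.Config.ConfigVersion
$setting = New-Object VMware.Vim.VMwareDVSPortSetting
$setting.SecurityPolicy = New-Object VMware.Vim.DVSSecurityPolicy
$setting.SecurityPolicy.MacChanges = New-Object VMware.Vim.BoolPolicy
$setting.SecurityPolicy.MacChanges.Inherited = $false
$setting.SecurityPolicy.MacChanges.Value = $false
$setting.UplinkTeamingPolicy = New-Object VMware.Vim.VmwareUplinkPortTeamingPolicy
$setting.UplinkTeamingPolicy.Policy = New-Object VMware.Vim.StringPolicy
$setting.UplinkTeamingPolicy.Policy.Inherited = $false
$setting.UplinkTeamingPolicy.Policy.Value = "loadbalance_srcid"
$spec.DefaultPortConfig = $setting
$pg.ReconfigureDVPortgroup_Task($spec)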
There is quite a buzz on twitter and blogs about the new features that have come with Windows 8 and the new Hyper-V version. I want to give you a little heads up about how it works to create a network team with NICs (yes, it works with different NIC cards; in my case an Intel and a Broadcom).
I have now installed the server on my test machine in our office and was eager to test the NIC teaming. At first I did not understand how it worked and tried to bind two NICs together in the network connections window in the control panel; as I later realized and read on Aidan Finn's blog, it is done through LBFOAdmin.exe (this is opened when pressing NIC Teaming Enabled/Disabled).
There you have to highlight your server to configure it. The new Server Manager can handle remote servers, so you can configure several workloads at the same time and do not have to log in to each server to administer it.
I have named my team NET2000 and added the two NICs. I have also set it to be switch independent (it is actually connected to a simple 5-port switch); you can also choose LACP or Static Teaming. For Load Distribution mode you can choose Address Hash or Hyper-V Port (since I am sharing the team between the management and a Hyper-V switch, I am using Address Hash).
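The same team can also be created from PowerShell. A minimal sketch, assuming the NetLbfo cmdlets that ship with the server build and that the two NICs show up as "Ethernet" and "Ethernet 2" (your adapter names will differ):
# Create a switch independent team; TransportPorts corresponds to Address Hash in the GUI
New-NetLbfoTeam -Name "NET2000" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts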
As you can see I can then add several virtual NICs with different VLAN IDs. I really hope they fix one issue though: as you can see here I have a virtual NIC interface called VMnet, but when I then want to add it in the Hyper-V Manager it has a different name, as you can see in the next screenshot. It would have been wonderful to be able to see the name also in the virtual switch manager.
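Adding such a team interface with a VLAN ID can also be done from PowerShell; a sketch, with the interface name and VLAN ID as examples:
# Add a team NIC called VMnet tagged with VLAN 20
Add-NetLbfoTeamNic -Team "NET2000" -Name "VMnet" -VlanID 20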
As I previously had to use network cards from the same manufacturer and use their teaming software, this is a giant step forward with Windows 8 and the built-in teaming functions. One thing to test later, when I get my hands on a NIC that can handle SR-IOV, is how that feature works with a team, but that is another blog post!
The past two days I have been upgrading a PlateSpin Forge from 2.5 to 3.1 on a Forge 510 Appliance, which runs VMware VI 3.5 Update 4.
I think that the Forge appliance is a really good product for companies that need a Disaster Recovery solution. If you want to read more about it, click here.
The customer had bought the appliance two years ago but had not had time to set it up and start replicating workloads.
The appliance is customized Dell hardware with a custom VI 3.5 installation. We could not upgrade it to vSphere; the only update on the Novell site is VI 3.5 U5. We tried to upgrade via the vSphere Client Host Upgrade Utility but got a failure, and the hostupgrade.sh script also failed. We have opened a support case asking Novell how to proceed, and I will update the blog when I get the right procedure.
The next trouble we ran into was when we tried to upgrade the Forge Management VM software from 2.5 to 3.1. The installation succeeds, but when we checked the GUI we did not have a protect container, which is kind of vital because without it we cannot start any protection of workloads; if we checked with the PlateSpin browser executable we could see it there, but not in the web GUI. The not so obvious solution was to do a two-part upgrade: first install all Windows patches, then upgrade to version 3.0.2 and verify that the protect container was still there and working. After that we could proceed with the upgrade to Forge 3.1 (which as of today is the latest version), and after this the protect container was there and refreshed correctly. Thank God for the VM snapshots we took after each step, so we could easily go back after each failed step!
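Taking and reverting those snapshots is quick to do with PowerCLI; a sketch, with the VM and snapshot names as examples:
# Snapshot the management VM before an upgrade step
New-Snapshot -VM "ForgeManagementVM" -Name "Before 3.0.2 upgrade"
# Roll back to the snapshot if the step failed
Set-VM -VM "ForgeManagementVM" -Snapshot (Get-Snapshot -VM "ForgeManagementVM" -Name "Before 3.0.2 upgrade") -Confirm:$false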
Although the upgrade steps in the documentation did not work for us, I can still recommend reading it, because PlateSpin has always done a good job of writing and explaining things in their product documentation.
Some strange issues remain around adding the Management VM to the customer's domain and installing antivirus, but that is another support case.