I have had the pleasure of finding a new book about PowerCLI that has just been released:
Learning PowerCLI by Robert van den Nieuwendijk
The book has 10 chapters and is truly a bible for a VMware admin who wants to learn how to utilize PowerCLI in their environment. The book covers the latest version of PowerCLI and PowerShell v3.
If you are new to PowerShell and PowerCLI, the first chapters give you great guidance on how to do the basic things, and once you feel comfortable you can continue with the other chapters and start automating your daunting tasks as a VI admin!
I have done some magic with PowerCLI myself, and I can recommend adding this book to your shelf to stay comfortable in your career.
If you are interested in more info about the book, please follow this link: http://www.packtpub.com/learning-powercli/book
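To give a taste of the kind of one-liners the book builds up to, a minimal PowerCLI session could look like this (the vCenter server name is just a placeholder):

```powershell
# Connect to vCenter (server name is a placeholder for your own environment)
Connect-VIServer -Server vcenter.contoso.local

# List all powered-on VMs with their CPU and memory configuration
Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" } |
    Select-Object Name, NumCpu, MemoryGB
```

From there it is a small step to the real automation scenarios the later chapters cover.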
I was at a meeting last Friday, and they were in the process of deploying a Scale-Out File Server environment on Windows Server 2012 R2 with a Dell JBOD box.
One important thing to consider when designing this: if you want to use all the features in the new 2012 R2 with Storage Spaces, there are some limits that can affect your design.
If you want the cool new performance-intelligent storage tiering in 2012 R2, it can only be used with either simple or mirror resiliency at the physical level, which results in quite an overhead of disks in your JBODs to get some TB for the actual data. If only the deduplication feature had been supported for server workloads and not just VDI setups, this would not have been such a big deal.
Setting up your environment based on this requires some planning, creating several different storage selections where not everything is stored on the super-performance auto-tiering parts of your SOFS. Maybe your budget allows you to fill the JBOD with only SSDs, and then this is no problem, but if not you should at least consider creating several different shares where the virtual disks either use storage tiering with mirroring or are plain disks with parity.
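As a sketch of what such a tiered, mirrored disk could be built on, the Storage Spaces cmdlets in 2012 R2 look roughly like this (pool name, tier names and sizes are just examples):

```powershell
# Create SSD and HDD tiers in an existing storage pool (names and sizes are examples)
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Tiered virtual disks only support Simple or Mirror resiliency, not Parity
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredMirror" `
    -ResiliencySettingName Mirror `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 900GB `
    -WriteCacheSize 1GB
```

Note the mirror overhead here: a 100 GB + 900 GB tiered mirror consumes roughly twice that in physical disk from the pool.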
I would love to see some development in SOFS storage integrated with Hyper-V in the same manner that VMware has with Storage DRS, where, based on different workloads and their load, different VHDX files could be moved between the datastores and not just between the tiers. This could be refined even further by adding a StorSimple box, so that data that has not been accessed for a while would be offloaded to Azure.
Yes, I know the StorSimple hardware needs a refresh in its NIC connectivity (the current boxes have 2 x 1 Gbit active), but hopefully we will see some new hardware in the not too distant future. And in a solution where the StorSimple box is connected in a SOFS cluster together with JBODs, the network bandwidth would not have such a large impact.
If you are kind of new to the System Center VMM console and wondering why, after a while, your hosts report "Needs Attention" even though everything looks green in the host properties status, then continue and read this post!
You can extend the console by right-clicking the column header bar and adding the "Agent Version Status" column; then you can see the reason for the host status.
The reason for this "Upgrade Available" status is that the VMM server has been updated with some new patches/updates that came with Windows Update, and now you need to update your hosts so that the new agent can properly talk to the VMM server.
That can of course be done with PowerShell if you have quite a few hosts... You can first get the list of hosts that need an update, and then use the second command below to update those hosts.
If you are wondering why I am filtering on AgentVersion -ne "0.0": in this environment there are several VMware hosts, and they do not have a VMM agent but are still recognized as managed hosts and appear when getting the managed computers.
# Check agent versions on all managed hosts
Get-SCVMMManagedComputer | Where-Object AgentVersion -ne "0.0" |
    Select-Object ComputerName, VersionState, AgentVersion |
    Sort-Object ComputerName -Descending

# Update the agents on hosts that need it
$cred = Get-SCRunAsAccount XRunasAccountX
Get-SCVMMManagedComputer | Where-Object AgentVersion -ne "0.0" |
    Where-Object VersionState -eq "UpgradeAvailable" |
    Update-SCVMMManagedComputer -Credential $cred
Today I had the opportunity to see a Nutanix cluster deployed with Hyper-V. If you have not heard about Nutanix before, I urge you to go check out their website and see how it works. Basically, their solution uses local storage for massive performance: the Controller VM has direct access to the disks (a set of SSDs and HDDs), does tiering and dedup, and presents the storage to the Hyper-V node as an SMB 3 file share. For resiliency, the data is replicated to other nodes, so in case of a failure your VMs will start up on another node with all their data.
When you buy a Nutanix solution, Windows Server 2012 R2 is installed on delivery and you only need to configure networking and add the nodes to Active Directory, so getting your Hyper-V environment up and running is quite a breeze with this 2U, 4-node solution. It can scale with more 2U sets, and these can also be added after you are up and running to keep up with your added load!
As you can see from the features, you get both TRIM and ODX functionality with the solution, allowing you to clone or create VMs within the box in just seconds and also reclaim storage when removing data and VMs! As you will always use the local Controller VM, there is no need for SMB Multichannel.
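If you want to verify that ODX is active on a Hyper-V host, one quick way is to check the documented FilterSupportedFeaturesMode registry value:

```powershell
# 0 means ODX is enabled (the Windows default); 1 means it has been disabled
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" |
    Select-Object -ExpandProperty FilterSupportedFeaturesMode
```

The storage array must of course also support and expose ODX for the offloaded copies to actually happen.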
Nutanix works with your System Center environment and can be automated with VMM and Orchestrator.
I will do a more thorough post about the configuration and setup when I get my hands on a Nutanix platform!
So, in several posts I have described different ways of moving your VMs from one hypervisor to another. There are several choices when it comes to the transition phase, some more automated than others.
But wait! Yes, I know virtualization is God's gift to IT, but it also carries some responsibilities. The technology gives you as the IT admin the possibility to run old legacy operating systems almost forever.
One big thing in these move projects is to also take into account whether the workload can be upgraded within the OS, not just moved to another virtualization platform. Surely you do not want to describe on your CV that in your last employment you had Windows 2003 servers as domain controllers, for instance. Compare it with VHS: do you still watch movies and record stuff on that 80s/90s system?
First of all, you can use the Microsoft Assessment and Planning Toolkit to analyze which of your workloads can be upgraded (I know, it does not look at your third-party server applications, which also need some brushing up and upgrading).
When you have analyzed and found some workloads that can be migrated, check out this page for the tools and wizards you can use to smooth the process of moving those Windows roles and features from your old and soon-to-be-unsupported OS to a new version! As you can see below, Windows 2008 R2 is approaching the end of mainstream support!
Yes, I know that carrying out this kind of transformation project can often be quite a burden on the IT department, and for several business-critical applications it can also become a serious cost that someone has to approve, or maybe they decide to decommission them if they live outside of the support window!
Now I found the release of the Migration Automation Toolkit for Double-Take Move. As I described in an earlier post, I was waiting for this, and now it has been released.
It is, as described in the readme, a technology preview aimed at users with PowerShell skills.
It has not yet appeared in the Gallery, but you can find it on the Vision Solutions support site (where you will need an account to log in); on the Move download page you will see it at the top.
I will write a more thorough article about how it compares to the Vision Solutions Double-Take System Center toolkit in a later blog post!
So I have been an Apple fan for a while now, and yesterday I was at a local electronics store and tested the Surface 2. This resulted in me coming home with an RT.
I really like the speed and the functionality of this device.
Maybe I will start looking at the Pro, as the RT has some limits, but it is fast and I can take care of several of my daily tasks. I would have loved it if it were possible to run not just PowerShell but also the ISE. I found some suggestions to utilize Azure IaaS with an RDS desktop for the software and tools not available on the RT, and maybe that would work.
By the way, this post was also written on the device; tomorrow I will change to the Type keyboard instead...
There are some backup vendors that nowadays have support for Hyper-V and host-level backup. I have been testing some of them, but I also wanted to check how the Altaro Hyper-V Backup solution works.
I cannot complain about the ease of installing and getting started with the Altaro solution. I like that the install wizard directly recognizes whether I am trying to install on a cluster node or a single Hyper-V instance.
It is fully supported to install on a Core or full version of Hyper-V 2012 and also 2012 R2.
When I tested installing on my one-node Hyper-V 2012 R2 cluster, it found and promoted the node as a master controller.
It was really easy to configure which VMs should be included in the backup and then schedule a backup job.
Some really nice features in Altaro are:
- Offsite backup with WAN acceleration
- Exchange item-level restore
- Remote management console
- Fast Hyper-V backup/restore
- Live backup of Linux VMs
- Instant Boot from backup
If you register, you get a 30-day trial with the full feature set. You can also download the free version, which gives you the possibility to back up two VMs forever, but with some limitations.
I observed a question on the TechNet forums asking whether it is possible to change the name of the virtual hard disk when deploying a virtual machine or template from System Center Virtual Machine Manager, and the answer is yes.
Having a few hundred VMs that all have the same vhdx name as the template might not look so good.
So how do you solve this? First of all, when deploying new VMs you should change the name, and that can only be done in the VMM GUI when deploying to a host, not to a cloud. Notice that I create the VM from the template in the Library view of the console.
When using this approach, the console automatically fills in the guest OS name. When creating a VM from a template in the VMs and Services view, that has to be filled in manually.
So how do you get the name of the VM onto the vhdx that holds the VM's operating system? As I described above, you have to select deploy to a host and not a cloud (of course, once the VM has been created it can be assigned to a cloud).
Then, when you come to the Configure Settings view in the wizard, the option to change the vhdx name appears.
Surely you find a vhdx with the same name as the VM more suitable than the generic library vhdx name of win2012-std.vhdx?
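To get an overview of which VMs suffer from generic disk names, a quick inventory sketch with the VMM cmdlets could look like this (run from a PowerShell session connected to your VMM server):

```powershell
# List each VM together with the names of its attached virtual hard disks,
# so mismatched or generic vhdx names stand out
Get-SCVirtualMachine | ForEach-Object {
    $vm = $_
    Get-SCVirtualDiskDrive -VM $vm | Select-Object `
        @{Name = 'VM';   Expression = { $vm.Name }},
        @{Name = 'VHDX'; Expression = { $_.VirtualHardDisk.Name }}
}
```

Any row where the VHDX column still shows the template name (for example win2012-std) is a candidate for renaming.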
In the next post I will show you how to change the names of the disks on already deployed VMs.
During the upgrade of a VMM 2012 SP1 environment to R2, I wanted to test and run the Configuration Analyzer. You should be observant, though, because the link from the VMM R2 GA media points you to the wrong place; as you can see in the following screenshot, there is a link on the splash screen.
When pressing that link you arrive at the following site, which is wrong! So do not start downloading from there, as those files are for VMM 2012.
The right site is at the following link: http://www.microsoft.com/en-us/download/details.aspx?id=41555 and looks like this. Also notice that it is an analyzer for the whole System Center suite, so you can analyze all your different System Center servers.
You can also read more on this TechNet site about how it works and what prerequisites you have to install first.