I have been configuring a demonstration environment in Azure, connected to my local datacenter with a site-to-site (S2S) VPN.
I created several VMs in Azure and connected them to my own virtual network during that process. One thing I noticed after doing this was that these VMs were not activated.
The reason turned out to be quite simple: since I planned to set up a DC with DNS, that was the only DNS server I added in the virtual network DNS settings. The gallery VMs point to a KMS server in Azure, and because name resolution was not working, the VMs could not be activated. Checking that you can resolve the following URL shows whether you have any potential issues.
So after I added a second DNS server (excuse me for adding a Google DNS, but it is so easy to remember) and restarted the VMs, I could see that they had been successfully activated.
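If you want to verify this from inside a VM, a quick sketch of the check looks like this (assuming the standard Azure KMS endpoint kms.core.windows.net, which the gallery images point to):

```powershell
# Check that the Azure KMS endpoint resolves and is reachable on the KMS port (1688)
nslookup kms.core.windows.net
Test-NetConnection -ComputerName kms.core.windows.net -Port 1688

# If name resolution works, trigger activation again
cscript.exe C:\Windows\System32\slmgr.vbs /ato
```

If `Test-NetConnection` succeeds but activation still fails, check that no firewall rule is blocking outbound TCP 1688.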
Today I updated the function so that it only returns VMs that reside on either SMB or CSV storage, and skips VMs that for some reason reside on local storage within your cluster nodes, which can be the case for domain controllers or other appliances.
This function searches the hosts for VMs that are not HA-enabled.
It lets you find which VMs are running on your hosts without being configured as virtual machine roles on the cluster.
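The post's actual function is not shown here, but a minimal sketch of the idea could look like this (the function name `Get-NonClusteredVM` and the path filters are my assumptions; it relies on the Hyper-V and FailoverClusters modules):

```powershell
# Sketch: list VMs on the cluster nodes that are NOT cluster (HA) resources,
# only including VMs whose files live on CSV or SMB storage
function Get-NonClusteredVM {
    param([string]$ClusterName = (Get-Cluster).Name)

    Get-ClusterNode -Cluster $ClusterName | ForEach-Object {
        Get-VM -ComputerName $_.Name | Where-Object {
            # IsClustered is $false for VMs that have no cluster role
            -not $_.IsClustered -and
            # Keep only CSV (C:\ClusterStorage\...) or SMB (\\server\share) paths
            ($_.Path -like '*ClusterStorage*' -or $_.Path -like '\\*')
        }
    }
}
```

Drop the path filter if you also want to see VMs sitting on local disks in the nodes.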
On Wednesday the 19th of March I will, together with Veeam, host a web seminar with the topic:
Automate daunting virtualization tasks with SMA
Now that you have set up your virtualization environment, you’ll want to automate it. In this webinar, you will learn how to do this with Service Management Automation (SMA) and how to integrate automated tasks into the Hyper-V virtualization environment. You will use different runbooks to automate some boring but necessary tasks that IT admins have to do.
This webinar will show you how to:
Automate patching of virtualization hosts
Expand virtual hard disks based on usage
Automatically update integration components on VMs
Automatically update virtual hard disk templates
Clean old snapshots that have been forgotten
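To give a taste of the last item: in the webinar this is done with SMA runbooks, but the core of a snapshot cleanup can be sketched with plain Hyper-V cmdlets (the 30-day limit is just an example value):

```powershell
# Remove checkpoints (snapshots) older than 30 days on all VMs on this host
$limit = (Get-Date).AddDays(-30)

Get-VM |
    Get-VMSnapshot |
    Where-Object { $_.CreationTime -lt $limit } |
    Remove-VMSnapshot
```

Wrapped in an SMA runbook and scheduled, this keeps forgotten checkpoints from silently eating your storage.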
If you want to register and be part of this crazy one-hour session with the Swedish Chef, you can do so via the following link.
And guess what, it also comes in a Swedish version! Check out the following link, and if you understand Swedish, register here!
I have had the pleasure of finding a newly released book about PowerCLI.
Learning PowerCLI by Robert van den Nieuwendijk
The book has 10 chapters and is truly a bible for a VMware admin who wants to learn how to utilize PowerCLI in their environment. It covers the latest version of PowerCLI and PowerShell v3.
If you are new to PowerShell and PowerCLI, the first chapters give you great guidance on the basics, and once you feel comfortable you can continue with the later chapters and start automating your daunting tasks as a VI admin!
I have done some magic with PowerCLI myself, and I can recommend adding this book to your shelf to feel comfortable in your career.
I was at a meeting last Friday where they were in the process of deploying a Scale-Out File Server (SOFS) environment on Windows Server 2012 R2 with a Dell JBOD box.
One important thing to consider when designing this is that if you want to use all the new Storage Spaces features in 2012 R2, there are some limits that can affect your design.
The new performance-aware storage tiering in 2012 R2 can only be used with either simple or mirror resiliency at the physical level, which results in quite an overhead of disks in your JBODs to get some TB of actual data capacity. If only the deduplication feature had been supported for server workloads and not just VDI setups, this would not have been such a big deal.
Setting up your environment based on this requires some planning, and creating several different storage selections where not everything is stored on the super-performance auto-tiering parts of your SOFS. Maybe your budget allows you to fill the JBOD with only SSDs, in which case this is no problem, but if not you should at least consider creating several different shares where the virtual disks either use storage tiering with mirroring or are plain disks with parity.
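The split described above can be sketched with the Storage Spaces cmdlets (the pool name `Pool1`, the friendly names, and the sizes are assumptions for this example, not values from the deployment):

```powershell
# Define SSD and HDD tiers in the pool (tiering requires simple or mirror resiliency)
$ssd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD

# Tiered + mirrored space for the hot VM workloads
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName FastSpace `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror

# Plain parity space for colder data (parity cannot be combined with tiering)
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName ColdSpace `
    -Size 4TB -ResiliencySettingName Parity -ProvisioningType Fixed
```

You would then create volumes and SMB shares on each space so the VM admins can choose the right one per workload.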
I would love to see the SOFS storage integrated with Hyper-V in the same manner as VMware's Storage DRS, where VHDX files could, based on their workload and load, be moved between different datastores and not just between tiers. This could be refined even further by adding a StorSimple box, so that data that has not been accessed for a while is offloaded to Azure.
Yes, I know the StorSimple hardware needs a refresh of its NIC connectivity (the current boxes have 2 x 1 Gbit active), but hopefully we will see new hardware in the not too distant future. And in a solution where the StorSimple box is connected to a SOFS cluster together with JBODs, the network bandwidth would not have such a large impact.
If you are fairly new to the System Center VMM console and are wondering why your hosts after a while report “Needs Attention” even though everything looks green in the host properties status, then continue reading this post!
You can extend the console by right-clicking the hosts column bar and adding the “Agent Version Status” column; then you can see the reason for the host status.
The reason for this “Upgrade Available” status is that the VMM server has been updated with patches that came through Windows Update, and now your hosts need the new agent to talk properly to the VMM server.
If you have quite a few hosts, this can of course be done with PowerShell. You can also get the list of which hosts need an update,
and then update the hosts that need it with the following command.
If you are wondering why I am filtering on AgentVersion -ne “0.0”, it is because this environment contains several VMware hosts; they do not have a VMM agent but are recognized as managed hosts and appear when getting the managed computers.
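Put together, the list-and-update steps could look roughly like this in the VMM shell (a sketch; the credential handling in particular will depend on your environment, for example a Run As account instead of `Get-Credential`):

```powershell
# Find managed computers whose VMM agent is out of date,
# filtering out VMware hosts, which report agent version 0.0
$stale = Get-SCVMMManagedComputer | Where-Object {
    $_.AgentVersionState -eq 'UpgradeAvailable' -and $_.AgentVersion -ne '0.0'
}

$stale | Select-Object Name, AgentVersion

# Push the new agent out to those hosts
$cred = Get-Credential
$stale | ForEach-Object {
    Update-SCVMMManagedComputer -VMMManagedComputer $_ -Credential $cred
}
```

After the agents are updated, the hosts should drop back from “Needs Attention” to a healthy status in the console.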
Today I had the opportunity to see a Nutanix cluster deployed with Hyper-V. If you have not heard about Nutanix before, I urge you to check out their website and see how it works. Basically, their solution uses local storage for massive performance: the Controller VM has direct access to the disks (a set of SSDs and HDDs), provides tiering and dedup, and presents the storage to the Hyper-V node as an SMB 3 file share. For resiliency, the data is replicated to other nodes, so in case of a failure your VMs will start up on another node with all their data.
When you buy a Nutanix solution, Windows Server 2012 R2 is installed on delivery and you only need to configure networking and add the nodes to Active Directory, so getting your Hyper-V environment up and running is quite a breeze with this 2U, four-node solution. It can scale with more 2U sets, and these can also be added after you are up and running to keep up with your growing loads!
As you can see from the feature list, you get both TRIM and ODX functionality with the solution, allowing you to clone or create VMs within the box in just seconds and also reclaim storage when removing data and VMs! And as you always use the local Controller VM, there is no need for SMB Multichannel.
Nutanix works with your System Center environment and can be automated with VMM and Orchestrator.
I will do a more thorough post about the configuration and setup when I get my hands on a Nutanix-platform!
In several posts I have described different ways of moving your VMs from one hypervisor to another. There are several choices when it comes to the transition phase, some more automated than others.
But wait!! Yes, I know virtualization is God's gift to IT, but it also carries some responsibilities. The technology gives you as an IT admin the possibility to run old legacy operating systems almost forever.
One big thing in these move projects is to also consider whether the workload can be upgraded within the OS, not just moved to another virtualization platform. Surely you do not want your CV to say that at your last employer you ran Windows Server 2003 as domain controllers, for instance. Compare it with VHS: do you still watch movies and record things on that 80s/90s system?
First of all, you can use the Microsoft Assessment and Planning (MAP) Toolkit to analyze which of your workloads can be upgraded (I know, it does not look at your third-party server applications, which need some brushing up and upgrading as well).
When you have analyzed and found some workloads that can be migrated, check out this page to see what tools and wizards you can use to smooth the process of moving the Windows roles and features running on your old, soon-to-be-unsupported OS to a new version! As you can see below, Windows Server 2008 R2 is approaching the end of mainstream support!
Yes, I know this kind of transformation project can often be quite a burden for the IT department to carry out, and for several business-critical applications it can also become a serious cost that someone has to approve, or perhaps a decision to decommission them if they live outside the support window!
There are some backup vendors that nowadays support Hyper-V and host-level backup. I have been testing some of them, but I also wanted to see how the Altaro Hyper-V Backup solution works.
I cannot complain about the ease of installing and getting started with the Altaro solution. I like that the install wizard directly recognizes whether I am trying to install on a cluster node or a single Hyper-V instance.
It is fully supported to install on both Core and full installations of Hyper-V 2012 and 2012 R2.
When I tested installing on my one-node Hyper-V 2012 R2 cluster, it found and promoted the node as a master controller.
It was really easy to configure which VMs should be included in the backup and then schedule a backup job.