We have some demands for BIG-ASS VMs, and in our new environment with System Center 2016 and VMM 2016 we tried to add a hardware profile with more than 64 vCPUs, as Hyper-V 2016 now supports VMs with 240 vCPUs and 12 TB of RAM. But that cannot be done 🙁
We have also updated to the latest SCVMM CU4, but still no success! Neither via the GUI nor via PowerShell!
We installed a new preview of SCVMM 1711 to see if it made any difference, and guess what! The limit has finally been raised, but we would much rather see it coming in a CU for VMM 2016 in the near future as well, as we cannot deploy a preview of the semi-annual channel into production…
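On the 1711 preview, a hardware profile above the old 64-vCPU limit can be created with the regular VMM cmdlets. A minimal sketch — server name, profile name, and sizes are examples from my lab, and parameter names may differ slightly between builds, so check Get-Help New-SCHardwareProfile on yours:

```powershell
# Connect to the VMM management server (name is an example).
Import-Module VirtualMachineManager
Get-SCVMMServer -ComputerName "vmm01.contoso.local" | Out-Null

# Create a Gen 2 hardware profile with more than 64 vCPUs.
# On VMM 2016 RTM/CU4 this fails; on the 1711 preview it succeeds.
New-SCHardwareProfile -Name "HW-BigVM" `
                      -Generation 2 `
                      -CPUCount 128 `
                      -MemoryMB (512 * 1024)   # 512 GB of RAM
```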
The GUI for a hardware profile has also been updated; it clearly states that it has to be a Gen 2 VM and that the OS cannot be lower than 2016 for both host and VM.
I have got the opportunity to speak at Ignite again; this will be my third year doing a session at this gigantic conference!
I have a Community Theater session where I would love to have you as a guest if you will also be there and have some time for this topic!
“Using a PowerShell release pipeline for a real-world service provider delivery in Microsoft Azure”
Delivering managed services for a service provider can be cumbersome, and quality and reliability are often not the first priority. Utilizing PowerShell and Desired State Configuration makes it repeatable, versionable, and testable! In this real-world case we have implemented a release pipeline to make sure that PowerShell scripts, modules, and DSC configurations are tested before they are put into production use in Azure Automation.
After installing/deploying the Azure Stack DevKit I added the SQL RP and also wanted to add the App Service resource provider for a dev experience!
I have a relatively ordinary HPE DL380 G9 box with 128 GB of RAM and 2 CPUs, so it should be fine, but there were some issues I wanted to document to help others. These will probably be fixed in a later release of the App Service install pack, though.
I downloaded the bits and ran the App Service deployment.
After filling this out, the deployment started, but after a few hours it failed during the deploy step. I tried Retry a couple of times without luck, and closing the wizard makes you lose the state of the deployment, forcing you to rerun the whole deploy! When redeploying you have to delete the resource group APPSERVICE-LOCAL (or whatever you called it) and also go into the SQL server you entered in the wizard and remove the App Service databases!
How did I manage to get it working, though? I got some help from Andrew at Microsoft, who works with the Stack team, and he gave me some guidance on how to get it into a good state! Apparently App Service applies all updates during deploy, and to be more successful the recommendation was to use a Windows Server 2016 image patched up to the latest Patch Tuesday, i.e. updated to the latest CU. My image came from marketplace syndication with Azure and did not have those patches when I did this test, so I threw it away and reran the image-creation script with the -IncludeLatestCU parameter.
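The image-creation script is part of the AzureStack-Tools ComputeAdmin module. Roughly like this — paths are examples, and parameter names may differ between tool versions, so check Get-Help first:

```powershell
# From the AzureStack-Tools repo - creates a Windows Server 2016 platform
# image and, with -IncludeLatestCU, slipstreams the latest cumulative update.
Import-Module "C:\AzureStack-Tools\ComputeAdmin\AzureStack.ComputeAdmin.psm1"

New-Server2016VMImage -ISOPath "D:\ISO\en_windows_server_2016.iso" `
                      -IncludeLatestCU
```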
Rerunning the wizard, which now had an image with the latest and greatest patches, it still got stuck on “App Service Deploy Failed”, so I went into the CN0-VM and opened the MMC showing the state of the different servers in the App Service.
I also hit the Repair link, and when all of them said “Ready” I added a new 0.status file that I got from Andrew into the custom script folder on the CN0-VM. The easiest way to do that was with the lovely PowerShell Direct that is part of Windows Server 2016!
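With PowerShell Direct you can copy the file straight from the Hyper-V host into the guest, no network path to the VM required. A sketch — the VM name and both paths are examples from my lab, so substitute your own custom script folder:

```powershell
# PowerShell Direct session into the controller VM, opened from the Hyper-V host.
$session = New-PSSession -VMName "CN0-VM" -Credential (Get-Credential)

# Copy the 0.status file into the custom script folder inside the guest
# (destination path below is a placeholder for the actual folder).
Copy-Item -Path "C:\Temp\0.status" `
          -Destination "C:\CustomScriptFolder\" `
          -ToSession $session

Remove-PSSession $session
```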
Then I went back into the wizard and hit Retry, and this time it continued to the finish and my deployment was successful!
Yesterday, during the start of the Microsoft partner conference Inspire, Azure Stack was released as GA and was also made available for download as the DevKit version!
Of course I had to test it, and thanks to the new installation PowerShell script with a GUI it is easier than ever to start the deployment… First of all I downloaded the kit (it helps to have a 10 Gbit internet connection at the datacenter 😉) and then downloaded the PowerShell script.
Once the bits had been extracted I could use the wizard to prepare the unattend file and the boot-from-VHD for CloudBuilder.vhdx, and then reboot the server to continue the deployment!
After the reboot I could start the same wizard again to begin deployment of the Stack.
There were issues with the deploy script yesterday that Ruud and I reported, which were quickly fixed by Marc van Eijk; the problem was that if you added a VLAN or a DNS server the deploy failed.
I also found an issue: in my case the firewall in front of my Stack did not allow external NTP sources, so I ended up with a failed deploy because it requires an NTP sync before continuing. I had to configure an internal NTP source, and then the deploy succeeded!
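If you hit the same thing, pointing the host at a reachable internal NTP source before retrying is quick; the server name below is a placeholder for your own time source:

```powershell
# Configure the Windows Time service to use an internal NTP server,
# restart the service, and force a sync before retrying the deploy.
w32tm /config /manualpeerlist:"ntp01.internal.local" /syncfromflags:manual /update
Restart-Service w32tm
w32tm /resync /force
```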
The deployment took about 4 hours, and once it had completed I could fire up a browser and connect to the portal!
As is well known, we should use Windows Server 2016 foremost and as often as possible, and try not to deploy with a “Desktop Experience” unless it is really necessary! Of course it makes total sense if you are deploying an RDS solution, but if you deploy an AD DC and file servers then naaaee….
In Azure it is not just called Windows Server 2016; searching the marketplace you can see that there “Core” is the denominator.
And it kind of makes sense that the server without a GUI can and should use the Small Disk option that comes with the new managed disks, so you have to dig a bit deeper and search for “small”, and then you find those:
Deploying with the CLI or with PowerShell from a template, you need the right SKU to get Core:
Unfortunately Azure puts “Core” in the name; it should instead put “Desktop Experience” on the other one, so it would be consistent with the install options for regular OS deployments in a datacenter…
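To find the right SKU from PowerShell you can query the publisher directly; a sketch with the AzureRM module of that era (the location is an example):

```powershell
# List the Windows Server SKUs and filter for the Core editions.
Get-AzureRmVMImageSku -Location "westeurope" `
                      -PublisherName "MicrosoftWindowsServer" `
                      -Offer "WindowsServer" |
    Where-Object { $_.Skus -like "*Core*" }

# In an ARM template the small-disk Core SKU is then e.g.:
#   "sku": "2016-Datacenter-Server-Core-smalldisk"
```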
I have been trying out Altaro VM Backup in my lab. It is a backup solution that has been around for quite a while and has also gained support for VMware, which was not part of the product from the start! Quite a few companies run both Hyper-V and VMware, and having different backup solutions is not viable and places a burden on the backup admins!
They have several very nice features:
Backup and Replication features
Drastically reduce backup storage requirements on both local and offsite locations, and therefore significantly speed up backups with Altaro’s unique Augmented Inline Deduplication process
Back up live VMs by leveraging Microsoft VSS with zero downtime
Full support for Cluster Shared Volumes & VMware vCenter
Offsite Backup Replication for disaster recovery protection
Compression and military-grade encryption
Schedule backups the way you want them
Specify backup retention policies for individual VMs
Back up VMs to multiple backup locations
Restore & Recovery features
Instantly boot any VM version from the backup location without affecting backup integrity.
Browse through your Exchange VM backup’s file system and restore individual emails
Granular Restore Options for full VM or individual files or emails
Retrieve individual files directly from your VM backups with a few clicks.
Fast OnePass Restores
Restore an individual or a group of VMs to a different host
Restore from multiple points in time rather than just ‘the most recent backup’
They also have a REST API that can be utilized for automation, which in today’s world is a requirement for most businesses because of their standardization and automation work to get better quality and speed.
The VM Backup Installation and configuration
It is very easy to get started with Altaro VM Backup.
And once finished, you can start the management console to configure the backups and the repositories.
The console makes it very easy to find your way around and to configure advanced settings.
For the trial there are no limits, so you can test it with all your VMs for 30 days. You can also download the free Hyper-V Backup or VMware version, which lets you back up 2 VMs for free forever.
Altaro still has a license that is not bound to cores or CPUs and uses a per-host license instead!
With Azure Stack TP3, we’ve worked with customers to improve the product through numerous bug fixes, updates, and deployment reliability & compatibility improvements from TP2. With Azure Stack TP3 customers can:
Deploy with ADFS for disconnected scenarios
Start using Azure Virtual Machine Scale Sets for scale out workloads
Syndicate content from the Azure Marketplace to make available in Azure Stack
Use Azure D-Series VM sizes
Deploy and create templates with Temp Disks that are consistent with Azure
Take comfort in the enhanced security of an isolated administrator portal
Take advantage of improvements to IaaS and PaaS functionality
Use enhanced infrastructure management functionality, such as improved alerting
Shortly after TP3, Azure Functions will be available to run on TP3, followed by Blockchain, Cloud Foundry, and Mesos templates. Continuous innovation will be delivered to Azure Stack up to general availability and beyond. TP3 is the final planned major Technical Preview before Azure Stack integrated systems will be available for order in mid-CY17.