Microsoft has finally updated the misleading documentation on supported guest OSes within Hyper-V. This is quite important, as some people tend to get stuck on small details; as my good friend Didier wrote on his blog, Hyper-V supports guest OS n+1, although that now gets a bit altered with the semi-annual releases.
The docs page is now updated and shows the following:
Automatic VM activation heads-up!
There is, though, one small-or-big thing to consider if you have an environment with Hyper-V servers and use AVMA. If you plan to deploy Server 2019 guest VMs, there is no way to get them auto-activated on a 2016 Hyper-V host.
If you are a bit more old-fashioned and use a KMS, you will just need a KMS server newer than 2012, as the key for 2019 requires the KMS host to run at least Windows Server 2012 R2!
My homelab environment consists of two Intel NUCs; I have been playing around with the Insider previews of Server 2019 on one of them, while the other was running Server 2016.
As you might know, there is a bit of a hassle with the NIC drivers on the server versions of Windows and the Intel NUCs, so some steps are needed to get them working. I had some issues where the NIC failed during an in-place upgrade between preview versions of 2019, and as I do not have a KVM I had to move the NUC to a monitor and fix it. To get the drivers in I had to set the server into test mode:
bcdedit /set LOADOPTIONS DISABLE_INTEGRITY_CHECKS
bcdedit /set TESTSIGNING ON
bcdedit /set NOINTEGRITYCHECKS ON
After doing this and rebooting the server, I could update the NIC drivers that I had already modified as per this blog post.
I wanted to test an in-place upgrade of my 2016 server without moving it from the closet, so as a precaution I switched to test mode first and then started the update…
After the upgrade went through successfully, I changed back out of test mode:
bcdedit /set LOADOPTIONS ENABLE_INTEGRITY_CHECKS
bcdedit /set TESTSIGNING OFF
bcdedit /set NOINTEGRITYCHECKS OFF
I had a small issue with Windows Update after the upgrade: it would not finish installing the 2018-12 CU… To mitigate this I ran the command-line System File Checker tool, SFC, with the /scannow parameter, and I also did a DISM repair. After these two had run successfully I could continue with Windows Update!
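For reference, the two repair steps can be run from an elevated prompt like this (the DISM switches below are the standard component-store repair sequence, not necessarily the exact command I ran):

```powershell
# Scan and repair protected system files
sfc /scannow

# Repair the Windows component store
Dism /Online /Cleanup-Image /RestoreHealth
```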
Azure Stack has several resource providers that can be used to bring value to the stack.
We have an Azure Stack in the company and have had an early-adopter experience. To make the most of our testing and our offering, we added the App Service, MySQL and SQL RPs after deployment.
In our multi-tenant, usage-registered Stack we noticed that our own bill was a bit high, and realized that the shared workers were behind this; during a couple of Azure Stack workshops we had scaled them to 12 instances for labs. The shared workers are billed to the registration CSP subscription, while the dedicated workers are billed to the customers' subscriptions when they are in use in an app plan. As a stack provider, though, you can have several dedicated workers running and prepared without any extra cost.
If you want to add or remove worker instances, this can be done with PowerShell or through the portal:
There are some caveats here, described in the App Service documentation: if you want to give user subscriptions access to serverless functions on a consumption plan, you have to have enough shared workers available… read more here.
Carefully monitor the capacity and usage of your add-on RPs so that the customer experience is always great!
I am right now digesting a packed and exciting week, and wanted to share the top 10 sessions of all those I attended in person. We are all different and have different tastes; here you can see what I picked out.
I had around 50 sessions in my schedule that I did not manage to attend, but I will try to find time to watch the recordings, and I will do a follow-up post later with the top sessions from all of Ignite!
GS001 – An end-to-end tour of the Microsoft developer platform
Although I am more of an operations guy, I skipped the infrastructure foundation session with Corey Sanders, and I am happy I went for Scott Hanselman's instead. I now have an insight into what the total developer experience within the Microsoft space looks like; it was a great session, and I can highly recommend that you check out the recording.
DT1001 – Voices from the top: Leaders get real on building inclusive work cultures
This year the Ignite conference had a track on diversity and tech, and on the first day I attended this lunch session; for a nerdy techie, attending a non-technical session was great! It gave me some real insight into how we, as an industry that is still very much male-dominated, need to work on our cultural values to be able to bring more people in.
BRK2215 – Real World architecture considerations for Azure: how to succeed and what to avoid
This first session on Tuesday morning gave some insights from the FastTrack team on best practices when setting up environments within Azure.
BRK3062 – Architecting Security and Governance Across your Azure Subscriptions
In this session we got a high-level overview of governance work within Azure and very valuable insights into the newly announced releases regarding policies, Resource Graph, cost, management groups and blueprints. We also got an insight into the in-guest VM policy work that Michael Greene and the PowerShell team have enabled.
BRK2269 – WinOps: Lessons learned from Enterprise devops with Microsoft technologies
Awesome session on how to apply DevOps thinking within the Microsoft technologies and IT pro space, delivered by Stephen Thair of DevOpsGuys. He had some great, valuable points and case studies where DevOps principles had been successfully implemented.
DT1003 – Service degraded: Recognizing mental burnout in your colleagues and yourself
Another great session from the diversity track, this time with Sonia Cuff (Azure Ops Advocate) presenting on the health topic and how to get control of the signs of burnout in yourself or your colleagues. She did a splendid job, and I really valued being here rather than in Snover's PowerShell session, which collided with this one in the schedule (some sessions can be saved for later via the recordings).
BRK1094 – Accelerating your IT career
Thursday morning, and the room was almost full for the one and only Ned Pyle, talking about how to survive in the changing landscape and how to view and work on your career. He had several tips on how to succeed, including his four pillars of success: discipline, being a technical powerhouse, communication, and legacy.
BRK2362 – The SRE role: An unexpected journey
I had not heard Jared speak before, but this was a great session on his and Microsoft's path to adopting the SRE (Site Reliability Engineering) practices that Facebook, Netflix and Google have been using for years. There were fun analogies, and his story of going from server hugger to cattle farmer made most of the crowd recognize themselves.
BRK3085 – Deep dive into Implementing governance at scale through Azure Policy
Last session before the Microsoft Ignite celebration party, but the room was full and everyone wanted more insight into the work on Azure Policy, Azure Resource Graph and Azure Blueprints. A deep dive always gives that extra layer of understanding, and this time was no exception; having the product team present their own stuff gives that extra nudge.
DT1005 – In conversation – raising the next generation of IT pros as diversity and inclusion champions
On the final day I listened to this panel, which covered an interesting topic; I can truly recommend that you watch the recording. Among the people on stage was Donovan Brown, talking about the struggle he faces as a people manager to hire the right staff.
On the 5th of September I am going to host a webinar together with Savision about moving to Azure:
“Don't be a dinosaur: how to stay on top of your IT infrastructure when transitioning into Azure”
Change is happening incredibly fast in today's IT delivery, and for a service provider it is about embracing the new or risking going the way of the T-Rex. In this webinar we review how to automate and create standardized Windows Server solutions in Azure where management and monitoring are included as a service, and how interaction with customers through Microsoft Teams and bots speeds up change cases and provides quick feedback. You can see status and costs 24/7, as well as order new services that automatically end up under the NOC when they reach production status.
Please sign up on Savision's web page and we will have a great time together uncovering some very cool things in the Azure space!
To take care of the backups in my lab environment, I have tested the new Altaro VM Backup and updated to version 7.6, which now has some really nice features:
Augmented Inline Deduplication
Continuous Data Protection (CDP)
Offsite Backup Replication
Grandfather-Father-Son Archiving (GFS)
Cloud Backup to Azure
Setup and updating
In the console there is a check-for-updates button, and when pressing it I get redirected to the download page at Altaro.
Installing the update keeps the settings and license, so no fuss there!
Configuration is really easy, and getting the backup up and running was a breeze. Altaro has made it easy with good info and guidance on the schedules and settings needed!
After installing the main component, I see in the console that the agent on my other Hyper-V server also needs to be updated to work properly.
Offsite backup and restore
One really nice feature is Cloud Backup, using a storage account in Azure as an offsite location where the backups can be sent. I can set the storage account to the cool tier and thus save a bit on the cost!
Start by creating a storage account in Azure in a preferred region. As I already have a backup onsite, I do not need geo-replication within Azure as well.
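If you prefer scripting this step, a suitable storage account can be created with the Az PowerShell module — a minimal sketch, with made-up resource group, account name and region:

```powershell
# Locally-redundant (no geo-replication) StorageV2 account with the cool access tier
New-AzResourceGroup -Name 'rg-altaro-offsite' -Location 'westeurope'
New-AzStorageAccount -ResourceGroupName 'rg-altaro-offsite' `
    -Name 'altarooffsite01' -Location 'westeurope' `
    -SkuName Standard_LRS -Kind StorageV2 -AccessTier Cool
```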
After the setup in Azure, you need to configure Altaro Backup and add an offsite location.
Once I have set up the offsite storage, I can add a backup to be replicated there. A restore from the Azure storage account took about 7 minutes for a 12 GB VM; I have a 250 Mbit broadband connection in my lab, and the other side will probably not be the limiting factor 🙂
Another great feature that can be configured is the Altaro Cloud Management Console, which makes it easy to stay on top of your backups; you can reach it from anywhere with a browser!
To set up backup reporting via email, I can use an Office 365 account and smtp.office365.com.
Once set up, I can expect a backup report every morning at 8 AM.
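To verify the SMTP settings before relying on the scheduled report, a quick test mail can be sent from PowerShell — the recipient address below is a placeholder:

```powershell
# Send a test mail through Office 365 (port 587 with STARTTLS)
$cred = Get-Credential   # the Office 365 account used as sender
Send-MailMessage -SmtpServer 'smtp.office365.com' -Port 587 -UseSsl `
    -From $cred.UserName -To 'admin@example.com' `
    -Subject 'SMTP test' -Body 'Relay via Office 365 works' `
    -Credential $cred
```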
Getting the Altaro Backup solution up and running is really straightforward and easy! I have not yet tested it in a large-scale environment, but it seems really great and, as described above, has some very good features!
For those of us with an automation approach, the Altaro REST API can be used to check on and manage backups in larger environments. For an MSP, having BaaS as a competitive offering is crucial, and Altaro Backup licensing in an MSP scenario is based on the number of VMs instead of CPUs.
I urge you to take it for a test run and see for yourselves!
Build test environments by using the Azure Stack Development Kit (ASDK).
This objective may include but is not limited to: use PowerShell commands; install updated ASDK; troubleshoot failed installs; post-deployment registration
Configure DNS for data center integration.
This objective may include but is not limited to: configure external DNS name resolution from within Azure Stack; configure Azure Stack DNS names from outside Azure Stack
Configure connectivity for data center integration.
This objective may include but is not limited to: manage firewall ports needed at the edge; configure connectivity to the data center; install and renew certificates for public endpoints
Connect to and perform API-based administration on Azure Stack.
This objective may include but is not limited to: connect to the stack by using PowerShell; configure client certificates; configure firewall to support remote administration; establish RBAC roles for the Azure Stack fabric; create subscriptions for end users
Configure and administer the App Service resource provider.
This objective may include but is not limited to: configure system; configure source control; configure worker tiers; configure subscription quotas; scale worker tiers and App Service infrastructure roles; add custom software; configure Azure Stack networking security
Configure and administer database resource providers.
This objective may include but is not limited to: configure and administer the SQL adapter; configure and administer the MySQL adapter; set up SKUs; set up additional hosting capacity
Configure and administer IaaS services.
This objective may include but is not limited to: implement virtual machine images; prepare Linux and Windows images; prepare a custom image; upload an image
This objective may include but is not limited to: create quotas; configure plans; configure offers; configure delegated offers; create add-on plans
This objective may include but is not limited to: add new tenants; remove tenants; manage authentication and authorization; establish RBAC roles for the tenant space
Manage the Azure Marketplace.
This objective may include but is not limited to: enable Azure Marketplace on Azure Stack; plan new packages; create and publish new packages; download Azure Marketplace items
Enable DevOps for tenants.
This objective may include but is not limited to: enable version control for tenants; manage ARM templates; deploy ARM templates; debug ARM templates; use Microsoft Visual Studio Team Services to connect to Azure Stack; use continuous integration and continuous deployment to automate a pipeline that targets Azure Stack
Plan and implement a backup-recovery and a disaster-recovery solution.
This objective may include but is not limited to: back up Azure Stack infrastructure services; perform cloud recovery of Azure Stack; replicate and fail over IaaS virtual machines to Azure; back up and restore PaaS resource data; back up and restore user IaaS virtual machine guest OS, disks, volumes, and apps
Manage and monitor capacity, performance, updates, and alerts.
This objective may include but is not limited to: manage storage; monitor available storage; integrate existing monitoring services; manage public IP address ranges; monitor infrastructure component health; monitor Azure Stack memory, public IP addresses, and storage tenant consumption; apply updates; update system firmware; review and react to alerts
Manage usage reporting.
This objective may include but is not limited to: provide access to the usage database; test usage by using the ASDK; collect the usage data by using the Provider Usage API and the Tenant Usage API; investigate the usage time versus the reported time
On the 26th of June, Microsoft will hold a half-day summit on Windows Server that you do not want to miss!
The agenda will have four different tracks:
Hybrid: We’ll cover how you can run Windows Server workloads both on-premises and in Azure, as well as show you how Azure services can be used to manage Windows Server workloads running in the cloud or on-premises.
Security: We know security is top of mind for many of you, and we have tons of great new and improved security features that we can't wait to show you to help elevate your security posture.
Application Platform: Containers are changing the way developers and operations teams run applications today. In this track we’ll share what’s new in Windows Server to support the modernization of applications running on-premises or in Azure.
Hyper-converged Infrastructure: This is the next big thing in IT, and Windows Server 2019 brings amazing new capabilities building on Windows Server 2016. Join this track to learn how to bring your on-premises infrastructure to the next level.
The stated recommendation is that virtual machines running on either VMware or Hyper-V should be configured with a High Performance power plan.
Looking at Microsoft Azure VMs, they are set to High Performance by default:
In my Hyper-V lab you can see that I have Balanced set, and when using the power plan PowerShell module I created you can also change it to High Performance.
If you save the following PowerShell functions in a folder under C:\Program Files\WindowsPowerShell\Modules\PowerPlan, you can import the module as in the screenshot and use it on either a local or a remote server.
As WMI has some issues on Server Core, I use powercfg.exe to get the data in this function.
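As an illustration of the approach (not the exact module code), a pair of functions wrapping powercfg.exe for the local server could look like this — the function names and the parsing are my own sketch:

```powershell
# Sketch of a PowerPlan module built on powercfg.exe (local server only)
function Get-PowerPlan {
    # powercfg /getactivescheme prints e.g.
    # "Power Scheme GUID: 381b4222-... (Balanced)"
    $out = powercfg.exe /getactivescheme
    if ($out -match 'GUID:\s+([0-9a-fA-F-]+)\s+\((.+)\)') {
        [pscustomobject]@{ Guid = $Matches[1]; Name = $Matches[2] }
    }
}

function Set-PowerPlan {
    param([Parameter(Mandatory)][string]$Guid)
    powercfg.exe /setactive $Guid
}

# On a default install, High Performance has this well-known scheme GUID
Set-PowerPlan -Guid '8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c'
```

For a remote server, the same calls can be wrapped in Invoke-Command.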