Azure Portal App

There is a new preview of an Azure Portal App that lets you use the Azure Portal without any other browser available. This is great if your main go-to jumpbox is a Windows Server: by default you cannot run the Edge browser on Windows Server, so you are stuck with Internet Explorer, and that seriously legacy browser alone is enough to make you go bananas.

First you go to the download page.

Once downloaded and installed, you can sign in with your Azure account and start managing your cloud resources from the app.

As you can see it is like any other browser experience of the Azure Portal, and you can of course also start a Cloud Shell.

Of course, installing Chrome or Firefox also works as an alternative, although some companies have restrictions on third-party software being installed in their server environments…

Windows Admin Center 1903

With the preview of Windows Admin Center 1903 that is now available to insiders, you get some new extensions that make administering AD, DHCP and DNS even easier than before.

Once upgraded, I go into the portal and find the new extensions there.
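If you prefer to script the installation, Windows Admin Center ships with an ExtensionTools PowerShell module. A minimal sketch, assuming a default gateway on localhost and the standard extension IDs for the AD, DHCP and DNS tools (verify both in your installation):

```powershell
# Load the ExtensionTools module that ships with Windows Admin Center
Import-Module "$env:ProgramFiles\Windows Admin Center\PowerShell\Modules\ExtensionTools"

$gateway = 'https://localhost:6516'   # assumption: default gateway endpoint

# List available extensions, then install the AD, DHCP and DNS tools
Get-Extension -GatewayEndpoint $gateway | Select-Object id, version, status

'msft.sme.active-directory', 'msft.sme.dhcp', 'msft.sme.dns' |
    ForEach-Object { Install-Extension -GatewayEndpoint $gateway -ExtensionId $_ }
```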

And after installing them I can go to a domain controller and instantly administer user and computer objects.

So please replace your old domain controllers, and let go of the GUI option when you do!

Happy playing!

Azure Stack HCI

Today Microsoft announced Azure Stack HCI, and the family of Azure, Azure Stack and Azure Stack HCI is now complete to take care of all your company's different needs.

So Azure Stack HCI is the new name for the hyper-converged solution previously called WSSD, and hardware companies certify their solutions to get on the list for Azure Stack HCI.

Azure Stack HCI solutions

There is a hybrid event on the 28th that you can sign up for to learn more in that online show.

Or listen to the recording with Jeff and Vijay, where they describe Azure Stack HCI in more detail.

New challenges in the cloud

The time has come to start a new part of my working life! I am leaving Basefarm and the Lead Architect role I had there for a new job as Chief Cloud Architect at Evry.

During my time at Basefarm I worked intensively with Azure Stack and Azure offerings as well as on-premises solutions. Basefarm was one of the Azure Stack early adopters, and the learnings from that journey have been challenging and interesting, both regarding the appliance itself and the organisational adoption.

My key areas as Chief Cloud Architect will be public cloud solutions and helping my team at Evry and our customers succeed in the transformation to the public cloud.

I will be focusing on amazing solutions based on Azure and Azure Stack, but I will also work on AWS and Google Cloud solutions.

I will have the great pleasure of working with Marius Sandbu, who is also a Microsoft Most Valuable Professional.

Bin file left on Hyper-V VM

We have a Hyper-V cluster that was upgraded from 2012 R2 to 2016, and I wanted to free up some space on the volume it resides on, so I started looking at the settings of the VMs. As I concluded in another blog post, there can be a discussion about whether we really need the "Save VM state" setting for VMs in a Hyper-V cluster. The setting is viable for VMs on a standalone host, where it saves the state of the VM during a host reboot for maintenance.

So what is the issue here? Well, some of the VMs in this cluster have had their VM configuration version upgraded to 8.0, but some were still on 5.0. If I check in SCVMM I can find the VMs configured with "Save VM" and the amount of storage they consume.

But when checking the storage where the VMs reside, I noticed that the value above did not match the actual size. With the following PowerShell I check for files ending in .bin (the 2012 R2 and older format) and .vmrs (the 2016+ format).
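The original snippet is not shown in the post, but a minimal sketch could look like this (the cluster storage path is an assumption, adjust it to your volume):

```powershell
# List .bin (2012 R2 and older) and .vmrs (2016+) saved-state files on the volume,
# largest first, so mismatches against the reported size stand out
Get-ChildItem -Path 'C:\ClusterStorage\Volume1' -Recurse -Include *.bin, *.vmrs |
    Select-Object FullName,
                  @{Name = 'SizeGB'; Expression = { [math]::Round($_.Length / 1GB, 2) }} |
    Sort-Object SizeGB -Descending
```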

Apparently some of the VMs that were upgraded from 5.0 still have their .bin file left in the subfolder.

To fix this I first made a report of the files and then deleted them; as the file is not in use by an 8.0 VM, I could do it while the VM was online.
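A hedged sketch of that report-then-delete pass, assuming each VM's folder only contains that VM's files (review the output with -WhatIf before deleting anything):

```powershell
# For every VM already on configuration version 8.0, find leftover .bin files
# and remove them. The .bin file is not used by 8.0 VMs, so this is safe online.
Get-VM | Where-Object { $_.Version -eq '8.0' } | ForEach-Object {
    Get-ChildItem -Path $_.Path -Recurse -Filter *.bin | ForEach-Object {
        Write-Output "Removing $($_.FullName)"
        Remove-Item -Path $_.FullName -WhatIf   # drop -WhatIf to actually delete
    }
}
```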

Running the following on your VMs disables "Save VM", and if you then upgrade to VM version 8.0 you will not get the duplicate-file issue 🙂
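The command itself is missing from the post; with the Hyper-V module the equivalent would be:

```powershell
# Set the automatic stop action to ShutDown instead of Save, so no saved-state
# file (.bin/.vmrs) is kept reserved for the VM
Get-VM | Set-VM -AutomaticStopAction ShutDown
```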

Happy Hyper-V-ing out there!

How to Test-AzureStack

Running and operating an Azure Stack, either a DevKit or an integrated system, can be a hurdle, and sometimes you need to know the state of the stamp when the portal does not show everything.

Connecting a session to the emergency recovery console (the privileged endpoint) and kicking off a Test-AzureStack can give you more insight into the state of the system.
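A minimal sketch of that connection, assuming a privileged endpoint address of 10.0.0.1 and the CloudAdmin account (both vary per stamp):

```powershell
# Connect to the privileged endpoint (PEP) and run a basic validation
$cred = Get-Credential -Message 'CloudAdmin credential'   # e.g. AzureStack\CloudAdmin
$pep  = New-PSSession -ComputerName '10.0.0.1' `
                      -ConfigurationName 'PrivilegedEndpoint' `
                      -Credential $cred
Invoke-Command -Session $pep -ScriptBlock { Test-AzureStack }
Remove-PSSession $pep
```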


If you want to know more about the state, you should look at the parameters of Test-AzureStack, because there are some hidden gems there! If you run with -ServiceAdminCredential you will get more information and see what actually works on the stamp with regard to deployment and usage of the base RPs.

If you do not want to run all Test-AzureStack tests, you can specify -Include AzsScenarios to run only the Operator and User scenarios and skip all the other tests for fabric, storage, etc. There is another parameter, -Timeout, that can be used if you need more time for the test to run.
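Combined, a scenario-only run could look like the sketch below; the PEP address and the timeout value are assumptions, so check Get-Help Test-AzureStack inside the session for the exact parameters on your build:

```powershell
# Run only the operator/user scenario tests from a privileged endpoint session
$cred = Get-Credential -Message 'CloudAdmin credential'
$pep  = New-PSSession -ComputerName '10.0.0.1' `
                      -ConfigurationName 'PrivilegedEndpoint' `
                      -Credential $cred
Invoke-Command -Session $pep -ScriptBlock {
    Test-AzureStack -Include AzsScenarios `
                    -ServiceAdminCredential (Get-Credential) `
                    -Timeout 7200
}
Remove-PSSession $pep
```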

A successful Test-AzureStack -Include AzsScenarios

One thing to consider is that it is cumbersome to use an MFA-enabled service admin credential with Test-AzureStack, so you may have to set up a separate account for this test.

Windows Server 2019 on Hyper-V 2016

Microsoft has finally updated the misleading documentation on supported guest OSes in Hyper-V. This is quite important, as some people tend to get stuck on small details, and as my good friend Didier wrote on his blog, Hyper-V supports guest OS n+1, although that now gets a bit altered with the semi-annual releases.

old doc page

Now the docs page is updated and shows the following:

updated docs page

Automatic VM activation heads-up!

There is, though, one thing, small or big, that needs to be considered if you have an environment with Hyper-V servers and use AVMA (Automatic Virtual Machine Activation): if you plan to deploy Server 2019 guest VMs, there is no way to get them auto-activated on a 2016 Hyper-V host.

If you are a bit more old-fashioned and use KMS, you will just need a new enough KMS server, as the key for 2019 requires the KMS to be hosted on at least Windows Server 2012 R2!

Upgrading my homelab to Server 2019

My homelab environment consists of two Intel NUCs; I have been playing around with the Insider previews of Server 2019 on one of them, while the other was running Server 2016.

As you might know, there is a bit of a hassle with the NIC drivers between the server versions of Windows and the Intel NUCs, so a few steps are needed to get them working. I had some issues where the NIC failed during in-place upgrades between preview versions of 2019, and as I do not have a KVM I had to move the NUC to a monitor to fix it. To get the drivers in, I had to put the server into test mode:
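The command is not shown in the post; enabling test mode is done with bcdedit from an elevated prompt, followed by a reboot:

```powershell
# Allow test-signed (modified) drivers to load; reboot afterwards for it to take effect
bcdedit /set testsigning on
```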

After I did this and rebooted the server, I could update the NIC drivers that I had already modified as per this blog post.

I wanted to test an in-place upgrade of my 2016 server without moving it from the closet, so as a precaution I changed to test mode first and then started the update…

After the upgrade went through successfully, I changed back out of test mode:
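Again the command is missing from the post; it is the counterpart of enabling test signing:

```powershell
# Turn test signing off again now that the drivers are in place; reboot afterwards
bcdedit /set testsigning off
```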

I had a small issue with Windows Update after the upgrade: it would not finish installing the 2018-12 CU. To mitigate this I ran the command-line System File Checker, sfc, with the /scannow parameter, and also did a DISM repair; after these two ran successfully I could continue with Windows Update!
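The two repair commands, run from an elevated prompt:

```powershell
# Verify and repair protected system files
sfc /scannow

# Repair the component store that Windows Update installs from
Dism /Online /Cleanup-Image /RestoreHealth
```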

Happy NUC-playing with 2019 🙂

Azure Stack App Service workers

Azure Stack has several Resource Providers that can be used to bring value to the stack.

We have an Azure Stack in the company and have had an early-adopter experience. To make the most of our testing and offering, we added the App Service, MySQL and SQL RPs after deployment.

In our multi-tenant, usage-registered Stack we noticed that our own bill was a bit high, and realized that the shared workers were behind it: during a couple of Azure Stack workshops we had scaled them to 12 instances for labs. The shared workers are billed to the registration CSP subscription; the dedicated workers are billed to the customers' subscriptions when in use in an app plan, but as a stack provider you can have several of them running and prepared without any extra cost.

If you want to add or remove worker instances, this can be done with PowerShell or through the admin portal:
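The snippet is not included in the post, but as a hypothetical sketch: the App Service worker tiers are backed by VM scale sets in the admin subscription, so one scripted way to change the instance count is to adjust the scale set capacity. The resource group and scale set names below are assumptions, check your environment:

```powershell
# Scale the shared worker tier to 2 instances (names are assumptions)
$rg   = 'AppService.local'
$name = 'SharedWorkers'

$vmss = Get-AzureRmVmss -ResourceGroupName $rg -VMScaleSetName $name
$vmss.Sku.Capacity = 2
Update-AzureRmVmss -ResourceGroupName $rg -Name $name -VirtualMachineScaleSet $vmss
```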

There are some caveats here, described in the App Service documentation: if you want to give user subscriptions access to serverless functions on a consumption plan, you have to have enough shared workers available… read more here

Carefully monitor the capacity and usage of your add-on RPs so that the customer experience is always great!