After installing the Azure Stack Development Kit I added the SQL RP and also wanted to add the App Service resource provider for a better dev experience!
I have a relatively ordinary HPE DL380 G9 box with 128 GB of RAM and 2 CPUs, so it should be fine, but there were some issues that I wanted to document to help others. This will probably be fixed in a later release of the App Service install pack though.
I downloaded the bits and ran the App Service deployment.
After filling this out the deployment started, but after a few hours it failed during the deploy step. I tried Retry a couple of times without luck, and closing the wizard makes you lose the deploy state, so you need to rerun the whole deployment! When redeploying you have to delete the resource group APPSERVICE-LOCAL (or whatever you named it) and also go into the SQL server you entered in the wizard and remove the App Service databases!!
How did I manage to get it working, though? I got some help from Andrew at Microsoft, who works with the Stack team, and he gave me guidance on how to get it into a good state! Apparently App Service applies all updates during deploy, and to be more successful the recommendation was to use a Windows Server 2016 image patched with everything released up until last Tuesday, i.e. updated to the latest CU. My image came from marketplace syndication with Azure and did not have those patches when I ran this test, so I threw it away and reran the image creation script with the -IncludeLatestCU parameter.
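The image recreation step looks roughly like this; the script name and parameters follow the App Service RP helper scripts of that release, and the ISO path is an example from my environment:

```powershell
# Sketch: recreate the Windows Server 2016 platform image with the latest CU slipstreamed.
# The ISO path is an example; run from the Azure Stack tools folder containing the script.
.\New-Server2016VMImage.ps1 `
    -ISOPath 'D:\ISO\WindowsServer2016.iso' `
    -IncludeLatestCU
```

This replaces the unpatched marketplace image with one that the App Service installer can update successfully.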
Rerunning the wizard, even though it now had an image with the latest and greatest patches, got stuck on “App Service Deploy Failed”. I went into the CN0-VM and opened the MMC to check the state of the different servers in the App Service.
I also hit the repair link, and when all of them said “Ready” I added a new 0.status file (which I got from Andrew) into the custom script folder on the CN0-VM. The easiest way to do that was with the lovely PowerShell Direct that is part of Windows Server 2016!
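Copying the file over PowerShell Direct can be sketched like this; the VM name comes from the post, while the source and destination folders are assumptions from my environment:

```powershell
# Sketch: copy the 0.status file into the CN0-VM over PowerShell Direct.
# The source path and the custom script folder are examples; adjust for your setup.
$cred = Get-Credential
$session = New-PSSession -VMName 'CN0-VM' -Credential $cred
Copy-Item -Path 'C:\Temp\0.status' -Destination 'C:\CustomScript\' -ToSession $session
Remove-PSSession $session
```

`Copy-Item -ToSession` works over a PowerShell Direct session just as it does over normal remoting, so no network connectivity into the VM is required.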
I then went back into the wizard, hit Retry, and this time it continued to the finish and my deployment was successful!
Yesterday, during the start of the Microsoft partner conference Inspire, Azure Stack was released as GA and was also made available for download in the DevKit version!
Of course I had to test it, and thanks to the new installation PowerShell script with a GUI it is easier than ever to start the deployment… First I downloaded the kit (it helps to have a 10 Gbit internet connection at the datacenter 😉) and then downloaded the PowerShell script.
Once the bits had been extracted I could use the wizard to prepare the unattend file and the boot-from-VHD entry for cloudbuilder.vhdx, and then reboot the server to continue the deployment!
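Under the hood, the boot-from-VHD preparation corresponds roughly to mounting the VHDX and adding a boot entry for it. A manual sketch, with paths as assumptions (the installer wizard does all of this for you):

```powershell
# Sketch: manual equivalent of the boot-from-VHD step the wizard performs.
# The VHDX path is an example; the wizard handles this automatically.
$mount = Mount-VHD -Path 'C:\CloudBuilder\cloudbuilder.vhdx' -Passthru
$driveLetter = ($mount | Get-Disk | Get-Partition |
    Where-Object DriveLetter).DriveLetter
bcdboot "$($driveLetter):\Windows"   # add a boot entry for the OS inside the VHDX
```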
After the reboot I could start the same wizard to kick off the deployment of the Stack.
There were issues with the deploy script yesterday that Ruud and I reported, and they were quickly fixed by Marc van Eijk; the problem was that if you added a VLAN or a DNS server the deploy failed.
I also found an issue: since, in my case, the firewall in front of my Stack did not allow external NTP sources, I ended up with a failed deploy, because the installer requires an NTP sync before continuing. I had to configure an internal NTP source, and then the deploy succeeded!
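A quick way to verify that your NTP source is actually reachable before (re)deploying; the time server name here is just an example:

```powershell
# Sketch: check NTP reachability before starting the deploy.
# time.windows.com is an example; use the NTP source you gave the installer.
w32tm /stripchart /computer:time.windows.com /dataonly /samples:3
```

If the offsets come back instead of timeouts, the installer's NTP sync step should pass.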
The deployment took about 4 hours, and once it completed I could fire up a browser and connect to the portal!
UR3 for System Center 2016 has started to surface now, and in the Windows Update Catalog I have found the VMM and SCOM 2016 UR3 packages. There are quite a few fixes, and if you have a dev or staging environment you should start evaluating this new rollup!
SCSM 2016 UR3 can be found in the Download Center at this link
As is well known, we should use Windows Server 2016 first and foremost, and as often as possible try not to use the “Desktop Experience” unless it is really necessary! Of course it makes total sense if you are deploying an RDS solution, but if you deploy an AD DC and file servers then naaaee….
In Azure it is not just called Windows Server 2016; searching in the marketplace you can see that “Core” is the denominator there.
And it kind of makes sense that the server without a GUI can and should use the Small disk option that comes with the new managed disks, so you have to dig a bit deeper and search for “small”, and then you find those:
Deploying with the CLI or PowerShell with a template needs the right SKU to get Core:
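You can list the available SKUs to find the Core and smalldisk variants; this uses the AzureRM module of the time, and the location is an example:

```powershell
# Sketch: list the Windows Server SKUs to find the Core/smalldisk variants.
# The location is an example; look for names like '2016-Datacenter-Server-Core'.
Get-AzureRmVMImageSku -Location 'westeurope' `
    -PublisherName 'MicrosoftWindowsServer' -Offer 'WindowsServer' |
    Where-Object Skus -like '*Core*' |
    Select-Object Skus
```

The SKU string you pick here is what goes into the `imageReference` of your template or into the PowerShell/CLI deployment parameters.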
Unfortunately Azure puts “Core” in the name; it should instead have put “Desktop Experience” on the other one, so the naming would be consistent with regular OS deployments in a datacenter..
I have been trying out Altaro VM Backup in my lab. It is a backup solution that has been around for quite a while and later also got support for VMware, which was not part of the product at the start! Quite a few companies have both Hyper-V and VMware, and having different backup solutions is not viable and places a burden on the backup admins!
They have several very nice features:
Backup and Replication features
Drastically reduce backup storage requirements on both local and offsite locations, and therefore significantly speed up backups with Altaro’s unique Augmented Inline Deduplication process
Back up live VMs by leveraging Microsoft VSS with Zero downtime
Full support for Cluster Shared Volumes & VMware vCenter
Offsite Backup Replication for disaster recovery protection
Compression and military grade Encryption
Schedule backups the way you want them
Specify backup retention policies for individual VMs
Back up VMs to multiple backup locations
Restore & Recovery features
Instantly boot any VM version from the backup location without affecting backup integrity.
Browse through your Exchange VM backup’s file system and restore individual emails
Granular Restore Options for full VM or individual files or emails
Retrieve individual files directly from your VM backups with a few clicks.
Fast OnePass Restores
Restore an individual or a group of VMs to a different host
Restore from multiple points in time rather than just ‘the most recent backup’
They also have a REST API that can be utilized for automation, which in today's world is a requirement for most businesses because of their standardisation and automation work to achieve better quality and speed.
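As a hedged sketch of what such automation could look like: the port, route, and auth header below are hypothetical placeholders, not Altaro's documented API, so check their REST API reference for the real endpoints.

```powershell
# Hypothetical sketch only: the endpoint, route and header shape are placeholders,
# not Altaro's documented API; consult their REST API reference for real routes.
$baseUri = 'http://localhost:35113/api'   # assumed local management endpoint
$token   = '<session token>'              # assumed token from a prior login call
Invoke-RestMethod -Uri "$baseUri/vms" -Headers @{ Authorization = $token }
```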
VM Backup installation and configuration
It is very easy to get started with Altaro VM Backup.
And once finished you can start the management console to configure the backups and also the repositories.
The console is very easy to find your way around in and configure advanced settings
For the trial there are no limits, so you can test it with all your VMs for 30 days; after that the trial turns into the free version, where you can continue to protect 2 VMs forever.
Altaro still has a license that is not bound to cores or CPUs; it uses a host license instead!
This last weekend there has been quite a buzz about the ransomware that has been spreading like the plague, thanks to the fact that there are still so many unpatched servers and clients running Windows from the stone age. We can also discuss for a while why, in Windows 10 and Windows Server 2016, the SMB1 protocol is still enabled and needs to be turned off. One alternative could have been to require you to enable this 30-year-old protocol if you want to use it, thus knowing the risk and taking it into account when deciding on the legacy track.
One way of being safe is of course to turn off the computer, but how long does that work?
In my lab environment I am lucky enough to only run Windows Server 2012 R2 and above. I need to get the computers from AD and also remove the FS-SMB1 feature. The quickest way is to just disable the SMB1 protocol; you know there are users in the real world who kind of don't want servers restarted at will, and removing the feature requires a reboot… So first disable the protocol now, and then remove the feature when it is time for the magic reboot.
# Checking for servers in my AD lab environment, returning those responding on SMB/445
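Only the comment survived here, so this is a reconstruction of the idea: pull the servers from AD, keep the ones answering on port 445, and disable SMB1 on them remotely. The AD filter is an assumption; adjust it for your environment:

```powershell
# Sketch: find AD servers answering on SMB/445 and disable the SMB1 protocol.
# The OS filter is an assumption; narrow it to your own OU or naming convention.
Import-Module ActiveDirectory
$servers = Get-ADComputer -Filter 'OperatingSystem -like "*Server*"' |
    Where-Object {
        (Test-NetConnection -ComputerName $_.DNSHostName -Port 445 `
            -WarningAction SilentlyContinue).TcpTestSucceeded
    }

# Disable the protocol now; remove the FS-SMB1 feature later, at the reboot window.
Invoke-Command -ComputerName $servers.DNSHostName -ScriptBlock {
    Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force
}
```

Disabling via `Set-SmbServerConfiguration` takes effect without a reboot, which is exactly why it is the quick first step before the feature removal.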
So in my home lab I had a DC going out of time (it was a Technical Preview of 2016) that needed to be replaced, and I wanted to do it the right way: not logging in to the console/GUI on the actual DC even once during the removal and the deployment of the new one!
So first I had to decommission it as a DC, and then I created a new image from the media.
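The demotion can be done remotely as well, keeping to the no-console rule; a sketch assuming the old DC answers to PowerShell remoting (the computer name is an example):

```powershell
# Sketch: demote the old DC remotely instead of logging on to its console.
# 'olddc01' is an example name; the password becomes the server's local admin password.
$localAdminPwd = Read-Host -AsSecureString -Prompt 'New local admin password'
Invoke-Command -ComputerName 'olddc01' -ScriptBlock {
    Uninstall-ADDSDomainController -LocalAdministratorPassword $using:localAdminPwd -Force
}
```

The server reboots as a member server afterwards, and you can then remove it from the domain and decommission the VM.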
After this I started the new DC VM. To use PowerShell Direct I had to activate the “Guest Service Interface”. One cool thing when using PowerShell Direct is that I can set the IP address on the NIC within the VM without getting disconnected, as I would have been with an ordinary PowerShell remoting session!
PS C:\> Enable-VMIntegrationService -VMName dc01 -Name "Guest Service Interface"
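Setting the IP from the host over PowerShell Direct then looks roughly like this; the VM name comes from the post, while the addresses and interface alias are examples from my lab:

```powershell
# Sketch: configure the NIC inside the dc01 VM over PowerShell Direct.
# IP addresses and the interface alias are examples; adjust for your network.
$cred = Get-Credential
Invoke-Command -VMName 'dc01' -Credential $cred -ScriptBlock {
    New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress '192.168.1.10' `
        -PrefixLength 24 -DefaultGateway '192.168.1.1'
    Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses '192.168.1.10'
}
```

Because PowerShell Direct runs over the VM bus rather than the network, the session survives the IP change, which is exactly the point made above.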