As you probably know, we should prefer Windows Server 2016 Core wherever possible and avoid the "Desktop Experience" unless it is really necessary! Of course the Desktop Experience makes total sense if you are deploying an RDS solution, but for an AD DC or file servers? Naaaee….
In Azure it is not simply called Windows Server 2016; searching the Marketplace you can see that "Core" is the distinguishing part of the name.
It also makes sense that a server without a GUI can and should use the Small disk option for the new managed disks, so you have to dig a bit deeper and search for "small", and then you find those:
Deploying with the CLI or PowerShell and a template requires the right SKU to get the Core edition:
Unfortunately Azure uses "Core" in the image name; it would have been more consistent with regular OS deployments in a datacenter to instead label the other image "Desktop Experience".
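To find the right SKU name, you can list the available Windows Server images first. A minimal sketch using the AzureRM PowerShell module (the location is just an example; adjust to your region):

```powershell
# List the Windows Server SKUs in the Marketplace so you can pick the
# Core one, e.g. "2016-Datacenter-Server-Core".
Login-AzureRmAccount
Get-AzureRmVMImageSku -Location "westeurope" `
    -PublisherName "MicrosoftWindowsServer" `
    -Offer "WindowsServer" |
    Select-Object Skus
```

The SKU string you get here is what goes into the `imageReference` section of your ARM template or CLI deployment.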
I have been trying out Altaro VM Backup in my lab. It is a backup solution that has been around for quite a while and has also gained support for VMware, which was not part of the product at the start! Quite a few companies run both Hyper-V and VMware, and having different backup solutions is not viable and places a burden on the backup admins!
They have several very nice features:
Backup and Replication features
Drastically reduce backup storage requirements on both local and offsite locations, and therefore significantly speed up backups with Altaro’s unique Augmented Inline Deduplication process
Back up live VMs by leveraging Microsoft VSS with Zero downtime
Full support for Cluster Shared Volumes & VMware vCenter
Offsite Backup Replication for disaster recovery protection
Compression and military-grade encryption
Schedule backups the way you want them
Specify backup retention policies for individual VMs
Back up VMs to multiple backup locations
Restore & Recovery features
Instantly boot any VM version from the backup location without affecting backup integrity.
Browse through your Exchange VM backup’s file system and restore individual emails
Granular Restore Options for full VM or individual files or emails
Retrieve individual files directly from your VM backups with a few clicks.
Fast OnePass Restores
Restore an individual or a group of VMs to a different host
Restore from multiple points in time rather than just ‘the most recent backup’
They also have a REST API that can be used for automation, which in today's world is a requirement for most businesses, given their standardisation and automation efforts to improve quality and speed.
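As a hedged illustration only — the port and route names below are made up, so check Altaro's own API documentation for the real endpoints — driving a local REST API from PowerShell typically looks like this:

```powershell
# Hypothetical sketch: base URI, port and routes are assumptions,
# not Altaro's documented API surface.
$base = "http://localhost:35113/api"
$session = Invoke-RestMethod -Method Post -Uri "$base/sessions/start"

# Use the returned session token on subsequent calls, e.g. to list VMs.
Invoke-RestMethod -Method Get -Uri "$base/vms" `
    -Headers @{ Token = $session.Token }
```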
The VM Backup Installation and configuration
It is very easy to get started with Altaro VM Backup.
Once finished, you can start the management console to configure the backups and the repositories.
The console is easy to find your way around in, even when configuring advanced settings.
The trial has no limits, so you can test it with all your VMs for 30 days; after that, the trial turns into the free version, which lets you keep protecting 2 VMs forever.
Altaro still has licensing that is not bound to cores or CPUs; it uses a per-host license instead!
With Azure Stack TP3, we’ve worked with customers to improve the product through numerous bug fixes, updates, and deployment reliability & compatibility improvements from TP2. With Azure Stack TP3 customers can:
Deploy with ADFS for disconnected scenarios
Start using Azure Virtual Machine Scale Sets for scale out workloads
Syndicate content from the Azure Marketplace to make available in Azure Stack
Use Azure D-Series VM sizes
Deploy and create templates with Temp Disks that are consistent with Azure
Take comfort in the enhanced security of an isolated administrator portal
Take advantage of improvements to IaaS and PaaS functionality
Use enhanced infrastructure management functionality, such as improved alerting
Shortly after TP3, Azure Functions will be available to run on TP3, followed by Blockchain, Cloud Foundry, and Mesos templates. Continuous innovation will be delivered to Azure Stack up to general availability and beyond. TP3 is the final planned major Technical Preview before Azure Stack integrated systems will be available for order in mid-CY17.
I am working with a customer on their path of upgrading to the 2016 versions. The first step was to make sure that the VMM 2012 R2 server was updated to the latest UR and that I could deploy guest VMs with 2016.
After updating VMM to UR11 I checked the list of operating systems.
To be able to see 2016 as a guest OS I had to add a hotfix, and that took some time. Whatever you do, do not cancel: wait, and wait, and the never-ending progress bar will eventually go away 😉 . And yes, you have to add one hotfix for the console and one for the VMM server!
Today I was at Microsoft Sweden and did a webinar on Windows Server 2016 Hyper-V and System Center VMM. This was the first of 5 webinars that Microsoft is running this week, focusing on the highlights of the new release.
The webinar was in Swedish and I will post a link to it when it becomes available!
I am a firm believer that Servers should not be used for the wrong things and thus I have now installed the new System Center VMM 2016 on a Windows Server 2016 Core.
In my home lab I do not have that many hosts, so I took the opportunity to install SQL 2016 on the same Core instance.
As I am installing SQL on the same machine, I had to enable the .NET 3.5/2.0 feature on this server. And yes, I know, and I can't agree more: please remove this requirement, dear SQL team, and move into the future!
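On Server Core the .NET 3.5 payload is not on disk by default, so you point the feature install at the install media. A small sketch (D: as the mounted ISO is an assumption):

```powershell
# Enable .NET 3.5 on Server Core; -Source points at the sources\sxs
# folder of the Windows Server install media, since the feature
# payload is removed from the local component store by default.
Install-WindowsFeature -Name NET-Framework-Core -Source D:\sources\sxs
```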
Although the SQL install wizard is not supported on Core, it does show some progress through a graphical dialog…
Once that was up and running I installed the ADK for Windows 10; I used the one for Windows 10 1607.
Then I could start the VMM install. Yes, there is a command-line way of installing VMM, but this time I wanted to see if I could use the wizard on Core!
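For reference, the command-line route I skipped looks roughly like this — the paths and account names are placeholders, and you should verify the full switch list against the VMM deployment documentation:

```powershell
# Sketch of an unattended VMM server install; VMServer.ini holds the
# answer-file settings (database, ports, library share, etc.).
.\setup.exe /server /i /IACCEPTSCEULA /f C:\Temp\VMServer.ini `
    /VmmServiceDomain CONTOSO `
    /VmmServiceUserName svc-vmm `
    /VmmServiceUserPassword (Read-Host "Service account password")
```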
During the installation the wizard complained about the amount of memory I had assigned to the VM I was installing on, and with the superduper feature in 2016 I could add more to the running VM without any stop and start!
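The runtime memory resize is done from the Hyper-V host; a one-liner sketch ("VMM01" is a placeholder VM name):

```powershell
# Hyper-V 2016: increase the memory of a running VM without
# stopping it (works for VMs with static memory assigned).
Set-VM -VMName "VMM01" -MemoryStartupBytes 8GB
```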
After that I had no more issues and the installation completed successfully!
Once installed I had to do some patching, as at the same time VMM 2016 was released Microsoft also announced the availability of CU1 🙂. Trying to use the shortcut from the installation dialog fails on Server Core, as those GUI parts are not present! I can, though, use Sconfig and the "Download and install updates" option to get the updates I want…
Revised: The SQL requirements page has been updated, and it is now supported to run on SQL Standard, from SQL 2012 SP2 and up. The following link on the VMM page still says 2014 Enterprise, but that will be updated. My MVP friend Anders Asp got info that I share here:
“Official MSFT statement: That is likely a carry over from earlier TP content when we had a bug that installation would fail on Std SQL(TP3?). Standard should work.”
//As you can see, System Center VMM 2016 GA will require SQL 2014 Enterprise or later, so you will not be able to use a Standard SQL and stay supported. So if you are upgrading from VMM 2012 R2 you will also have to upgrade your SQL to the Enterprise level.//
A SQL instance used solely for System Center is included in the System Center licensing.
During Ignite 2016 in Atlanta, Microsoft announced Technical Preview 2 of Azure Stack, and finally this Friday I got my hardware available (the dang server was not responding on the iLO port and I had to go to the datacenter to give it a kung-fu-devops-kick) so I could deploy the new bits.
First things first! Read the documentation on how to proceed and you will be more likely to succeed in your deployment!
The download for Azure Stack is 20 GB, so if you have a slow internet connection it will take some time!
Before getting started I suggest you run the pre-check script, which can tell you if there are any immediate issues,
Then you can unpack and follow the instructions to prepare to VHD-boot into the CloudBuilder disk with the next script:
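Under the hood, the preparation boils down to native VHD boot. A rough sketch of the idea — this is not the actual preparation script, and the path is an assumption:

```powershell
# Attach the CloudBuilder disk and add a boot entry pointing at the
# Windows installation inside it, so the host boots from the VHDX.
$vol = Mount-VHD -Path C:\CloudBuilder.vhdx -Passthru |
    Get-Disk | Get-Partition | Get-Volume |
    Sort-Object -Property Size -Descending | Select-Object -First 1

# bcdboot copies boot files and creates the entry for the VHD OS.
bcdboot "$($vol.DriveLetter):\Windows"
```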
Once rebooted, make sure that you have only one NIC enabled and then kick off the deployment, which will take about 2-3 hours if you have decent hardware like me 😛
As you can see, the install process uses both Desired State Configuration and PowerShell Direct (which is a lovely feature in Hyper-V 2016).
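PowerShell Direct lets you run commands inside a guest straight from the Hyper-V host over the VM bus, with no network path into the VM. A small sketch using the MAS-CON01 VM from the deployment (the credential is a guest account):

```powershell
# Run a command inside the VM from the host via PowerShell Direct;
# -VMName targets the guest through the VM bus, not the network.
$cred = Get-Credential
Invoke-Command -VMName "MAS-CON01" -Credential $cred -ScriptBlock {
    Get-Service | Where-Object Status -eq "Running"
}
```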
If you are patient and log in as azurestack\AzureStackAdmin on the physical machine, you will see the status of the deployment. Do not log in as a local user on the server and try to start the deployment again!
Hopefully you will end up with the same result as me:
Then you can log in to the VM MAS-CON01 to connect to the portal.
Maybe I was lucky, but I believe the Stack team has done some serious work since TP1; the deployment process has been thoroughly developed and tested, and it works really well now.
I have been quite busy lately and have not had the time to update the blog as much as I wanted, but I will try to add some posts during the summer!
In the project I am in right now we are setting up environments in Amazon's cloud, AWS. I have used their AMI images for a SQL Always On cluster that spans two nodes.
The AMI is preinstalled with SQL and ready to be joined to a domain, and the Enterprise version can be used with the r3.2xlarge, r3.4xlarge and r3.8xlarge instance types.
As you can see, each of them has SSD instance storage that can be used as a temporary volume, which suits SQL tempdb perfectly. Just go into the SQL configuration and point tempdb to the temporary storage! The MSSQL service account is a domain account without administrative rights on the server, which is why I explicitly set the rights on the volume…
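Relocating tempdb is a pair of ALTER DATABASE statements; a sketch run through the SQL PowerShell module (Z: and the logical file names tempdev/templog are the SQL defaults, but verify yours with `sys.master_files`):

```powershell
# Point tempdb at the instance-store volume; the new location takes
# effect the next time the SQL Server service restarts.
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'Z:\SQLTemp\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'Z:\SQLTemp\templog.ldf');
"@
```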
As the AWS documentation clearly tells us:
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content
You can specify instance store volumes for an instance only when you launch it. The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under the following circumstances:
The underlying disk drive fails
The instance stops
The instance terminates
So I created a small PowerShell script that runs each time the instance boots, setting ACLs on that volume so SQL can create its tempdb files. No tempdb files = no SQL service running….
# Script to check and set ACL on the AWS temp drive Z: for SQL Temp
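A sketch of such a script — the service account and folder path are assumptions, so adjust them to your environment:

```powershell
# Assumed service account and tempdb folder; adjust to your environment.
$account = "CONTOSO\svc-sql"
$path    = "Z:\SQLTemp"

# The instance store comes back empty after a stop/start, so recreate
# the folder and re-grant the SQL service account rights on every boot.
if (-not (Test-Path $path)) {
    New-Item -ItemType Directory -Path $path | Out-Null
}

$acl  = Get-Acl -Path $path
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule -ArgumentList `
    $account, "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow"
$acl.AddAccessRule($rule)
Set-Acl -Path $path -AclObject $acl
```

Register it to run at startup (for example with a scheduled task triggered at boot) so the folder and permissions exist before the SQL service tries to start.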