I have been doing some maintenance on a Hyper-V environment, applying Windows updates and also firmware updates for the hardware.
The hosts were Dell R730s, previously with Broadcom and now with QLogic 10 Gbit NICs. There are four 10 Gbit NICs in every host, and they are set up with one team for management and one team for VMs.
I have done firmware updates before, but now there was a new version that I wanted to apply.
It took a while before I realised that the NICs had changed names and that this was why the teams were degraded. We had some discussions, the networking guy checked his configuration more than once, and we also verified the cabling on the server. After a while I realised that the NICs actually had changed names, and once the correct NICs were added to the teams the status changed to normal!
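Checking the teams and re-adding a renamed NIC can be done from PowerShell; a minimal sketch using the built-in LBFO teaming cmdlets, where the adapter and team names are just examples:

```powershell
# List the teams and their health; a degraded team shows a status other than 'Up'
Get-NetLbfoTeam | Select-Object Name, Status

# Compare the current adapter names with what the teams believe their members are
Get-NetAdapter | Select-Object Name, InterfaceDescription, Status
Get-NetLbfoTeamMember | Select-Object Name, Team, OperationalStatus

# Re-add the renamed adapter to the team (names here are hypothetical)
Add-NetLbfoTeamMember -Name "NIC 3" -Team "VMTeam"
```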
So when doing maintenance, please check the status after updating firmware, or the battle with the networking guys can end in misery on your side 😉
I have been quite busy lately and have not had the time to update the blog as much as I wanted, but I will try to add some posts during the summer!
In the project I am in right now we are setting up environments in Amazon's cloud, AWS. I have used their images (AMIs) for a SQL Server AlwaysOn cluster that spans two nodes.
The AMI comes preinstalled with SQL Server and ready to be joined to a domain, and the Enterprise edition can be used with the r3.2xlarge, r3.4xlarge and r3.8xlarge instance types.
As you can see, each of them has SSD instance storage that can be used as a temporary volume, which suits SQL tempdb perfectly. Just go into the SQL configuration and point its tempdb to the temporary storage! The MSSQL service account is a domain account without administrative rights on the server, which is why I explicitly set the rights on the volume…
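Relocating tempdb can be scripted as well; a sketch, assuming the SQL Server PowerShell tools are installed, the instance store is mounted as Z:, and a default instance is used (drive, path and logical file names are assumptions, check yours with `sys.master_files`):

```powershell
# Point tempdb at the instance store; takes effect after a SQL Server restart
Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'Z:\SQLTemp\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'Z:\SQLTemp\templog.ldf');
"@
Restart-Service -Name "MSSQLSERVER" -Force
```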
As the AWS documentation clearly tells us:
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content
You can specify instance store volumes for an instance only when you launch it. The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under the following circumstances:
The underlying disk drive fails
The instance stops
The instance terminates
So I created a small PowerShell script that runs each time the instance boots and sets ACLs on that volume so that SQL Server can create its tempdb files. No tempdb files = no SQL service running…
# Script to check and set ACL on the AWS temp drive Z: for SQL Temp
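Only the header comment of the script is reproduced above; a sketch of what such a boot-time script can look like, where the Z:\SQLTemp path and the CONTOSO\sqlservice account are assumptions, not the author's actual values:

```powershell
$path    = "Z:\SQLTemp"          # tempdb folder on the instance store (assumed)
$account = "CONTOSO\sqlservice"  # SQL Server domain service account (assumed)

# The instance store can come back empty, so recreate the folder if it is gone
if (-not (Test-Path $path)) {
    New-Item -Path $path -ItemType Directory | Out-Null
}

# Grant the service account modify rights so SQL Server can create its tempdb files
$acl  = Get-Acl $path
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
    $account, "Modify", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl -Path $path -AclObject $acl
```

Scheduling it at startup (for example as a scheduled task running at boot) ensures the folder and rights exist before the SQL service tries to start.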
I created a new Windows 10 client on my Mac, which I have in an assignment, running in VirtualBox. I made it too small, of course, and when trying to add some stuff with the OneDrive sync I could not succeed…
So to be able to expand the underlying disk in VirtualBox I had to go into the terminal, as this is not part of the GUI, which of course is okay. Find your VDI file and, if you are worried, make a copy of it first; the VM also has to be stopped during the expansion.
Using the command VBoxManage modifymedium disk xxxx.vdi --resizebyte 85899345920
and checking the properties of the virtual disk file I can see that it has been expanded.
After that you boot the VM, go into the Windows disk settings, expand the partition, and you are ready to add files 🙂
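The Windows-side expansion can also be done from PowerShell inside the guest instead of the Disk Management GUI; a sketch, assuming the system partition is C::

```powershell
# Grow C: to fill the newly added space on the expanded virtual disk
$size = Get-PartitionSupportedSize -DriveLetter C
Resize-Partition -DriveLetter C -Size $size.SizeMax
```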
So the day has come when the new TP5 bits have finally been released! I of course downloaded them and wanted to test upgrading one of my Hyper-V servers in my home lab.
Once it was installed I tried to migrate a VM from Hyper-V Manager on the new TP5 node. I had of course set up Kerberos and delegation before, but it still gave me an error. To see if the issue was just in the GUI or also in PowerShell I tried the same move there and got the same error.
So PowerShell remoting to the rescue: I tested that I could live migrate my VMs from TP4 to TP5 that way, and it worked nicely. I will dig some more into whether there is an issue with the AD objects or what else causes this, and do an update if I find anything…
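The remoting workaround looks roughly like this; a sketch where the host and VM names are hypothetical:

```powershell
# Run Move-VM on the source host itself instead of from the remote console,
# which sidesteps the delegation hop that failed from the TP5 node
Invoke-Command -ComputerName "HV-TP4" -ScriptBlock {
    Move-VM -Name "DC01" -DestinationHost "HV-TP5" `
            -IncludeStorage -DestinationStoragePath "C:\VMs\DC01"
}
```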
I have got the opportunity to step in for Steven Murawski from Chef and do a presentation on Chef and PowerShell DSC at the PowerShell conference PSConfEU next week in Hannover, Germany. My session will be on Thursday afternoon.
If my information is correct the conference is sold out, and it will be three awesome days of automation love!
I wanted to install the different browsers and also the ChefDK on a new Windows 10 client, and to do it with PowerShell. There is of course a way to just run the Chocolatey command line, but I wanted to use PackageManagement, so I had to add that provider and source so I could just use the Install-Package cmdlet.
I have a Windows 10 1511 with the latest patches, and I ran the following commands to enable the Chocolatey repository:
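The original commands were shown as screenshots; a sketch of the sequence as described in the text:

```powershell
# Find and install the Chocolatey package provider
Find-PackageProvider -Name Chocolatey
Install-PackageProvider -Name Chocolatey -Force

# In a new PowerShell window, verify that the provider and source are registered
Get-PackageProvider
Get-PackageSource

# Trust the source to avoid the warning, then install software from it
Set-PackageSource -Name chocolatey -Trusted
Install-Package -Name firefox
```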
As you can see, when I enter Find-PackageProvider I find the Chocolatey provider, and I just run Install-PackageProvider -Name Chocolatey.
Then I need to open a new PowerShell window, and I can see that the PackageProvider and PackageSource listings have Chocolatey records, so I can now add software from this source.
First I add Firefox, and as you can see my PackageSource is not trusted, so I get a warning; that can be fixed with Set-PackageSource -Name chocolatey -Trusted.
Finally the Azure Site Recovery service can be reached from the new Azure portal and the ARM way of doing things! It has been possible to use ASR with PowerShell the new ARM way for some months, but only for a subset of the site recovery services (VMM/Hyper-V).
Not a day too soon! I have a customer that we have engaged in the CSP program, and as that is based on the new portal, the old ASR could not be used with that subscription, and using another subscription just for ASR sucks…
As you can see in the following screenshot, I go into “Getting Started” to select a scenario and then follow the guide to complete it; in the case of physical and VMware servers I need to install a process/configuration server on-premises.
Once it is installed on a Windows Server 2012 R2 machine I connect it to ASR with the registration file.
One thing to think about when using this service: if you do not go in and configure the bandwidth settings, the process server will eat all available internet capacity, as my customer so nicely explained…
Configure this to something that works for both you and the company; with the enhanced ASR, where you do not need additional servers in Azure, you find this setting in the backup properties.
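Since the process server throttles via the Azure Recovery Services agent, the same setting can also be scripted with the agent's OBMachineSetting cmdlets; a sketch where the work hours and bandwidth limits are just examples:

```powershell
# Limit replication bandwidth: 512 KB/s during work hours, 2 MB/s off hours
Set-OBMachineSetting -WorkDay Mo, Tu, We, Th, Fr `
                     -StartWorkHour "09:00:00" -EndWorkHour "18:00:00" `
                     -WorkHourBandwidth (512 * 1024) `
                     -NonWorkHourBandwidth (2048 * 1024)
```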
It is quite easy to start protecting your workloads, and remember that the first 30 days are free 🙂
I was testing some xAzure DSC configuration stuff on my Windows Server 2016 TP4 and noticed that when I tried to use the resource it said that duplicate modules were found. I could not find any duplicates in the PowerShell module libraries, so it had to be something else.
I found the tweet that Ben Gelens sent pointing me to $env:PSModulePath.
So looking in my PSModulePath I could see that I had two records that were the same, and that had to be fixed.
So to remove the duplicate I first removed both and then added one back:
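An alternative to removing and re-adding by hand is to simply deduplicate the variable; a sketch that rewrites the machine-level value (run it elevated):

```powershell
# Read the machine-level PSModulePath, drop duplicate entries, write it back
$current = [Environment]::GetEnvironmentVariable("PSModulePath", "Machine")
$unique  = ($current -split ';' | Where-Object { $_ } | Select-Object -Unique) -join ';'
[Environment]::SetEnvironmentVariable("PSModulePath", $unique, "Machine")

# Update the current session as well so a restart is not needed
$env:PSModulePath = $unique
```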
During last week I was working on some bare metal deployment of some Hyper-V hosts with System Center VMM. We had deployed them before using legacy boot, but now we had updated the BIOS to the latest version and got into some trouble… Maybe it was because of the HPE instead of the HP 😉
During the deployment, WinPE got an error and could not connect to the VMM server.
We tried to update NIC drivers and other things in the WinPE image, but that did not help. During the testing we started the server and configured it to boot with EFI instead of legacy boot, and voilà, it could connect to the VMM server. But as the Hyper-V VHD was MBR we got the following error:
The easiest way I could think of right there and then was to create my new GPT-based VHD, to boot the Hyper-V host from, with a PowerShell convert script run against the original MBR VHD. The script had to be run on a Hyper-V host, so I connected to one of the Hyper-V nodes in the test cluster and ran it against a patched VM that was sysprepped:
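The conversion script itself is not reproduced here; an outline of the general approach, create a GPT VHDX with EFI and OS partitions, copy the sysprepped files over, and make it UEFI-bootable with bcdboot, where all paths and sizes are examples and the source MBR VHD is assumed to be mounted as S::

```powershell
# Create and mount the new GPT-based virtual disk
New-VHD -Path "C:\VHDs\HyperVHost-GPT.vhdx" -SizeBytes 60GB -Dynamic
Mount-VHD -Path "C:\VHDs\HyperVHost-GPT.vhdx"
$disk = Get-VHD -Path "C:\VHDs\HyperVHost-GPT.vhdx" | Get-Disk
Initialize-Disk -Number $disk.Number -PartitionStyle GPT

# EFI system partition plus the OS partition
$efi = New-Partition -DiskNumber $disk.Number -Size 200MB `
       -GptType "{c12a7328-f81f-11d2-ba4b-00a0c93ec93b}" -AssignDriveLetter
Format-Volume -DriveLetter $efi.DriveLetter -FileSystem FAT32
$os = New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter
Format-Volume -DriveLetter $os.DriveLetter -FileSystem NTFS

# Copy the sysprepped OS files and write the UEFI boot files
robocopy "S:\" "$($os.DriveLetter):\" /E /COPYALL /XJ
bcdboot "$($os.DriveLetter):\Windows" /s "$($efi.DriveLetter):" /f UEFI
Dismount-VHD -Path "C:\VHDs\HyperVHost-GPT.vhdx"
```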
And once that was done I had to update the physical computer profile to set the disk to GPT instead of MBR: