Using the mighty Irwin's Operation testing in the environment I am building and administering right now has been a great success. I have set up a scheduled task that runs every morning and gives me a nice report on the status of the environment, whether someone has made reconfigurations, or if services are simply down for some reason.
This week, though, the scheduled task started failing.
Apparently someone had mistreated my environment path so that powershell.exe could no longer be found from CMD, hence the failing scheduled task.
First off, I could see that a literal %path% had snuck into the PATH value. In CMD, ECHO %path% shows all the entries, so the culprit was probably a PS script: PowerShell does not expand %path% the way the cmd console does.
There could be other ways, but one easy way is to find another server with an untouched path, remove the %path% entry on the broken server, and add the correct entries for System32 and PowerShell…
And then to set the path correctly (if you just use $env:Path = $real it will only apply to the current session and is not persistent, which in this case is not enough):
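A minimal sketch of setting the machine-level PATH persistently, assuming $real holds the corrected path string (the value below is just an example rebuilt from a healthy server):

```powershell
# Example corrected PATH; in practice, copy the value from a server with an untouched path
$real = 'C:\Windows\system32;C:\Windows;C:\Windows\System32\WindowsPowerShell\v1.0\'

# Set it for the current session...
$env:Path = $real

# ...and persist it at machine level so new CMD sessions (and scheduled tasks) see it
[Environment]::SetEnvironmentVariable('Path', $real, 'Machine')
```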
And once that was done I could verify that powershell was once again found within CMD.
You might have noticed on Twitter and other media that PowerShell has now been released as open source, and it is available not only for your Windows environment but also for Linux and OS X 🙂
The release is available on GitHub, so go there to get your preferred package for your OS! Read the instructions, and be aware that this is the first step and the packages for the non-Windows platforms are in alpha!
I have a Mac with OS X and have of course installed it.
Read some more on Snover's blog post here if you want more insight into the why and how, plus some cool demos!
I have been doing some maintenance on a Hyper-V environment, patching with Windows updates and also firmware for the hardware.
The hosts were Dell R730s with formerly Broadcom, now QLogic, 10 Gbit NICs. There are four 10 Gbit NICs on every host, and they are set up with one team for management and one team for VMs.
I have done firmware updates before, but now there was a new version that I wanted to apply.
It took a while before I realised that the NICs had changed name and that this was why the teams were degraded. We had some discussions, the networking guy checked his configuration more than once, and we also verified the cabling on the server. After a while I realised that the NICs had actually changed name, and once I added the correct NICs to the teams the status changed to normal!
So when doing maintenance, please check the status after updating firmware, or the battle with the networking guys can end in misery on your side 😉
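A quick way to check the team health after a firmware update is with the NetLbfo cmdlets; the team and NIC names below are just examples:

```powershell
# Show team health; a renamed or missing member makes the team show up as Degraded
Get-NetLbfoTeam | Select-Object Name, Status
Get-NetLbfoTeamMember | Select-Object Name, Team, OperationalStatus

# If a NIC was renamed by the firmware/driver update, swap in the new member
Remove-NetLbfoTeamMember -Name 'OldNicName' -Team 'ManagementTeam' -Confirm:$false
Add-NetLbfoTeamMember -Name 'NewNicName' -Team 'ManagementTeam'
```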
I have been quite busy lately and have not had the time to update the blog as much as I wanted, but I will try to add some posts during the summer!
In the project I am in right now we are setting up environments in Amazon's cloud, AWS. I have used their AMI images for a SQL Always On cluster that spans two nodes.
The AMI is preinstalled with SQL Server and ready for incorporation into a domain, and the Enterprise version can be used with the r3.2xlarge, r3.4xlarge and r3.8xlarge instance types.
As you can see, each of them has SSD instance storage that can be used as a temporary volume, which suits the SQL tempdb perfectly. Just go into the SQL configuration and point tempdb to the temporary storage! The MSSQL service account is a domain account without administrative rights on the server, so that is why I explicitly set the rights on the volume…
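Repointing tempdb can be sketched with T-SQL via Invoke-Sqlcmd; the Z:\TempDB folder and the default tempdb logical file names are assumptions, and the change only takes effect after the next SQL Server restart:

```powershell
# Move the tempdb data and log files to the instance store volume (example paths);
# tempdev/templog are the default logical file names for tempdb
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'Z:\TempDB\tempdev.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'Z:\TempDB\templog.ldf');
"@
```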
As the AWS documentation clearly tells us:
An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content
You can specify instance store volumes for an instance only when you launch it. The data in an instance store persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost under the following circumstances:
The underlying disk drive fails
The instance stops
The instance terminates
So I created a small PowerShell script that runs each time the instance boots and sets the ACLs on that volume so that SQL can create its tempdb files. No tempdb files = no SQL service running…
# Script to check and set ACL on the AWS temp drive Z: for SQL Temp
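A sketch of what such a boot-time script can look like; the drive letter matches the comment above, but the service account name is a placeholder:

```powershell
# Check and set the ACL on the AWS temp drive Z: for SQL tempdb
$drive      = 'Z:\'
$sqlAccount = 'DOMAIN\svc-sql'   # placeholder for the non-admin SQL service account

# Wait for the instance store volume to come online after boot
while (-not (Test-Path $drive)) { Start-Sleep -Seconds 5 }

# Grant the SQL service account full control, inherited by files and subfolders
$rule = [System.Security.AccessControl.FileSystemAccessRule]::new(
    $sqlAccount, 'FullControl', 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$acl = Get-Acl $drive
$acl.AddAccessRule($rule)
Set-Acl -Path $drive -AclObject $acl

# Restart SQL Server so it can recreate the tempdb files on the fresh volume
Restart-Service -Name 'MSSQLSERVER'
```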
I created a new Windows 10 client on the Mac that I use in an assignment, running on VirtualBox. I made it too small of course, and when trying to add some stuff with OneDrive sync I could not succeed…
To be able to expand the underlying disk in VirtualBox I had to go into the terminal, as this is not part of the GUI, which of course is okay. Find your .vdi file and, if you are worried, make a copy of it first; the VM has to be stopped during the expansion.
Using the command VBoxManage modifymedium disk xxxx.vdi --resizebyte 85899345920 (80 GB)
and by checking the properties of the virtual disk file I can see that it has been expanded.
After that you boot the VM, go into the Windows disk settings, and expand the partition, and you are ready to add files 🙂
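The in-guest expansion can also be done with the Storage module cmdlets instead of the Disk Management GUI; the drive letter C is an assumption:

```powershell
# Find the maximum supported size for the partition and grow it to fill the new space
$size = Get-PartitionSupportedSize -DriveLetter C
Resize-Partition -DriveLetter C -Size $size.SizeMax
```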
So the day has come when the new TP5 bits have finally been released! I of course downloaded them and wanted to test upgrading one of my Hyper-V servers in my home lab.
Once installed I tried to migrate a VM from Hyper-V Manager on the new TP5 node. I had of course set up Kerberos and delegation beforehand, but it still gave me an error. To see if the problem was just in the GUI or also in PowerShell I tried the same move there and got the same issue.
So, PowerShell remoting to the rescue: driving the live migration of my VMs from TP4 to TP5 that way worked nicely. I will dig some more into whether there is an issue with the AD objects, or whatever else causes this, and post an update if I find anything…
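A sketch of the remoting workaround; the host and VM names are placeholders:

```powershell
# Run Move-VM on the source TP4 host itself via remoting, so the migration is
# initiated locally instead of being delegated from a third machine
Invoke-Command -ComputerName 'HV-TP4' -ScriptBlock {
    Move-VM -Name 'TestVM' -DestinationHost 'HV-TP5'
}
```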
I have got the opportunity to step in for Steven Murawski from Chef and do a presentation on Chef and PowerShell DSC at the PowerShell conference PSConfEU next week in Hannover, Germany. My session will be on Thursday afternoon.
If my information is right the conference is sold out, and it will be three awesome days of automation love!
I wanted to install the different browsers and also the ChefDK on a new Windows 10 client, and to do that with PowerShell. There is of course a way to just run the Chocolatey command line, but I wanted to use PackageManagement, so I had to add that provider and source so that I could simply use the Install-Package cmdlet.
I have a Windows 10 1511 with the latest patches, and I ran the following commands to enable the Chocolatey repository:
As you can see, when I enter Find-PackageProvider I find the Chocolatey provider, and then I just run Install-PackageProvider -Name Chocolatey.
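For reference, the provider setup can be sketched like this (run from an elevated prompt):

```powershell
# Look up the Chocolatey provider in the provider catalog, then install it
Find-PackageProvider -Name Chocolatey
Install-PackageProvider -Name Chocolatey -Force
```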
Then I need to open a new PowerShell window, and there I can see that the package provider and package source have the Chocolatey records, and thus I can now add software from this source.
First I add Firefox, and as you can see my PackageSource is not trusted so I get a warning; that can be changed with Set-PackageSource -Name chocolatey -Trusted.
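The install and trust steps look roughly like this (the package name is resolved by the Chocolatey source):

```powershell
# Install from the Chocolatey source; -Force suppresses the untrusted-source prompt
Install-Package -Name firefox -ProviderName Chocolatey -Force

# Trust the source to avoid the warning on future installs
Set-PackageSource -Name chocolatey -Trusted
```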