Adding a VMware VM to AD with Invoke-VMScript

Adding a new VMware VM to Active Directory without using Customization Specifications proved to be a challenge.

Since the VM was newly created from a template with the firewall enabled, I was forced to use Invoke-VMScript to get into the server. After joining it to the domain and moving it to an OU with a GPO that disables the firewall, I can continue customizing the VM with ordinary Invoke-Command. Connecting with Invoke-Command from a domain-joined VM to a workgroup VM was unfortunately not possible because of Kerberos authentication issues.
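Once the VM has joined the domain and the firewall GPO has applied, the rest of the customization can run over normal PowerShell remoting. A rough sketch of that step (the computer name and the customization commands are placeholders, not my actual build):

```powershell
# After the reboot, continue customization over WinRM from a domain-joined admin host.
# 'NEWVM01' and the steps inside the script block are examples only.
$cred = Get-Credential   # domain account with local admin rights on the new VM

Invoke-Command -ComputerName 'NEWVM01' -Credential $cred -ScriptBlock {
    # Replace with whatever the build actually requires
    Set-TimeZone -Id 'W. Europe Standard Time'
    Install-WindowsFeature -Name 'Telnet-Client'
}
```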

It’s not good practice to write the username and password in clear text in a script like this, but using Get-Credential and storing the result in a variable did not work for some reason.

So this did not work for some reason; it just froze:

$domainAdminCredentials = Get-Credential
$script = "Add-Computer -DomainName '' -Credential $domainAdminCredentials -Restart -Force"
$GuestCredential = Get-Credential
Invoke-VMScript -VM $targetvm -ScriptText $script -GuestCredential $GuestCredential

But this works:

$cmd4 = @'
$password = 'password' | ConvertTo-SecureString -AsPlainText -Force
$username = 'domain\username'
$credential = New-Object System.Management.Automation.PSCredential($username, $password)
Add-Computer -Credential $credential -DomainName '' -Restart -Force
'@
$paramInvoke = @{
    VM              = $targetvm
    GuestCredential = $GuestCredential
    ScriptType      = 'Powershell'
    ScriptText      = $cmd4
}
Invoke-VMScript @paramInvoke

Posted in AD, powershell, Uncategorized

Here we go again!

After more than two years of recovery I’m spinning up this blog again, and I will resume blogging about stuff I find interesting.

In March 2016 I “hit the wall” and was diagnosed with exhaustion syndrome. I have since recovered and am now back to almost normal. I have learned to take time to recover and to listen to the signals I get from my body. I think I came out of all this with tools that will make my life richer going forward.

For those of you waiting for a continued journey on Nutanix, I must disappoint you. After running Nutanix for over a year, the company decided not to proceed on that journey. I personally regret this decision even though I was part of it; it was both a strategic and an economic decision. Maybe we were not ready for this kind of journey?

I’m personally a strong believer in Nutanix and have also invested in the stock.

We have dismantled and reused the hardware for other purposes and right now we don’t have any plans on running Nutanix again.

Posted in Uncategorized

Nutanix journey – part 3

Since mid-November and my last blog post we have been moving on in small steps; both internal and external resources were a bit limited during this period, and vacations and holidays made things even worse.

But finally, on January 13th, we began moving “real” VMs over to the Nutanix environment. So far we have 32 VMs in the cluster, 15 of them large SQL Servers between 0.5 and 1.7 TB in size. Most of them have all databases PAGE compressed, the highest level of compression in SQL Server, and we have still reached a 1.8:1 compression ratio in Nutanix. There is still very limited load on most of these servers, so performance is yet to be evaluated.

The cluster is now made up of eight 6060 nodes with two containers for VMs, both running RF3, one compressed and the other uncompressed. The reason for RF3 is that we don’t run any backups in the Nutanix environment yet, and we don’t replicate snapshots to any other cluster. Snapshots have been set up daily with 7-day retention.

Our impression when we decided to go with Nutanix was that node and block awareness with RF2 would be sufficient for our initial demands, since this is a POC environment for test/dev VMs only. What we didn’t know was that block awareness is only “best effort” when running RF2. It was a big surprise to us when the PRISM GUI started to report that we did not have block awareness after big load tests and while copying large VMs and files into the Nutanix environment.

To feel confident with the environment we reinstalled the cluster with RF3 and created new containers. As long as we had a mix of RF2 and RF3 containers in the storage pool, we continued to get alerts about block awareness not being available. Last week we migrated the last VM from the RF2 to the RF3 container and removed the RF2 container; since then we no longer get any alerts about block awareness.

I’m not sure staying on RF2 would have been a big problem; Nutanix says that most customers run RF2 and that no customer has ever lost data. Losing both nodes in a block is probably very unlikely. What worried us most was that if a VM runs in a block with two nodes and both fail, it could potentially start up on a third node without having access to all of its blocks. A normal Windows VM with a block missing in a Windows DLL file would maybe get a blue screen. But on a SQL Server, a block might be missing within a database, and that would not be noticed until someone accessed it.

Administrators would of course know if a node or two were lost and could take action to limit access or not start up VMs from the failing nodes, but it is still an interesting scenario to think about.

We will re-evaluate the design and will probably run only RF2 in any coming implementations, where we would likely have cross-cluster replication of snapshots and external backups. It would also be possible to use only one node in each block for a given cluster, thereby achieving block awareness even with RF2, but that was not possible in this implementation.

The only drawback with RF3 is the 50% extra storage needed. We have run a lot of performance tests, with both synthetic test tools and ETL jobs on SQL Servers, and have seen no performance difference between RF2 and RF3. When the load on a server gets higher, writing to three nodes instead of two will of course cause additional network traffic between nodes; the future will tell whether this means anything negative.

One of the first servers we cloned to Nutanix for testing was a SQL Server 2012 warehouse server with a homemade, stored-procedure-based ETL load process. This ETL load consists of many single-threaded SQL inserts and updates, so CPU speed becomes very important for a fast load. In this case we moved the VM from a three-year-old HP blade with 3.07 GHz CPUs to a 6060 node with only 2.8 GHz CPUs. We immediately saw a 10-15% increase in load time because of this, but we also saw an additional 10-15% increase in load time that we initially couldn’t understand. Nutanix helped out over several days trying to understand what was causing this performance drop. We started by implementing all best practices for SQL Server on VMware and went on with even more changes recommended by Nutanix, none of them making any major difference.

A performance monitor setup where we added transaction log flushes/sec finally made me realize we were writing 100,000-200,000 small 512-byte blocks to disk per second during execution of some specific stored procedures. I could also see that the time for each write was far higher on Nutanix than in the old NetApp environment.

Finally we could pinpoint a cursor-based insert and update taking place in the stored procedure. Cursor-based inserts and updates are considered bad practice in the SQL community and should be avoided as much as possible. In this case, just adding a BEGIN TRANSACTION at the beginning and a COMMIT TRANSACTION at the end of the stored procedure made the transaction log flushes/sec go down to about 100/sec, with every write 64 KB in size. By doing so we could eliminate the second 10-15% increase in load time.
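As a hedged sketch of the shape of the fix (server, database, table and column names here are all invented, not our actual warehouse): wrapping the cursor loop in one explicit transaction turns thousands of tiny per-statement log flushes into a few large ones at COMMIT. Submitted via Invoke-Sqlcmd it could look like:

```powershell
# Hypothetical example: WAREHOUSE01, DW and dbo.Fact are made-up names.
# Without the explicit transaction, every single-row UPDATE commits on its
# own and forces its own small log flush; wrapped, the log is hardened in
# large blocks once at COMMIT.
$fix = @'
BEGIN TRANSACTION;

DECLARE @id INT;
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR SELECT Id FROM dbo.Fact;
OPEN cur;
FETCH NEXT FROM cur INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Fact SET Amount = Amount * 1.02 WHERE Id = @id;
    FETCH NEXT FROM cur INTO @id;
END;
CLOSE cur;
DEALLOCATE cur;

COMMIT TRANSACTION;
'@
Invoke-Sqlcmd -ServerInstance 'WAREHOUSE01' -Database 'DW' -Query $fix
```

Rewriting the cursor as a set-based UPDATE would of course be even better, but wrapping it in a transaction is the minimal change to an existing stored procedure.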

But why did it run slower on Nutanix than on NetApp? At first I thought I had found something that would make the load process faster on the original server as well, still running on NetApp, but to my surprise it did not decrease the load time there at all.

The reason is probably that NetApp uses NVRAM to acknowledge writes from clients, while on Nutanix this takes place on SSD disks. When writes occur at as high a frequency as in the cursor-based update, this becomes a bottleneck, and since the cursor in this case updated billions of rows, it became a huge problem.

I have already started parts 4 and 5 of the Nutanix journey; stay tuned for more on:

* More performance comparisons of moved VMs

* Migrating VMs to Nutanix might be slow… vMotion has limitations. Could we use a third-party tool like Veeam replication?

* What about backups? Are snapshots enough? What about SQL backups and single-database restores? Integration with NetBackup, and are there alternatives?

* Monitoring of Nutanix: how do we monitor it in Nagios?





Posted in nutanix, Performance, SQLServer | 11 Comments

Nutanix journey – Part 3 – Coming soon

The next part of my journey with Nutanix is now coming together. We are actually live with some VMs now, and I have started writing a post about the last month’s activities. Coming soon…

Posted in nutanix

Nutanix journey – part 2

The journey continues: the boxes have now been delivered and racked, ready for installation on Monday.

I wrote in part 1 that I would discuss the reasons for choosing Nutanix.

An important factor was cost: even though Nutanix is expensive up front, we believe there are big savings going forward. OPEX savings are more important to us than CAPEX costs.

Eliminating SAN networking, blade chassis config, patching, aggregates, volumes, LUNs and everything involved in maintaining them is a big saving. HA and DR setups are also very complex in traditional environments. Using those resources for automation and cloud adoption would be much better.

Metro-cluster and active/active setups between sites also seem much less complex in a Nutanix solution.

We have not evaluated things in as much detail as David Quinney does in this article:



Never mind the placement of the blocks, it’s kind of temporary…;-)

But I have followed the progress of Nutanix since late 2012 and have been more and more amazed by what they are doing. I have compared them to other vendors and read miles of blogs, websites, tests, tweets and all other available writing I have come across. For me the decision was easy; my problem was convincing management. This was not an easy task, and we have had numerous reference calls with other customers and met with resellers and Nutanix specialists from around the globe. Still, there is a nagging feeling that it’s too good to be true. I hope that feeling proves to be wrong in the long run.

Today, installation of our first Nutanix cluster actually began. A reseller came in to do the initial setup, which requires some special handling. It proved not to be as easy as I had expected.

Apparently the installation requires IPv6 connectivity between the hosts, the Controller VMs and the installation host. A dedicated switch had been set up and VLAN-tagged to accommodate this, but things didn’t work; connectivity could not be established as required. Finally we realized the switch itself was faulty, and after changing to another switch things went much more easily.

For this POC we had decided not to set up a dedicated 10 GbE switch; we connected everything to one already in use. This worked against us today. While pulling one of the 10 GbE DAC connections, several other ports in that switch decided to go offline, causing several other systems to run into trouble. It seems that switch is also faulty, or has a bug. Installation was halted for some hours while we figured out whether we dared to continue. A dedicated switch is probably a good idea for the future…

After all connectivity issues were sorted out, installation and cluster configuration were really smooth, and when we stopped tonight we had a five-node cluster up and running. Tomorrow we need to adjust VLANs on all nodes and configure the rest of the VMware environment. Then there is a health test and a performance test to run to verify that the cluster delivers the performance expected.

After that we will run our own tests to verify that HA works as we expect: pull a network cable, pull a disk, kill a Controller VM, pull a node, and so on. Just so that we know what happens and understand the consequences if something should go wrong. This will also give us a better understanding of node, block and rack awareness, and of how RF2 and RF3 work.

Since we have reached maximum cooling capacity in our data center, we can’t power on all Nutanix nodes before we have shut down the servers they are replacing. We will start with a small cluster, slowly migrate VMs over one by one, and then shut down hosts in the old VMware cluster one by one.




Posted in nutanix

SQL Pass Summit 2014 – Day 5

Last day of PASS. It has been an awesome week; there is no other way to get this much knowledge from so many great speakers. It rocks.

First session: Troubleshooting AlwaysOn Availability Groups in SQL 2014

Very good session on troubleshooting AlwaysOn and what the most common support cases are.

A query to check the secondary URL list is in the deck. Multi-subnet failover can cause problems for applications like SharePoint that don’t support the connection string features needed to use readable secondaries. Detailed descriptions are in the slide deck, with links to articles about how to work around some of these problems.

Set async secondaries to sync mode before a planned failover; resuming all the secondaries will then not be necessary.
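With the SQL Server PowerShell module this step can be sketched roughly like this (the instance, AG and replica names below are made-up placeholders, not from the session):

```powershell
# Assumes the SqlServer module (SQLPS on older installs); all paths are examples
Import-Module SqlServer

# Flip the async secondary to synchronous commit so the planned failover is clean
Set-SqlAvailabilityReplica -AvailabilityMode SynchronousCommit `
    -Path 'SQLSERVER:\Sql\SRV02\DEFAULT\AvailabilityGroups\MyAG\AvailabilityReplicas\SRV02'

# Once the replica reports synchronized, fail over on the target instance
Switch-SqlAvailabilityGroup -Path 'SQLSERVER:\Sql\SRV02\DEFAULT\AvailabilityGroups\MyAG'
```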

Download the slide deck, lots of nice things in it.

Second session: Enter the Dragon: Running SQL 2014 on Windows Server Core

  • Less space consumed
  • Reduced patching
  • Reduced surface area for attack
  • Faster deployment
  • Faster boot times

Consider using Hyper-V Server for licensing reasons.

PowerShell is the tool for configuring Server Core; VMM is used for provisioning.
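A minimal sketch of what first-boot configuration of a Server Core box looks like in PowerShell (names, addresses and the interface alias are all placeholders):

```powershell
# Rename a fresh Server Core installation (example name)
Rename-Computer -NewName 'SQLCORE01' -Force

# Static IP on the primary adapter; adjust InterfaceAlias to match Get-NetAdapter
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress '10.0.0.50' `
    -PrefixLength 24 -DefaultGateway '10.0.0.1'
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses '10.0.0.10'

# Join the domain and reboot
Add-Computer -DomainName 'corp.example' -Credential (Get-Credential) -Restart
```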

MinShell is a basic GUI that can be enabled. To get it easily you need to install full-fat Windows and then remove the GUI; it is very difficult to add afterwards on a Core installation.

A description of how to create the ini file for a Core installation is in the slide deck.

Third session: Getting Started Reading Execution Plans

Grant Fritchey demos how to read query plans: right to left or left to right, you have to do both. Great queries to find badly behaving queries.

Look them up at

Fourth and last session at PASS: High Performance Infrastructure for SQL Server 2014

My favorite subject, hardware for high performance servers.

SMB 2.1 large MTU (1 MB) is disabled by default in Windows; enabling it makes a huge difference.

Best session of the whole week. Amazing stuff on SMB3 and NAND flash storage and what they can do for SQL Server performance. The SMB protocol is definitely an alternative to traditional FC and iSCSI.

Need to get hold of the presentation, it was awesome.



Posted in Performance, SQLServer | 1 Comment

SQL Pass Summit 2014 – day 4

Amazing keynote by Dr. Rimma Nehme on cloud DBs. It is clearly a thing of the future, and maybe even of the present. The recording will be very good to show people who want to learn about the cloud. Brent Ozar has a good live blog here.

First session: Writing Faster Queries Using SQL Server 2014

Starting by talking about in-memory tables: there are schema_only and schema_and_data tables. Schema_only tables are volatile in-memory tables whose data is lost when SQL Server is turned off; schema_and_data flushes data out to disk. Schema_only does not incur any logging, which makes it very fast.

You can determine whether a table or stored procedure should be ported to In-Memory OLTP by using the Transaction Performance collector in SQL 2014.

SQL 2014 has many new features. Existing code “may” run faster, but it requires a rewrite to take full advantage of the new features.

A table with an updatable columnstore index can only have one clustered columnstore index.

Second session: “Leveraging SQL Server in Azure Virtual Machines: Best Practices”

  • Keep storage account and SQL server VM in same region
  • Disable geo-replication; use AlwaysOn replication instead.
  • Don’t use caching.
  • Use 64KB allocation when formatting disk.
  • Only use “data disks”, also for tempdb
  • Place data files across multiple data disks
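The 64 KB allocation-unit point from the list above can be applied like this when preparing a data disk (a sketch; the disk number and volume label are assumptions about your environment):

```powershell
# Initialize a raw data disk and format it with a 64 KB allocation unit,
# the size recommended for SQL Server data and log volumes
Get-Disk -Number 2 |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'SQLData'
```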

Third session: “Azure CAT: Deep Dive of Real World Complex Azure Data Solutions”

Left the session after 15 minutes. It was an interesting case study about the car manufacturer Qoros in China, which is connected to everything; it sounds like an interesting concept, but I didn’t enjoy the presentation, so I left and ended up in Kendra Little’s session on index suggestions and why they might not be so good. Excellent session on how the index suggestions in SQL Server are not always that good; she had several examples where the query actually performed worse after the suggested indexes had been added. Filtered indexes can sometimes be a solution, but there are caveats to look out for: applications can have problems with INSERT/DELETE/UPDATE if the table has filtered indexes.

Forth session: Build a Social Analytics Platform to Manage Your Social Presence

Not really a DBA session; more about getting to know how social data is used by companies, how social media data can be analyzed, and which tools in the SQL Server stack can be used to analyze it.

List of tools for SSIS: is a great tool to collect data from social media

Posted in Uncategorized | 1 Comment