The Nutanix journey – part 1 #nutanix

I have been blogging and tweeting about Nutanix since June 2013 — over 3200 tweets regarding Nutanix according to #Snapbird. So my journey to Nutanix and web-scale IT is not new in any way. But now I'm getting pretty close to actually getting my hands on a couple of boxes. In about two weeks' time I will be able to actually start working with them. It's been a very long and winding road to get to this point. In the following months you will be able to follow my progress.


The company I work for has a very traditional infrastructure: NetApp SAN, HP blade servers, VMware for virtual servers and XenServer for VDI and application servers. Some 250 physical servers and around 1200 VMs in total, just over 100 SQL servers and around 120 applications. We have a high virtualization level, and the remaining physical servers are mostly high-end SQL servers and compute engines. We have 24×7 operations and branch offices around the globe. During last winter we began to have serious performance problems related to the SAN. The environment had been growing rapidly, both in IOPS and in capacity usage.

A lot of effort has been spent since then trying to figure out what the next step in storage and compute should look like. In 2011–2012 we invested in FusionIO PCI cards for our high-end SQL servers when the SAN could neither provide enough throughput nor handle the load.

Our NetApp environment was originally built with two controllers, one used for production and the other for test/dev. On each controller all data was spread out over all available disks to give all the different applications the best possible performance. This worked great at the beginning with low load, but has become an increasingly difficult problem to handle.

Mixing NFS, CIFS and FC is, in my opinion, maybe not the best solution for all environments. VDI, SQL servers, application servers and CIFS shares basically don't mix well when IOPS get high. They disturb each other too much.

Since then we have been looking for alternatives, and Nutanix really became interesting this spring when we started to realize that not only the NetApp but also most of our blade and rack servers were closing in on their fourth year. We had also reached a threshold when it came to data center cooling and power supply. We are basically no longer able to do a forklift upgrade of the SAN on premises, since we would not be able to have both old and new running simultaneously. We had to have alternatives. So while thinking about co-location and building a new data center, we also took up discussions with a couple of Nutanix resellers to start getting more details about the product.

It has been a tough ride to convince my CIO and COO. I started spamming them with Nutanix info over a year ago, and only after numerous reference calls, Nutanix technicians on site and countless reseller meetings did we finally sign up for a POC purchase a week ago.

The plan now is to move our VMware test/dev environment — today consisting of about 185 VMs, running on 12 HP G7 blades with 384 GB of RAM and consuming about 50 TB of disk space — to eight Nutanix 6060 nodes with 512 GB of RAM.

In part two I will discuss the reasons behind our decision to go for a Nutanix POC.

Stay tuned...

