DiskSpd – a new disk I/O performance tool

In late September Microsoft released a new tool for disk performance testing. It has been mentioned more than once at the SQL PASS Summit.

Have a look here: https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223

It’s said to be better than SQLIO, and at first glance it seems very interesting. It is also said to be able to measure SMB performance. Here is a blog post about how to use it:

http://blogs.technet.com/b/josebda/archive/2014/10/13/diskspd-powershell-and-storage-performance-measuring-iops-throughput-and-latency-for-both-local-disks-and-smb-file-shares.aspx
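
For reference, a typical run looks something like the sketch below; the file path and parameter values are just illustrative, not a recommendation:

# Sketch of a DiskSpd run (path and values are illustrative)
# -c50G create a 50 GB test file, -d60 run for 60 seconds, -r random I/O, -w30 30% writes,
# -t4 four threads per target, -o32 32 outstanding I/Os per thread, -b64K 64 KB block size,
# -Sh disable software and hardware write caching, -L collect latency statistics
.\DiskSpd.exe -c50G -d60 -r -w30 -t4 -o32 -b64K -Sh -L E:\diskspd\testfile.dat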

Posted in Performance, powershell, SQLServer

SQL PASS Summit 2014 – Day 3

Thomas LaRock starts off the keynote.

Azure is the topic of this keynote. A lot of nice new features are coming up.

In-memory tables with columnstore indexes will enable real-time analytics of OLTP data in a new way.

Tables stretched between on-premises and the cloud. Kind of like storage tiering but at the table level, really cool.

First session: “Building Perfect SQL Server, Every Time”

Joey starts by going through settings for max memory, MAXDOP and cost threshold for parallelism. Nothing new here, same recommendations as in yesterday's parallelism session.

Getting into model database settings: file growth and recovery model.

Tempdb configuration was nothing new; multiple data files is the most important part.

Set up SQL Agent alerts for critical errors: 823-825 and severity levels 016-025.
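
As a rough sketch, the error alerts can be scripted with msdb.dbo.sp_add_alert, here wrapped in PowerShell; the instance and alert names are placeholders:

# Sketch: create SQL Agent alerts for errors 823-825 (instance and alert names are placeholders)
$instance = "MYSERVER\MYINSTANCE"
foreach ($err in 823..825) {
    $sql = @"
EXEC msdb.dbo.sp_add_alert
    @name = N'Error $err detected',
    @message_id = $err,
    @severity = 0,
    @enabled = 1,
    @delay_between_responses = 0;
"@
    Invoke-Sqlcmd -ServerInstance $instance -Database msdb -Query $sql
}

The severity alerts (016-025) follow the same pattern, with @severity set and @message_id left at 0.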

Evaluate whether backup compression should be used; it is better than storage-level deduplication, which can give you extended restore times. Compression is better done on the SQL Server to avoid sending more data over the network. Or consider snapshot backups through VSS and agent backup software connected to the storage.

The DAC (Dedicated Administrator Connection) is an emergency backdoor; by default it only works locally, e.g. from an RDP session on the server itself. But if the server is hogged it is good to be able to connect through SSMS from another machine. Remote access needs to be enabled with the sp_configure option 'remote admin connections'.
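
A minimal sketch of enabling remote DAC (the instance name is a placeholder); once enabled, you connect from SSMS by using the ADMIN: prefix on the server name:

# Sketch: enable remote DAC connections (instance name is a placeholder)
Invoke-Sqlcmd -ServerInstance "MYSERVER\MYINSTANCE" -Query @"
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
"@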

Install Ola Hallengren's Maintenance Solution to set up backups, DBCC checks and index optimization. Great tool.

Patch! But wait 1-2 weeks after a release.

Use standards: create a standard policy document. Include things like drive names, volume sizes, editions, settings, OS, HA and DR options, and security. Revisit it every 6 months.

At lunch I sat down randomly at a table where I didn't know anyone; it turns out that the guy next to me lives less than 1000 m from my home in Stockholm. Small world.

For SQL Servers where SharePoint shares a host, use Resource Governor to set MAXDOP=1 for SharePoint but let everything else on the host run with parallelism.
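
A hedged sketch of how that could be set up; the workload group name and the login used by the classifier are my own assumptions, not from the session (Resource Governor requires Enterprise Edition):

# Sketch: cap SharePoint at MAXDOP 1 via Resource Governor (group name and login are placeholders)
Invoke-Sqlcmd -ServerInstance "MYSERVER\MYINSTANCE" -Database master -Query @"
CREATE WORKLOAD GROUP SharePointGroup WITH (MAX_DOP = 1);
GO
CREATE FUNCTION dbo.fnClassifySharePoint() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    -- route SharePoint connections (identified here by login) to the capped group
    IF SUSER_SNAME() = N'DOMAIN\sp_farm_account'
        RETURN N'SharePointGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifySharePoint);
ALTER RESOURCE GOVERNOR RECONFIGURE;
"@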

Second session: “Right-sizing your SQL Server VM” with David Klee

David describes how to set up and collect Perfmon data in this article: https://bit.ly/1sqSVns. Perfmon data is a key component for right-sizing your VMs; without a baseline it is nearly impossible to know when improvements have been reached.
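
Not David's collection set, but as a quick sketch you can grab a small baseline with Get-Counter from PowerShell (the counter list and sample counts are my own choices):

# Sketch: sample a few baseline counters for an hour (counter list and intervals are illustrative)
$counters = '\Processor(_Total)\% Processor Time',
            '\Memory\Available MBytes',
            '\PhysicalDisk(_Total)\Avg. Disk sec/Read',
            '\PhysicalDisk(_Total)\Avg. Disk sec/Write'
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 240 |
    Export-Counter -Path C:\PerfLogs\baseline.blg -FileFormat BLG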

The number of vCPUs can often be too high; monitor vCPU ready time to find out if lowering the number of vCPUs could improve performance.

Third session: “Performance tuning your backups”

Great session with a lot of good advice about how to increase the speed of your backups and restores. The number of backup files, the number of buffers and the transfer size all affect backup and restore times. How much? As always: it depends…
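
A hedged sketch of what tuning those knobs looks like; the database name, paths and values are placeholders, not numbers from the session:

# Sketch: stripe the backup over several files and tune buffers/transfer size (values are illustrative)
Invoke-Sqlcmd -ServerInstance "MYSERVER\MYINSTANCE" -Query @"
BACKUP DATABASE [MyDatabase]
TO  DISK = N'G:\Backup\MyDatabase_1.bak',
    DISK = N'H:\Backup\MyDatabase_2.bak',
    DISK = N'I:\Backup\MyDatabase_3.bak',
    DISK = N'J:\Backup\MyDatabase_4.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 50,          -- number of I/O buffers
     MAXTRANSFERSIZE = 4194304, -- 4 MB per transfer (the maximum)
     STATS = 5;
"@

Measure a few combinations; more files and buffers help up to the point where the target disks or the network become the bottleneck.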

Fourth session: “Azure SQL Database Overview”

Azure now offers a 99.99% SLA that is financially backed by Microsoft.

Pricing ranges from about $5 to $3,000 per month.

Database backup comes at no extra charge. Restore creates a new database; you can then fail over to this restored DB. The retention period is from 7 to 35 days depending on tier.

Microsoft introduces DTUs (Database Throughput Units) to make it possible to compare the different performance tiers of Azure SQL DBs.

Online change of tier when load increases makes it very easy to scale up or down as your load changes. Is it possible to scale automatically? Pricing is now on an hourly basis, so scaling seems interesting.

Geo-replication enables async replication to between one and four secondary sites, readable or not depending on tier.

Auditing functionality is also available.

An expanded surface area is coming soon as a preview and will close the gap between SQL Server and Azure SQL DB. The AdventureWorks DB will then be supported, for example. Change Tracking is also available. A tool will be available to check a DB for the changes needed to move it to Azure SQL DB.

All traditional on-premises tools and apps are able to connect to Azure SQL DBs.

Posted in SQLServer

SQL Pass Summit 2014 – Day 2

Last night's networking dinner got moved from one place to another; there were just too many people in line and we were too hungry, so we skipped it and went to the Hard Rock Cafe instead.

The second-day precon is “Better Performance Through Parallelism” with Adam Machanic.

Adam starts off with a history lesson on CPU development and Moore's law.

He talks about linear and extra-linear operations. In theory only a table scan is linear; all other operations have overhead in the form of algorithmic complexity.

example:

Table Scan: O(N)

Sort: O(N * log(N))

Index Seek: O(log(N)), O(M * log(N))

Deep dive into threads, preemptive scheduling, quantum, clock interrupt frequency and affinity. Don't run Adobe Flash on a server, since it will change the quantum units for the whole system to get smooth video streaming. This can be checked with clockres.exe from Sysinternals: http://technet.microsoft.com/en-us/sysinternals/bb897568.aspx

Always use hyper-threading on newer servers. It provides a second path into the CPU. A 100% utilized CPU may only be using parts of its built-in functionality; hyper-threading makes it possible for other threads to use the available resources.

NUMA testing by Linchi Shea shows that there is basically no performance impact if you have to use remote NUMA nodes on newer CPUs:

http://sqlblog.com/blogs/linchi_shea/archive/2012/01/30/performance-impact-the-cost-of-numa-remote-memory-access.aspx

Power-saving mode on a server is fine unless CPU load is above 70-80%; beyond that the switching of C-states incurs too much overhead.
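
If you decide to go for the High performance plan instead, it is quick to check and switch from the command line (SCHEME_MIN is the built-in alias for the High performance plan):

powercfg /list                    # show available power plans and which one is active
powercfg /setactive SCHEME_MIN    # SCHEME_MIN = High performance (minimum power saving)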

In SQL Server 2014, parallel SELECT INTO is finally available. It could make ETL jobs using SELECT INTO much faster. No code changes are needed; it just works out of the box.

The server-level MAXDOP setting can be overridden by the OPTION (MAXDOP) query hint. Resource Governor can be used to prevent code from increasing MAXDOP through that hint.

The max server memory setting affects the barrier for parallel plan generation. Buy more RAM!

The affinity mask could be interesting when you run SQL Server together with SSIS and/or SSAS or any other application on the same server. Trace flag 8002 should be used in that case to prevent hard affinity locking within affinity groups.

The default cost threshold for parallelism of 5 is way too low; start with 40-50 and work from there.

The max degree of parallelism default value of 0 is the worst default value in SQL Server. On an OLTP system, 2 is a recommended starting point. For OLAP, use the number of physical cores * 1.5, or with hyper-threading the number of logical cores * 0.5.
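
A small sketch of applying those starting values with sp_configure (the instance name is a placeholder; tune from here for your own workload):

# Sketch: starting values for cost threshold and MAXDOP (tune for your workload)
Invoke-Sqlcmd -ServerInstance "MYSERVER\MYINSTANCE" -Query @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 40;
EXEC sp_configure 'max degree of parallelism', 2;  -- OLTP starting point
RECONFIGURE;
"@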

Never touch max worker threads without consulting Microsoft support, or unless your server has stopped responding.

Functions inhibit parallel operations; they can be replaced with CLR functions to get around that.

Trace flag 8649 will always give a parallel plan if one is available, even if the cost is higher. Great for troubleshooting query plans.
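
As a troubleshooting-only sketch, the trace flag can be applied to a single query with the (undocumented) QUERYTRACEON hint; the table and query here are hypothetical:

# Sketch: force a parallel plan for one query (hypothetical table; test systems only)
Invoke-Sqlcmd -ServerInstance "MYSERVER\MYINSTANCE" -Database "MyDatabase" -Query @"
SELECT COUNT(*)
FROM dbo.BigTable
OPTION (QUERYTRACEON 8649);
"@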

Use sp_whoisactive with @get_task_info = 2 to get better wait info. sp_whoisactive is available at http://sqlblog.com/files/folders/beta/entry42453.aspx
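
A quick way to call it from PowerShell (assumes the procedure is installed in master on the target instance):

# Sketch: run sp_whoisactive with extended task/wait info
Invoke-Sqlcmd -ServerInstance "MYSERVER\MYINSTANCE" -Query "EXEC sp_WhoIsActive @get_task_info = 2;" |
    Format-Table -AutoSize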

A very interesting session with Adam: a deep dive into parallelism, what affects it and what can be done to improve the use of parallel plans. It is difficult to get into the details of query optimization in writing; let's just say I have learned some new ways to approach tuning.

Now it's time for the welcome reception, which will be nice. Still jetlagged, so it will be an early night today too.

Posted in SQLServer

SQL Pass Summit 2014 – First day

Finally settled in Seattle for SQL PASS Summit 2014. I had a pleasant trip with Icelandair from Stockholm in just over 12 hours. I feel lucky, since a friend spent 22 hours.

Registration went smoothly, with nice swag in the form of a backpack that actually seems usable. Great!

Now I'm seated for the first pre-conference day. I'm not going to cover everything in the session, just write down things of interest.

“Zero to Hero with PowerShell and SQL Server”

On stage are Bradley Ball (@sqlballs), Robert C. Cain (@arcanecode) and Jason Strate (@stratesql).

I'm not new to PowerShell; I have worked with it for maybe two years but never had any real training, so I thought starting from scratch would be a good thing. I believe there is always something new to learn.

Using #region/#endregion to be able to collapse sections of code was new to me.

F5 – runs whole script

F8 – runs selection

Here-strings are ideal for SQL statements:

$heretext = @"
test
test
"@
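
For example, a multi-line SQL statement stays readable in a here-string and can be passed straight to Invoke-Sqlcmd later on (the server name is a placeholder):

# Placeholder example: a SQL statement in a here-string
$sql = @"
SELECT name, create_date
FROM sys.databases
WHERE state_desc = 'ONLINE'
ORDER BY name;
"@
Invoke-Sqlcmd -ServerInstance "MYSERVER\MYINSTANCE" -Query $sql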

The first section of the session involved standard PowerShell cmdlets, programming syntax and functions. We learned the basics.

The second section started to focus on SQL Server interaction, beginning by loading the SQL modules.

New in SQL 2012 was interaction with SSIS and SSAS.

There was a great example in the session on reading the error log with filters to find backup failures.
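
Not the presenters' script, but a rough equivalent using SMO's ReadErrorLog() might look like this (the filter text is my assumption):

# Rough sketch: scan the current error log for backup failures (filter text is an assumption)
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER\MYINSTANCE"
$server.ReadErrorLog(0) |
    Where-Object { $_.Text -like "*BACKUP failed*" } |
    Select-Object LogDate, Text |
    Format-Table -AutoSize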

All scripts will be available for download on the SQL PASS website.

SQL Provider vs. SMO was discussed; the primary difference is the depth of detail available for creating objects. The SQL Provider calls the same DLLs as SMO. SMO seems a bit more complicated but gives more options.

The SQL Provider is much slower when traversing objects in a DB; SMO is the solution for speed.
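
A small side-by-side sketch of the two approaches (machine, instance and property choices are placeholders):

# 1. SQL Provider: navigate like a drive (convenient, but slower when traversing many objects)
Get-ChildItem SQLSERVER:\SQL\MYSERVER\DEFAULT\Databases |
    Select-Object Name, RecoveryModel

# 2. SMO: work with the object model directly (more options, much faster for bulk traversal)
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$server.Databases | Select-Object Name, RecoveryModel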

Policy-based management is not one of my favorite features in SQL Server and I'm not using it today, but running it through PowerShell seems to ease the burden.

CLS
$serverName = "STRATE-SQL\SQL2014"

$policyFolder = "C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Policies\DatabaseEngine\1033"
#explorer $policyFolder

$policy = $policyFolder+"\Backup and Data File Location.xml"

Invoke-PolicyEvaluation -Policy $Policy -TargetServerName $serverName -AdHocPolicyEvaluationMode  Check -OutputXML

The Enterprise Policy Management Framework seems to be a great tool to make this much simpler to use and also to create SSRS reports on how your environment complies.

Check out https://epmframework.codeplex.com

Ever wanted to move indexes between filegroups? This PowerShell script makes it quite easy for you.

cls
$SQLServerName = "STRATE-SQL\SQL2014"
$databaseName = "Northwind"
$fromFileGroup = "PRIMARY"
$toFileGroup = "INDEX"

#$fromFileGroup = "INDEX"
#$toFileGroup = "PRIMARY"

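# Assumes the SMO assemblies are already loaded (e.g. via the SQLPS module imported earlier in the session)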
$SQLServerInstance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $SQLServerName
$($SQLServerInstance.Databases | where {$_.name -eq $databaseName}).tables | % {
	$t = $_
#	write-host "$_.schema $_.name"
	$_.indexes | % {
		if ($_.id -ge 2 -and $_.filegroup -eq $fromFileGroup -and $_.IndexKeyType -eq "none") {
				$script = $($_.script()).replace("ON [$fromFileGroup]", "ON [$toFileGroup];").replace("(PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)","(DROP_EXISTING = ON)")

				invoke-sqlcmd -ServerInstance $SQLServerName -Database $databaseName -Query $script
			}
		}
	}

# check to see where all of the indexes are
foreach ($t in $($SQLServerInstance.Databases | where {$_.name -eq $databaseName}).tables) {
	$tableName = $t.Name
	$schemaName = $t.schema
	$t.Indexes | % {
		$index = $_.name
		$fileGroup = $_.filegroup

		Write-Host "$fileGroup, $schemaName, $tableName, $index"
		}
}

Then we got into using SQLIO through PowerShell, which seems like a very interesting idea.

It is much easier to run SQLIO through PowerShell; you don't have to write so many different command lines for a complete test. It is also easy to run it against remote servers.
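
A hedged sketch of the idea: loop over a few parameter combinations from PowerShell and capture the output (the sqlio.exe switches are standard ones, but the paths and values are my own):

# Sketch: drive sqlio.exe over several block sizes and I/O types (paths and values are illustrative)
$sqlio   = "C:\Program Files (x86)\SQLIO\sqlio.exe"
$target  = "E:\sqlio\testfile.dat"
$results = @()

foreach ($blockKB in 8, 64, 256) {
    foreach ($ioType in "R", "W") {
        # -k read/write, -s seconds, -f random, -o outstanding I/Os, -b block size in KB,
        # -t threads, -LS latency statistics
        $results += & $sqlio "-k$ioType" -s30 -frandom -o8 "-b$blockKB" -t4 -LS $target
    }
}

$results | Out-File C:\Temp\sqlio_results.txt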

Jonathan Kehayias has created a nice PowerShell script to parse the SQLIO output and create an Excel file with graphs and all. You can find it here:

http://sqlblog.com/blogs/jonathan_kehayias/archive/2010/05/25/parsing-sqlio-output-to-excel-charts-using-regex-in-powershell.aspx

The remainder of the session got more into different use cases; ETL loading through PowerShell is an interesting new concept. You are able to build very dynamic scripts that for smaller loads might be better than SSIS, which is much more static when it comes to changes in source data. Have a look at:

https://gallery.technet.microsoft.com/Arcane-SQL-A-PowerShell-185651ad

At the end we looked at writing your own PowerShell modules and how to execute SQL scripts from within PowerShell.

A great session all in all, and it's becoming more and more obvious that PowerShell is playing a larger and larger role in SQL Server. Going forward with automation and orchestration, PowerShell knowledge is a must.

Now it's time for the networking dinner with Steve Jones and Andy Warren at Buffalo Wild Wings.

Posted in powershell, SQLServer

SQL PASS SUMMIT 2014 in Seattle #sqlpass


A lot happening right now…

Leaving for Seattle and SQL PASS Summit 2014 on Sunday. Really looking forward to this, second time now and no longer a “first timer”. Last year was very good: I met some nice people, attended sessions that were awesome and enjoyed all the evening activities. But I kind of missed out on the opportunity to connect with more people. I have read many blog posts on the subject and I realize that talking to people is probably what gives you the most during the week. This time I will try to interact more. A few tips for those of you who read this and are new to PASS:

1. Get out of your comfort zone and talk to people around you at the receptions.
2. Walk up to the “celebrities” and say hi, they don’t bite.
3. Visit the breakfast sessions, or at least have breakfast at PASS instead of at the hotel.
4. At lunch, don’t try to find an empty table; sit down and have a chat with complete strangers.
5. Visit the clinic and the community areas and discuss your problems. If you have a problem at work, you can be sure someone else has the same problem.

See you in Seattle on Monday!

Posted in SQLServer

The Nutanix journey – part 1 #nutanix

I have been blogging and tweeting about Nutanix since June 2013, with over 3,200 tweets regarding Nutanix according to Snapbird. So my journey to Nutanix and web-scale IT is not new in any way. But now I'm getting pretty close to actually getting my hands on a couple of boxes; in about two weeks' time I will be able to actually start working with them. It's been a very long and winding road to get to this point. In the following months you will be able to follow my progress.

Intro:

The company I work for has a very traditional infrastructure: NetApp SAN, HP blade servers, VMware for virtual servers and XenServer for VDI and application servers. Some 250 physical servers and around 1,200 VMs in total, just over 100 SQL Servers and around 120 applications. We have a high virtualization level, and the physical servers are mostly high-end SQL Servers and compute engines. We have 24×7 operations and branch offices around the globe. During last winter we began to have serious performance problems related to the SAN; the environment had been growing rapidly, both in IOPS and capacity usage.

A lot of effort has been spent since then trying to figure out what the next step in storage and compute should look like. In 2011-2012 we invested in Fusion-io PCIe cards for our high-end SQL Servers when the SAN was able neither to provide enough throughput nor to handle the load.

Our NetApp environment was originally built with two controllers, one used for production and the other for test/dev. On each controller all data was spread out over all available disks to give all the different applications the best possible performance. This worked great in the beginning with low load, but has become an increasingly difficult problem to handle.

Mixing NFS, CIFS and FC is, in my opinion, maybe not the best solution for all environments. VDI, SQL Servers, application servers and CIFS shares basically don't mix well when IOPS get high; they disturb each other too much.

Since then we have been looking for alternatives, and Nutanix really became interesting this spring when we started to realize that not only the NetApp but also most of the blade and rack servers were closing in on their fourth year. We had also reached a threshold when it came to data center cooling and power supply. We are basically not able to do a forklift upgrade of the SAN on-premises any longer, since we would not be able to have both old and new running simultaneously. We had to have alternatives. So while thinking about co-location and building a new data center, we also took up discussions with a couple of Nutanix resellers to start getting more details about the product.

It has been a tough ride to convince my CIO and COO. I started spamming them with Nutanix info over a year ago, and only after numerous reference calls, Nutanix technicians on site and countless reseller meetings did we finally sign up for a POC purchase a week ago.

The plan now is to move our VMware test/dev environment, today consisting of about 185 VMs running on 12 HP G7 blades with 384 GB of RAM and consuming about 50 TB of disk space, to eight Nutanix 6060 nodes with 512 GB of RAM.

In part two I will discuss the reasons behind our decision to do a Nutanix POC.

Stay tuned..

Posted in nutanix

Old blog posts republished

I have republished some of my most-read articles from my previous blog. You're welcome to read them, and also to visit the old blog and read my other posts at b3itonms.wordpress.com

Posted in Uncategorized