One DBA's Ongoing Search for Clarity in the Middle of Nowhere


Yet Another Andy Writing About SQL Server

Tuesday, November 24, 2015

We Are All Responsible

Today we all received an email from PASS's president Tom LaRock (blog/@SQLRockstar):

PASS Anti-Harassment Policy Reminder
It is unfortunate that I have to write this letter, but it has become necessary.
An Anti-Harassment Policy (AHP) was implemented in 2012 to help provide a harassment-free conference experience. Everyone attending PASS Summit is expected to follow the AHP at all times, including at PASS-sponsored social events. This includes, but is not limited to, PASS staff, exhibitors, speakers, attendees, and anyone affiliated with the event.
This year at PASS Summit I served on the Anti-Harassment Review Committee. As such, it was my responsibility to help review complaints and incidents reported during Summit. The PASS Summit experience should be enjoyable, exciting, and safe for everyone at all times. However, I am disappointed to say that PASS Summit was a negative experience for a few members of our #SQLFamily this year.
I expect Summit attendees and PASS members to treat others with respect at all times, whether that is inside a session room, the conference hallway, a PASS sponsored event, or even at a local coffee shop.
On a positive note, there were people actively using the policy this year and supporting one another onsite as well. I am proud to see that our community has embraced having this policy. It is wonderful to know that our #SQLFamily will not put up with these types of incidents.
If you have concerns or want to report an incident within the guidelines of the AHP, I encourage you to contact
Thomas LaRock
PASS President


It is sad to me that we still live in a world where it is necessary to remind people of things like this.  I am not so deluded as to think that events like those described in this post by Wendy Pastrick (blog/@wendy_dance) will never happen, but it is still disappointing every time I hear about one.

No one - regardless of gender/ethnicity/anything else - should have to suffer through an uncomfortable environment or, even worse, outright physical or mental abuse, especially in a professional setting.  When something like this does happen, it is on all of us to speak up to try to end the situation and to prevent it from recurring.

I commit to personally working harder at this - when we see something unacceptable happen, Speak Up!

At the end of the day, we are all responsible for ourselves *and* for each other in our #sqlfamily.

As always, my $.02


Also - read Erin Stellato's (blog/@erinstellato) excellent post here giving *her* $.02.

Monday, November 16, 2015

CHECKDB - The database could not be exclusively locked to perform the operation

…so the job failed….again…you know the one – that darn Integrity Check job: 
Executing query "DECLARE @Guid UNIQUEIDENTIFIER EXECUTE msdb..sp_ma...".: 100% complete  End Progress  Error: 2015-11-15 03:27:52.43     Code: 0xC002F210     Source: User Databases Integrity Check Execute SQL Task     Description: Executing the query "DBCC CHECKDB(N'master')  WITH NO_INFOMSGS  " failed with the following error: "The database could not be exclusively locked to perform the operation.  Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details."

This particular SQL Server 2014 Standard server is relatively new at a client that still uses old style maintenance plans (haven’t yet been able to convince them to #GoOla).  My first instinct in these cases is always that there is a problem related to the maintenance plan itself rather than the database or the integrity check command.  Since the failure is on the small yet unbelievably important master database I decided to just run the command from the query window to find out… 
DBCC CHECKDB(N'master') WITH NO_INFOMSGS

Msg 5030, Level 16, State 12, Line 1
The database could not be exclusively locked to perform the operation.
Msg 7926, Level 16, State 1, Line 1
Check statement aborted. The database could not be checked as a database snapshot could not be created and the database or table could not be locked. See Books Online for details of when this behavior is expected and what workarounds exist. Also see previous errors for more details.

I have a small private user database on the system so I tried CHECKDB there: 
DBCC CHECKDB(N'Ntirety') WITH NO_INFOMSGS

Command(s) completed successfully.

OK….now what?

Googling “CHECKDB master could not be exclusively locked” brought back a short list of possibilities, which turned into a strand of spaghetti through the usual suspects – ServerFault, Stack Exchange, and the like – before I found a question on SimpleTalk that sounded pretty spot on and referenced the most reliable source on CHECKDB I know, Paul Randal (blog/@PaulRandal).

Paul’s blog post “Issues around DBCC CHECKDB and the use of hidden database snapshots” discusses the need to have certain permissions to be able to create the snapshot CHECKDB uses.  I checked the DATA directory and the SQL Server default path and found that the service account did have Full Control to those locations.
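For reference, here is a quick way to check those ACLs from PowerShell rather than clicking through Explorer - just a sketch, with hypothetical paths you would swap for your own DATA directory and default data path:

# Sketch only - hypothetical paths; point these at your own DATA directory and default data path
Get-Acl -Path "D:\SQLData" | Format-List Owner, AccessToString
Get-Acl -Path "C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA" | Format-List Owner, AccessToString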

What happened next ultimately resolved my issue, and it reflects something I constantly tell people when they ask me how I research things relatively quickly (most of the time anyway :)) – whenever you read a blog post or article about a subject, MAKE SURE TO READ THE FOLLOW-UP COMMENTS!  Sometimes they are nothing beyond “Great Article!” but quite often there are questions and answers between readers and the author that add important extra information to the topic, or just “Don’t Forget This!” style comments that add more detail.

In this case, one of the commenters said this: 
Brandon Tucker says:
August 27, 2013 at 2:19 pm
Ran into some issues and did some more debugging. Check 


The Connect item talks about how the questioner was trying to use minimum permissions for their SQL Server service account and found that it broke CHECKDB for their system databases in the same fashion as what I was seeing.  The agreed-upon fix was to add READ permission (just READ, not WRITE or anything higher) to the root level of the drive - not the folders, but the actual root of the drive.  Most of the respondents talked about adding the permission to ServerName\Users, while one person mentioned just adding it for the service account.

I checked the root directory of the DATA drive (where the data files for both the system and user databases reside) and found that it had been stripped down to Administrators and a couple of system logins (Windows Server 2012 R2). 
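If you would rather grant that permission from a script than from the Explorer security dialog, something along these lines works - a minimal sketch, assuming a hypothetical domain service account name:

# Sketch only - hypothetical service account name; grants READ (not WRITE or higher) at the root of the drive
$acl  = Get-Acl -Path "D:\"
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("DOMAIN\svc_sqlserver", "Read", "None", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl -Path "D:\" -AclObject $acl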

I added the read permission to the root of the drive for my SQL Server service account, and: 
 Command(s) completed successfully.

…and all was right with the world.

A subsequent attempt to run the offending Integrity Check job succeeded cleanly as well.


This is not the first time I have seen something like this and I always have to question the level of zeal with which some security admins/teams pursue this.  In my experience most shops (by far) have insufficient security, while the remaining shops have way too much, functionally breaking the systems by keeping them so secure that *nobody* can access them.

There are only a few people out there that seem to have that happy balance figured out where data is secure but people can still do their jobs and customers can use the systems.

In general, my experience is that it is best not to mess with the SQL Server service account's permissions on files/folders/drives that are relevant to SQL Server.  If a folder is in one of the MSSQL hierarchies, let the install set the permissions and then LEAVE THEM ALONE!

Your SQL Server service account doesn’t need to be a Windows administrator, or even have especially elevated permissions in general – but if a file or folder is created or managed by SQL Server, leave the service account's permissions on it alone.

As always, my $.02 – hope this helps!

Thursday, November 12, 2015

SQL Server Training

Brent Ozar’s group just announced a pretty crazy set of “Black Friday” deals:

Insane sales if you can be one of the first few at midnight, but discounts all day on various training.


The other top training in the field (to me) is from Kimberly Tripp & Paul Randal (& company) at SQLskills.

UPDATE - I originally said SQLskills wasn't offering discounts right now but their CEO Paul Randal corrected me:

Actually we do have discounts right now - up to $200 off every 2016 class if you register before 2016. Thanks

Sorry Paul :)

Make sure to include this as well:


None of the above training is cheap, but these are pretty much the only in-person training classes (as opposed to conferences) I would consider spending money on for SQL Server.  In-person classes from Microsoft and smaller providers have never provided me with much value, and even less so now that I am a Senior DBA.


The conference options are the PASS Summit (which I attended last week) and SQL Intersection (formerly Connections), which is usually held in Vegas around this time of year.  The Summit is put on by the non-profit PASS (formerly the Professional Association for SQL Server), and Intersection is put on by a for-profit group that runs a variety of Intersection conferences (Dev, SharePoint, etc.).

I haven’t been to Intersection so I can’t objectively compare one to the other, but the Summit definitely gets a broader array of speakers and content, while Intersection is curated to a deliberately narrow group of speakers, always at the MVP/MCM level.


I would be remiss to omit PASS SQLSaturdays - this is free training by the same speakers that present at PASS Summit and the other conferences. (Disclaimer - there is often a small fee for lunch.)  There are one- and two-day SQLSaturdays all over the world, often more than one event per week!  Check out the list on the SQLSaturday website.


I always feel I learn more in person, but failing that, here are some other options:

For Purchase:

Pluralsight – for a monthly membership fee you get access to all of the recorded courses Pluralsight has online, and it is quite a large library – the SQL Server courses are by a wide array of individuals but many of them are from the folks at SQLskills (see above).


PASS Virtual Chapters – there are currently 27 virtual chapters that provide monthly or semi-monthly free webcasts on a variety of topics.

Vendors – the major vendors each provide some variant of regular free webcasts – some of it is marketing for their products (obviously) but there is often good content along with it – sometimes you do have to register, but at the end of the day if you have ever done anything online as a SQL Server person, odds are these companies already have your info.

SQLbits – the major European SQL Server conference is SQLbits, often held in the UK.  One of the pluses of SQLbits over the PASS Summit is that they post most if not all of their recorded sessions online for free!


Of course, these are all training content in the form of presentations, classes, and the like.  There are always blogs as well (and they are just as important – content produced every day by SQL Server pros all over the world). 

Start with Tom LaRock’s (SQLRockstar) list here, but make sure you read Tom too.

If you are looking for more, SQL Server Central (owned by Redgate) publishes a strong list as well.


I apologize to anyone I omitted and all opinions are my own as to who provides top content in their arenas - thanks!

Tuesday, November 10, 2015

T-SQL Tuesday #72 - Implicit Conversion Problems

This month's T-SQL Tuesday is hosted by Mickey Stuewe (blog/@SQLMickey) and her topic of choice is “Data Modeling Gone Wrong.”

(If you don’t know about T-SQL Tuesday, check out the information here – each month there is a new topic, and it is a great excuse to write regularly (and to start writing!) because someone else offers the topic, so the first step has already been taken for you!)

My first memory of a data modeling problem comes down to data types and the process of choosing appropriate ones - not because of storage concerns (although those can be a problem) but because of actual performance risks.

When the data type of a field in a table doesn’t precisely match the data type in your query (or the parameter of your stored procedure), a conversion occurs.  Conversions can be implicit (automatic) or explicit (requiring an inline CONVERT or CAST in the query).

Here is a chart from the Technet article on Data Type Conversions:

Data type conversion table

As you can see, a conversion from a CHAR to an NVARCHAR is “implicit” – so no big deal, right?


When I started as a DBA, we designed a payment processing system for a university debit card (an internal system, not a VISA/MC-branded card at that time).  Most of us who were involved were relatively new to the design process, so we decided to use NVARCHAR for all of our character fields.  We decided it would be useful in a couple of ways:
  • It was a standard – everything was the same (my 16-year-experienced self now knows that isn’t a great definition of “standard”)
  • It allowed us to handle any eventual contingency – maybe we would need Unicode data, right?

At the beginning there wasn’t any visible impact – we only had a few thousand accounts and didn’t notice any issues.

Then we got to Fall Rush…

If you are familiar with American Universities, you know that campus bookstores make 80%-90% of their profit during the first week of each semester and often the week before.  (This is probably true at many international universities as well.) One of the primary uses of our internal card was textbook purchasing, so we were slammed during this time period, both in terms of overall quantity of transactions and rapidity of transactions during the business day.

When we hit this period, we saw performance dip. At first we assumed it was just load (remember, this was SQL Server 7.0 and then 2000), but we quickly realized that there was extra work in our queries that wasn’t needed – we saw this in our query plan:

Why did we have a SCAN instead of a SEEK?

In a modern query plan this is visible in the predicate as a CONVERT_IMPLICIT:

Even in our inexperienced state we knew one of those key rules of query optimization:

SCAN BAD! (In your best Frankenstein voice)
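If you want to check whether an existing workload is hitting this today, one option is to search the plan cache for plans whose XML contains CONVERT_IMPLICIT - a rough sketch in PowerShell, assuming the SQLPS/SqlServer module for Invoke-Sqlcmd and a hypothetical instance name:

# Sketch only - finds cached plans that contain CONVERT_IMPLICIT; "Server1" is a hypothetical instance name
$query = @"
SELECT TOP (20)
       qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%CONVERT_IMPLICIT%'
ORDER BY qs.execution_count DESC;
"@

Invoke-Sqlcmd -ServerInstance "Server1" -Query $query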

There are two ways to deal with an issue like this – you can fix the design of the table, or you can modify your code to perform an explicit CAST or CONVERT in the statement.  The best fix (“best”) is to fix the design of the table – Aaron Bertrand (blog/@AaronBertrand) has a good list of suggestions here for fields that can often be set to a smaller data type.

If you can’t change the table design (as you very often can’t) the other option is to modify the code to prevent an implicit conversion.  The first method is like this:

DECLARE @CardID NVARCHAR(20)   -- declared to match the NVARCHAR type of the CardID column
SET @CardID = '123ABC456'

SELECT CardID, LastName, FirstName, Balance
FROM Credit
WHERE CardID = @CardID

In this case the CardID field in the Credit table is of the NVARCHAR data type (as in my debit card scenario above).  To get rid of the implicit conversion I am using a variable that is NVARCHAR to “pre-perform” the conversion so that by the time the query runs it is comparing apples to apples (or in this case NVARCHAR to NVARCHAR).

Another way to accomplish this is by using a CAST/CONVERT, like this:

SELECT CardID, LastName, FirstName, Balance
FROM Credit
WHERE CardID = CAST('value' AS NVARCHAR(20))   -- give CAST an explicit length; NVARCHAR alone defaults to 30 here


This seems like a minor detail but it can be a very important one, especially in systems with lots of rows and/or high transaction rates.


Your homework - there is a great test harness for this problem created by Jonathan Kehayias (blog/@sqlpoolboy) in a blog post, including charts that quantify the performance hit and CPU costs. Check it out!

Tuesday, November 3, 2015

PASS Summit 2015 Recap

Last week was the annual PASS Summit in Seattle.  The Summit is the predominant educational and networking event for SQL Server professionals of all types – DBAs, developers, analysts, and many more.

I wasn’t sure I was going to be able to attend this year but after some gyrations with a friend and my employer I was able to make it work, and it was definitely worth it, as always.

In no particular order here are my top moments from the week:

  • The Day One Keynote – not for the Microsoft message, but for the introduction from outgoing PASS President Tom LaRock (blog/@SQLRockstar).  I have admired Tom for some time for the effort he is able to put into his job as a technical evangelist (Head Geek) at Solarwinds while still being married with children *and* maintaining a strong presence as a member of the #sqlfamily and the overall SQL Server community.  Being the president of PASS is no small task, with no pay and often little appreciation, and it was obvious as Tom described the journey he has taken just how invested he is in that service to the community.


  • The Day Two Keynote – Dr. David DeWitt and Dr. Rimma Nehme of the Microsoft Jim Gray Systems Lab never disappoint, always delivering a presentation that is almost academic – there is never any hard marketing aside from a brief mention of their project of the moment (such as Polybase), but even that is usually a tongue-in-cheek reference.  This year’s talk was on the “Internet of Things” #IoT – describing how the world is relying on more and more items with internet connectivity, such as smartphones and fitness trackers.  They described how the data generated by these objects is resulting in a data explosion that will be an overload for business intelligence, and how we as data professionals need to be ready for that volume.  The slides from the talk are available on the Gray Systems Lab website.  They also made a sad announcement that they would no longer be giving PASS Summit keynotes.  Dr. DeWitt is retiring (at least mostly retiring) and Dr. Nehme is moving on to other things.  They will be sorely missed by the PASS community, and Microsoft has a very high bar to clear for whoever comes next year.

  • Bob Ward’s session – Bob (blog/@bobwardms) is the “Chief Technology Officer, CSS Americas” for Microsoft’s Customer Support Services (CSS, the artist formerly known as PSS), and he always melts our brains with a deep-dive internals session on a chosen area of SQL Server.  This year it was “Inside Wait Types, Latches, and Spinlocks,” booked for a double session at the end of the day.  Bob’s session is another one that is always about technology and *not* marketing or fluff.  It is marked as 500-level (off the charts) and he isn’t kidding – about a third of the audience didn’t come back after the mid-session break.  I learned a lot but I was definitely in a daze at the end.

  • Slava Murygin’s photos – Slava (blog/@SlavaSQL) did an amazing job documenting the Summit with a wide array of “day in the life” style pictures from the keynotes, sessions, exhibit hall, and a variety of other locations.  He posted them on his blog and I have referenced several of them in this post.  If you haven’t already looked at the pictures you need to check them out – and if you don’t already follow Slava on Twitter, do that too! :)

  • Reconnecting with #sqlfamily – one of the top parts of any Summit is seeing people I deal with online all of the time but who live all over the world.  From former co-workers who live in town (but whom I still rarely see) to new friends from half a world away, there are hundreds of friends new and old that I only see at the Summit (and there are far too many to list here).  Part of #sqlfamily is caring for each other and for others, and meeting in person just reinforces that.  Two items this year were #sqlcares, supporting the National MS Society, and #ArgenisWithoutBorders, supporting Doctors Without Borders.  The second item especially highlighted how many of us are willing to make fools of ourselves to help draw donations in – this happened at the Summit in response to the $25,000 that was donated:

Erin and her crowd of fans after her session.

I didn’t get a good picture of Kendal presenting but loved his opening slide. :)

All in all it was a great experience as always – registration is already open for next fall!

Wednesday, April 15, 2015

Deleting Files Older than X Hours with PowerShell

(aka "OMG I can't believe I am actually finally writing a #PowerShell blog post").


I currently have a situation at a client where we are running a server-side trace that is generating 6GB-7GB of trc files per day while we are watching to see what might be causing a server issue.

I need to keep enough trace files that if something happens late in the day or overnight we can run it down, but not so many as to have 6GB+ of files all of the time.

In the past for file cleanups I have relied on the ForFiles command (modified from the one used by Ola Hallengren in his Output File Cleanup job):

cmd /q /c "For /F "tokens=1 delims=" %v In ('ForFiles /P "$(ESCAPE_SQUOTE(SQLLOGDIR))" /m *_*_*_*.txt /d -30 2^>^&1') do if EXIST "$(ESCAPE_SQUOTE(SQLLOGDIR))"\%v echo del "$(ESCAPE_SQUOTE(SQLLOGDIR))"\%v& del "$(ESCAPE_SQUOTE(SQLLOGDIR))"\%v"
The problem in this case is that ForFiles takes a parameter in *days* (/d) and in my case I really want to delete files older than 18 *hours*.

Looking for an alternative, I figured there had to be a PowerShell solution (remember - #YouCanDoAnythingWithPowerShell), and after some digging I found someone using the Get-ChildItem and Remove-Item cmdlets to do pretty much what I was looking for:

Get-ChildItem $env:temp/myfiles | where {$_.Lastwritetime -lt (date).addminutes(-15)} | remove-item

This sample would remove anything from temp\myfiles older than 15 minutes.  By hardcoding the -path parameter rather than relying on the $env variable and changing addminutes to addhours I was able to accomplish my goal:

Get-ChildItem -path D:\AndyG\Traces | where {$_.Lastwritetime -lt (date).addhours(-18)} | remove-item
This command "gets" each item in the given path with a modification date-time older than 18 hours and then removes/deletes it from the folder.

After testing this in a PS window, I tried it as a PowerShell job step in a SQL Server Agent job and it worked!  (Of course, the account running the job step needs to have the necessary permissions on the D:\AndyG\Traces folder to perform the item deletes).
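If the folder ever ends up holding more than just trace output, the same pipeline can be narrowed to the trace files themselves - a small variant of the command above, assuming the default .trc extension:

# Sketch only - same cleanup limited to .trc files
Get-ChildItem -Path D:\AndyG\Traces -Filter *.trc |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddHours(-18) } |
    Remove-Item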

Moving forward, I may abandon the ForFiles idea in favor of this whenever possible - it is simpler, and PowerShell seems to be more and more what makes the world go around - hope this helps!

Thursday, April 2, 2015

Availability Groups - Where Did My Disks Go?

The TL;DR - beware of Failover Cluster Manager trying to steal your non-shared storage!

At a client recently, two availability groups on a single Windows cluster went down simultaneously.  Apparently the server that was the primary for the AGs (Server1) had mysteriously lost its DATA and LOG drives.  By the time the client got us involved, they had faked the application into coming up by pointing it directly at the SQL Server instance that was still up (Server2) via the instance name rather than the availability group listeners.

I found that two of the drives on Server1 had gone offline, causing the issues – sample errors from the Windows System and Application Logs respectively:


Log Name:      System
Source:        Microsoft-Windows-FailoverClustering
Date:          3/31/2015 2:36:25 PM
Event ID:      1635
Task Category: Resource Control Manager
Level:         Information
User:          SYSTEM
Cluster resource 'Cluster Disk 2' of type 'Physical Disk' in clustered role 'Available Storage' failed.


Log Name:      Application
Source:        MSSQLSERVER
Date:          3/31/2015 2:36:25 PM
Event ID:      9001
Task Category: Server
Level:         Error
Keywords:      Classic
User:          N/A
The log for database 'Database1' is not available. Check the event log for related error messages. Resolve any errors and restart the database.


Since this is an Availability Group (AG), I was surprised that there were “Cluster Disk” resources at all – AGs do not rely on shared disk (that is one of their many advantages), and most AG clusters don’t have any shared disk at all (except occasionally a quorum drive).

This is what I saw in Failover Cluster Manager:

Cluster Disk 1 was the Quorum, but the presence of disks 2-7 did not make sense to me in a regular AG arrangement.  The two disks that were online (Disk 6 and Disk 7) were the two disks that were currently “live” on Server2, but there was still no reason for them to be in Failover Cluster Manager.

The service provider assured me that none of the drives except the Quorum are presented to more than one server from the back-end storage.

There was one reported event that happened at 2:36pm, a time that coincided with the failures – the client added a new node, Server3, to the cluster (it was evicted 30 minutes later with no further impact, positive or negative).

My best theory at this point was that when the engineer tried to add Server3 to the Windows cluster they mistakenly tried to add the disks as Cluster Disk resources – for a traditional SQL Server Failover Cluster Instance (FCI) this would be correct – for a SQL FCI almost all disk is shared and all nodes need to have access to all of the shared disk (although only one node can “own” it at any one time).

A cluster will “lock” disks – if cluster MySuperHugeAndAmazingCluster01 owns a particular drive, then no other server or cluster can use it – the only way for a server to access it is through the cluster.  I considered that this might be the cause of the issue – even though several of the drives were flagged with “clustered storage is not connected to the node,” this may simply have been because the storage wasn’t presented to the current “owner” of the Cluster Disk objects, Server2.


After an application downtime was scheduled, I signed on to the server, deleted the AGs (first saving their settings for later re-creation), shut down SQL Server, and then deleted the cluster disk objects.  This, combined with a disk rescan in the Computer Management console on each server, did indeed return control of the “missing” drives to the servers.  I dropped six of the seven cluster disk objects (all of them except the Quorum object), which meant I needed to rescan disks on all of the servers.  This validated that the only reason things had been working on Server2 was that the cluster thought Server2 owned the disk objects (my guess is that the Add Node wizard used to add Server3 to the cluster the other day was run from Server2 rather than Server1 – more on that below).
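For anyone who hits the same cleanup, the cluster disk objects can also be dropped from PowerShell instead of Failover Cluster Manager - a rough sketch, assuming the FailoverClusters module and that "Cluster Disk 1" is the quorum/witness disk as it was here:

# Sketch only - list the Physical Disk resources first to confirm exactly what is about to be removed
Import-Module FailoverClusters
Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" }

# Remove the non-quorum disk objects ("Cluster Disk 2" through "Cluster Disk 7" in this case)
2..7 | ForEach-Object { Remove-ClusterResource -Name "Cluster Disk $_" -Force }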

I recreated the two AGs, and as a final step I performed a failover test of each availability group from Server2 to Server1 and back again, so that at the end of the process Server2 was the primary for both availability groups.  Making Server2 the primary was necessary because of the changes the client had made to the front-end applications and processes to keep things working - they had redirected the applications to talk directly to Server2 rather than to the two availability group names (this works since the availability group name is really just a redirect to the server name/IP itself under the hood).  A final step for the client was to redirect the apps to once again talk to the availability group listeners.
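The failover tests themselves are each a single statement issued on the replica that is about to become primary - a sketch using a hypothetical availability group name (AG1) and Invoke-Sqlcmd from the SQLPS/SqlServer module:

# Sketch only - a planned manual failover is always run on the secondary that is about to become primary
Invoke-Sqlcmd -ServerInstance "Server1" -Query "ALTER AVAILABILITY GROUP [AG1] FAILOVER;"

# ...and back again from Server2, leaving Server2 as the primary at the end
Invoke-Sqlcmd -ServerInstance "Server2" -Query "ALTER AVAILABILITY GROUP [AG1] FAILOVER;"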

I then added the new node (Server3) to the cluster and stepping through the Add Node wizard showed me the likely cause of the original issue (below).
As of the end of the call, the client was satisfied with the end state – SQL Servers online, availability groups running, and new cluster node added.


Here is what *I* learned today, brought to light through adding the new node and what was almost certainly the cause of the problem:

As I noticed when adding Server3 to the cluster, on the Confirmation screen of the Add Node wizard in Windows Server 2012 there is a check box to “Add all eligible storage to the cluster” – by default it is *CHECKED*.

As described here by Clustering MVP David Bermingham, this can really cause problems:

On the confirmation screen you will see the name and IP address you selected. You will also see an option which is new with Windows Server 2012 failover clustering…”Add all eligible storage to the cluster”. Personally I’m not sure why this is selected by default, as this option can really confuse things. By default, this selection will add all shared storage (if you have it configured) to the cluster, but I have also seen it add just local, non-shared disks, to the cluster as well. I suppose they want to make it easy to support symmetric storage, but generally any host based or array based replication solutions are going to have some pretty specific instructions on how to add symmetric storage to the cluster and generally this option to add all disks to the cluster is more of a hindrance than a help when it comes to asymmetric storage. For our case, since I have no shared storage configured and I don’t want the cluster adding any local disks to the cluster for me automatically I have unchecked the Add all eligible storage to the cluster option.

(emphasis mine)

I have seen a cluster disk object reserve/“lock” a resource so that the actual servers can’t access it other than through the cluster, but I hadn’t run into this specific situation (the check box) before.  The above explanation from David shows the most likely reason *why* this happened in this case – with the offending box checked by default, whoever was adding the node probably clicked right past it, and when the process to actually add the node started, it grabbed all of the storage for the cluster, locking everybody out.  This would have impacted Server3 as well, but since it was a new server with no user databases (or anything else) on its D: and E: drives, unless someone was looking in My Computer and saw the drives disappear, there wouldn’t be any immediately apparent problem on that server.

The reason I believe the Add Node wizard was run from Server2 (not that it is important, just explanatory) is that the disk objects showed as being owned by Server2.  Since Server2 owned the cluster disk objects, it could still access them, which is why it was able to keep accessing its user databases on the two drives.


At the end of the day, if you are working on a cluster with no shared storage, make sure to uncheck the "Add all eligible storage to the cluster" check box - and even if you do have storage, it may not be a bad practice to uncheck the box - it isn't that hard to add the disks manually afterward, and it makes your cluster creation process consistent.

BONUS - I am not a PowerShell freak myself (I keep telling myself I need to become one since #YouCanDoAnythingWithPowerShell), but if you like PS there is a flag on the relevant cmdlet that is functionally equivalent to unchecking the box:
PS C:\> Add-ClusterNode -Name Server3 -NoStorage