One DBA's Ongoing Search for Clarity in the Middle of Nowhere


*or*

Yet Another Andy Writing About SQL Server

Saturday, February 22, 2014

Working From Home

I have been working 100% from home for almost four months now,  and one of my former coworkers recently hit me with "How's it going working from home - I expected to see a blog post about it by now."

#GuiltTrip #Sigh

He's right - I had intended to blog about this before now, and as the three people who read my blog have noticed by now,  I have produced a whopping *one* blog post since starting at Ntirety.


#ReadyOrNotHereWeGo

 
I have noticed some of the benefits that many people report about Telecommuting:

More time with my wife and our three little boys - when I drove into the office every day (or at least most days,  since my last job did allow me to occasionally work from home) the only meal I usually had with my family was the evening meal (we call it dinner).  I was almost always out of the house before everyone else was up and around, so I didn't have breakfast with my family (and often didn't really have breakfast at all,  or at least not a good breakfast - more on that in a bit).  Lunch was never at home - maybe once a month I would come home at lunchtime and bring home takeout.  Now I have all three meals at home with my family every day.  


Another benefit in this area is the eliminated commute.  Driving to my office at my last job was a 30-45 minute commute depending on what time of day I was driving.  Combined with the time spent preparing to leave and the time spent setting up in my cube every day (bonus benefit - no cube now), I was spending about two hours a day going back and forth.  Now that time is spent helping my wife get the kids ready in the morning and either cooking dinner or distracting the kids so my wife can cook.  :)


One final benefit in this area is the reduced/non-existent travel.  My last position was supposed to be about 50% remote managed services (from our local office) and 50% on-site consulting, and they were definitely honest and up-front about that.  To be fair they did a good job of allowing me to work more like 80/20, but it still meant one week every couple months I was away from home at a client site.  The other downside was that while it averaged out to a week every other month, it was more streaky than that, with two weeks coming in one month and then no travel for three months straight.  The trade-off for my lighter travel was that some of my colleagues willingly traveled much more frequently, to the tune of 3 weeks or more every month.  It quickly became clear that there was no real advancement available within the company unless you were willing to travel more than I would be comfortable doing.


At Ntirety, all of my work is WFH - in my four months I have been away from home for one week, when I visited Boston (the home base of Ntirety) during my first week on the job.  The company line is that we will do a week at the mother-ship 2-3 times per year, always with significant notice (as opposed to my last job, where you would often find out on Thursday or Friday that you were flying out to a client on Sunday afternoon.)


Decreased expenses - aside from what is probably the most obvious upside - less gas consumed and fewer miles on our little Chevy Malibu - I have found I spend less discretionary money in other ways as well.


When I went to the office, I ate out for breakfast and/or lunch - takeout or in restaurants - at least three or four times a week and often even more.  Now I never eat out by myself,  and combined with a new effort in our family to eat at home more often, we find ourselves only eating out about once a week (some weeks not at all), and more importantly, we don't find ourselves missing it much!
 

The other expense that has come down is our grocery bill.  While that may seem counter-intuitive since we are eating at home more,  I have realized that before I would stop at a store on the way home 2-3 times a week to grab something,  which often resulted in picking up something extra as well.   Now that I don't go out every day, those more expensive "quick trips into the store" have virtually disappeared.

Increased flexibility - my previous employer was good about allowing me the flexibility to go to doctor's appointments, etc., even as we went through the multitude of appointments that make up the pregnancy and birth of our third child last year, but I was still in the office every day, basically 7-4 or 8-5.


Now when I take a break to go to the restroom or get a drink,  I can spend a minute to talk to my wife,  or to throw the clothes from the washer into the dryer (our laundry closet is upstairs near my home office).

--


Of course as with everything,  there have been some minor downsides to the new situation as well:


Less social media and blogging presence - this is another item that may seem counter-intuitive,  since most articles and blogs about Telecommuting talk about the importance of the "virtual water cooler" to stay connected to the outside world.


At Ntirety we use Skype as an instant messaging tool, and I work with two MCMs (although one of them did recently leave to form his own consultancy).  I have found that the interaction I was missing at work in the past - the lack of which was driving me heavily onto Twitter - is more present now in my work relationships, and that has decreased my Twitter presence.


Two other things contribute to my decreased presence,  one of which relates to my new position and one of which relates to the changes in my family (read: having three kids in a little over three years).
My new WFH situation and its benefit of spending more time with my family has decreased my time spent working on my blog.  I used to spend a little time at the start and end of each day in the office compiling ideas and nibbling away at blog posts, especially at the end of the day if I knew traffic for the commute was going to be bad.  Now with my 20-30 *second* commute at the end of the day (down the stairs to the first floor), usually with little traffic other than dodging a cat on the way down, I find myself in more of a hurry to get out the door and "home" to my family.  While I can apply a little self-discipline and overcome this to blog more frequently (as I hope to do), it still takes additional effort.

The other thing - the family thing - that keeps me from being on Twitter as much as I used to be is my decreased attendance at SQLSaturdays and other events.  Having a growing family with multiple small children (now 4, 2.5, and 1 years old) has made it harder to justify to myself spending extra time away, and this makes me less interested in what's going on on Twitter - partly because I don't need the event information, but also because there's a tiny little piece of me that is jealous that I'm not traveling to remote SQLSaturdays, etc.  Maybe this will fade over time, and I know I need to be more present to take part in my #sqlfamily, so I plan to work on this as well in the coming months.


--


All in all, the WFH experience has been a very positive one, and I highly recommend it to anyone who meets the following criteria:

  • You are self-directed, able to work without constant direction from your supervisor.
  • You can handle not seeing your co-workers and boss every day.
  • Most importantly, you have a door to close when needed - I don't always work with the door to my office shut, but there is definitely time every day when I close it.

The items I need to work on in the next six months:

  • Re-invigorated online presence on Twitter, both for my own benefit and for that of the #sqlfamily
  • Increased blogging - my blogging decreased even while I was at House of Brick, but as mentioned above it has been almost non-existent in the last three months - my goal is to get to a post a week, even if it is a one-page "micro-post."

Further Bulletins as Events Warrant!



Tuesday, February 11, 2014

T-SQL Tuesday #051: Always Hedge Your Bets

It's time for T-SQL Tuesday again and this month's host is my former co-worker, the @SQLRNNR himself, Jason Brimhall.  Jason's chosen theme is Poker Face, asking us to describe situations where someone took a bet that was just a little too risky for our liking.

When I think about users doing risky (AKA boneheaded) things, it almost always comes back to backups - or more accurately, the *lack* of backups.

"We are running a major migration *and* upgrade of our credit card processing system this weekend, but we only have a four hour window, so we won't take backups - we can just revert to last night's Windows backups, right?"

"We don't backup our system databases because they aren't important (even though they have hundreds of SSIS packages in msdb)."

"We backup our databases to the local X: drive, but we don't sweep them off to the network or an appliance - taking the backup is good enough, right?"

"We take full backups every single day and log backups every hour - but we don't have time or storage to test them."

To me, this last one is the most dangerous "risky thing" of all, because in my experience it is easily the most pervasive problem related to backups.  Almost all DBAs take regular backups of most or all of their databases, but very few perform regular test restores.

As Paul Randal (blog/@PaulRandal) noted in his excellent blog post (I don't know who said it first, because it is credited to too many different SQL Server professionals to count):

"You need to design a restore strategy, not a backup strategy."

One of the most important parts of a restore strategy is testing your backups.  Unfortunately, in my 13+ years as a SQL Server DBA (including 4+ years as a Managed Services DBA/Consultant), I can count on one hand the businesses/clients that had a regular restore test environment and schedule, and even those businesses only did it for a handful of "critical" databases.
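
Even a bare-minimum check is better than nothing.  As a very rough sketch (the file path here is hypothetical, and this is no substitute for an actual test restore), you can at least confirm that a backup file is readable:

-- Quick sanity check on a backup file - the path is hypothetical.
-- VERIFYONLY confirms the file is readable and validates checksums
-- (if the backup was taken WITH CHECKSUM), but it does NOT prove the
-- database will actually restore - only a real test restore does that.
RESTORE VERIFYONLY
FROM DISK = N'X:\Backups\SalesDB_Full.bak'
WITH CHECKSUM;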

Why does this happen?  I can think of a couple "reasons" (excuses) I have heard over the years:

Excuse #1 - "We can't afford the servers/time/licenses/whatever to test our backups."

BZZZZZ - False!  In reality you can't afford *not* to test your backups.  Especially in today's increasingly virtualized world, it is easier and easier to spin up more servers, and this often problematic fact (which can lead to server sprawl and other manageability issues for the DBA) can be leveraged in a positive way to create a backup/restore test environment that can be brought up and down or cloned with a mouse click.

As Brent Ozar (blog/@BrentO) notes in his post "Dev, Test and Production SQL Server environments," the ideal situation is to perform nightly restores of your PROD environment into QA (with the limitations related to sensitive data, etc. that Brent describes), which not only keeps current data in your QA environment but also exercises your backup/restore process.
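
As a rough sketch of what that nightly refresh might look like (the server, database, logical file, and path names are made up, and this leaves out the data-scrubbing step Brent describes):

-- Hypothetical nightly refresh of QA from last night's PROD full backup.
-- Database, logical file, and path names are examples only.
RESTORE DATABASE SalesDB_QA
FROM DISK = N'\\BackupShare\PROD\SalesDB_Full.bak'
WITH MOVE N'SalesDB_Data' TO N'D:\SQLData\SalesDB_QA.mdf',
     MOVE N'SalesDB_Log' TO N'L:\SQLLogs\SalesDB_QA.ldf',
     REPLACE,     -- overwrite the existing QA copy
     STATS = 10;  -- progress message every 10%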

Paul Randal ("Mr. DBCC") notes in his thorough blog series on "CHECKDB From Every Angle" that an additional benefit of performing this type of backup/restore testing is the ability to run your regular CHECKDB - BTW - you do run regular CHECKDB's on all of your databases, user and system, right?  That's another poor DBA practice for another blog post - many DBAs only run CHECKDB once in a while or only when something goes wrong (which is usually too late).  You need to run CHECKDB as often as your system can handle it - daily is great but weekly is definitely better than many DBAs do.  As Paul succinctly puts it in SQLskills's great Immersion Event classes, your data is only as clean as of the time of your last successful CHECKDB. </rant>

As I was saying, with a regular backup/restore testing process in place, you can run DBCC CHECKDB against that restored copy, and a clean result will show you that your PROD database is clean *as of the time of the backup*.  This last part is especially relevant - if your PROD backup runs at 10pm and you restore it to QA at 4am and get a clean CHECKDB against QA, but then find out later that morning that there was a problem at 1am, your clean CHECKDB doesn't mean anything other than helping put a box around the problem - the corruption occurred sometime between the 10pm backup that produced the clean CHECKDB and 1am when the problem was discovered.
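
Tying it together, the check against the restored copy is just a plain CHECKDB (using the same hypothetical database name as the restore sketch above):

-- Run CHECKDB against the restored QA copy - the name is hypothetical.
-- A clean result means PROD was clean as of the time of that backup.
DBCC CHECKDB (N'SalesDB_QA')
WITH NO_INFOMSGS, ALL_ERRORMSGS;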

Excuse #2 - "We only need to test restores on our critical databases, right?  The other databases don't mean as much."

BZZZZZ - False!  At the risk of another rant, this leads me to another common misconception many DBAs/managers/carbon-based lifeforms have - "It's just DEV, so it doesn't matter."

It took me a few years to get there, but for some time now my take on this scenario has been:

"Every system is PROD to somebody."

What about a DEV system?  It's PROD for the development team - and heaven help you when the DEV databases that haven't been backed up for six months (you know, the bi-annual DEV backup scheme) go down and lose all of the code for the latest release that was supposed to be checked into Source Control, but you know, they didn't have time and it shouldn't matter because the DBA backs up the databases, RIGHT?  To management and other IT teams, it is always the Default Blame Acceptor's fault - and you are kidding yourself if you think otherwise.

What about a QA system?  It's PROD to the QA team and their testers - and unless you have the nightly backup/restore to QA cycle described above in place, do you know how long it will take to refresh QA from PROD?  Especially if you have to scrub out the sensitive data - a restore of an already-scrubbed backup of QA will almost certainly be significantly faster than a "scrub & refresh" from PROD. How many hours of lost QA work will there be, possibly impacting an upcoming release date, before QA is back online?

I know these comments are mostly written from a development shop point of view, but even if you don't build your systems - you just buy third-party software for everything - you probably still have (or should have) at least one layer of DEV/TEST/QA systems for testing regular patches and new vendor code releases, and the same statements made above apply to those systems as well.  Consider this comment that could easily cost you your job: "Sorry, Mr. CIO, but the SharePoint upgrade for tomorrow can't happen because the QA team hasn't finished their work yet and QA is down with no database backups."

--

To make a long story short (I know - too late) - without regular test restores, your backups do not provide much of a guarantee of recoverability.  (Even successful test restores don't guarantee recoverability 100%, but they get you much closer.)

WHATEVER YOU DO - DON'T STOP TAKING BACKUPS JUST BECAUSE YOU CAN'T REGULARLY TEST THEM - THIS WOULD BE WRONG-WRONG-WRONG.

Instead, use this as a reminder that backups are good, but backups with test restores are much, much better!