One DBA's Ongoing Search for Clarity in the Middle of Nowhere


*or*

Yet Another Andy Writing About SQL Server

Thursday, May 29, 2014

Handling Perfmon Logs with Relog


In a recent situation, a client tasked me with checking one of their servers, which they believed to be grossly over-provisioned.  Since the server was a VM, they knew they could easily pull back both CPU and RAM if they could determine the server's true needs.

One of the items I regularly recommend to my clients is a "permanent" Perfmon collector log, where a basic set of counters is collected every five minutes and the log file is cycled every day.  This generates 1.5MB-2.0MB of files daily (a trivial amount of storage even if maintained for several years), and daily cycling makes the logs easier to handle when troubleshooting issues.  It also gives you something most people don't have - a baseline over time - but using it this way requires some serious finagling, as you will see.
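For reference, creating such a collector from the command line looks something like this - a minimal sketch, where the collector name, paths, and counter list file are all hypothetical placeholders you would adjust to your own environment:

logman create counter SQLServerPerf -f bin -si 00:05:00 -cf C:\PerfLogs\counters.txt -o C:\PerfLogs\SQLServerPerf

...where counters.txt is a text file listing one counter path per line (for example, \Processor(_Total)\% Processor Time).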

--

The way I set up the collector by default, it writes to a file in a designated directory.  I use a couple of batch files that call the logman command to stop and start the collector, plus a pair of Windows Scheduled Tasks: one to stop and restart the collector each night (effectively cycling to a new output file) and one to start the collector on server start-up (so that a reboot doesn't kill the collector altogether).  I know there are ways to configure a collector to cycle to new files each day, but they are specific to your version of Windows, while the batch file/scheduled task method works across O/S versions.  A sketch of the batch files and task commands follows below.
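Here is roughly what those pieces look like - a minimal sketch, assuming the collector set is named SQLServerPerf and the batch files live in C:\PerfLogs (all names and paths are placeholders):

REM CycleCollector.cmd - stopping and restarting forces the collector to roll to a new output file
logman stop SQLServerPerf
logman start SQLServerPerf

REM StartCollector.cmd - used by the start-up task so a reboot doesn't leave the collector stopped
logman start SQLServerPerf

REM Scheduled tasks to run the batch files nightly and at boot
schtasks /create /tn "Cycle Perfmon Collector" /tr "C:\PerfLogs\CycleCollector.cmd" /sc daily /st 00:00 /ru SYSTEM
schtasks /create /tn "Start Perfmon Collector" /tr "C:\PerfLogs\StartCollector.cmd" /sc onstart /ru SYSTEM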

--

The net effect of this method is that you end up with a directory with a large stack of blg files like this:


You can see a file for each day plus a new file any time the server restarts.  This is great when you are looking for information about what happened on the night of 05/24/2014, but not so much when you need to know what happened over the last thirty days.

Enter relog!

Relog is a command-line utility for manipulating Perfmon output files (either .blg or .csv).  It is frequently used to filter existing files by counter or to convert .blg files into .csv files, both of which are described in this blog post and this one by John Pertell (blog/@jayape).  A lesser-known use of relog is its ability to combine individual files into a single larger file, like this:

relog day1.blg day2.blg day3.blg -f BIN -o Day1_2_3.blg

...which would combine the three files day1.blg, day2.blg, and day3.blg into a single larger file, Day1_2_3.blg.
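For comparison, the filter and convert usages mentioned above look like this - a quick sketch where day1.blg and counters.txt (a text file listing one counter path per line to keep) are hypothetical names:

relog day1.blg -cf counters.txt -f BIN -o day1_filtered.blg
relog day1.blg -f CSV -o day1.csv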

Combining files is a necessary evil for many types of analysis, since it is difficult to wade through multiple Perfmon log files at once.  Most analysis tools claim to support multiple input files, but most of them also have vague disclaimers that amount to YMMV.  The most reliable way to handle the situation is to use relog to combine the files before beginning the analysis.


In the given situation, I had 43 files covering a little more than a month to analyze.  I hadn't combined this many files at once before, but I didn't think it would be any big deal.  I used Notepad to mock up the relog command (not for the faint of heart):


C:\PerfLogs\Blog>relog Ntirety-SQLServerPerf_000066.blg Ntirety-SQLServerPerf_000067.blg Ntirety-SQLServerPerf_000068.blg Ntirety-SQLServerPerf_000069.blg Ntirety-SQLServerPerf_000070.blg Ntirety-SQLServerPerf_000071.blg Ntirety-SQLServerPerf_000072.blg Ntirety-SQLServerPerf_000073.blg Ntirety-SQLServerPerf_000074.blg Ntirety-SQLServerPerf_000075.blg Ntirety-SQLServerPerf_000076.blg Ntirety-SQLServerPerf_000077.blg Ntirety-SQLServerPerf_000078.blg Ntirety-SQLServerPerf_000079.blg Ntirety-SQLServerPerf_000080.blg Ntirety-SQLServerPerf_000081.blg Ntirety-SQLServerPerf_000082.blg Ntirety-SQLServerPerf_000083.blg Ntirety-SQLServerPerf_000084.blg Ntirety-SQLServerPerf_000085.blg Ntirety-SQLServerPerf_000086.blg Ntirety-SQLServerPerf_000087.blg Ntirety-SQLServerPerf_000088.blg Ntirety-SQLServerPerf_000089.blg Ntirety-SQLServerPerf_000090.blg Ntirety-SQLServerPerf_000091.blg Ntirety-SQLServerPerf_000092.blg Ntirety-SQLServerPerf_000093.blg Ntirety-SQLServerPerf_000094.blg Ntirety-SQLServerPerf_000095.blg Ntirety-SQLServerPerf_000096.blg Ntirety-SQLServerPerf_000097.blg Ntirety-SQLServerPerf_000098.blg Ntirety-SQLServerPerf_000099.blg Ntirety-SQLServerPerf_000100.blg Ntirety-SQLServerPerf_000101.blg Ntirety-SQLServerPerf_000102.blg Ntirety-SQLServerPerf_000103.blg Ntirety-SQLServerPerf_000104.blg Ntirety-SQLServerPerf_000105.blg Ntirety-SQLServerPerf_000106.blg Ntirety-SQLServerPerf_000107.blg Ntirety-SQLServerPerf_000108.blg -f BIN -o Combined.blg

...and went to run it from the command prompt.  Much to my surprise I received the following:


At first the error ("The specified log file type has not been installed on this computer") threw me and made me investigate whether one of the files was corrupted in some way.  All of the files were roughly the same size, and the days that had multiple files (due to server restarts) still combined to about the same size.  I successfully combined smaller subsets of 2, 5, and 10 files without issue.  At this point I turned to my Google-Fu and quickly found someone with a similar problem.  Microsoft employee Dustin Metzgar (blog/@DustinMetzgar) had a blog post from August 2013, "Performance Monitor Issues", that described my situation exactly.  Issue #4 in the post is "Combining several BLGs into one is limited to 32 files and has poor error messages" - sound familiar?  Sure enough, when I modified my relog command to include only 32 files, it worked!

The item that Dustin doesn't address is how to handle the excess files.  I found that with a little creativity (very little) I was able to use multiple relog commands to combine groups of files into larger files and then combine those larger files into one final large file:

C:\PerfLogs\Blog>relog Ntirety-SQLServerPerf_000066.blg Ntirety-SQLServerPerf_000067.blg Ntirety-SQLServerPerf_000068.blg Ntirety-SQLServerPerf_000069.blg Ntirety-SQLServerPerf_000070.blg  Ntirety-SQLServerPerf_000071.blg  Ntirety-SQLServerPerf_000072.blg  Ntirety-SQLServerPerf_000073.blg  Ntirety-SQLServerPerf_000074.blg  Ntirety-SQLServerPerf_000075.blg  Ntirety-SQLServerPerf_000076.blg  Ntirety-SQLServerPerf_000077.blg  Ntirety-SQLServerPerf_000078.blg  Ntirety-SQLServerPerf_000079.blg  Ntirety-SQLServerPerf_000080.blg  Ntirety-SQLServerPerf_000081.blg Ntirety-SQLServerPerf_000082.blg Ntirety-SQLServerPerf_000083.blg Ntirety-SQLServerPerf_000084.blg Ntirety-SQLServerPerf_000085.blg Ntirety-SQLServerPerf_000086.blg Ntirety-SQLServerPerf_000087.blg Ntirety-SQLServerPerf_000088.blg Ntirety-SQLServerPerf_000089.blg Ntirety-SQLServerPerf_000090.blg Ntirety-SQLServerPerf_000091.blg Ntirety-SQLServerPerf_000092.blg Ntirety-SQLServerPerf_000093.blg Ntirety-SQLServerPerf_000094.blg Ntirety-SQLServerPerf_000095.blg Ntirety-SQLServerPerf_000096.blg Ntirety-SQLServerPerf_000097.blg  -f BIN -o Combined1.blg 



C:\PerfLogs\Blog>relog Ntirety-SQLServerPerf_000095.blg Ntirety-SQLServerPerf_000096.blg Ntirety-SQLServerPerf_000097.blg Ntirety-SQLServerPerf_000098.blg Ntirety-SQLServerPerf_000099.blg Ntirety-SQLServerPerf_000100.blg Ntirety-SQLServerPerf_000101.blg Ntirety-SQLServerPerf_000102.blg Ntirety-SQLServerPerf_000103.blg Ntirety-SQLServerPerf_000104.blg Ntirety-SQLServerPerf_000105.blg Ntirety-SQLServerPerf_000106.blg Ntirety-SQLServerPerf_000107.blg Ntirety-SQLServerPerf_000108.blg -f BIN -o Combined2.blg 


C:\Perflogs\Blog>relog Combined1.blg Combined2.blg -f BIN -o Combined.blg


I was then able to sift through the combined.blg file for the information covering the entire period easily.  After manual examination, I next loaded the blg file into the Performance Analysis of Logs (PAL) Tool for further analysis, but that is the subject for another blog post. :)

One item I haven't been able to run down yet is why the large output file (Combined.blg) has only 11795 samples while the two input files have 12659 samples total.  This would be expected behavior/by design if I had applied some type of filter in the relog command, but in this case I did not.  You can see in the previous screenshots (from the initial relogs) that the input samples match the output samples exactly in those cases.  (One possible explanation: files _000095 through _000097 appear in both of the intermediate commands above, so relog may be discarding the duplicated samples when the two intermediate files are merged.)  If I find an answer I will update the post with the information.

Hope this helps!

Wednesday, May 21, 2014

Why Won't My Clustered Instance Start?

Last weekend I had a client page me in the middle of the night (because, of course, these things always happen in the middle of the night) with the complaint that their SQL Server Failover Cluster Instance (FCI) wouldn't come online on their B node after multiple attempts.  When they failed it back to the A node, the instance came up without issue.

The error they reported was this:
Error: 17750, Severity: 16, State: 0. Could not load the DLL SQLJDBC_XA.dll, or one of the DLLs it references. Reason: 126(The specified module could not be found.). 
If you don't recognize it outright, SQLJDBC_XA.dll is part of the JDBC (Java Database Connectivity) driver specifically for handling distributed transactions.

At first I went digging for references to this specific error, but didn't see much except for the fact that the Application Log on the server had roughly 2.3 Gajillion (a technical term roughly equivalent to a Googolplex of Googolplexes) of these 17750 errors.

I tried to go to the SQL Server Error Log, but of course the server hadn't been changed from its default retention of six logs plus the current one, meaning there were only seven logs.  Since the client had tried multiple times to fail the instance back and forth, they had easily burned through those seven logs and aged out the original error.
PRO TIP: I strongly recommend changing the default log retention to a much larger number (30-45) and then setting up a job to cycle the error log every day by running sp_cycle_errorlog.  This works in all versions of SQL Server, makes the logs smaller and easier to open when you want to troubleshoot an issue, and - often most importantly - protects you from the issue described above, where multiple SQL Server service restarts age out your log too quickly, or at least mitigates it (if you restart your instance 46 times, you will still be stuck).
To change the setting, right-click on "SQL Server Logs" in SSMS and select Configure.  Check the box for "Limit the number of error log files before they are recycled" and fill in the Maximum number of error log files.  Many people are fooled into believing there is no limit because this box isn't checked by default, but if it isn't checked all you get is the *default* limit of six. #TheMoreYouKnow
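If you prefer to script it, the equivalent T-SQL looks roughly like this - a hedged sketch; the retention value of 45 is just an example, and the xp_instance_regwrite call edits the same NumErrorLogs registry value the SSMS dialog changes:

USE master;
GO
-- Retain 45 error logs instead of the default six
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', N'REG_DWORD', 45;
GO
-- Cycle the error log - schedule this in a daily SQL Agent job
EXEC sp_cycle_errorlog;
GO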
I did the simple thing - I searched the B node for the file SQLJDBC_XA.dll to see if it was in the wrong directory or simply didn't exist - surprise surprise, it didn't exist.  The other possibility with this particular error (for a DLL file anyway) would be that it wasn't registered with Windows, but in this case it wasn't there at all.

I looked at the A node and searched for the file and there it was:


This showed me that the problem was probably related to a piece of application code installed on the A node, which I found without too much effort:


 I then looked on the B node and found that this ATG application wasn't there:



 No wonder the application DLL wasn't there!

As you can see the problem is that the application code was not installed on the B node and therefore the relevant database couldn't be started.  Failing the instance back to the A node allowed everything to run because the application code had been installed there.  The Windows Application Log also showed a particular database related to the situation (with a suspiciously similar name):
Date     5/17/2014 2:08:16 AM
Log     SQL Server
Source     spid7s
Message
Recovery is complete. This is an informational message only. No user action is required.
--
Date     5/17/2014 2:08:16 AM
Log     SQL Server
Source     spid51
Message

Starting up database 'atg'.
-- 
Date     5/17/2014 2:08:16 AM
Log     SQL Server
Source     spid63
Message
Attempting to load library 'SQLJDBC_XA.dll' into memory. This is an informational message only. No user action is required.
--
Date     5/17/2014 2:08:16 AM
Log     SQL Server
Source     spid63
Message
Error: 17750, Severity: 16, State: 0.
--
Date     5/17/2014 2:08:16 AM
Log     SQL Server
Source     spid63
Message
Could not load the DLL SQLJDBC_XA.dll, or one of the DLLs it references. Reason: 126(The specified module could not be found.)
 ...which led into a near-infinite cycle of Attempting to load - Could not load - Attempting to load - Could not load - etc. etc. etc.  Only by going all the way back to the instance restart could you see the "Starting up database" message.

--

Lessons learned for the client (and all of us):

  • Don't install application code on a SQL Server unless you absolutely have to.  Ever.  When you do have to, make sure all impacted SQL Servers (cluster nodes, mirroring hosts, etc.) have the same version of the application with all of the same options enabled.  (This last part gets people as well - the application is installed on both nodes, but one node is installed with a particular flag enabled that adds extra DLLs to the server, which causes the same problem seen above.)
  • Set the SQL Server error log to retain extra history so that your DBAs and other impacted parties have a better chance of seeing the true cause of the initial failure - much of the information is in the Windows Application Log, but not all of it, and it is easier to find the SQL Server-specific items in the SQL Server log.  Also, on many servers the Application Log cycles more frequently than every thirty days (although you can increase the size of the Application Log as well, which can also be a good idea).
  • Most importantly - call for help before trying the same thing over and over.  When I try something once and it doesn't work, I often try it a second time to make sure I didn't make a simple fat-finger mistake the first time.  After that, throw up the warning flag and ask for a second opinion - another DBA, #sqlhelp on Twitter, or your Google-Fu - whatever gets you one.
Hope this helps!


Tuesday, May 13, 2014

T-SQL Tuesday #54 - Interviews and Hiring



This month’s T-SQL Tuesday is hosted by Boris Hristov (blog/@BorisHristov) and his chosen topic is “Interviews and Hiring” – specifically the interviewing and hiring of SQL Server professionals (invitation post: http://borishristov.com/blog/t-sql-tuesday-54-interview-invitation/).  This is an interesting topic to me since I have given dozens of interviews over the years and conversely have taken very few – probably fewer than ten.

As an interviewee it is always interesting to watch for the curve ball or the trick question, and it is actually a little disappointing those few times it doesn’t come. 

Most of these curveball questions are pretty mundane but are thrown in among the actual technical questions, often to try to break the interviewee's rhythm:
  1. “What is your SSRS experience?”
  2. “What is the difference between putting filter criteria in the WHERE clause and in the JOIN clause?”
  3. “Why is the sky blue?” (Blue light has a shorter wavelength and therefore is scattered more by particles in the air – or maybe “Because IBM bought it,” although this becomes less and less relevant as IBM becomes less and less relevant.)
Others are meant to be nerd-funny or start pseudo-religious arguments:
  1. “Star Trek or Star Wars?” (Trek – DUH!)
  2. “What...is the air-speed velocity of an unladen swallow?” (“African or European?”)
  3. My personal favorite: “Why can’t you cross the streams?” (“It would be bad” is acceptable, but the better answer is “Try to imagine all life as you know it stopping instantaneously and every molecule in your body exploding at the speed of light.”)
As an interviewer my curveball questions usually come as a follow-up to some answer given to a straight question – for example, when I was working at the University of Nebraska-Lincoln and giving interviews for our Business Computing group: 
  1. Me: “Why do you want to work for us?”
  2. Candidate: “I have heard you have good educational opportunities, and because I want the stability of working for a public institution.”
  3. Me: “Have you done your research?  We have had large budget cuts the last three years running – do you consider that stable?”
Questions like this are useful because they not only put the interviewee a little off-rhythm but they also raise a relevant talking point for further discussion.

I also like asking the funny/nerd questions (“Which of the thirteen dwarves of Thorin’s company are you most like, and why?”) because it signals to the interviewee that I am a humorous (and at least moderately geeky) employee and that the workplace has that tone as well (if it didn’t have that tone, I wouldn’t be there in the first place to interview you!)
http://img1.wikia.nocookie.net/__cb20110723162158/lotr/images/2/21/Thorin_and_Company.png
You may have noticed that I have brought up the concept of throwing an interviewee off-balance or off their rhythm a few times during this post – I want to reinforce that this isn’t because I am mean or a jerk, as many interviewees seem to think after an interview like this.  After a relatively successful response to an “off-balance question” I always tell the interviewee that I see an important part of any DBA’s job as the ability to respond to the unexpected, even while focusing intently on something else.  We spend much of our careers - whether as a company DBA, a professional consultant, or a managed services DBA - shifting gears at a moment’s notice.  It never fails: I am working on a major year-long project with a deliverable due Friday, and I suddenly have to respond to a page for a full disk – and then five minutes later another page for a server down!

Another important related skill for a DBA that many interviews don't cover is determining which fire is more important – the answer to this is always “It Depends” – not just because that is the right answer to most IT questions 95% of the time, but because it really does depend.  Maybe the server down is for an internal infrastructure server at Client A that is important but can wait an hour, while the Disk Full from Client B is on their monster OLTP cluster that runs all of their online business – suddenly Disk Full sounds pretty important, doesn’t it?

I have always been a fan of the soft skills questions such as the ones described here over the hard technology questions.  While we do have to ask technology questions to know whether the interviewee understands the difference between a server and an instance, several key items that were new in SQL version XXX, or why we should almost always configure Max Server Memory, I am not a fan of the BOL-style questions – “what is the third parameter by position of sp_add_jobschedule?” or “What are the Registry Keys that you can edit to enable SQL Agent mail from the command line, and what is the extended stored procedure you use to do so?” I have always believed that as long as you understand the concepts and the processes, I can teach you the specifics of the technology.

The technology changes too fast to keep up with every little bit and byte that changes between versions – I would rather hire someone who has spent 10-15 years troubleshooting, researching, and figuring out how to make things work by the seat of their pants than the 10-15-year person who happens to know the answers to the specific BOL questions that happen to be in the interview but doesn't know how to do basic online research (basic = Google/Bing), doesn't read blogs or newsgroups, and doesn't know how to use Twitter to ask for #sqlhelp.

Do you have to blog and tweet to be a successful Senior DBA?  Definitely not - but I find as I spend more time as a Senior DBA and interacting with other Senior DBAs, that more and more of them do blog and tweet (even if just for #sqlhelp) and even if they don't blog themselves they do read the top blogs.

Just because you do know the third parameter of sp_add_jobschedule – who cares?  99% of the time (or more) I can look it up if I really need it, and if I can’t look it up, what are the chances that, of the billions and billions of things there are to know about SQL Server, I will know the one piece of minutiae that is relevant?

(Oh - by the way – the third positional parameter of sp_add_jobschedule in SQL 2012 is @name, and the registry keys for enabling SQL Agent mail are HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent\UseDatabaseMail and HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent\DatabaseMailProfile, and they would be edited using master.dbo.xp_instance_regwrite.)  #IHadToLookThemUp  #TheMoreYouKnow
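(And in case you ever do need them, the edits themselves look roughly like this - a sketch only, with a hypothetical profile name, and SQL Agent needs a restart to pick up the change:)

-- Tell SQL Agent to use Database Mail
EXEC master.dbo.xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
    N'UseDatabaseMail', N'REG_DWORD', 1;

-- Point SQL Agent at a Database Mail profile (profile name is a placeholder)
EXEC master.dbo.xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
    N'DatabaseMailProfile', N'REG_SZ', N'MyMailProfile';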

Usually I end my posts with “Hope This Helps”, but in this case I will end with this:

https://www.flickr.com/photos/zen/2424581

Wednesday, May 7, 2014

Help - I Need a List of my Maintenance Plan Jobs *and* Schedules!

UPDATE - Jason Brimhall (blog/@sqlrnnr) pointed out after reading this post that there is a more inline way to do a lot of this work as shown in his post here.  Jason's post relates to SSRS reports (very similar logic since they also are SQL Agent jobs) and could easily be applied to this same maintenance plan problem - thanks Jason!


I have written over and over about how I dislike SQL Server's canned SSIS Maintenance Plans and how I strongly prefer Ola Hallengren's Maintenance Solution.  Another popular option is the set of index maintenance scripts from Michelle Ufford (blog/@sqlfool), which also perform intelligent re-indexing.  The key advantage of Ola's code to me is that it is a full maintenance suite, handling backups, CHECKDB, *and* index maintenance.
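For context, typical calls into Ola's procedures look something like this - a minimal sketch with example parameter values (the backup directory is a placeholder; see ola.hallengren.com for the full parameter list):

-- Full backups of all user databases, verified, with old files cleaned up after 24 hours
EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = 'D:\Backup',
    @BackupType = 'FULL',
    @Verify = 'Y',
    @CleanupTime = 24;

-- CHECKDB against all databases
EXECUTE dbo.DatabaseIntegrityCheck
    @Databases = 'ALL_DATABASES';

-- Intelligent index maintenance on user databases using the default fragmentation thresholds
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES';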

All of that aside, as a Managed Services DBA I still have to frequently deal with Maintenance Plans and their repercussions on my innocent unsuspecting clients.

I recently received a request from one of my clients for all of their maintenance plan information and the related jobs for backup tasks.  I had a script to return maintenance plan information (using the updated tables and views present in SQL Server 2005+), but I didn't have an easy way to reference the jobs and job schedules - sure, I knew how to pull basic info from sysjobs and sysjobschedules, but doing so requires a complete mess of bit-wise math to decipher the multiple values packed into single fields (bad Microsoft!), a pattern prevalent throughout the job tables and views.

A little Google-Fu brought me to Michelle's blog and an amazing entry where she already has all of the bit-wise-ness (bit-wisdom?) worked out in a very elegant query to pull the relevant information.  Her original blog post is about generating disable statements for the jobs, but I slightly modified her query to return a few extra fields and then hacked it together with my pre-existing maintenance plan information query to return the data I needed for my client:
/*
Backup information from maintenance plans - jobs, schedules, etc.

Andy Galbraith @DBA_ANDY

MSSQL 2005+

Heavily borrows from http://sqlfool.com/2009/02/view-sql-agent-jobs-tsql/
for the original job schedule CTE and base query - thanks Michelle!

I modified Michelle's original query slightly and then added
the maintenance plan information to match the jobs to their
parent maintenance plans.

The filter that makes the query relevant to backup subplans is:

"and smpld.line1 like '%Back Up%'"

Commenting out or removing this line will display information about
all maintenance plan subplans and their enabled jobs
*/

Declare @weekDay Table
(
    mask  int
    , maskValue varchar(32)
);

Insert Into @weekDay
    Select 1, 'Sunday'  UNION ALL
    Select 2, 'Monday'  UNION ALL
    Select 4, 'Tuesday'  UNION ALL
    Select 8, 'Wednesday'  UNION ALL
    Select 16, 'Thursday'  UNION ALL
    Select 32, 'Friday'  UNION ALL
    Select 64, 'Saturday';

With myCTE
As (
    Select sched.name As 'scheduleName'
    , sched.schedule_id
    , jobsched.job_id
    , Case
        When sched.freq_type = 1
            Then 'Once'
        When sched.freq_type = 4 And sched.freq_interval = 1
            Then 'Daily'
        When sched.freq_type = 4
            Then 'Every ' + Cast(sched.freq_interval As varchar(5)) + ' days'
        When sched.freq_type = 8
            Then Replace( Replace( Replace((
                Select maskValue
                From @weekDay As x
                Where sched.freq_interval & x.mask <> 0
                Order By mask For XML Raw)
    , '"/><row maskValue="', ', '), '<row maskValue="', ''), '"/>', '')
        + Case When sched.freq_recurrence_factor <> 0
        And sched.freq_recurrence_factor = 1
            Then '; weekly'
    When sched.freq_recurrence_factor <> 0
            Then '; every '
            + Cast(sched.freq_recurrence_factor As varchar(10)) + ' weeks'
        End
        When sched.freq_type = 16
            Then 'On day '
            + Cast(sched.freq_interval As varchar(10)) + ' of every '
            + Cast(sched.freq_recurrence_factor As varchar(10)) + ' months'
        When sched.freq_type = 32
            Then Case
            When sched.freq_relative_interval = 1
                Then 'First'
            When sched.freq_relative_interval = 2
                Then 'Second'
            When sched.freq_relative_interval = 4
                Then 'Third'
            When sched.freq_relative_interval = 8
                Then 'Fourth'
            When sched.freq_relative_interval = 16
                Then 'Last'
    End +
    Case
        When sched.freq_interval = 1
            Then ' Sunday'
        When sched.freq_interval = 2
            Then ' Monday'
        When sched.freq_interval = 3
            Then ' Tuesday'
        When sched.freq_interval = 4
            Then ' Wednesday'
        When sched.freq_interval = 5
            Then ' Thursday'
        When sched.freq_interval = 6
            Then ' Friday'
        When sched.freq_interval = 7
            Then ' Saturday'
        When sched.freq_interval = 8
            Then ' Day'
        When sched.freq_interval = 9
            Then ' Weekday'
        When sched.freq_interval = 10
            Then ' Weekend'
    End
    +
    Case
        When sched.freq_recurrence_factor <> 0
        And sched.freq_recurrence_factor = 1
            Then '; monthly'
        When sched.freq_recurrence_factor <> 0
            Then '; every '
    + Cast(sched.freq_recurrence_factor As varchar(10)) + ' months'
    End
    When sched.freq_type = 64
        Then 'StartUp'
    When sched.freq_type = 128
        Then 'Idle'
     End As 'frequency'
    , IsNull('Every ' + Cast(sched.freq_subday_interval As varchar(10)) +
    Case
        When sched.freq_subday_type = 2
            Then ' seconds'
        When sched.freq_subday_type = 4
            Then ' minutes'
        When sched.freq_subday_type = 8
            Then ' hours'
    End, 'Once') As 'subFrequency'
    , Replicate('0', 6 - Len(sched.active_start_time))
        + Cast(sched.active_start_time As varchar(6)) As 'startTime'
    , Replicate('0', 6 - Len(sched.active_end_time))
        + Cast(sched.active_end_time As varchar(6)) As 'endTime'
    , Replicate('0', 6 - Len(jobsched.next_run_time))
        + Cast(jobsched.next_run_time As varchar(6)) As 'nextRunTime'
    , Cast(jobsched.next_run_date As char(8)) As 'nextRunDate'
    From msdb.dbo.sysschedules As sched
    Join msdb.dbo.sysjobschedules As jobsched
    On sched.schedule_id = jobsched.schedule_id
    Where sched.enabled = 1
)
Select DISTINCT p.name as 'Maintenance_Plan'
, p.[owner] as 'Plan_Owner'
, sp.subplan_name as 'Subplan_Name'
, smpld.line3 as 'Database_Names'
, RIGHT(smpld.line4,LEN(smpld.line4)-6) as 'Backup_Type'
, job.name As 'Job_Name'
, sched.frequency as 'Schedule_Frequency'
, sched.subFrequency as 'Schedule_Subfrequency'
, SubString(sched.startTime, 1, 2) + ':'
    + SubString(sched.startTime, 3, 2) + ' - '
    + SubString(sched.endTime, 1, 2) + ':'
    + SubString(sched.endTime, 3, 2)
As 'Schedule_Time' -- HH:MM
, SubString(sched.nextRunDate, 1, 4) + '/'
    + SubString(sched.nextRunDate, 5, 2) + '/'
    + SubString(sched.nextRunDate, 7, 2) + ' '
    + SubString(sched.nextRunTime, 1, 2) + ':'
    + SubString(sched.nextRunTime, 3, 2)
As 'Next_Run_Date'
/*
Note: the sysjobschedules table refreshes every 20 min,
so Next_Run_Date may be out of date
*/
From msdb.dbo.sysjobs As job
Join myCTE As sched
On job.job_id = sched.job_id
join  msdb.dbo.sysmaintplan_subplans sp
on sp.job_id = job.job_id
inner join msdb.dbo.sysmaintplan_plans p
on p.id = sp.plan_id
JOIN msdb.dbo.sysjobschedules sjs
ON job.job_id = sjs.job_id
INNER JOIN msdb.dbo.sysschedules ss
ON sjs.schedule_id = ss.schedule_id
join msdb.dbo.sysmaintplan_log smpl
on p.id = smpl.plan_id
and sp.subplan_id =smpl.subplan_id
join msdb.dbo.sysmaintplan_logdetail smpld
on smpl.task_detail_id=smpld.task_detail_id
and smpld.line1 like '%Back Up%'
where job.[enabled] = 1
and smpld.line3<>''
Order By Next_Run_Date;
The output of the query will look like this (wrapped so it could be inserted at a readable font size):
As you can see from these results, this example instance has four Maintenance Plans, each with one sub-plan related to backups.  The "FULL SQLBKUP_DAILY_AllButSunday" maintenance plan runs weekly, every day except Sunday, once each day at 10:45pm.

Shout out again to Michelle for the CTE that compiles the job schedule bit-wise information - you saved me (and I'm sure many others) a lot of busywork!

--

Hope this helps!



Friday, May 2, 2014

Speaking at #SQLSat307 in Iowa City!

I just received confirmation I will be speaking at SQL Saturday 307 in Iowa City on June 7th!

http://www.sqlsaturday.com/307/eventhome.aspx 
I will be giving an updated version of my Ola Hallengren Maintenance Solution talk that I have given at a few previous SQL Saturdays, re-titled "Are You Maintaining Your SQL Servers?"


Are you performing all of the regular maintenance you should on your servers?

Do you not run CHECKDB because "it takes too long"?  How about index maintenance - does that take too long?

Do you really have the appropriate backups in place?

Do you have benchmark numbers in place to compare against when your server is "slow"?

Come discuss the answers to these and other questions - we will cover the free Maintenance Solution scripted tools from Ola Hallengren and how they can help you fix these and other problems with a little easy coding you can do yourself.  We will also discuss other tips and tricks to help you maintain and troubleshoot your SQL Server instances.

SQL Saturdays are awesome training and networking experiences - the full schedule hasn't been published yet, but there were multiple Microsoft MVPs on the submissions list.  For only a $10 lunch fee you can spend the day learning about SQL Server and networking with your peers.
Register now and I hope to see you there!