One DBA's Ongoing Search for Clarity in the Middle of Nowhere


*or*

Yet Another Andy Writing About SQL Server

Showing posts with label VLFs. Show all posts

Thursday, October 5, 2017

Toolbox - Fix Your FILEGROWTH

One of the items we usually flag in reports is FILEGROWTH by percentage.  It doesn't help that for most versions of SQL Server the default FILEGROWTH increment for one or both files (DATA and LOG) is 10%.


As you probably already know, the key flaw of percentage-based FILEGROWTH is that over time the increment grows larger and larger, causing the growths themselves to take longer and longer.  This is especially an issue for LOG files because they have to be zero-initialized before they can be used, causing excessive I/O and file contention while the growth is in progress.  Paul Randal (blog/@PaulRandal) describes why this is the case in this blog post.  (If you ever get a chance to see it, Paul also does a fun demo in some of his classes and talks on why zero initialization is important, using a hex editor to read the underlying contents of disk even after the old data is "deleted".)

As I mentioned above the catch to percentage-based growth is the ever-growing increment:


After 20 growths at 10% you are growing by 313MB at a time.  By 30 growths the increment is 812MB - getting close to the 1GB mark.  Depending on the speed of your storage this can cause significant delay.
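The arithmetic behind that runaway increment is simple compounding.  Here is a quick sketch (Python purely for illustration; the 512MB starting size is an assumption chosen to match the numbers above):

```python
def growth_increments(start_mb, pct, n_growths):
    """Simulate percentage-based FILEGROWTH: each growth adds
    pct of the file's *current* size, so increments compound."""
    size = start_mb
    increments = []
    for _ in range(n_growths):
        inc = size * pct
        increments.append(inc)
        size += inc
    return increments

# Hypothetical log file starting at 512MB with the 10% default:
incs = growth_increments(512, 0.10, 30)
print(round(incs[19]))  # 20th growth: ~313MB
print(round(incs[29]))  # 30th growth: ~812MB
```

A fixed-size increment avoids this entirely - the 30th growth costs exactly as much as the first.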

Another related issue is Virtual Log Files (VLFs), which I discuss here.  It is important to have a meaningful increment so that each growth isn't *too* small relative to the file size.

How do we fix this?  I found a script (and modified it, of course) that generates the ALTER DATABASE statements to set each FILEGROWTH increment to a size appropriate for the individual file's current size, based on the table shown in the script's header:

--

/*

FILEGROWTH Reset
Submitted by Andy Galbraith
02/04/2016

Script to reset all FILEGROWTH to a fixed increment based on their current size:
CurrentSize<1GB = 16MB
1GB<=CurrentSize<5GB = 128MB
5GB<=CurrentSize<100GB = 256MB
CurrentSize>=100GB = 512MB

Actual queries are about two-thirds of the way down at 'SET @Query' if you want to modify the size parameters

Modified from a script at http://www.sqlservercentral.com/scripts/Administration/99339/

Tested on MSSQL 2005/2008/2008R2/2012/2014

*/

SET NOCOUNT ON
USE master
GO

/* Create a Table for Database File Info */
IF OBJECT_ID('tempdb..#ConfigAutoGrowth') IS NOT NULL 
DROP TABLE #ConfigAutoGrowth

CREATE TABLE #ConfigAutoGrowth
(
DatabaseID INT,
DBName SYSNAME,
LogicalFileName VARCHAR(max),
FileSizeinGB decimal(10,2),
GrowthOption VARCHAR(12)
)

/* Load the Database File Table */
INSERT INTO #ConfigAutoGrowth
SELECT 
SD.database_id, 
SD.name,
SF.name, 
sf.size*8/1024.0/1024.0 as FileSizeinGB, 
CASE SF.status & 0x100000
WHEN 1048576 THEN 'Percentage'
WHEN 0 THEN 'MB'
END AS 'GROWTH Option'
FROM SYS.SYSALTFILES SF
JOIN SYS.DATABASES SD
ON SD.database_id = SF.dbid

/* Variable and Cursor Declarations */ 
DECLARE @name VARCHAR ( max ) /* Database Name */
DECLARE @DatabaseID INT /* Database ID */
DECLARE @LogicalFileName VARCHAR ( max ) /* Database Logical file name */
DECLARE @FileSizeinGB DECIMAL(10,2) /* Current File Size in GB */
DECLARE @GrowthOption VARCHAR ( max ) /* Current FILEGROWTH Type */
DECLARE @Query VARCHAR(max) /* Dynamic Query */

DECLARE DBCursor CURSOR FOR
SELECT DatabaseID, DBName, LogicalFileName, FileSizeinGB, GrowthOption
FROM #ConfigAutoGrowth

OPEN DBCursor

FETCH NEXT FROM DBCursor 
INTO @DatabaseID,@name,@LogicalFileName,@FileSizeinGB, @GrowthOption

WHILE @@FETCH_STATUS = 0
BEGIN
PRINT 'Changing AutoGrowth option for database ['+ UPPER(@name) +'] - current file size ' + cast (@FileSizeinGB as varchar)+'GB'

IF @FileSizeinGB<1 
SET @Query= 'ALTER DATABASE ['+ @name +'] MODIFY FILE (NAME = ['+@LogicalFileName+'],FILEGROWTH = 16MB)'--,MAXSIZE=UNLIMITED)'
IF @FileSizeinGB>=1 and @FileSizeinGB<5 
SET @Query= 'ALTER DATABASE ['+ @name +'] MODIFY FILE (NAME = ['+@LogicalFileName+'],FILEGROWTH = 128MB)'--,MAXSIZE=UNLIMITED)'
IF @FileSizeinGB>=5 and @FileSizeinGB <100
SET @Query= 'ALTER DATABASE ['+ @name +'] MODIFY FILE (NAME = ['+@LogicalFileName+'],FILEGROWTH = 256MB)'--,MAXSIZE=UNLIMITED)'
IF @FileSizeinGB>=100
SET @Query= 'ALTER DATABASE ['+ @name +'] MODIFY FILE (NAME = ['+@LogicalFileName+'],FILEGROWTH = 512MB)'--,MAXSIZE=UNLIMITED)'

PRINT @Query
--EXECUTE(@Query)

FETCH NEXT FROM DBCursor 
INTO @DatabaseID,@name,@LogicalFileName,@FileSizeinGB,@GrowthOption
END 

CLOSE DBCursor 
DEALLOCATE DBCursor 

DROP TABLE #ConfigAutoGrowth
GO

--

SELECT
    SD.database_id,
    SD.name,
    SF.name,
    CASE SF.status & 0x100000
    WHEN 1048576 THEN 'Percentage'
    WHEN 0 THEN 'MB'
    END AS 'GROWTH Option'
/* growth is stored as a page count for MB-based growth,
   but as a raw percentage for percentage-based growth -
   only convert pages to MB */
, size*8.0/1024.0 as SizeinMB
, CASE SF.status & 0x100000
  WHEN 1048576 THEN growth           /* percent */
  ELSE growth*8.0/1024.0             /* pages -> MB */
  END as Growth
FROM SYS.SYSALTFILES SF
JOIN SYS.DATABASES SD
ON SD.database_id = SF.dbid
GO

--

By default the script simply PRINTs the ALTER DATABASE statements for you to copy-paste into another window and execute, but you can uncomment the "EXECUTE(@Query)" line and the script will apply the changes automatically.
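For reference, the size-to-increment table the script implements boils down to a simple mapping (sketched here in Python; the function name is just for illustration):

```python
def target_filegrowth_mb(current_size_gb):
    """Map a file's current size to the fixed FILEGROWTH
    increment the script assigns (same tiers as its header)."""
    if current_size_gb < 1:
        return 16
    if current_size_gb < 5:
        return 128
    if current_size_gb < 100:
        return 256
    return 512

print(target_filegrowth_mb(0.5))   # 16
print(target_filegrowth_mb(50))    # 256
```

If those tiers don't fit your storage, adjust the 'SET @Query' statements in the script to taste.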

Hope this helps!


Thursday, January 21, 2016

Counting Your VLFs, or, Temp Tables Inside IF...ELSE Blocks

There are many many blog posts out there about Virtual Log Files (VLFs) - one of the absolute best is "8 Steps to Better Transaction Log Throughput" from Kimberly Tripp (blog/@KimberlyLTripp) of SQLskills as well as the several other posts she links to from that post - if you haven't read them, click the link right now and do so - my post will be right here when you get back.

--

VLFs are the logical units that make up your transaction log, and at the end of the day the TL;DR boils down to "usually, too many VLFs are bad" - they can decrease performance by functionally "fragmenting" your transaction log and slowing down everything transaction-log-related (so basically, everything).

(Phil Hartman Frankenstein meme - "Too Many VLFs BAD!")
VLF count is something that most vendors check during their health checks, and many SQL Server pros recommend it as well.  Of course "too many VLFs" is a relative term, with people throwing around numbers of 50 or 100 or 200 as their threshold of concern.
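For a rough sense of how counts climb: before SQL Server 2014 changed the formula, each auto-growth added a fixed number of VLFs based on the size of the growth chunk - 4 for small chunks, 8 for chunks up to 1GB, 16 beyond that (the thresholds from Kimberly's post).  A sketch of that rule, treating the exact boundary behavior as approximate:

```python
def vlfs_added_per_growth(chunk_mb):
    """Approximate VLFs created by one log growth under the
    pre-2014 formula (boundary handling is approximate)."""
    if chunk_mb < 64:
        return 4
    if chunk_mb < 1024:
        return 8
    return 16

# A log that auto-grew 200 times in 10MB chunks:
print(200 * vlfs_added_per_growth(10))  # 800 VLFs
```

This is exactly why tiny percentage-based growths on a small log are so dangerous - hundreds of small growths means hundreds of batches of new VLFs.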

--

The resource I have always fallen back on to run this check is the script from Michelle Ufford (blog/@sqlfool).  She created it back in 2010 and it is the basis for the VLF script included in Glenn Berry's (blog/@GlennAlanBerry) Diagnostic Information (DMV) Queries.

Michelle's query relies on the undocumented DBCC LogInfo command to gather its VLF data - DBCC LogInfo returns a row for each VLF, so the count(*) of that query gives the number of VLFs for the database.  The catch is that in SQL Server 2012, Microsoft added a column to the front of the resultset (RecoveryUnitID).  As the DBCC command is undocumented, this new column is undocumented as well.

Michelle's code uses INSERT...EXEC to populate a temporary table with the VLF info, and the addition of this extra column breaks the original script.  Glenn's versions of the scripts handle this issue easily since they are version-specific - in the SQL 2012/2014/2016 versions of the script, the temp table declaration is modified to include the extra RecoveryUnitID column, which allows the rest of the script to function as designed.

--

My problem is I wanted a version of the script that could be used across versions 2005+, and this presented a problem.  At first I tried to add an IF...ELSE block to the start of the script to handle the differing CREATE TABLE statements:

--

If (select LEFT(cast(serverproperty('ProductVersion') as varchar),2)) in ('8.','9.','10')
BEGIN
Create Table #stage
(
FileID int
, FileSize bigint
, StartOffset bigint
, FSeqNo bigint
, [Status] bigint
, Parity bigint
, CreateLSN numeric(38)
);
END
ELSE
BEGIN
Create Table #stage
(
RecoveryUnitID int /* This is the new column as of SQL 2012 */
, FileID int
, FileSize bigint
, StartOffset bigint
, FSeqNo bigint
, [Status] bigint
, Parity bigint
, CreateLSN numeric(38)
);
END

--


Regardless of the SQL version I tested, I received this:

Msg 2714, Level 16, State 1, Line 16
There is already an object named '#stage' in the database.

I played with it a little, including adding an IF EXISTS check to the beginning of the second block (and yes, I directed it at tempdb..#stage to reference the temp table) and none of it worked.  The root cause is that SQL Server resolves temp table names when it compiles the batch - both CREATE TABLE #stage statements are checked before either IF branch is evaluated, so they collide even though only one of them would ever run.  I poked around a little online and couldn't find a way to make it work - many people said it couldn't be done with temp tables, and that you should use a "regular" table or maybe a view instead.

My problem is that I am creating a script I want to run on lots of different servers across lots of environments, and I don't want to assume that the table name I am using doesn't already exist.  Is it likely that a client server will have a table named dbo.AndyGVLFCountReallyUniqueTableNameGUIDPurpleMonkeyDishwasher?  Well no, but you never know...  Also, many environments have rules about creating "real" objects without change control - even an object that will be created, exist for <30 seconds, and be dropped.

Besides, at this point it had become a challenge to make it work - there had to be a different way of looking at the problem.  I fiddled with a table variable solution and had no better luck, resulting in a similar "already exists" error.

I realized part of the problem was how my script was laid out.  I was checking for the lower version as my decision gate (IF 8/9/10 ... ELSE), and while that check needed to stay (I didn't want to hard-code 11/12/13 and have it break with future versions), the CREATE didn't need to be part of the check - I just needed to handle the fact that the down-level object can't have the offending column:

--

Create Table #stage
(
RecoveryUnitID int /* This is the new column as of SQL 2012 */
, FileID      int
, FileSize    bigint
, StartOffset bigint
, FSeqNo      bigint
, [Status]    bigint
, Parity      bigint
, CreateLSN   numeric(38)
);

If (select LEFT(cast(serverproperty('ProductVersion') as varchar),2)) in ('8.','9.','10')
ALTER TABLE #stage DROP COLUMN RecoveryUnitID

--


In this case, I create the table *with* the extra column before any version check, and then drop the column if the instance turns out to be down-level.

With this in hand, I was able to modify Michelle's script to run for all current versions of SQL:

--

/*

VLF Count Script

Modified from Michelle Ufford @sqlfool
http://sqlfool.com/2010/06/check-vlf-counts/

Added version check code due to changes in DBCC LOGINFO

Tested on MSSQL 2005/2008/2008R2/2012/2014

*/

/*
NOTE - the output of DBCC LogInfo adds an extra 
column as of SQL 2012 so there is a version check 
to drop that column for older versions
*/

Create Table #stage
(
RecoveryUnitID int /* This is the new column as of SQL 2012 */
, FileID int
, FileSize bigint
, StartOffset bigint
, FSeqNo bigint
, [Status] bigint
, Parity bigint
, CreateLSN numeric(38)
);

If (select LEFT(cast(serverproperty('ProductVersion') as varchar),2)) in ('8.','9.','10')
ALTER TABLE #stage DROP COLUMN RecoveryUnitID

Create Table #results(
Database_Name sysname
, VLF_count int 
);


Exec sp_msforeachdb N'Use [?]; 

Insert Into #stage 
Exec sp_executeSQL N''DBCC LogInfo([?])''; 

Insert Into #results 
Select DB_Name(), Count(*) 
From #stage; 

Truncate Table #stage;'

Select * 
From #results
Order By VLF_count Desc;

Drop Table #stage;
Drop Table #results;

--

I am happy with the final product (the modified VLF count script), but also with my brief path of discovery on handling temp tables in IF...ELSE blocks - I have had similar problems before when outputs vary from version to version, and now I have another approach to try the next time it comes up!

Hope this helps!