One DBA's Ongoing Search for Clarity in the Middle of Nowhere


*or*

Yet Another Andy Writing About SQL Server

Thursday, March 29, 2018

Toolbox - Just the Basics

There are many query sets out there to collect varying groups of configuration data - the ones I most frequently use come from Glenn Berry of SQLskills (blog/@GlennAlanBerry) - but I have found that there is data beyond his basic set that I like to pull, and I wanted a single query that I could run across multiple versions of SQL Server.  Glenn's basic setup is version-specific - one script for 2005, a different one for 2008, etc. - and I wanted to be able to just copy-paste my script in and press Execute.



I ended up starting with some base DMV info and running it trial-and-error against different versions until I found which fields and formats differ.  The biggest split is between SQL 2005-2008R2 and 2012+; there were so many changes to the dynamic management views in SQL 2012 that I ended up creating two separate branches of code, with a version check at the top to determine which branch the execution goes down.

There are multiple ways to determine the SQL Server version, but the one I liked best in this case is SERVERPROPERTY('ProductVersion'):

SET @SQLVersion = LEFT(CONVERT(VARCHAR, SERVERPROPERTY('ProductVersion')),4)
PRINT @SQLVersion

This returns a nice four-character value that is easy to use in the code (it's converted to a character string so that I can use the LEFT function to trim it):

9.00
10.0
10.5
11.0
12.0
13.0
--

The biggest change between the versions is a new DMV added in SQL 2008R2 SP1, sys.dm_os_windows_info.  I very much like having a DMV with some basic Windows information:
SELECT *
FROM sys.dm_os_windows_info 
windows_release    windows_service_pack_level    windows_sku    os_language_version
6.3                                              7              1033

...but what I don't like is that it was added in SP1 - SQL 2008R2 RTM doesn't have this view.  What that meant for my code was that I reference this view in the 2012+ branch but not in the 2005-2008R2 branch, since I would have had to get more specific (down to the SP level) to determine whether I could use it on 2008R2.
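If you really do want the view on 2008R2 without branching on the service pack level, one option is to test for the DMV itself and only query it when it exists - this is just a quick sketch (not part of my script below), wrapping the SELECT in EXEC() so the batch still compiles on builds where the view is missing:

/* Sketch only - query sys.dm_os_windows_info only when this build actually has it */
IF EXISTS (SELECT 1 FROM sys.all_objects
           WHERE [name] = 'dm_os_windows_info' AND [type] = 'V')
    EXEC ('SELECT windows_release, windows_service_pack_level FROM sys.dm_os_windows_info')
ELSE
    PRINT 'sys.dm_os_windows_info is not available on this build'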


--

To return RAM info I ended up using a temp table (not my favorite, but it worked here) - there is a deferred name resolution issue caused by changes to the DMV in SQL 2012+.  Even with the IF...ELSE branch splitting my code, this particular query broke unless I used an up-front temp table to hold the RAM info (which I then pull into the data return query).
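As a minimal illustration of the issue (a sketch only, not part of the script): on SQL 2005-2008R2 the first statement below fails at compile time even though its branch can never run, because physical_memory_KB is not a column there - column references against an existing object are not deferred.  Pushing the same statement into EXEC() defers resolution until the statement actually executes, which is why the script below builds its INSERTs as dynamic SQL:

/* Fails to compile on SQL 2005-2008R2 even though the branch never runs */
IF 1 = 0
    SELECT physical_memory_KB FROM sys.dm_os_sys_info;

/* Compiles everywhere - the dynamic SQL is only resolved if the EXEC actually runs */
IF 1 = 0
    EXEC ('SELECT physical_memory_KB FROM sys.dm_os_sys_info');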

--

So...the query:

/*
Physical RAM data comes from the sys.dm_os_sys_info DMV 
The catch is that in SQL 2012 the field changed from being measured in bytes to KB and the field name changed. 
I am collecting the data in a Temp Table because it was the only way I could find to handle deferred name resolution - by having the
DMV inline in a regular query the name resolution errors out even inside an IF ELSE statement.
The temp table is then selected from in the actual data return query below.
IF...ELSE...EXEC Code slightly modified from:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/442ce31a-9f08-4fa2-90ce-aaba23e05fac/gathering-physical-memory-for-sql-2005sql-2014?forum=transactsql
*/
IF OBJECT_ID('tempdb..#RAM') IS NOT NULL
DROP TABLE #RAM
CREATE TABLE #RAM
(
InstanceName SYSNAME,
Physical_Mem_MB INT
)
IF LEFT(CAST(SERVERPROPERTY('ResourceVersion') as VARCHAR(20)),2) in ('11','12','13','14')
EXEC ('INSERT INTO #RAM SELECT @@SERVERNAME, (physical_memory_KB/1024) as Physical_Mem_MB FROM sys.dm_os_sys_info')
ELSE
EXEC ('INSERT INTO #RAM SELECT @@SERVERNAME, (physical_memory_in_bytes/1024/1024) as Physical_Mem_MB FROM sys.dm_os_sys_info')
--
/*
The actual data return query for the Instance settings
There is a major branch for SQL 2005-2008R2 and a separate branch for 2012+, again because of differing field names and new DMVs between versions
Much of the data is selected as inline queries to prevent the need to tack on the Instance Name and then create a huge multi-way join
*/
DECLARE @SQLVersion as NVARCHAR(4)
SET @SQLVersion = LEFT(CONVERT(VARCHAR, SERVERPROPERTY('ProductVersion')),4)
/*
PRINT @SQLVersion
*/
IF @SQLVersion in ('9.00', '10.0', '10.5')
SELECT SERVERPROPERTY('ServerName') as [InstanceName]
, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') as [ComputerNamePhysicalNetBIOS]
, SERVERPROPERTY('ProductVersion') as [SQLProductVersion]
, @SQLVersion as [SQLMajorVersion]
, CASE @SQLVersion
WHEN '9.00' THEN 'SQL Server 2005'
WHEN '10.0' THEN 'SQL Server 2008'
WHEN '10.5' THEN 'SQL Server 2008 R2'
END as [SQLVersionBuild]
, SERVERPROPERTY('ProductLevel') as [SQLServicePack]
, SERVERPROPERTY('Edition') as [SQLEdition]
, (
SELECT RIGHT(SUBSTRING(@@VERSION, CHARINDEX('Windows NT', @@VERSION), 14), 4)
) as [WindowsVersionNumber]
, (
SELECT CASE LTRIM(RIGHT(SUBSTRING(@@VERSION, CHARINDEX('Windows NT', @@VERSION), 14), 4))
/* LTRIM needed to trim leading space in three character version numbers such as 5.0 */
/* No DMV for this in SQL 2005, 2008, or 2008R2 RTM */
WHEN '5.0' THEN 'Windows 2000'
WHEN '5.1' THEN 'Windows XP'
WHEN '5.2' THEN 'Windows Server 2003/2003 R2'
WHEN '6.0' THEN 'Windows Server 2008/Windows Vista'
WHEN '6.1' THEN 'Windows Server 2008 R2/Windows 7'
WHEN '6.2' THEN 'Windows Server 2012/Windows 8'
WHEN '6.3' THEN 'Windows Server 2012 R2'
WHEN '10.0' THEN 'Windows Server 2016'
ELSE 'Windows vNext'
END
) as [WindowsVersionBuild]
, (
SELECT [Physical_Mem_MB]
FROM #RAM
) as [PhysicalMemMB]
, (
SELECT [value_in_use]
FROM sys.configurations
WHERE name LIKE '%min server memory%'
) as [MinServerMemoryMB]
, (
SELECT [value_in_use]
FROM sys.configurations
WHERE name LIKE '%max server memory%'
) as [MaxServerMemoryMB]
, (
SELECT [cpu_count]
FROM sys.dm_os_sys_info
) as [LogicalCPUCount]
, (
SELECT [hyperthread_ratio]
FROM sys.dm_os_sys_info
) as [HyperthreadRatio]
, (
SELECT [cpu_count]/[hyperthread_ratio]
FROM sys.dm_os_sys_info
) as [PhysicalCPUCount]
, (
SELECT [value_in_use]
FROM sys.configurations
WHERE name LIKE '%max degree of parallelism%'
) as [MAXDOP]
, (
SELECT [value_in_use]
FROM sys.configurations
WHERE name LIKE '%cost threshold for parallelism%'
) as [CTOP]
ELSE /* SQL 2012+ */
SELECT SERVERPROPERTY('ServerName') as [InstanceName]
, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') as [ComputerNamePhysicalNetBIOS]
, SERVERPROPERTY('ProductVersion') as [SQLProductVersion]
, @SQLVersion as [SQLMajorVersion]
, CASE @SQLVersion
WHEN '11.0' THEN 'SQL Server 2012'
WHEN '12.0' THEN 'SQL Server 2014'
WHEN '13.0' THEN 'SQL Server 2016'
ELSE 'SQL Server vNext'
END as [SQLVersionBuild]
, SERVERPROPERTY('ProductLevel') as [SQLServicePack]
, SERVERPROPERTY('Edition') as [SQLEdition]
, (
SELECT [windows_release]
FROM sys.dm_os_windows_info
) as [WindowsVersionNumber]
, (
SELECT CASE windows_release
WHEN '5.0' THEN 'Windows 2000'
WHEN '5.1' THEN 'Windows XP'
WHEN '5.2' THEN 'Windows Server 2003/2003 R2'
WHEN '6.0' THEN 'Windows Server 2008/Windows Vista'
WHEN '6.1' THEN 'Windows Server 2008 R2/Windows 7'
WHEN '6.2' THEN 'Windows Server 2012/Windows 8'
WHEN '6.3' THEN 'Windows Server 2012 R2'
WHEN '10.0' THEN 'Windows Server 2016'
ELSE 'Windows vNext'
END
FROM sys.dm_os_windows_info /* New DMV in SQL 2008 R2 SP1*/
) as [WindowsVersionBuild]
, (
SELECT [Physical_Mem_MB]
FROM #RAM
) as [PhysicalMemMB]
, (
SELECT [value_in_use]
FROM sys.configurations
WHERE name LIKE '%min server memory%'
) as [MinServerMemoryMB]
, (
SELECT [value_in_use]
FROM sys.configurations
WHERE name LIKE '%max server memory%'
) as [MaxServerMemoryMB]
, (
SELECT [cpu_count]
FROM sys.dm_os_sys_info
) as [LogicalCPUCount]
, (
SELECT [hyperthread_ratio]
FROM sys.dm_os_sys_info
) as [HyperthreadRatio]
, (
SELECT [cpu_count]/[hyperthread_ratio]
FROM sys.dm_os_sys_info
) as [PhysicalCPUCount]
, (
SELECT [value_in_use]
FROM sys.configurations
WHERE name LIKE '%max degree of parallelism%'
) as [MAXDOP]
, (
SELECT [value_in_use]
FROM sys.configurations
WHERE name LIKE '%cost threshold for parallelism%'
) as [CTOP]
/*
Clean-Up
*/
IF OBJECT_ID('tempdb..#RAM') IS NOT NULL
DROP TABLE [#RAM]
--

The resultset looks like this:


InstanceName    ComputerNamePhysicalNetBIOS    SQLProductVersion    SQLMajorVersion
Instance1\I1    Server55                       12.0.5000.0          12

SQLVersionBuild    SQLServicePack    SQLEdition
SQL Server 2014    SP2               Standard Edition (64-bit)

WindowsVersionNumber    WindowsVersionBuild       PhysicalMemMB    MinServerMemoryMB    MaxServerMemoryMB
6.3                     Windows Server 2012 R2    81918            64                   66560

LogicalCPUCount    HyperthreadRatio    PhysicalCPUCount    MAXDOP    CTOP
12                 1                   12                  1         5

--

I like this a lot as it is a quick way to pull basic SQL, Windows, RAM, and CPU info from any SQL Server 2005+ instance, which is something I need all the time when I touch a new server.

Hope this helps!



Wednesday, March 7, 2018

Which Blogs Do I Recommend?

One of the most important parts of continuing to learn as a DBA (or really any IT professional) is reading blogs.  The world used to run on professional magazines and journals (and there are a few good ones still out there), but the current equivalent of a weekly or quarterly periodical is a daily/weekly blog, produced by anyone from a corporate presence like Microsoft or SentryOne to individual DBAs chronicling their IT journey (like me)!

I used to rely on the Blogger Rankings from Tom LaRock as a good list of general SQL Server blogs, but recently I noticed the rankings list has been removed from his site.  I don't know when it happened, but since I frequently referred new blog readers to his list I figured I should try to recreate a version of it myself.

These are the blogs I most frequently read and reference, although there are many, many, more out there.

https://nebraskasql.blogspot.com/p/blog-list.html

I use Feedly as my tool of choice, adding the individual links so that all of the blog posts aggregate in one place.

As with any blog posts, read at your own risk and your mileage may vary (and remember, #ItDepends) but these are sources I find to be generally reliable, sorted by their employers for ease of reference.

Even though the blogs are broken out by employer, most of the posts (by far) that you see in these blogs are general SQL Server knowledge and *not* company product marketing (although those sometimes appear as well - for example you will see references to the new version of Plan Explorer Pro on some of the SentryOne blogs, etc.)

--

Hope this helps!

Monday, March 5, 2018

Error 33206 - SQL Server Audit Failed to Create the Audit File

One of the joys of working with SQL Server is the descriptive and meaningful error messages, right?


Exhibit A - the 33206:
Error: 33206, Severity: 17, State: 1.
SQL Server Audit failed to create the audit file 'T:\MSSQL\Audit Files\PLAND_Objects_DML_Audit_Events_A2B00B57-4B43-4570-93C8-B30EE77CC8C9_0_131645629182310000.sqlaudit'. Make sure that the disk is not full and that the SQL Server service account has the required permissions to create and write to the file.
At first glance, this error tells us one of two things is wrong - the disk is full or the SQL service account doesn't have the required permissions to create and write to a file at the given location.

Simple to troubleshoot, right?

Except when the service account already has Full Control to each folder and sub-folder in the path...


(Note that it is important to check every folder along the path - I have seen situations where the account has permissions to the direct sub-folder (in this example T:\MSSQL\Audit Files) or even the direct sub-folder and the root of the drive (T:) and yet a security check fails due to the intermediate folder not having the correct permissions.  To me it shouldn't work this way, but sometimes it does...)
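(A quick aside - if you aren't sure exactly which account the SQL Server service is running as, and therefore which account needs rights on every folder in the path, you can pull it straight from the instance with sys.dm_server_services, available since SQL 2008 R2 SP1:)

SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;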

What about the drive being full?


Maybe not.

Having exhausted the obvious paths (at least the ones obvious from the error text), I went back to my default next step - Google.


A basic search for "SQL Error 33206" actually brought back a bunch of results about Error 33204 - "SQL Server Audit could not write to the security log."  (FYI - the common fix for a 33204 is to grant the SQL service account rights to a related registry key, as described here.)

A better Google search includes a piece of the error message itself:
SQL 33206 "SQL Server Audit failed to create the audit file"
The second result in the resultset for this search is a recent blog post from someone I know to be a reliable source, Microsoft Certified Master Jason Brimhall (blog/@sqlrnnr).  

In his post Jason describes the same process I describe above - eliminate the obvious - but then he shows what the real problem was in my situation, the file settings in the SQL Server Audit configuration:


When I checked the T:\MSSQL\Audit Files folder, sure enough, there were fifteen audit files reaching back over thirteen months' worth of service restarts.

To mitigate the problem I deleted the oldest of the fifteen files, and the Audit resumed.

WOOHOO!

The real fix for this situation is to configure the files as "rollover" files - setting the Audit File Maximum Limit to fifteen "Maximum Rollover Files" instead of fifteen "Maximum Files" allows the audit to overwrite the oldest file once it reaches the configured maximum, rather than crashing the audit as happened here.

Realize you can only do this if you can handle the oldest files being overwritten; if you have to persist the files you need to create a separate archiving process to handle that.
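In T-SQL the change is a one-line difference in the audit's file options - something like this (a sketch only; the audit name, file size, and path here are examples patterned on this situation, so adjust them to your own audit):

ALTER SERVER AUDIT [Objects_DML_Audit_Events]
WITH (STATE = OFF);

ALTER SERVER AUDIT [Objects_DML_Audit_Events]
TO FILE
(
    FILEPATH = 'T:\MSSQL\Audit Files\',
    MAXSIZE = 250 MB,               /* example value */
    MAX_ROLLOVER_FILES = 15         /* instead of MAX_FILES = 15 */
);

ALTER SERVER AUDIT [Objects_DML_Audit_Events]
WITH (STATE = ON);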

--

If nothing else, the real silver lining in my particular situation was that the creator of this audit at the client configured the audit to "Continue" rather than "Shut Down Server" (completely shut down Windows - not just stop SQL, but shut down the whole server) or "Fail Operation" (allow SQL to run but fail any operation that would meet the audit specification - not as catastrophic as "Shut Down Server" but still very impactful).

There are definitely situations that call for "Shut Down Server" or "Fail Operation" - if your Audit is in place to satisfy a legal/regulatory/moral requirement, then definitely consider these options - but this is often not the case.

--

Hope this helps!



Wednesday, February 7, 2018

Toolbox - Which Clerk Is Busy?

A few years ago I wrote what would turn out to be my most-hit blog post (so far) titled "Error 701 - Insufficient System Memory - Now what?"

In that post I talk about a troubleshooting scenario that finally led to using DBCC MEMORYSTATUS to find one of the memory clerks (MEMORYCLERK_XE) consuming far too much memory on the instance.  (In that case there was a runaway XEvents session burning lots of RAM).

Bottom line, you have to figure out which clerk is the busiest to find the source of the overall problem.

The 701 error (Error 701: “There is insufficient system memory to run this query”) came up again in our team Skype just this morning, and my colleague mentioned that the problem was happening at a time when no one wanted to be on the server watching.

I told him about my previous situation and how looking at DBCC MEMORYSTATUS had led to my smoking gun, and it led to thinking about some way to collect and persist the MEMORYSTATUS data without someone watching live on the system.

Google led me to several variations of a #YouCanDoAnythingWithPowershell script, such as this one from Microsoft Premier Field Engineer (PFE) Tim Chapman (blog/@chapmandew) but I really wanted a Transact-SQL script I could play with myself.

More Google finally led me to a blog post and T-SQL script from the SQL Server Photographer Slava Murygin (blog/@slavasql) (follow his blog for scripts and great photos he takes at all types of SQL events)!

Slava's script translates one of the Powershell scripts into T-SQL, storing the output from DBCC MEMORYSTATUS in a temporary table for output.  The one limitation of this for my needs is that it is an ad hoc execution - great to run interactively, but not useful for storing data over time.

It took a few steps to turn Slava's temp table implementation into one that stores the data in a permanent table so that it can be queried after the fact and over time.

A few notes: Slava's script relies on xp_cmdshell, and my modification still does.  My modification stores data in a permanent table, which means it needs to reside in a permanent database.  My script uses a database named "DBADatabase" (including code to create it if it doesn't exist), but it is an easy find-replace to change that name if you'd like.
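(If xp_cmdshell isn't already enabled on the instance, you will need to turn it on first - standard sp_configure stuff, and of course weigh your security requirements before doing so:)

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;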

--

/*

Track DBCC MEMORYSTATUS data over time

Guts of query to parse DBCC results into a temp table from 
http://slavasql.blogspot.com/2016/08/parsing-dbcc-memorystatus-without-using.html

Modified to store those results in a permanent table over time
Intended for use in a scheduled Agent Job but could be run manually as needed

Also modified to run on SQL 2005 (syntax changes)

Relies on xp_cmdshell

Stores data in the "DBADatabase" including creating the database if it doesn't pre-exist

If you wish to use an existing database or a different database 
name simply Find-Replace for the string DBADatabase

*/

SET NOCOUNT ON
GO

IF DB_ID('DBADatabase') IS NULL  /* Check if DBADatabase database exists - if not, create it */
BEGIN
EXECUTE ('CREATE DATABASE DBADatabase')

    ALTER DATABASE DBADatabase SET RECOVERY SIMPLE;

    ALTER AUTHORIZATION ON DATABASE::DBADatabase TO sa;

/* Read the current SQL Server default backup location */  
DECLARE @BackupDirectory NVARCHAR(100)   
EXEC master..xp_instance_regread @rootkey = 'HKEY_LOCAL_MACHINE',  
@key = 'Software\Microsoft\MSSQLServer\MSSQLServer',  
    @value_name = 'BackupDirectory', @BackupDirectory = @BackupDirectory OUTPUT ;  
EXECUTE ('BACKUP DATABASE DBADatabase to DISK = '''+@BackupDirectory+'\DBADatabase.bak'' WITH INIT')
PRINT 'CREATED DATABASE'
    RAISERROR('Ensure that you add the DBADatabase database to backup / maintenance jobs/plans', 10, 1) WITH NOWAIT;
END;
GO

/* If Holding Table doesn't exist, create it */
IF OBJECT_ID('DBADatabase.dbo.DBCCMemoryStatus') IS NULL
CREATE TABLE [DBADatabase].[dbo].[DBCCMemoryStatus](
[Datestamp] [datetime] NOT NULL,
[DataSet] [varchar](100) NULL,
[Measure] [varchar](20) NULL,
[Counter] [varchar](100) NULL,
[Value] [money] NULL
) ON [PRIMARY]

GO

SET ANSI_PADDING OFF
GO

IF OBJECT_ID('tempdb..#tbl_MemoryStatusDump') IS NOT NULL
DROP TABLE #tbl_MemoryStatusDump;
GO
IF OBJECT_ID('tempdb..#tbl_MemoryStatus') IS NOT NULL
DROP TABLE #tbl_MemoryStatus;
GO
CREATE TABLE #tbl_MemoryStatusDump(
 ID INT IDENTITY(1,1) PRIMARY KEY
 , [Dump] VARCHAR(100));
GO
CREATE TABLE #tbl_MemoryStatus(
 ID INT IDENTITY(1,1), 
 [DataSet] VARCHAR(100), 
 [Measure] VARCHAR(20), 
 [Counter] VARCHAR(100), 
 [Value] MONEY);
GO
INSERT INTO #tbl_MemoryStatusDump([Dump])
EXEC ('xp_cmdshell ''sqlcmd -E -S localhost -Q "DBCC MEMORYSTATUS" ''');
GO
DECLARE @f BIT
 , @i SMALLINT
 , @m SMALLINT 
 , @CurSet VARCHAR(100)
 , @CurMeasure VARCHAR(20)
 , @Divider TINYINT
 , @CurCounter VARCHAR(100)
 , @CurValue VARCHAR(20);

SET @f=1
SET @m = (SELECT MAX(ID) FROM #tbl_MemoryStatusDump)
set @i = 1

WHILE @i < @m
BEGIN
 SELECT @Divider = PATINDEX('% %',REVERSE(RTRIM([Dump])))
  , @CurCounter = LEFT([Dump], LEN([Dump]) - @Divider)
  , @CurValue = RIGHT(RTRIM([Dump]), @Divider - 1)
 FROM #tbl_MemoryStatusDump WHERE ID = @i;

 IF @f = 1 
  SELECT @CurSet = @CurCounter, @CurMeasure = @CurValue, @f = 0 
  FROM #tbl_MemoryStatusDump WHERE ID = @i;
 ELSE IF LEFT(@CurCounter,1) = '(' SET @f = 1;
 ELSE IF @CurCounter != 'NULL' and LEFT(@CurCounter,1) != '-'
  INSERT INTO #tbl_MemoryStatus([DataSet], [Measure], [Counter], [Value])
  SELECT @CurSet, @CurMeasure, @CurCounter, CAST(@CurValue as MONEY)
  FROM #tbl_MemoryStatusDump WHERE ID = @i;
 SET @i = @i + 1;
END
GO

/*Send data from temp table to permanent table */
INSERT INTO DBADatabase.dbo.DBCCMemoryStatus
SELECT 
GETDATE() as Datestamp
, DataSet
, Measure
, Counter
, Value 
FROM #tbl_MemoryStatus

/* Purge rows older than 96 hours to manage table size */
DELETE FROM DBADatabase.dbo.DBCCMemoryStatus
WHERE DATEDIFF(hh,DateStamp, GETDATE())>96

/*
SELECT *
FROM DBADatabase.dbo.DBCCMemoryStatus
WHERE counter = 'VM Reserved'
ORDER BY DateStamp DESC
*/


--

Running this statement interactively doesn't return any data - it just loads the data into DBADatabase.dbo.DBCCMemoryStatus.  Running the commented-out SELECT at the bottom of the script as written will query that table for all rows with counter 'VM Reserved' (virtual memory reserved), but there is much more data available if you modify the SELECT.
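For example, once a day or two of samples has accumulated, something like this (just a sketch - the Measure column captured by the script tells you the unit for each counter) shows which clerks have reserved the most virtual memory over the collection window:

/* Which clerks have reserved the most virtual memory across the collected samples? */
SELECT TOP (20)
       [DataSet]        AS MemoryClerk,
       MAX([Value])     AS MaxVMReserved,
       MIN([Datestamp]) AS FirstSample,
       MAX([Datestamp]) AS LastSample
FROM DBADatabase.dbo.DBCCMemoryStatus
WHERE [Counter] = 'VM Reserved'
GROUP BY [DataSet]
ORDER BY MaxVMReserved DESC;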

This query can be dropped into a SQL Agent job step as-is and it will run - just like an interactive run, it will create the database and permanent table if they don't exist and then store those nuggets of data in the permanent table for later use - you never know when you may need them!
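If you want to go straight to automation, a bare-bones Agent job wrapper looks something like this (a sketch only - the job name and schedule are examples, and the @command placeholder should be replaced with the full collection script above):

USE msdb;
GO
EXEC dbo.sp_add_job
     @job_name = N'Collect DBCC MEMORYSTATUS';
EXEC dbo.sp_add_jobstep
     @job_name = N'Collect DBCC MEMORYSTATUS',
     @step_name = N'Dump MEMORYSTATUS to DBADatabase',
     @subsystem = N'TSQL',
     @command = N'/* paste the full collection script here */',
     @database_name = N'master';
EXEC dbo.sp_add_jobschedule
     @job_name = N'Collect DBCC MEMORYSTATUS',
     @name = N'Every 15 minutes',
     @freq_type = 4,                /* daily */
     @freq_interval = 1,
     @freq_subday_type = 4,         /* units = minutes */
     @freq_subday_interval = 15;
EXEC dbo.sp_add_jobserver
     @job_name = N'Collect DBCC MEMORYSTATUS',
     @server_name = N'(local)';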


--

Hope this helps!


Friday, January 19, 2018

Toolbox - Why is My Database So Big????

I have written previously (here) about how to tell which database files have free space in them when your drive starts to fill.

What if all of your database files are full and you are still running out of space?

DRIVE    DATABASE NAME    FILENAME     FILETYPE    FILESIZE     SPACEFREE    PHYSICAL_NAME
F        DB03             DB03_MDF     DATA        35.47 GB     225.63 MB    F:\MSSQL\Data\DB03_Data.mdf
F        DB02             DB02Data     DATA        110.25 MB    92.38 MB     F:\MSSQL\Data\DB02.mdf
F        DB01             DB01Data     DATA        142.06 MB    72.69 MB     F:\MSSQL\Data\DB01.mdf
F        DB05             DB05_Data    DATA        35.72 GB     71.44 MB     F:\MSSQL\Data\DB05_Data.mdf
F        DB06             DB06_MDF1    DATA        36.47 GB     58.50 MB     F:\MSSQL\Data\DB06_Data.mdf
F        DB04             DB04Data     DATA        36.47 GB     38.00 MB     F:\MSSQL\Data\DB04_Data.mdf

The next step is to see what is taking up the space in the individual databases.  Maybe there's an audit table that never gets purged...or a Sales History table that could be archived away, if only there were an archive process...


**ALWAYS CREATE AN ARCHIVE/PURGE PROCESS** #ThisIsNotAnItDepends

--

There is an easy query to return the free space available in the various tables in your database - I found it in a StackOverflow answer at https://stackoverflow.com/questions/15896564/get-table-and-index-storage-size-in-sql-server and then modified it somewhat to make the result set cleaner (to me anyway):

--

/*
Object Sizes

Modified from http://stackoverflow.com/questions/15896564/get-table-and-index-storage-size-in-sql-server
*/

SELECT 
@@SERVERNAME as InstanceName
, DB_NAME() as DatabaseName
, ISNULL(s.name+'.'+t.NAME, '**TOTAL**')  AS TableName
, SUM(p.rows) AS RowCounts
--, SUM(a.total_pages) * 8 AS TotalSpaceKB
, SUM(a.total_pages) * 8/1024.0 AS TotalSpaceMB
, SUM(a.total_pages) * 8/1024.0/1024.0 AS TotalSpaceGB
, SUM(a.used_pages) * 8/1024.0 AS UsedSpaceMB
, (SUM(a.total_pages) - SUM(a.used_pages)) * 8/1024.0 AS UnusedSpaceMB
FROM sys.tables t
INNER JOIN sys.schemas s 
ON s.schema_id = t.schema_id
INNER JOIN sys.indexes i 
ON t.OBJECT_ID = i.object_id
INNER JOIN sys.partitions p 
ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a 
ON p.partition_id = a.container_id
WHERE t.NAME NOT LIKE 'dt%'    -- filter out system tables for diagramming
AND t.is_ms_shipped = 0
AND i.OBJECT_ID > 255 
GROUP BY s.name+'.'+t.Name
WITH ROLLUP
ORDER BY TotalSpaceMB DESC

--

Running the query against an individual database will return all of the tables in the database and their sizes, as well as the total size of the database (from the ROLLUP):

--

InstanceName DatabaseName TableName RowCounts TotalSpaceMB UsedSpaceMB UnusedSpaceMB
Instance01 Database99 **TOTAL** 148409590 36,042.063 32,890.922 3151.141
Instance01 Database99 dbo.BusinessRulesAuditing 20686280 17,877.813 17,860.938 16.875
Instance01 Database99 dbo.BusinessRulesRuleSet 18840 3,895.766 877.945 3,017.820
Instance01 Database99 dbo.ModelWorkbookVersionExt 4383 1,818.453 1,806.445 12.008
Instance01 Database99 dbo.EquityOutputVersions 35040362 1,746.688 1,739.281 7.406
Instance01 Database99 dbo.NominalOutputs 3592710 258.305 251.047 7.258
Instance01 Database99 dbo.Auditing 173494 33.391 27.586 5.805
Instance01 Database99 dbo.EquityOverrides 13105402 515.859 511.188 4.672
Instance01 Database99 dbo.ContingentOutputs 1371465 364.641 361.719 2.922
Instance01 Database99 dbo.ContingentWSStaging 292907 333.320 329.359 3.961
Instance01 Database99 dbo.TransformedAdminContingent 585848 76.070 73.000 3.070

--

There are two cases I normally find useful in this result set.

The first is the very large table - in this sample (the dbo.BusinessRulesAuditing row) my 35GB database has one large 17GB table.  This is a situation where you can investigate the table and see *why* it is so large.  In many cases this is just the way it is - sometimes one large "mother" table is a fact of life ("You take the good, you take the bad, you take them both, and there you have...").


Often, though, you will find that this table is an anomaly.  As mentioned above, maybe there is a missing purge or archive process - for a very large table, look at the table definition to see if there are date/datetime columns, and then select the top 10 rows ordered by each of those columns to see how old the oldest records are.  You may find that you are storing years of data when you only need months and can purge the oldest rows.  You may also find that even though you do need years of data, you may not need it live in production, which allows you to periodically archive it away to another database (maybe even another instance) where it can be accessed only when needed.
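The check itself is quick - something like this (the table name comes from the sample result set above, and the AuditDate column is a hypothetical stand-in for whatever date/datetime column the suspect table actually has):

/* How old is the oldest data in the big table?  (column name is an example only) */
SELECT TOP (10) *
FROM dbo.BusinessRulesAuditing
ORDER BY AuditDate ASC;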

--

The second case is a table with significant free space contained in the table itself - in this example the 3.7GB dbo.BusinessRulesRuleSet table has 3.0GB of free space in it!

How do you account for this?  There are a few possibilities - when you delete large numbers of rows from a table, the space usually isn't released until the next time the related indexes are rebuilt.  Another possibility is the index fill factor - the amount of free space left in index pages when they are rebuilt.  I have run into several instances where a client DBA misunderstood the meaning of the fill factor and got it backwards, setting the fill factor to 10 thinking it would leave 10% free space when in fact it left 90% free space, resulting not only in excessively large indexes but also in very poor performance, since SQL Server has to scan across many more pages to retrieve the same data.
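Checking the fill factor is a quick query against sys.indexes (a value of 0 means the server default, which is effectively 100) - the table name here comes from the sample result set above:

SELECT i.[name] AS IndexName,
       i.fill_factor
FROM sys.indexes i
WHERE i.[object_id] = OBJECT_ID('dbo.BusinessRulesRuleSet');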

To help determine which index(es) may contribute to the problem, you can add the index name to the query to break the data out one more level:

--


/*
Object Sizes With Indexes

Modified from http://stackoverflow.com/questions/15896564/get-table-and-index-storage-size-in-sql-server
*/

SELECT 
@@SERVERNAME as InstanceName
, DB_NAME() as DatabaseName
, ISNULL(s.name+'.'+t.NAME, '**TOTAL**')  AS TableName
, ISNULL(i.Name, '**TOTAL**')  AS IndexName
, SUM(p.rows) AS RowCounts
--, SUM(a.total_pages) * 8 AS TotalSpaceKB
, SUM(a.total_pages) * 8/1024.0 AS TotalSpaceMB
, SUM(a.total_pages) * 8/1024.0/1024.0 AS TotalSpaceGB
, SUM(a.used_pages) * 8/1024.0 AS UsedSpaceMB
, (SUM(a.total_pages) - SUM(a.used_pages)) * 8/1024.0 AS UnusedSpaceMB
FROM sys.tables t
INNER JOIN sys.schemas s 
ON s.schema_id = t.schema_id
INNER JOIN sys.indexes i 
ON t.OBJECT_ID = i.object_id
INNER JOIN sys.partitions p 
ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a 
ON p.partition_id = a.container_id
WHERE t.NAME NOT LIKE 'dt%'    -- filter out system tables for diagramming
AND t.is_ms_shipped = 0
AND i.OBJECT_ID > 255 
GROUP BY s.name+'.'+t.Name
, i.Name
WITH ROLLUP

--

InstanceName DatabaseName TableName IndexName RowCounts TotalSpaceMB UsedSpaceMB UnusedSpaceMB
Instance01 Database99 **TOTAL** **TOTAL** 148409590 36,042.063 35,890.922 151.141
Instance01 Database99 BusinessRulesAuditing PK__BusinessRules 20686280 17,877.813 17,860.938 16.875
Instance01 Database99 BusinessRulesAuditing **TOTAL** 20686280 17,877.813 17,860.938 16.875
Instance01 Database99 BusinessRulesRuleSet IX_RuleSet1 18840 2,100.000 137.000 1,963.000
Instance01 Database99 BusinessRulesRuleSet IX_RuleSet2 18840 190.000 180.000 10.000
Instance01 Database99 BusinessRulesRuleSet PK_RuleSet 18840 1,600.000 560.000 1,040.000
Instance01 Database99 BusinessRulesRuleSet **TOTAL** 56520 3,895.766 877.945 3,017.820
Instance01 Database99 EquityOutputVersions PK_EquityOutputVersions 35040362 1,746.688 1,739.281 7.406
Instance01 Database99 EquityOutputVersions **TOTAL** 35040362 1,746.688 1,739.281 7.406
Instance01 Database99 EquityOverrides PK_EquityOverrides 13105402 515.859 511.188 4.672
Instance01 Database99 EquityOverrides **TOTAL** 13105402 515.859 511.188 4.672
Instance01 Database99 Auditing PK_Auditing 173494 33.391 27.586 5.805
Instance01 Database99 Auditing **TOTAL** 173494 33.391 27.586 5.805

<result set snipped for space>

--

In this case you could look at the fill factor of the IX_RuleSet1 and PK_RuleSet indexes to see why there is so much free space.  Failing that, it is possible these indexes simply need to be rebuilt to release the space - for example, after a large delete emptied many of their pages.
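If it does turn out to be a mis-set fill factor (or just empty space left behind by a big delete), an index rebuild reclaims it - the names here come from the sample result set, and the fill factor value is only an example:

ALTER INDEX IX_RuleSet1 ON dbo.BusinessRulesRuleSet
REBUILD WITH (FILLFACTOR = 100);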


--

I find I use this query more than I ever would have thought when working space issues - I start with the database file free space query (from the previous post) and then move on to this one.

--

Hope this helps!