If your database contains very large tables, querying or managing those tables may fill tempdb. When tempdb is full, an error is raised and the transaction is rolled back. Whether this happens depends on how large tempdb is allowed to grow. Even if the size of tempdb isn't limited by turning off autogrowth or by setting a max size, the available disk space is the upper limit for how big tempdb can get. Check out this blog post to see how to calculate the maximum possible size of tempdb.
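To give a feel for the idea, here is a rough sketch of such a calculation (my own simplified version, not the procedure from the linked post): files with a size cap contribute their max_size, and autogrow files contribute their current size plus the free space on their volume.

-- Rough estimate of the maximum size tempdb's data files can reach, in MB.
-- Simplified sketch: volume free space is counted once per file, so the estimate
-- is overstated when several autogrow files share the same volume.
SELECT SUM(CASE
               WHEN f.max_size = -1 -- autogrow without a cap: limited by disk space
                   THEN f.size * 8 / 1024 + vs.available_bytes / 1024 / 1024
               ELSE f.max_size * 8 / 1024 -- capped file: limited by max_size
           END) AS approx_tempdb_max_size_mb
FROM tempdb.sys.database_files AS f
CROSS APPLY sys.dm_os_volume_stats(DB_ID('tempdb'), f.file_id) AS vs
WHERE f.type_desc = 'ROWS';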
Large tables can be a latent problem: everything may work fine until someone, for instance, tries to sort the table in a query. SQL Server uses tempdb (as the name implies) as a temporary workspace, and when an operation can't be handled entirely in memory, the data spills to tempdb.
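For example (the table and column names here are made up), a sort on a large table with no supporting index is a typical operation that ends up in tempdb:

-- Hypothetical example: with no index on SomeColumn, SQL Server must sort the whole
-- table, and a sort that does not fit in memory spills to tempdb.
SELECT *
FROM dbo.VeryLargeTable
ORDER BY SomeColumn;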
To avoid this problem you need to know if there are tables so large that they won't fit in tempdb even at its maximum size. Further down in this post is a script that looks for tables larger than tempdb.
Msg 1105
When tempdb is full, the following error message will appear:
Msg 1105, Level 17, State 2, Line 1
Could not allocate space for object 'dbo.SORT temporary run storage: 140737680637952' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
The transaction that caused tempdb to fill up will fail and the space will be released. Run the following T-SQL to check the current size of the tempdb data files (remember that the space is released after the error, so it may look like there is plenty of free space):
SELECT name, size, max_size FROM tempdb.sys.database_files WHERE type_desc = 'ROWS'
The columns size and max_size show the number of 8 KB pages, so to convert a value to MB, use size*8/1024.
A value of -1 in max_size means unlimited file growth (autogrow). But even with autogrowth turned on, the file can never grow larger than the available disk space.
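If you prefer to see the sizes directly in MB, the same query can do the conversion (a small sketch; showing NULL for unlimited files is my own choice, not part of the original query):

-- Same query as above, with page counts converted to MB.
-- max_size = -1 (unlimited growth) is shown as NULL instead of a misleading negative number.
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN max_size = -1 THEN NULL ELSE max_size * 8 / 1024 END AS max_size_mb
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';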
Tables larger than tempdb
So, how can a full tempdb be avoided? One thing to look out for is very large tables that won't fit in tempdb. In an earlier blog post, I demonstrated a stored procedure (dbo.GetTempdbMaxSize) that calculates the maximum possible size of tempdb. Let's use that stored procedure and see if there are any tables larger than that maximum size. To get the size of all tables in all databases on an instance, I use the undocumented sp_MSforeachdb stored procedure. To get the size of each table, I use the sys.dm_db_partition_stats dynamic management view:
DECLARE @TempdbMaxSize INT

-- Get the maximum possible size of tempdb (stored procedure from the earlier blog post)
EXEC dbo.GetTempdbMaxSize @TempdbMaxSize OUTPUT

SELECT @TempdbMaxSize AS TempdbMaxSize

IF (SELECT OBJECT_ID('tempdb..#ALL_TABLE_SIZES')) IS NOT NULL
BEGIN
    DROP TABLE #ALL_TABLE_SIZES
END

CREATE TABLE #ALL_TABLE_SIZES
(
    TABLENAME NVARCHAR(MAX) NOT NULL
    ,SIZEINMB INT NOT NULL
)

-- Collect the size of every user table in every database on the instance.
-- index_id <= 1 means only the heap (0) or clustered index (1) is counted,
-- i.e. the table data itself, not the nonclustered indexes.
EXECUTE sp_MSforeachdb 'USE [?];
INSERT #ALL_TABLE_SIZES
SELECT DB_NAME() + ''.'' + T2.name + ''.'' + T0.name
    ,SUM(T1.reserved_page_count*8.0/1024)
FROM sys.objects T0
JOIN sys.dm_db_partition_stats T1 ON T0.object_id = T1.object_id
JOIN sys.schemas T2 ON T0.schema_id = T2.schema_id
WHERE [type] = ''U''
AND T1.index_id <= 1
GROUP BY DB_NAME() + ''.'' + T2.name + ''.'' + T0.name'

-- Tables that would not fit in tempdb even at its maximum size
SELECT * FROM #ALL_TABLE_SIZES WHERE SIZEINMB >= @TempdbMaxSize
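If you also want to see where your largest tables stand when none of them exceed the limit, a small variation on the final SELECT (my own tweak, not part of the original script) lists everything sorted by size:

-- List all collected tables, largest first, instead of only those exceeding the limit
SELECT TABLENAME, SIZEINMB
FROM #ALL_TABLE_SIZES
ORDER BY SIZEINMB DESC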
As always, don't run the script in a production environment without testing.