Sladescross's Blog

Blogging about Sharepoint related stuff

Tempdb Drivers and Optimisation March 30, 2014

Filed under: Drivers,FILEGROWTH,Files,Optimisation,Performance,space,TempDb,Trace Flag — sladescross @ 12:55 pm

http://technet.microsoft.com/en-us/library/ms175527(v=SQL.105).aspx

SELECT
name AS FileName,
size*1.0/128 AS FileSizeinMB,
CASE max_size
WHEN 0 THEN 'Autogrowth is off.'
WHEN -1 THEN 'Autogrowth is on.'
ELSE 'Log file will grow to a maximum size of 2 TB.'
END,
growth AS 'GrowthValue',
'GrowthIncrement' =
CASE
WHEN growth = 0 THEN 'Size is fixed and will not grow.'
WHEN growth > 0 AND is_percent_growth = 0
THEN 'Growth value is in 8-KB pages.'
ELSE 'Growth value is a percentage.'
END
FROM tempdb.sys.database_files;
GO

tempdb file size     FILEGROWTH increment
0 to 100 MB          10 MB
100 to 200 MB        20 MB
200 MB or more       10%*
* You may have to adjust this percentage based on the speed of the I/O subsystem on which the tempdb files are located. To avoid potential latch time-outs, we recommend limiting the autogrow operation to approximately two minutes. For example, if the I/O subsystem can initialize a file at 50 MB per second, the FILEGROWTH increment should be set to a maximum of 6 GB, regardless of the tempdb file size. If possible, use instant database file initialization to improve the performance of autogrow operations.
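The two-minute autogrow cap above reduces to a simple calculation. A minimal sketch (the helper name and units are mine, not from the article):

```python
def max_filegrowth_mb(init_speed_mb_per_sec, max_autogrow_seconds=120):
    """Largest FILEGROWTH increment (in MB) that keeps a single
    autogrow event within the recommended ~2-minute window."""
    return init_speed_mb_per_sec * max_autogrow_seconds

# The article's example: an I/O subsystem that initializes files at
# 50 MB/s should cap FILEGROWTH at 50 * 120 = 6000 MB (~6 GB).
print(max_filegrowth_mb(50))  # 6000
```

With instant file initialization enabled, the effective initialization speed is much higher, so the cap becomes correspondingly less restrictive.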

•Create as many files as needed to maximize disk bandwidth. Using multiple files reduces tempdb storage contention and yields significantly better scalability. However, do not create too many files because this can reduce performance and increase management overhead. As a general guideline, create one data file for each CPU on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary. Note that a dual-core CPU is considered to be two CPUs.


Make each data file the same size; this allows for optimal proportional-fill performance.


Put the tempdb database on a fast I/O subsystem. Use disk striping if there are many directly attached disks.


Put the tempdb database on disks that differ from those that are used by user databases.

http://thesqldude.com/2012/05/15/monitoring-tempdb-space-usage-and-scripts-for-finding-queries-which-are-using-excessive-tempdb-space/

Identify which type of tempdb objects are consuming space

The following query helps you understand whether user objects, the version store, or internal objects are using the space in tempdb. Based on its output, you can focus on the relevant section below.
SELECT
SUM (user_object_reserved_page_count)*8 as user_obj_kb,
SUM (internal_object_reserved_page_count)*8 as internal_obj_kb,
SUM (version_store_reserved_page_count)*8 as version_store_kb,
SUM (unallocated_extent_page_count)*8 as freespace_kb,
SUM (mixed_extent_page_count)*8 as mixedextent_kb
FROM sys.dm_db_file_space_usage

If user_obj_kb is the highest consumer, then you know that objects are being created by user queries, such as local or global temp tables or table variables. Also don't forget to check whether any permanent tables have been created in TempDB. Very rare, but I've seen it happen. If version_store_kb is the highest consumer, then it means that the version store is growing faster than the cleanup. Most likely there are long-running or open transactions (in a sleeping state) that are preventing the cleanup, so tempdb space is not released back.

Query that identifies the currently active T-SQL query, its text, and the application that is consuming a lot of tempdb space
SELECT es.host_name , es.login_name , es.program_name,
st.dbid as QueryExecContextDBID, DB_NAME(st.dbid) as QueryExecContextDBNAME, st.objectid as ModuleObjectId,
SUBSTRING(st.text, er.statement_start_offset/2 + 1,(CASE WHEN er.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max),st.text)) * 2 ELSE er.statement_end_offset END - er.statement_start_offset)/2) as Query_Text,
tsu.session_id ,tsu.request_id, tsu.exec_context_id,
(tsu.user_objects_alloc_page_count - tsu.user_objects_dealloc_page_count) as OutStanding_user_objects_page_counts,
(tsu.internal_objects_alloc_page_count - tsu.internal_objects_dealloc_page_count) as OutStanding_internal_objects_page_counts,
er.start_time, er.command, er.open_transaction_count, er.percent_complete, er.estimated_completion_time, er.cpu_time, er.total_elapsed_time, er.reads,er.writes, er.logical_reads, er.granted_query_memory
FROM sys.dm_db_task_space_usage tsu inner join sys.dm_exec_requests er
ON ( tsu.session_id = er.session_id and tsu.request_id = er.request_id)
inner join sys.dm_exec_sessions es ON ( tsu.session_id = es.session_id )
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) st
WHERE (tsu.internal_objects_alloc_page_count+tsu.user_objects_alloc_page_count) > 0
ORDER BY (tsu.user_objects_alloc_page_count - tsu.user_objects_dealloc_page_count)+(tsu.internal_objects_alloc_page_count - tsu.internal_objects_dealloc_page_count) DESC

Trace Flag 1118

This trace flag has been available since SQL 2000 SP3 to reduce tempdb contention by forcing uniform extent allocations as opposed to mixed extent allocations. It should only be used if you are seeing contention (check wait stats) on the PFS/GAM pages, e.g. 2:1:1 etc. More internal details on this trace flag are available in Paul Randal's blog post here.

Not only does enabling the trace flag help, but you also need to create multiple tempdb files, equal to the number of logical processors. So if you have 4 CPUs you will create 4 tempdb data files. Now, what if you have 16 or 32 processors; do you still need to create that many tempdb files?

The answer is NO, you don’t have to. The above recommendation has been stated in many KB articles like http://support.microsoft.com/default.aspx?scid=kb;EN-US;328551

If the number of logical processors on your server is greater than or equal to 8, then use 8 data files for tempdb. If the number of logical processors is less than 8, then use as many data files as your processor count.
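The rule above can be sketched as a one-liner (a hypothetical helper for illustration, not anything from the KB article):

```python
def tempdb_data_file_count(logical_processors):
    """Rule of thumb quoted above: one tempdb data file per logical
    processor, capped at 8; tune further only if contention persists."""
    return min(logical_processors, 8)

print(tempdb_data_file_count(4))   # 4
print(tempdb_data_file_count(32))  # 8
```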

 

SQL TempDB IaaS February 18, 2013

Filed under: Azure,IaaS,SQL,TempDb — sladescross @ 10:45 am

http://social.technet.microsoft.com/wiki/contents/articles/14337.change-tempdb-to-temporary-drive-on-azure-sql-iaas.aspx

TEMPDB in SQL IaaS

On Azure SQL IaaS, it is recommended to move TEMPDB to the temporary drive (D:) to improve SQL Server performance.

 

TempDB and Log Files May 11, 2012

Filed under: TempDb — sladescross @ 2:47 pm

http://www.sqlskills.com/BLOGS/PAUL/post/Understanding-data-vs-log-usage-in-tempdb.aspx

One of the things to consider about tempdb is that logging in tempdb is ultra-efficient. Log records for updates in tempdb, for instance, only log the before image of the data instead of logging both before and after images. There is no need to log the after image – as that is only used for the REDO portion of crash recovery. As tempdb never gets crash-recovered, REDO never occurs. The before image *is* necessary, however, because transactions can be rolled back in tempdb, just like other databases, and so the before image of an update must be available to be able to successfully roll back the update.

 

SQL Tempdb Move and Multiple Files February 1, 2011

Filed under: TempDb — sladescross @ 3:06 pm

http://msdn.microsoft.com/en-us/library/ms175527.aspx

Configuration for resize and location.

http://www.databasejournal.com/features/mssql/article.php/3379901/Moving-the-Tempdb-and-Master-Database-in-SQL-Server.htm

 

SQL Peformance Urban Legends June 16, 2010

Filed under: SQL Performance,TempDb — sladescross @ 7:11 pm

http://blogs.msdn.com/b/psssql/archive/2007/02/21/sql-server-urban-legends-discussed.aspx

Tempdb is the database with the highest level of create and drop actions and under high stress the allocation pages, syscolumns and sysobjects can become bottlenecks.   SQL Server 2005 reduces contention with the ‘cached temp table’ feature and allocation contention skip ahead actions.

When multiple workers are attempting to create or drop objects in tempdb, you can decrease the bottleneck by having

  • As many files in tempdb as schedulers
  • All file sizes equal
  • Uniform allocations enabled (-T1118)

The key for SQL Server is the I/O response times.  Start by looking at the Average Disk Seconds Per Transfer for the same I/O path.  Then look at the SQL Server sysprocesses and waitstats for buffer based I/O wait information.   You want to see evidence from SQL Server that the disk queue length is related to an I/O bottleneck.

 

SQL Tempdb Performance May 16, 2010

Filed under: Performance,SQL,TempDb — sladescross @ 10:36 pm

http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/58382401-8320-4d09-b021-febb83f70e9e

Take tempdb:

If there is a lot of write activity, tempdb suffers from an allocation bottleneck because the system pages (PFS, GAM, SGAM) are heavily accessed. One way to find this allocation bottleneck is shown below.

http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/bf08e5c4-544b-4348-90d6-3285fec186fb

The answer is, it depends. Answering that question is extremely difficult and not the same for everyone.

You need to know where the bottleneck of performance is located. In SQL Server the bottleneck for queries is CPU, RAM and disk access, in that order. The disk is the slowest part of the computer, but can be offset by huge amounts of cache, either in the controller, server, OS or SQL cache. If you are not maxing out the bandwidth of the hard drives, then creating 2 (or more) data files per LUN does not hurt your performance. But, you want to minimize the time the OS is waiting on the hard drives.

It also depends on your usage. If most of your database is “static” data and read many times, then you will benefit from a huge data cache (more RAM). However, if your database is transactional and updated all the time and you need to read those new records all the time, then hard drive speed is probably your bottleneck.

In addition, with newer caching controllers/SANs the RAID level has very little effect on the performance of the actual read/write time. I have done testing on an IBM DS4800 using all RAID levels, and they are so close as to be insignificant. IMO, the reliability of having RAID 5 and spreading the reads/writes between many drives outweighs the performance hit of RAID 5. Even using Windows software striping between several RAID 5 arrays (on a huge server) had almost zero effect on performance.

Bottom line, you will have to try and find out how it works.

select session_id, wait_duration_ms, resource_description
from sys.dm_os_waiting_tasks
where wait_type like 'PAGE%LATCH_%' and
resource_description like '2:%'

or

SELECT *
FROM sys.sysprocesses
WHERE lastwaittype like 'PAGE%LATCH_%' AND waitresource like '2:%'

To avoid this allocation bottleneck, it is recommended to

1) Create multiple data files, up to the number of CPUs available. It is important that these files be of EQUAL size to make use of proportional fill. These files do not necessarily need to be on different disks unless you are also facing an I/O bottleneck, but if you have the option, go for it.

If tempdb is going to be on its own disk/array/whatever…give it a decent size to start and review regularly.
AFAIK, I don’t think there is a huge downside to making it too large to begin with.
It is recreated every time SQL Server starts, but with ‘instant’ file initialization, that should not be a problem.
Start with something like 32GB. If you have four cores, create four files of 8GB each. Eight cores = 8 files of 4GB. And so on.
That’s just to give you somewhere to start. Go large!
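The sizing rule of thumb above is just total size divided equally across per-core files. A trivial sketch (helper name assumed, not from the post):

```python
def tempdb_file_size_gb(total_gb, file_count):
    """Split a starting tempdb allocation into equally sized data
    files, per the proportional-fill advice above."""
    return total_gb / file_count

print(tempdb_file_size_gb(32, 4))  # 8.0 -> four cores, four 8 GB files
print(tempdb_file_size_gb(32, 8))  # 4.0 -> eight cores, eight 4 GB files
```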

http://msdn.microsoft.com/en-us/library/dd758814.aspx

Disk alignment.

Stripe size is the size of one entire stripe spread across all the disks in a RAID-0, RAID-10, RAID-5, or RAID-6 disk group. Stripe unit size is the size of each element of the stripe, as stored on each member of the disk group. Stripe size is a product of the stripe unit size and the number of disks in a RAID group. Stripe unit size is the attribute of a RAID disk group that can be configured by administrators. A stripe unit is the collection of bits on each disk exactly equal to the stripe unit size.
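The product relationship in the definition above is easy to check directly. A minimal sketch (names and the 8-disk example are mine):

```python
KB = 1024

def stripe_size(stripe_unit_size, disks_in_group):
    """Stripe size is the product of the stripe unit size and the
    number of disks in the RAID group, as defined above."""
    return stripe_unit_size * disks_in_group

# e.g. a 64 KB stripe unit across an 8-disk group gives a 512 KB stripe
print(stripe_size(64 * KB, 8) // KB)  # 512
```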

In the absence of an explicit vendor recommendation, use a partition offset that complies with the correlation discussed in the section “Essential Correlations: Partition Offset, File Allocation Unit Size, and Stripe Unit Size”. 64 KB is a common, valid starting partition offset because it correlates well with fundamental physical boundaries in disks, controllers, and cache. Other valid starting partition offsets exist. The Windows Server 2008 default is 1024 KB. For more information, see the “Valid Starting Partition Offsets” section later in this paper.

The performance benefit of disk partition alignment requires configuration of valid starting partition offsets.

Windows Server 2008 partition alignment defaults to 1024 KB (that is, 1,048,576 bytes). This value provides a durable solution. It correlates well (as described later) with common stripe unit sizes such as 64 KB, 128 KB, and 256 KB as well as the less frequently used values of 512 KB and 1024 KB. Also, the value virtually guarantees hidden structures allocated by storage hardware are skipped.

Essential Correlations: Partition Offset, File Allocation Unit Size, and Stripe Unit Size
Use the information in this section to confirm the integrity of disk partition alignment configuration for existing partitions and new implementations.

There are two correlations which when satisfied are a fundamental precondition for optimal disk I/O performance. The results of the following calculations must result in an integer value:

Partition_Offset ÷ Stripe_Unit_Size

Stripe_Unit_Size ÷ File_Allocation_Unit_Size

Of the two, the first is by far the most important for optimal performance. The following demonstrates a common misalignment scenario: given a starting partition offset of 32,256 bytes (31.5 KB) and a stripe unit size of 65,536 bytes (64 KB), the result is 0.4921875. This is not an integer; therefore the offset and stripe unit size are not correlated. This is consistent with misalignment.

However, a starting partition offset of 1,048,576 bytes (1 MB) and a stripe unit size of 65,536 bytes produces a result of exactly 16, an exact integer, which is consistent with alignment.

Note that file allocation unit (cluster) size commonly correlates with common stripe unit sizes. The performance question here is usually not one of correlation per the formula, but whether the cluster size is the NTFS default of 4,096 bytes or has been explicitly defined at 64 KB, which is a best practice for SQL Server.

http://en.wikipedia.org/wiki/RAID#RAID_1

RAID Types

http://www.pcguide.com/ref/hdd/perf/raid/levels/singleLevel5-c.html

RAID 5

http://www.sql-server-performance.com/faq/raid_1_raid_5_p1.aspx

RAID 1

Both hardware RAID 1 (mirroring or duplexing) and RAID 5 (striping with parity) offer good data redundancy should a single hard disk in the array fail. And as you might expect, there are some differences in performance between the two.
The major difference in performance between RAID 1 and RAID 5 is that RAID 5 incurs heavy write overhead because additional parity data has to be created and written to the array. This is overhead that RAID 1 does not experience. Because of this, write performance on RAID 1 is generally much faster than on RAID 5.

Read performance, on the other hand, is generally better for RAID 5 than RAID 1. This is especially true if the RAID 5 arrays have more than three disk drives in them. RAID 5 read performance increases as more disks are added to the array because the more disks there are, the more read/write heads there are, and RAID 5 arrays have the ability to read simultaneously from all the drives, speeding read access. Since RAID 1 only has two disk drives, and RAID 5 has three or more disk drives, RAID 5 has the read performance advantage.

http://www.brainbell.com/tutorials/Networking/RAID_10.html

 

 