
Archive for September, 2013

A quick one today: during development, I often find myself having to clear Sql Server's procedure cache before re-running a query. Sometimes I have a hard time recalling the exact order of the words. Is it ‘FREEPROCCACHE’ or ‘PROCCACHEFREE’? So this post is for me to refer to in the future.

-- clear all plans in cache
DBCC FREEPROCCACHE

-- Clear Buffer pool
CHECKPOINT
GO
DBCC DROPCLEANBUFFERS
GO

DBCC FREEPROCCACHE

Clears the plan cache. When a query is executed, Sql Server creates an execution plan and stores it in memory, so the next time you run the same query it reuses the plan instead of recreating it. Clearing the proc cache forces plan generation for every query, so do not use it on production databases: it causes a performance hit, as Sql Server has to recompile every single query/procedure before executing it.
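If you only want to evict one plan rather than flush the whole cache, DBCC FREEPROCCACHE also accepts a plan_handle. A minimal sketch; the LIKE filter is just a placeholder for whatever statement you are tuning:

```sql
-- Find the plan handle for a specific cached query
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%YourQueryText%';   -- placeholder filter

-- Then evict just that plan, pasting the handle from the result above:
-- DBCC FREEPROCCACHE (0x06000500...);  -- hypothetical handle value
```

This way the rest of the cache (and production workload) is left alone.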

DBCC DROPCLEANBUFFERS

After Sql Server executes a query and returns the results to the user, it keeps the data pages in its buffer pool for a while. If you rerun the query, Sql Server can quickly return the results from memory rather than going to disk again, which is much faster. Pages are eventually aged out of the buffer pool automatically, but that’s a different topic.

So, if you want to force-clean the entire buffer pool, run DROPCLEANBUFFERS. First run CHECKPOINT to force all dirty pages to be written to disk, and then run DBCC DROPCLEANBUFFERS to clean the buffer pool.
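To see how much of the buffer pool each database is holding (before and after clearing it), you can query sys.dm_os_buffer_descriptors. A rough sketch, converting 8 KB pages to MB:

```sql
-- Buffer pool usage per database (8 KB pages -> MB)
SELECT DB_NAME(database_id) AS [Database]
     , COUNT(*) * 8 / 1024  AS [BufferPoolMB]
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY [BufferPoolMB] DESC;
```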

Hope this helps,
_Sqltimes


Just learnt an awesome feature within SQLCMD. String replacement using SQLCMD !! Using -v we can replace variables in the sql file that is about to be executed.

This may not sound great, but when you are in a unique situation where you need to replace a particular section of your SQL code with a string literal on the fly, right before execution, this works great !! (Using global temp tables or session variables may work for some situations, but not here.)

Let’s say you have a SQL script with a bunch of SQL code, but there is one section where the variable’s value is not known until a moment before run time. In our case, it is the path to a file. The path depends on factors that change for every file, so we need something that allows us to run the same script for any file by just providing the path as a parameter to the SQLCMD call.

Example:

Imagine a SQL file that accomplishes some business process.

The sample sql file, let’s say SQLCMD_StringReplace.sql, looks like this:

SQLCMD Script File on Disk

The script has the following code:

DECLARE   @test VARCHAR(50) = 'test1'
		, @test2 VARCHAR(50) = 'test2'
		, @test3 VARCHAR(50) = 'test3'
		, @test4 VARCHAR(50) = 'test4'

SET @test3 = '$(Replace1)' + ' - This is replaced.'
SET @test4 = '$(Replace2)' + ' - This is replaced.'

PRINT @test
PRINT @test2
PRINT @test3
PRINT @test4
GO

When you run this command at a CMD prompt (or PowerShell), the variables Replace1 and Replace2 will be replaced with the respective strings from the -v flag ("New string1" and "New string2"):

sqlcmd -S 153.60.58.148\PLAY -E -v Replace1="New string1" Replace2="New string2" -i "C:\Test\SQLCMD_StringReplace.sql"
SQLCMD StringReplace Execution

This is an awesome feature !!
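A related trick: you can give the variables default values inside the script itself with :setvar, so the file still runs when no -v values are supplied. A small sketch (note that a :setvar line assigns its value when it executes, so in my experience it will overwrite a value passed with -v; comment it out when you want the command-line value to win):

```sql
-- Defaults declared in the script (SQLCMD mode)
:setvar Replace1 "default string1"
:setvar Replace2 "default string2"

PRINT '$(Replace1)';
PRINT '$(Replace2)';
GO
```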

Hope this helps,
_Sqltimes

Hat Tip: My colleague Charlie for pointing this to me


Update: Added one more way; hat tip to Glenn Berry.

Running COUNT(*) on a table results in a table scan or an index scan; neither is preferred on a large table. Metadata views can help avoid that.

  • Option 1: Query ‘dm_db_partition_stats’ view
SELECT t.name AS [Table_Name]
     , i.name AS [Index_Name]
     , sps.row_count AS [Record Count]
FROM sys.tables AS t
INNER JOIN sys.indexes AS i
    ON t.object_id = i.object_id
    AND i.index_id IN (1,0)
INNER JOIN sys.dm_db_partition_stats AS sps
    ON sps.object_id = t.object_id
    AND sps.index_id = i.index_id
GO
  • Option 2: Query ‘sysindexes’ view. Please note that this is a deprecated view.
SELECT OBJECT_NAME(id), rowcnt, * FROM sysindexes
WHERE indid IN (1,0)
ORDER BY OBJECT_NAME(id)
  • Option 3: Query ‘sys.partitions’ view.

SELECT OBJECT_NAME(object_id) AS [ObjectName]
, SUM(rows) AS [RowCount]
, data_compression_desc AS [CompressionType]
FROM sys.partitions WITH (nolock)
WHERE index_id < 2 --ignore the partitions from the non-clustered index if any
AND Object_name(object_id) NOT LIKE N'sys%'
AND Object_name(object_id) NOT LIKE N'queue_%'
AND Object_name(object_id) NOT LIKE N'filestream_tombstone%'
AND Object_name(object_id) NOT LIKE N'fulltext%'
AND Object_name(object_id) NOT LIKE N'ifts_comp_fragment%'
GROUP BY object_id, data_compression_desc
ORDER BY Sum(rows) DESC
OPTION (recompile);

Since we are only querying metadata, the results are instantaneous.
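For a single table, option 1 can be trimmed down to a filter on OBJECT_ID. A minimal sketch, with dbo.MyTable as a placeholder for your own table name:

```sql
-- Fast row count for one table (heap or clustered index only)
SELECT SUM(sps.row_count) AS [Record Count]
FROM sys.dm_db_partition_stats AS sps
WHERE sps.object_id = OBJECT_ID(N'dbo.MyTable')  -- hypothetical table
  AND sps.index_id IN (0, 1);
```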

Hope this helps,
_Sqltimes


Starting with Sql Server 2005, tempdb’s role has significantly increased. It is heavily used for data processing, e.g. MARS, sorts, reindexing, temp tables, table variables, etc. Performance tuning tempdb is not just an important task, it is a non-ignorable task for any large-scale system (VLDB or evolving VLDB).

There are several best practices for making overall improvements to a Sql Server instance, from hardware to operating system settings, Sql Server configuration, storage architecture, table structure, indexes, etc. Today I am going to focus on one aspect that I’ve been meaning to write about for a long time: optimizing tempdb performance using multiple data files.

As you know, any database has data and log files. Each data file has a dedicated thread to perform IO and other necessary operations. If all your data is in one MDF file, then all table reads/writes have to go through the same thread. Even if the underlying disk subsystem is fast, you are limiting the overall throughput with just one thread. More data files means more threads to simultaneously read and write data. But there is a right balance to strike: adding a new data file for each table to increase parallelism may not be the first thing to attempt (it might be a good idea, but it needs to be properly tested).

When you have one MDF file for tempdb, all the processes that need tempdb space compete for the same limited resources. When you have multiple files, they are managed in parallel with dedicated threads. A larger size means more bitmap allocations are available, and multiple files mean parallel execution.

Since Sql Server uses a round-robin, proportional-fill algorithm to distribute the data load across all the data files, it works best when the data files are kept the same size. This allows even growth in all data files. When you first create the tempdb data files, be generous and allocate a big size. Do not rely on auto-growth; auto-growth is a backup plan, not a technique that is part of the primary plan. So allocate more space than you need for each data file initially, then monitor the usage and adjust the size to what is more applicable.

Now, the important question is ‘how many data files do I create for my tempdb?’

This is the logic I follow (and what I learnt from other MVPs).

  • If you have a quad-core CPU, then add 4 data files (all of equal size).
  • If you have dual quad-core CPUs, then add 8 data files.
  • If you have 4 quad-core CPUs or more (i.e. 64 cores, etc.), then start with 8 data files and work your way up by adding 4 data files each time.

In essence, keep the number of data files to the same number as the number of cores. Monitor your system, and see what works. There is no silver bullet here, only guidelines to make incremental improvements.
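Adding equally sized files is straightforward with ALTER DATABASE. A sketch for a quad-core box; the drive letter, path, and sizes are made up, so adjust them to your environment:

```sql
-- tempdb ships with one data file (tempdev); resize it and add three more
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = N'T:\TempDB\tempdev3.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = N'T:\TempDB\tempdev4.ndf',
          SIZE = 4096MB, FILEGROWTH = 512MB);
```

Keep every file the same size and with the same growth increment, so the proportional-fill algorithm spreads the load evenly.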

Use DMVs (system views) to measure the growth:

  • sys.dm_io_virtual_file_stats
  • sys.database_files

Note: Sql Server restart is not required when you add new data files.
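The two views above can be joined to watch IO and stalls per tempdb file; a minimal sketch:

```sql
-- IO counts and stall times per tempdb data/log file
SELECT df.name          AS [FileName]
     , df.physical_name AS [Path]
     , vfs.num_of_reads
     , vfs.num_of_writes
     , vfs.io_stall_read_ms
     , vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(N'tempdb'), NULL) AS vfs
INNER JOIN tempdb.sys.database_files AS df
    ON vfs.file_id = df.file_id;
```

If one file shows far more IO or stalls than its siblings, the files have likely drifted out of size balance.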

Hope this helps,
_Sqltimes


Update on July 8th, 2016 : Added image.

Recently, I made a post on why it takes a long time to run CREATE DATABASE (or other data file extension tasks). The reason is database file initialization. This initialization happens for both LOG and DATA files, but you can skip this step for DATA files (not LOG files).

Recently, I learnt from Glenn Berry that we can leverage the Instant Database File Initialization feature to skip the step of zeroing out the data file when:

  • New database is created with a large data file size or
  • Restoring a VLDB or
  • Autogrowth of data file.
  • etc.

This is achieved by utilizing an operating system feature called Instant Database File Initialization, which allows skipping the zeroing-out step and completes the task much quicker. Since it is an operating system feature, it needs special permissions: we need to assign this permission to the user account under which the Sql Server service runs.

For that, go to:

Group Policy >> Computer Configuration >> Windows Settings >> Security Settings >> Local Policies >> User Rights Assignment >>

Look for a policy called “Perform volume maintenance tasks“. By default, only administrators have this right, but you can assign this permission, on the local machine, to the Sql Server service account (even if it is a domain account).

Instant_File_Initialization

Voila !! Now the Sql Server service account has permission to perform ‘Instant Database File Initialization’.
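On newer builds (Sql Server 2012 SP4 and 2016 SP1 onward, if I recall correctly), you can verify from T-SQL that the permission took effect, via sys.dm_server_services:

```sql
-- 'Y' means the service account holds "Perform volume maintenance tasks"
SELECT servicename
     , instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE N'SQL Server (%';
```

Note that the service must be restarted after granting the right for the new permission to be picked up.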

Hope this helps,
_Sqltimes
