
Archive for July, 2015

Quick one today:

In a previous post, we discussed the different ways to clear SQL Server caches. Now we look at another one: DBCC FREESYSTEMCACHE.

DBCC FREESYSTEMCACHE('ALL')
GO


As the name suggests, FREESYSTEMCACHE allows us to free up unused cache entries from SQL Server memory. SQL Server usually cleans up unused cache entries on its own to make room for new ones, but when we need to do this on demand, we can use this command. In addition, we can specify a particular resource pool to clear, or ALL pools.
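Before clearing anything, it can help to see which caches are holding memory. One way (a sketch; column availability varies slightly by SQL Server version) is to query the sys.dm_os_memory_cache_counters DMV:

--
-- Review cache memory usage before clearing
--
SELECT name, type, pages_kb, entries_count
FROM sys.dm_os_memory_cache_counters
ORDER BY pages_kb DESC
GO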

Example 1: To clear all unused entries from the ‘default’ pool

--
-- Clear the 'default' pool un-used entries
--
DBCC FREESYSTEMCACHE('ALL', 'default')
GO

 

Example 2: To clear all entries from all pools

--
-- Clear unused entries from all pools
--
DBCC FREESYSTEMCACHE('ALL')
GO
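FREESYSTEMCACHE also supports a MARK_IN_USE_FOR_REMOVAL option, which marks currently in-use entries for asynchronous removal once they become unused, rather than dropping them immediately. A sketch:

--
-- Mark in-use entries in the 'default' pool for removal once they are no longer in use
--
DBCC FREESYSTEMCACHE ('ALL', 'default') WITH MARK_IN_USE_FOR_REMOVAL
GO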

 



Quick one today:

A few weeks ago, we ran into an interesting error. This nebulous error caused some confusion on the team, as it occurred during a high-pressure situation. After the event passed and everyone returned to a normal state, it became apparent that this is an innocuous error.

Lock request time out period exceeded. (Microsoft SQL Server, Error: 1222)

One of our deployment steps changes the table definition of a large table. During this time, it looks like the database catalog is locked. That makes sense, as we are attempting to change a table definition that lives in the catalog. Since this was a large table, the ALTER TABLE took more than a few minutes, and for the whole duration of the change the catalog remained locked.

So, when we clicked on the tree structure in SSMS, there was no response; SSMS froze. After a few seconds, it came back with this error message. Surprisingly, querying the catalog using T-SQL still works. So that anomaly is not clear yet.

Within SSMS, under the Options menu, go to ‘Designers‘ >> ‘Table and Database Designers‘. On the right-hand side, look for ‘Override connection string time-out value for table designer updates: Transaction time-out after: 30 seconds‘. Change it to 60 or 90 seconds (as you see fit). This way, it will wait longer before it times out.

My preference is still 30 seconds. Once it errors out, I use TSQL to gather any information I need.
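For those T-SQL queries, the session-level equivalent of the designer setting is SET LOCK_TIMEOUT, which takes a value in milliseconds. A sketch of waiting up to 60 seconds for locks before querying the catalog:

--
-- Wait up to 60 seconds for locks in this session, then query the catalog
--
SET LOCK_TIMEOUT 60000
GO

SELECT name, object_id, modify_date
FROM sys.tables
GO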

For more information, please refer to this MSDN article.

 

Hope this helps,
_Sqltimes


Quick one today:

Sometimes there is a need to track the usage patterns of the transaction log file. There is a quick and efficient way to gather this data on a regular basis.

--
-- Check the log file space usage
--
DBCC SQLPERF(LOGSPACE)
GO

It returns Database Name, Log Size (MB), Log Space Used (%) and Status. See the sample image below:

Log Space Usage


Previously, we saw a different usage of DBCC SQLPERF to reset DMV counters.
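To gather this on a regular basis, one common approach (a sketch; the table name dbo.LogSpaceHistory is made up here) is to capture the DBCC output into a history table, which could then be populated from a scheduled job:

--
-- Capture DBCC SQLPERF(LOGSPACE) output into a history table
-- (dbo.LogSpaceHistory is a hypothetical name)
--
CREATE TABLE dbo.LogSpaceHistory (
      CapturedAt        DATETIME      DEFAULT GETDATE()
    , DatabaseName      SYSNAME
    , LogSizeMB         FLOAT
    , LogSpaceUsedPct   FLOAT
    , Status            INT
)
GO

INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)')
GO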

Hope this helps,
_Sqltimes


Quick one today:

Earlier, there was an interesting question that came up during a discussion. When we perform ‘ALTER INDEX ALL ON dbo.test REBUILD’ does it change the data COMPRESSION status of the table?

In short, no. It keeps it the way it was before the REBUILD started.

Since this is a REBUILD without a change to the index definition, intended only to reduce fragmentation, my guess is that it does not read the contents of the data in detail; it reads just enough to write the pages to a different contiguous location. So there is no un-compressing and re-compressing operation going on here.

But if we specify the index option “DATA_COMPRESSION = PAGE”, it will compress the data, regardless of the previous state.

--
-- Sample code to test
--
CREATE TABLE dbo.test (
      ID        INT            IDENTITY(1,1)  PRIMARY KEY
    , Name1     VARCHAR(50)    DEFAULT CONVERT(VARCHAR(50), RAND()*10000)
    , Name2     VARCHAR(50)    DEFAULT CONVERT(VARCHAR(50), RAND()*10000)
    , Name3     VARCHAR(50)    DEFAULT CONVERT(VARCHAR(50), RAND()*10000)
)
GO

--
-- Put some dummy data
--
INSERT INTO dbo.test DEFAULT VALUES
GO 1000

Now, let's check the compression status of the CLUSTERED index.

--
-- Check the compression status
--
SELECT OBJECT_NAME(object_id), data_compression_desc
FROM sys.partitions
WHERE object_id = OBJECT_ID('test')
AND index_id = 1

As you can see, the table data is not compressed.

Table Not Compressed


Now, when we REBUILD, the compression status does not change.

--
-- Now REBUILD without any INDEX OPTIONS.
--
ALTER INDEX ALL ON dbo.test REBUILD
GO

But when we add a change to the index definition as part of the REBUILD, such as the DATA_COMPRESSION option, the compression status changes.

--
-- Now REBUILD and test.
--
ALTER INDEX ALL ON dbo.test REBUILD
WITH (DATA_COMPRESSION = PAGE)
GO
Rebuild with COMPRESSION change

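Relatedly, before compressing a large table, we can preview the expected space savings with the sp_estimate_data_compression_savings system procedure. A sketch, using the dbo.test table from above (index_id 1 is the clustered index):

--
-- Estimate space savings from PAGE compression on dbo.test (clustered index)
--
EXEC sp_estimate_data_compression_savings
      @schema_name      = 'dbo'
    , @object_name      = 'test'
    , @index_id         = 1
    , @partition_number = NULL
    , @data_compression = 'PAGE'
GO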


Hope this helps,
_Sqltimes
