
Quick one today:

In a previous post, we discussed the different ways to clear SQL Server cache. Now we look at another one: DBCC FREESYSTEMCACHE.

--
-- Free unused entries from all cache stores
--
DBCC FREESYSTEMCACHE('ALL')
GO

As the name suggests, FREESYSTEMCACHE allows us to free unused cache entries in SQL Server memory. SQL Server usually cleans up unused cache entries on its own to make room for new ones, but if we need to do this on demand, we can use this command. In addition, we can specify a particular resource pool that we want to clear, or ALL pools.

Example 1: To clear all unused entries from the ‘default’ pool

--
-- Clear the 'default' pool un-used entries
--
DBCC FREESYSTEMCACHE('ALL', 'default')
GO

 

Example 2: To clear all entries from all pools

--
-- Clear unused entries from all pools
--
DBCC FREESYSTEMCACHE('ALL')
GO
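Besides ‘ALL’, the first argument can also name a specific cache store. For example, to free only the ad hoc plan cache (a sketch, assuming the ‘SQL Plans’ store name applies to your version):

--
-- Free unused entries from only the 'SQL Plans' cache store
--
DBCC FREESYSTEMCACHE('SQL Plans')
GO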

 

Quick one today:

A few weeks ago, we ran into an interesting error. This nebulous error caused some confusion in the team, as it occurred during a high-pressure situation. After the event passed and everyone returned to a normal state, it became apparent that this is an innocuous error.

Lock request time out period exceeded. (Microsoft SQL Server, Error: 1222)

One of our deployment steps changes the table definition of a large table. During this time, it looks like the database catalog is locked. That makes sense, as we are attempting to change the table definition, which lives in the catalog. Since this was a large table, the ALTER TABLE took more than a few minutes, and for the whole duration of the change the catalog stayed locked.

So, when we clicked on the tree structure in SSMS, there was no response; SSMS froze. After a few seconds, it came back with this error message. Surprisingly, querying the catalog using T-SQL still works, so that anomaly is not clear yet.

Within SSMS, under the Options menu, go to ‘Designers‘ >> ‘Table and Database Designers‘. On the right-hand side, look for ‘Override connection string time-out value for table designer updates: Transaction time-out after: 30 seconds‘. Change it to 60 or 90 seconds (as you see fit). This way, SSMS waits longer before it times out.

My preference is still 30 seconds. Once it errors out, I use TSQL to gather any information I need.
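When falling back to T-SQL, the lock wait behavior can be controlled explicitly. A minimal sketch, using SET LOCK_TIMEOUT (in milliseconds) so the catalog query gives up quickly instead of hanging:

--
-- Wait at most 5 seconds on any lock, then query the catalog
--
SET LOCK_TIMEOUT 5000
GO

SELECT name, object_id, modify_date
FROM sys.tables
GO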

For more information, please refer to this MSDN article.

 

Hope this helps,
_Sqltimes

Quick one today:

Sometimes there is a need to track the usage patterns of the transaction log file. There is a quick and efficient way to gather this data on a regular basis.

--
-- Check the log file space usage
--
DBCC SQLPERF(LOGSPACE)
GO

It gives Database Name, Log Size (MB), Log Space Used (%), and Status. See the sample image below:

Log Space Usage

Previously, we saw a different usage of DBCC SQLPERF to reset DMV counters.
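To gather this data on a schedule, the output can be captured into a table with INSERT ... EXEC. A sketch, where the history table name and layout are hypothetical (the four columns match the DBCC SQLPERF(LOGSPACE) result set):

--
-- Capture log space usage into a history table
--
CREATE TABLE dbo.LogSpaceHistory (
      DatabaseName      SYSNAME
    , LogSizeMB         FLOAT
    , LogSpaceUsedPct   FLOAT
    , Status            INT
    , CapturedAt        DATETIME DEFAULT GETDATE()
)
GO

INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)')
GO

Run the INSERT from a scheduled job to build a usage trend over time.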

Hope this helps,
_Sqltimes

Quick one today:

Earlier, there was an interesting question that came up during a discussion. When we perform ‘ALTER INDEX ALL ON dbo.test REBUILD’ does it change the data COMPRESSION status of the table?

In short, no. It keeps it the way it was before the REBUILD started.

Since this is a REBUILD without a definition change, done to reduce fragmentation, my guess is that it does not read the contents of the data in detail; it reads just enough to write it to a different contiguous location. So there is no un-compressing and re-compressing operation going on here.

But if we mention the index option “DATA_COMPRESSION = PAGE”, it will compress the data, no matter what the previous state.

--
-- Sample code to test
--
CREATE TABLE dbo.test (
      ID        INT             IDENTITY(1,1)  PRIMARY KEY
    , Name1     VARCHAR(50)     DEFAULT CONVERT(VARCHAR, RAND()*10000)
    , Name2     VARCHAR(50)     DEFAULT CONVERT(VARCHAR, RAND()*10000)
    , Name3     VARCHAR(50)     DEFAULT CONVERT(VARCHAR, RAND()*10000)
)
GO

--
-- Put some dummy data
--
INSERT INTO dbo.test DEFAULT VALUES
GO 1000

Now, let's check the compression status of the CLUSTERED index.

--
-- Check the compression status
--
SELECT OBJECT_NAME(object_id), data_compression_desc
FROM sys.partitions
WHERE object_id = OBJECT_ID('test')
AND index_id = 1

As you can see, the table data is not compressed.

Table Not Compressed

Now, when we REBUILD, the compression status does not change.

--
-- Now REBUILD without any INDEX OPTIONS.
--
ALTER INDEX ALL ON dbo.test REBUILD
GO

But if we specify a compression change as part of the REBUILD, the compression status changes.

--
-- Now REBUILD and test.
--
ALTER INDEX ALL ON dbo.test REBUILD
WITH (DATA_COMPRESSION = PAGE)
GO

Rebuild with COMPRESSION change


Hope this helps,
_Sqltimes

Quick one today:

Earlier, we ran into an interesting error in our lab environment.

Msg 8115, Level 16, State 1, Line 2
Arithmetic overflow error converting IDENTITY to data type int.
Arithmetic overflow occurred.

Initially, this error did not make sense; there are no calculations performed on this column's data that could overflow. But then it dawned on us: it's the INT vs. BIGINT issue. The column has INT as the datatype, and it has reached its maximum value: 2,147,483,647. After that, not a single record could get into the table.
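A quick way to see how close an IDENTITY column is to the INT ceiling, as a sketch (the table name dbo.SampleTable is from this example; substitute your own):

--
-- How much IDENTITY headroom is left before the INT maximum?
--
SELECT IDENT_CURRENT('dbo.SampleTable')              AS CurrentIdentity
     , 2147483647 - IDENT_CURRENT('dbo.SampleTable') AS RemainingValues
GO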

We immediately changed the column datatype to BIGINT to let the application processes continue pumping data into the table. Surprisingly, there were no errors from the .NET application after the datatype change.

--
--  Change datatype of column
--
ALTER TABLE dbo.SampleTable
ALTER COLUMN ID BIGINT
GO

One interesting point that came up during all of this was the time it took to change from INT to BIGINT: it completed in about 30 seconds. For a table this big, the initial guesstimate was a few minutes, but that proved completely wrong. When we recently performed a similar task on a VARCHAR column of a large table, it took more than an hour. INT is 4 bytes and BIGINT is 8 bytes, so if we increase a VARCHAR column by 4 bytes, it should, in theory, take about the same time, right?

To be sure, I ran a few tests in our lab to measure the difference, and the difference is consistent: converting INT to BIGINT takes 90 seconds, but changing a CHAR from 10 to 14 takes 180 seconds. Any thoughts?
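For anyone who wants to repeat the measurement, a sketch of the test harness; the table and column names here are hypothetical, and SET STATISTICS TIME reports elapsed time per statement in the Messages tab:

--
-- Compare ALTER COLUMN timings: widen an INT vs. widen a CHAR
--
SET STATISTICS TIME ON
GO

ALTER TABLE dbo.TimingTest
ALTER COLUMN ID BIGINT
GO

ALTER TABLE dbo.TimingTest
ALTER COLUMN Code CHAR(14)   -- previously CHAR(10)
GO

SET STATISTICS TIME OFF
GO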

Hope this helps,
_Sqltimes

Quick one today:

Recently, in our lab, we ran into this interesting error:

Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized.

Turns out, we were attempting multiple operations that are mutually exclusive. For example:

  • Adding a new data file while a backup operation is running
  • Removing a data file while a backup operation is running
  • Shrinking a file during a backup operation
  • etc.

Once the backup completed, we were able to perform these operations. When a backup runs, it reads every extent and writes it to the backup file. During this, any attempt to change the underlying file structures (where the extents live) cannot be allowed, obviously.

There may be several such mutually exclusive operations, so it is good to learn and list them so we do not run into this again. More will be added as they come up.
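Before running a file manipulation, it is easy to check whether a backup is in flight and wait it out. A sketch using sys.dm_exec_requests:

--
-- Check for running backups before ALTER DATABASE ADD/REMOVE FILE
--
SELECT session_id, command, percent_complete, start_time
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%'
GO

If this returns rows, defer the file operation until percent_complete reaches 100 (or the row disappears).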

 

Hope this helps,
_Sqltimes

Quick one today:

Sometimes there is a need to query environment variables using T-SQL. ‘xp_cmdshell’ allows us to do this. First, we need to enable ‘xp_cmdshell’ in ‘sys.configurations’ using EXEC sp_configure. It is an advanced option, so enable ‘show advanced options’ first.

--
-- Set Configurations
--
SELECT * FROM sys.configurations
WHERE name = 'xp_cmdshell'
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 1
RECONFIGURE
GO

EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 0
RECONFIGURE
GO

SELECT * FROM sys.configurations
WHERE name = 'xp_cmdshell'
GO

Then use xp_cmdshell to query each environment variable, as needed. Or you could create a temporary table and INSERT the result into it.

--
-- Query environment variables
--
exec xp_cmdshell 'echo %NUMBER_OF_PROCESSORS%'
exec xp_cmdshell 'echo %ProgramFiles%'
exec xp_cmdshell 'echo %LTRXDATABASE%'
GO

Result:

Environment Variables Output
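As mentioned above, the output can also be captured into a temporary table for further querying. A minimal sketch:

--
-- Capture xp_cmdshell output into a temporary table
--
CREATE TABLE #EnvVars (Output VARCHAR(255) NULL)

INSERT INTO #EnvVars
EXEC xp_cmdshell 'echo %NUMBER_OF_PROCESSORS%'

SELECT * FROM #EnvVars

DROP TABLE #EnvVars
GO

Note that xp_cmdshell returns a trailing NULL row, so filter on Output IS NOT NULL when joining or comparing.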

Reset the sys.configurations settings after you are done.

--
-- Reset Configurations
--
SELECT * FROM sys.configurations
WHERE name = 'xp_cmdshell'
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 1
RECONFIGURE
GO

EXEC sp_configure 'xp_cmdshell', 0
RECONFIGURE
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 0
RECONFIGURE
GO

SELECT * FROM sys.configurations
WHERE name = 'xp_cmdshell'
GO

 

 

Hope this helps,
_Sqltimes
