
Archive for the ‘Cool Script’ Category

Interesting one today:

On a production box, the backup jobs have been failing with an interesting and perplexing error. It says “Not enough disk space”; as you can guess, this is one of those confusing or misleading error messages that is not what it seems on the surface, making it worthwhile for a post of its own.

The detailed error message is below:

BACKUP DATABASE DummyDB
TO        DISK = N''
	, DISK = N''
	, DISK = N''
	, DISK = N''
WITH STATS = 1
GO
...
...
...
68 percent processed. 
69 percent processed. 
70 percent processed. 
Msg 3202, Level 16, State 1, Line 1 

Write on "F:\MSSQL\Backup\DummyDB.BAK" failed: 
112(There is not enough space on the disk.) 

Msg 3013, Level 16, State 1, Line 1 
BACKUP DATABASE is terminating abnormally.

This error occurs in backups both with & without compression, and in both FULL & Differential backups.

This is a fairly large database, growing up to 18 TB, so backups are an ordeal to perform. When even DIFF backups started failing, it was a bit concerning.

After attempting several backups on local & remote storage with plenty of space, a pattern still did not emerge. The only constant was that it failed at around 70% completion progress.

At that point, one of my colleagues (Thanks, Michael) pointed out that, as part of the backup operation, Sql Server first runs an algorithm that calculates the amount of space needed for the backup file. If the backup drive has enough free space, well and good; if not, it throws this error.

But, as you can guess, we had plenty of free space, i.e. petabytes of free space.

Occasionally, manual backups are successful. So, I’m not sure what is going on, but here is my theory:

At different points, Sql Server runs the algorithm (the “pre-allocation algorithm”) to determine if there is enough space. Initially it comes back saying “yes”, and the backup proceeds with writing to the backup file; a little later, it checks again and still comes back with “yes”; but at some point in the subsequent checks (in our case between 70% & 72% complete), the algorithm decides there is not enough disk space.

So, it turns out there is a TRACE FLAG called 3042 that could stop this algorithm from making any assessments; that way, backups could progress to completion.

From MSDN:

Bypasses the default backup compression pre-allocation algorithm to allow the backup file to grow only as needed to reach its final size. This trace flag is useful if you need to save on space by allocating only the actual size required for the compressed backup. Using this trace flag might cause a slight performance penalty (a possible increase in the duration of the backup operation).


Caution: Manually make sure there is plenty of space for the backup to complete, since we are disabling the algorithm.

--
-- Disable pre-allocation algorithm
--
DBCC TRACEON (3042)
GO

BACKUP DATABASE DummyDB
TO        DISK = N''
    , DISK = N''
    , DISK = N''
    , DISK = N''
WITH STATS = 1
GO
DBCC TRACEOFF (3042)
GO
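A quick way to confirm whether the flag is currently active is DBCC TRACESTATUS:

--
-- Check status of trace flag 3042 (Status = 1 means it is ON)
--
DBCC TRACESTATUS (3042)
GO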

Make sure you test this in a non-production environment before enabling it in production.

Hope this helps,
_Sqltimes

Quick one today:

In production environments, it’s a common practice to perform full VERIFICATION of backup files as part of the backup itself. The following script shows verification of a backup file:

--
-- Verify backup file
--
RESTORE VERIFYONLY
FROM       DISK = N'H:\MSSQL\Backup\DummyDB_01.BAK'
	,  DISK = N'H:\MSSQL\Backup\DummyDB_02.BAK'
	,  DISK = N'H:\MSSQL\Backup\DummyDB_03.BAK'
	,  DISK = N'H:\MSSQL\Backup\DummyDB_04.BAK'
	,  DISK = N'H:\MSSQL\Backup\DummyDB_05.BAK'
WITH STATS = 1
GO
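As a side note, if the backup is taken WITH CHECKSUM, RESTORE VERIFYONLY can also re-validate those checksums during verification. A minimal sketch, assuming a single-file backup of the same DummyDB (the file name here is made up):

--
-- Backup with checksums, then validate them during verification
--
BACKUP DATABASE DummyDB
TO DISK = N'H:\MSSQL\Backup\DummyDB_Checksum.BAK'
WITH CHECKSUM, STATS = 1
GO

RESTORE VERIFYONLY
FROM DISK = N'H:\MSSQL\Backup\DummyDB_Checksum.BAK'
WITH CHECKSUM, STATS = 1
GO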


Hope this helps,
_Sqltimes

Quick one today:

Sometimes there is a quick need to list all indexes, both clustered & non-clustered, along with their storage location a.k.a. FileGroup.

The query below gives us that ability. As is evident, it can be expanded to add more details, but here it is kept short to demonstrate the value:

--
-- List all indexes and their FGs
--
SELECT    O.name AS [tableName]
	, I.name AS [IndexName]
	, I.index_id AS [IndexID]
	, I.type_desc AS [IndexType]
	, FG.name  AS [FileGroup_Name]

FROM sys.indexes AS I

INNER JOIN sys.filegroups AS FG
	ON I.data_space_id = FG.data_space_id
INNER JOIN sys.all_objects AS O
	ON I.object_id = O.object_id
	AND O.type = 'U'
	AND O.is_ms_shipped = 0

ORDER BY O.name, I.index_id
GO
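As one example of expanding it, joining to sys.partitions adds a row count for each index. This is just a sketch; note that indexes placed on partition schemes will not match sys.filegroups directly (they would need sys.data_spaces instead):

--
-- List all indexes with their FGs and row counts
--
SELECT    O.name AS [TableName]
	, I.name AS [IndexName]
	, I.index_id AS [IndexID]
	, I.type_desc AS [IndexType]
	, FG.name AS [FileGroup_Name]
	, SUM(P.rows) AS [Total_Rows]

FROM sys.indexes AS I

INNER JOIN sys.filegroups AS FG
	ON I.data_space_id = FG.data_space_id
INNER JOIN sys.all_objects AS O
	ON I.object_id = O.object_id
	AND O.type = 'U'
	AND O.is_ms_shipped = 0
INNER JOIN sys.partitions AS P
	ON P.object_id = I.object_id
	AND P.index_id = I.index_id

GROUP BY O.name, I.name, I.index_id, I.type_desc, FG.name
ORDER BY O.name, I.index_id
GO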

 

Hope this helps,
_Sqltimes

Quick one today:

A few days ago, this error appeared on one of our lab machines.

Error Message:

Msg 15281, Level 16, State 1, Line 16
SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of component 
'Ad Hoc Distributed Queries' because this component is turned off as part of 
the security configuration for this server. A system administrator can enable 
the use of 'Ad Hoc Distributed Queries' by using sp_configure. For more information 
about enabling 'Ad Hoc Distributed Queries', search for 'Ad Hoc Distributed Queries'
in SQL Server Books Online.

Resolution:

This is an easy error to fix, as the error message is pretty verbose and self-explanatory. After making sure that ad hoc distributed queries are allowed (acceptable to be executed) in your sql environment, run the following script to enable them on your Sql instance.

--
-- Check current status of Ad Hoc Distributed Queries
--
SELECT * FROM sys.configurations WHERE name LIKE '%ad hoc Dis%'
GO

--
-- Enable Ad Hoc Distributed Queries
--
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'show advanced options', 0
GO
RECONFIGURE
GO
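Once enabled, an ad hoc distributed query along these lines should run without raising Msg 15281 (the server name & remote query below are just placeholders):

--
-- Sample ad hoc distributed query (placeholder server name)
--
SELECT R.name, R.database_id
FROM OPENROWSET('SQLNCLI'
	, 'Server=RemoteServer;Trusted_Connection=yes;'
	, 'SELECT name, database_id FROM sys.databases') AS R
GO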

 

Hope this helps,
_Sqltimes

Interesting one today:

A few months ago, we had an issue with a database in our lab environment, where the database ended up in SUSPECT mode due to storage issues. Once the storage was fixed, we were able to perform troubleshooting steps on the database. Of those, today we’ll only cover the step used to rebuild the transactional log file (the rest of the steps will be covered in a future post).

Error Message:

An error occurred while processing the log for database 'SampleDB'. 
If possible, restore from backup. 
If a backup is not available, it might be necessary to rebuild the log.

 

NOTE: Rebuilding the transaction log is always the last option. There are other, safer options to troubleshoot databases in SUSPECT mode. Use this method only after you’ve exhausted all other options, like:

  • CHECKDB
  • Restore from valid backup
  • Repairing with EMERGENCY mode

 

NOTE: Rebuilding the transactional log file will break the restore chain; so any previous transactional log backups cannot be applied with the backups going forward.

Rebuild Transactional Log File

 

Step 1: Let’s identify the transactional log file name and path. Use the following query:

--
-- Gather Logical & Physical names of the database
--
SELECT D.name AS [DatabaseName]
	, M.name AS [LogicalName]
	, M.physical_name AS [PhysicalName]
FROM sys.master_files AS M
INNER JOIN sys.databases AS D
	ON D.database_id = M.database_id
	AND D.name = 'SampleDB'
GO

Let’s use the name & path in the script below.

 

Step 2: Prepare the database

Before we rebuild the transactional log file, we need to set the database in EMERGENCY mode & SINGLE_USER mode. Use the script below:

--
-- Set database in EMERGENCY mode & SINGLE_USER mode
--
ALTER DATABASE SampleDB SET EMERGENCY
GO
ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
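Before moving on to the rebuild, it’s worth confirming that the database actually shows EMERGENCY & SINGLE_USER:

--
-- Confirm database state & user access mode
--
SELECT name
	, state_desc
	, user_access_desc
FROM sys.databases
WHERE name = 'SampleDB'
GO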

 

 

Step 3: Rebuild Transactional Log

This is an undocumented & unsupported command, so use caution before running it in production. Steps like this must be taken only upon guidance from Microsoft CSS.

With this script, we can create a new transactional log file. After you run it, make sure you run CHECKDB & take fresh backups.

--
-- Rebuild log file
--
ALTER DATABASE SampleDB 
REBUILD LOG
ON 
	( NAME = 'SampleDB_log'
	, FILENAME = 'L:\MSSQL\LOGS\SampleDB_log.LDF'
	)
GO

 

Step 4: Post Rebuild

Upon successful log rebuild, let’s take a few precautions to make sure everything is still good.

With this script, we run CHECKDB, return the database to MULTI_USER, check its status, and take a fresh full backup:

--
-- Post rebuild steps
--
DBCC CHECKDB (SampleDB)
GO

ALTER DATABASE SampleDB  SET MULTI_USER
GO

SELECT DATABASEPROPERTYEX('SampleDB', 'Status')
GO

BACKUP DATABASE SampleDB TO DISK = N'Z:\MSSQL\Backup\SampleDB_AfterTLogRebuild.BAK'
GO
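As an extra precaution, the new full backup can be verified right away:

--
-- Verify the post-rebuild backup file
--
RESTORE VERIFYONLY
FROM DISK = N'Z:\MSSQL\Backup\SampleDB_AfterTLogRebuild.BAK'
WITH STATS = 1
GO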

 

Hope this helps,
_Sqltimes

Interesting one today:

Last time, we walked through the T-SQL steps to insert a tracer token to measure latency in a replication topology. Today, we’ll look at an alternate method to query tracer token details; a.k.a. the MStracer_tokens & MStracer_history meta tables.

Along with the T-SQL procedures (sys.sp_helptracertokenhistory), Sql Server also provides a way to query tracer tokens using metadata tables, i.e. MStracer_tokens & MStracer_history in the distribution database. They keep track of details for each token; querying them provides the necessary information.

--
-- Query tracer token tables
--
SELECT	  t.publication_id
	, h.agent_id
	, t.publisher_commit
	, t.distributor_commit
	, h.subscriber_commit

FROM MStracer_tokens t
JOIN MStracer_history h
	ON t.tracer_id = h.parent_tracer_id

ORDER BY t.publisher_commit DESC
GO
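For completeness, a new token is posted at the Publisher with sys.sp_posttracertoken (the publication name below is a placeholder), and DATEDIFF on the commit columns turns the raw timestamps into latency numbers:

--
-- Post a new tracer token (run at the Publisher, in the publication database)
--
EXEC sys.sp_posttracertoken @publication = N'SamplePublication'
GO

--
-- Latency in seconds for each token (run in the distribution database)
--
SELECT	  t.publisher_commit
	, DATEDIFF(SECOND, t.publisher_commit, t.distributor_commit) AS [Publisher_To_Distributor_Seconds]
	, DATEDIFF(SECOND, t.distributor_commit, h.subscriber_commit) AS [Distributor_To_Subscriber_Seconds]

FROM MStracer_tokens t
JOIN MStracer_history h
	ON t.tracer_id = h.parent_tracer_id

ORDER BY t.publisher_commit DESC
GO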

Hope this helps,
_Sqltimes

Interesting one today:

This article is part 3 in a series on Advanced Query Tuning Concepts that are good to be familiar with. The full list is here.

Hash Join

When both data sets are large, unsorted & unindexed, Hash Join is the best operator. This is the most complicated kind of data set to process, and Hash Join handles it efficiently.

(Screenshot: Hash Join query plan)

Hash Joins follow a complicated logic in identifying matching records. We’ll get into the details in a future post.

There are different types of Hash Joins:

  1. In-memory hash join
  2. Grace hash join
  3. Recursive hash join

Note:

Hash Joins end up being used a lot in intermediary steps. During large table joins, when virtual tables are generated in intermediary steps, the subsequent joins on those intermediary tables are performed using Hash Joins, as these intermediary tables are not indexed or sorted.
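To deliberately see a Hash Join in a plan, a join hint can force it (the table names below are made up); in day-to-day queries, though, it’s best to let the optimizer pick the join operator:

--
-- Force a hash join between two (hypothetical) large, unindexed tables
--
SELECT    O.OrderID
	, C.CustomerName
FROM dbo.Orders AS O
INNER HASH JOIN dbo.Customers AS C
	ON O.CustomerID = C.CustomerID
GO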

 

Hope this helps,
_Sqltimes