
Archive for the ‘Error Messages’ Category

Interesting one today:

On a production box, the backup jobs had been failing with a perplexing error: "Not enough disk space". As you can guess, this is one of those misleading error messages where the problem is not what it seems on the surface, which makes it worth a post of its own.

The detailed error message is below:

BACKUP DATABASE DummyDB
TO        DISK = N''
	, DISK = N''
	, DISK = N''
	, DISK = ''
WITH STATS = 1
GO
...
...
...
68 percent processed. 
69 percent processed. 
70 percent processed. 
Msg 3202, Level 16, State 1, Line 1 

Write on "F:\MSSQL\Backup\DummyDB.BAK" failed: 
112(There is not enough space on the disk.) 

Msg 3013, Level 16, State 1, Line 1 
BACKUP DATABASE is terminating abnormally.

This error occurs in backups both with & without compression, and in both FULL & DIFFERENTIAL backups.

This is a fairly large database, running up to 18 TB, so backups are an ordeal to perform. When even DIFF backups started failing, it was a bit concerning.

After attempting several backups on local & remote storage with plenty of space, a pattern still did not emerge. The only constant was that the backup failed at around 70% completion.

At that point, one of my colleagues (thanks, Michael) pointed out that, as part of the backup operation, Sql Server first runs an algorithm that estimates the amount of space needed for the backup file. If the backup drive has enough free space, well and good; if not, it throws this error.

But, as you can guess, we had plenty of free space, i.e. petabytes of free space.
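
For reference, recent backup history in msdb gives a sense of how large the backup files actually get; below is a minimal sketch (assuming the database name 'DummyDB' used in this post), with sizes reported in GB:

--
-- Recent backup sizes for this database, in GB (type: D = Full, I = Differential)
--
SELECT TOP (10)
	  BS.backup_start_date
	, BS.type
	, BS.backup_size / 1073741824.0            AS [BackupSizeGB]
	, BS.compressed_backup_size / 1073741824.0 AS [CompressedSizeGB]
FROM msdb.dbo.backupset AS BS
WHERE BS.database_name = 'DummyDB'
ORDER BY BS.backup_start_date DESC
GO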

Occasionally, manual backups are successful. So, I’m not sure what is going on, but here is my theory:

At different points during the backup, Sql Server runs the algorithm (the "pre-allocation algorithm") to determine if there is enough space. Initially it comes back saying "yes", and the backup proceeds with writing to the backup file; a little later it checks again, and it still comes back with "yes"; but on one of the subsequent checks (in our case, between 70% and 72% complete), the algorithm decides there is not enough disk space.

So, turns out there is a TRACE FLAG called 3042 that disables this algorithm from making any assessments; that way, backups can progress to completion.

From MSDN:

Bypasses the default backup compression pre-allocation algorithm to allow the backup file to grow only as needed to reach its final size. This trace flag is useful if you need to save on space by allocating only the actual size required for the compressed backup. Using this trace flag might cause a slight performance penalty (a possible increase in the duration of the backup operation).


Caution: Manually make sure there is plenty of space for the backup to complete, since we are disabling the algorithm.
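
For that manual check, here is a minimal sketch that lists free space on the volumes hosting this database's files (sys.dm_os_volume_stats is available on SQL Server 2008 R2 SP1 and later; a backup drive that hosts no database files still needs a separate check):

--
-- Free space (in GB) on the volumes hosting the database files
--
SELECT DISTINCT
	  VS.volume_mount_point
	, VS.available_bytes / 1073741824.0 AS [FreeSpaceGB]
FROM sys.master_files AS MF
CROSS APPLY sys.dm_os_volume_stats(MF.database_id, MF.file_id) AS VS
WHERE MF.database_id = DB_ID('DummyDB')
GO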

--
-- Disable pre-allocation algorithm
--
DBCC TRACEON (3042)
GO

BACKUP DATABASE DummyDB
TO        DISK = N''
    , DISK = N''
    , DISK = N''
    , DISK = ''
WITH STATS = 1
GO
DBCC TRACEOFF (3042)
GO

Make sure you test this in a non-production environment, before enabling it in production.

Hope this helps,
_Sqltimes


Quick one today:

A few days ago, this error appeared on one of our lab machines.

Error Message:

Msg 15281, Level 16, State 1, Line 16
SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of component 
'Ad Hoc Distributed Queries' because this component is turned off as part of 
the security configuration for this server. A system administrator can enable 
the use of 'Ad Hoc Distributed Queries' by using sp_configure. For more information 
about enabling 'Ad Hoc Distributed Queries', search for 'Ad Hoc Distributed Queries'
in SQL Server Books Online.

Resolution:

This is an easy error to fix, as the error message is verbose and self-explanatory. After making sure that ad hoc distributed queries are acceptable in your environment, run the following script to enable them on your Sql instance.

--
-- Check current status of Ad Hoc Distributed Queries
--
SELECT * FROM sys.configurations WHERE name LIKE '%ad hoc Dis%'
GO

--
-- Enable Ad Hoc Distributed Queries
--
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'show advanced options', 0
GO
RECONFIGURE
GO
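
To confirm the change took effect, a quick smoke test is below; a minimal sketch, with '(local)' standing in for your instance name:

--
-- Smoke test: the Msg 15281 error should no longer appear
--
SELECT T.*
FROM OPENROWSET('SQLOLEDB'
	, 'Server=(local);Trusted_Connection=yes;'
	, 'SELECT @@SERVERNAME AS [ServerName]') AS T
GO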


Hope this helps,
_Sqltimes



Interesting one today:

A few months ago, we had an issue with a database in our lab environment where the database ended up in SUSPECT mode due to storage issues. Once the storage was fixed, we were able to perform troubleshooting steps on the database. Of those, today we'll only cover the step that was used to rebuild the transaction log file (the rest of the steps will be covered in a future post).

Error Message:

An error occurred while processing the log for database 'SampleDB'. 
If possible, restore from backup. 
If a backup is not available, it might be necessary to rebuild the log.

NOTE: Rebuilding the transaction log is always the last option. There are other, safer options to troubleshoot databases in SUSPECT mode. Use this approach only after you've exhausted all other options, such as the ones below (a sketch of the EMERGENCY-mode repair follows the list):

  • CHECKDB
  • Restore from valid backup
  • Repairing with EMERGENCY mode
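
For the EMERGENCY-mode repair route, here is a minimal sketch of its general shape (assuming the database name 'SampleDB' used below; REPAIR_ALLOW_DATA_LOSS can discard data, so treat it with the same caution as the log rebuild itself):

--
-- Sketch: EMERGENCY-mode repair, to try before a log rebuild
--
ALTER DATABASE SampleDB SET EMERGENCY
GO
ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
DBCC CHECKDB ('SampleDB', REPAIR_ALLOW_DATA_LOSS)
GO
ALTER DATABASE SampleDB SET MULTI_USER
GO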

NOTE: Rebuilding the transaction log file will break the restore chain; any previous transaction log backups cannot be applied to the backups taken going forward.

Rebuild Transaction Log File

Step 1: Let's identify the transaction log file's logical name and physical path, using the following query:

--
-- Gather Logical & Physical names of the database
--
SELECT D.name AS [DatabaseName]
	, M.name AS [LogicalName]
	, M.physical_name AS [PhysicalName]
FROM sys.master_files AS M
INNER JOIN sys.databases AS D 
	ON D.database_id = M.database_id
	AND D.name = 'SampleDB'
GO

Let's use that name & path in the scripts below.

Step 2: Prepare the database

Before we rebuild the transaction log file, we need to set the database to EMERGENCY mode & SINGLE_USER mode. Use the script below:

--
-- Set database in EMERGENCY mode & SINGLE_USER mode
--
ALTER DATABASE SampleDB SET EMERGENCY
GO
ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO


Step 3: Rebuild Transaction Log

This is an undocumented & unsupported command, so use caution before you run this in production. Steps like this must be taken only upon guidance from Microsoft CSS.

With this script, we can create a new transaction log file. After you run it, make sure you run CHECKDB & take fresh backups.

--
-- Rebuild log file
--
ALTER DATABASE SampleDB 
REBUILD LOG
ON 
	( NAME = 'SampleDB_log'
	, FILENAME = 'L:\MSSQL\LOGS\SampleDB_log.LDF'
	)
GO


Step 4: Post Rebuild

Upon a successful log rebuild, let's take a few precautions to make sure everything is still good.


--
-- Post rebuild steps
--
DBCC CHECKDB (SampleDB)
GO

ALTER DATABASE SampleDB SET MULTI_USER
GO
GO

SELECT DATABASEPROPERTYEX('SampleDB', 'Status')
GO

BACKUP DATABASE SampleDB TO DISK = N'Z:\MSSQL\Backup\SampleDB_AfterTLogRebuild.BAK'
GO


Hope this helps,
_Sqltimes


Interesting one today:

During a Sql Cluster setup in our lab environment, we ran into this interesting (and common) error message.

The Sql Server failover cluster instance name '' already exists as a clustered resource. 
Specify a different failover cluster name.

Sql Cluster Set up Error

Resolution

Turns out, the sysadmins had created a Resource Group with the same name and allocated all the cluster disks to it. So, when the installer attempted to create a new Resource Group (for Sql Server resources), it threw this error.

SqlCluster Install Resolution

 

Solution: Once the dummy resource group was removed, the cluster setup wizard progressed without any errors.

The moral of the story: before creating a resource group, make sure a resource group with the same name does not already exist.


Hope this helps,
_Sqltimes


Interesting one today:

Earlier this month, during a Sql Server Cluster setup on a new set of VMs, ran into this interesting warning message.

Network binding Order generated a warning.

The domain network is not the first bound network. This will cause domain operations to run slowly and can cause timeouts that result in failures. Use the Windows network advanced configurations to change the binding order.

Cluster Setup Network Binding Order

Upon further investigation, it became clear that the NIC that connects to the domain network was not given the highest priority, as needed for a Sql Cluster.

Resolution

In Clustered environments, it is recommended to have the network interfaces properly ordered for maximum efficiency.

Go to “Network connections” and open Advanced Settings. See the image below:

Network Connection – Advanced Settings

In the resultant window, under the Adapters and Bindings tab, make sure the network interfaces are ordered according to the recommendation: the Domain network on top, then the Heartbeat network, then Remote Access Connections. See the image below for the recommended order.

Network Binding Proper Order

After saving the new order, go back to "Install Failover Cluster Rules" and re-run the checks. This blog has more info, if you are interested in the rest of the cluster setup.

Hope this helps,
_Sqltimes


Quick one today:

Recently, ran into this error about implicit conversion.

Msg 402, Level 16, State 1, Line 45
The data types nvarchar and varbinary are incompatible in the add operator.

This is the query:

--
-- Generate script to CREATE/migrate Sql logins
--
SELECT
		  'CREATE LOGIN [' + L.name
		+ '] WITH PASSWORD = ' + L.password_hash + ' HASHED, sid = ' + L.sid
		+ ', CHECK_EXPIRATION = '
		+ CASE WHEN L.is_expiration_checked = 0 THEN 'OFF, ' ELSE 'ON, ' END
		+ 'CHECK_POLICY = '
		+ CASE WHEN L.is_policy_checked = 0 THEN ' OFF, ' ELSE 'ON, ' END
		+ 'DEFAULT_DATABASE = [' + L.default_database_name + '], '
		+ 'DEFAULT_LANGUAGE = [' + L.default_language_name + ']'
FROM sys.sql_logins AS L
WHERE type = 'S' -- SqlLogin
ORDER BY L.name ASC
GO

On the surface it seems fairly easy to fix, right? Just find the VARBINARY literal in the query and make the conversion EXPLICIT using a CONVERT() function.

In this case, it is ‘L.password_hash‘ & ‘L.sid‘.

As indicated in BoL, password_hash is a VARBINARY(256) datatype and sid is a VARBINARY(85). To concatenate VARBINARY values (represented in hex) with the rest of the query, which is written in VARCHAR, we need to use an undocumented function called fn_varbintohexstr.

Here our goal is to keep the hex representation intact (visually), and just convert the binary to VARCHAR so it can be concatenated with the rest of the SQL script (written in VARCHAR). We do not want to convert the contents into VARCHAR, just the format.

--
-- Generate script to CREATE/migrate Sql logins
--
SELECT
		  'CREATE LOGIN [' + L.name
		+ '] WITH PASSWORD = ' + master.sys.fn_varbintohexstr(L.password_hash) + ' HASHED, sid = ' + master.sys.fn_varbintohexstr(L.sid)
		+ ', CHECK_EXPIRATION = '
		+ CASE WHEN L.is_expiration_checked = 0 THEN 'OFF, ' ELSE 'ON, ' END
		+ 'CHECK_POLICY = '
		+ CASE WHEN L.is_policy_checked = 0 THEN ' OFF, ' ELSE 'ON, ' END
		+ 'DEFAULT_DATABASE = [' + L.default_database_name + '], '
		+ 'DEFAULT_LANGUAGE = [' + L.default_language_name + ']'
FROM sys.sql_logins AS L
WHERE type = 'S' -- SqlLogin
ORDER BY L.name ASC
GO

With this function, we are able to convert the contents of VARBINARY variable/column (represented in hexadecimal) to VARCHAR (again represented in hex), so we could generate a proper SQL script.
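
As an aside, on SQL Server 2008 and later, CONVERT with style 1 achieves the same hex-preserving conversion without relying on the undocumented function; a minimal sketch:

--
-- CONVERT with style 1 keeps the '0x...' hex representation
--
SELECT CONVERT(VARCHAR(MAX), 0x0102ABCD, 1) AS [HexString] -- returns '0x0102ABCD'
GO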


Hope this helps,
_Sqltimes


Problem

While gathering replication backlog details, ran into this interesting error. The goal was to run the sp_replmonitorsubscriptionpendingcmds stored procedure and store its output in a table. As we've seen in a recent post, redirecting the output of a stored procedure execution into a table is possible; but in this case, it throws an error saying that is not allowed.

INSERT INTO #DC1_Repl_Backlog
EXEC  sp_replmonitorsubscriptionpendingcmds
		  @publisher	= 'InstanceName'
		, @publisher_db	= 'DBName'
		, @publication	= 'Publication'
		, @subscriber	= 'Subscriber'
		, @subscriber_db= 'DBName2'
		, @subscription_type = '0'
GO
Msg 8164, Level 16, State 1, Procedure sp_replmonitorsubscriptionpendingcmds, Line 233
An INSERT EXEC statement cannot be nested.

(0 row(s) affected)

With the available information, a clear & coherent explanation for this behavior is not available from my end. But my guess is this: the code inside this stored procedure must itself be using a similar INSERT INTO #table EXEC sp_xyz, hence the error "INSERT EXEC statement cannot be nested".
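
That guess can be illustrated with a minimal repro; the inner procedure below is a hypothetical stand-in (not the actual code of sp_replmonitorsubscriptionpendingcmds) that already uses INSERT ... EXEC, so wrapping its execution in another INSERT ... EXEC raises the same Msg 8164:

--
-- Minimal repro of the INSERT EXEC nesting restriction
--
CREATE PROCEDURE dbo.Inner_InsertExec
AS
BEGIN
	CREATE TABLE #Inner (n INT)
	INSERT #Inner EXEC ('SELECT 1')	-- inner INSERT ... EXEC
	SELECT n FROM #Inner
END
GO

CREATE TABLE #Outer (n INT)
INSERT #Outer EXEC dbo.Inner_InsertExec	-- outer INSERT ... EXEC: raises Msg 8164
GO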

Resolution

OPENROWSET helps in getting around this. See the sample code below:

--
-- Capture pending replication commands via OPENROWSET
--
IF OBJECT_ID('tempdb..#DC1_Repl_Backlog') IS NOT NULL
	DROP TABLE #DC1_Repl_Backlog

CREATE TABLE #DC1_Repl_Backlog (
	  pendingcmdcount	BIGINT
	, estimatedprocesstime	BIGINT
)

INSERT #DC1_Repl_Backlog (pendingcmdcount, estimatedprocesstime)
SELECT *
FROM OPENROWSET('SQLOLEDB',
		'Server=InstanceName;Trusted_Connection=yes;',
		'EXEC Distribution.dbo.sp_replmonitorsubscriptionpendingcmds
					  @publisher = ''PublisherInstance''
					, @publisher_db	= ''DBName''
					, @publication = ''Publication''
					, @subscriber = ''Subscriber''
					, @subscriber_db = ''DBName2''
					, @subscription_type = ''0'''
		) 

SELECT *
FROM #DC1_Repl_Backlog
GO
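
One footnote: ad hoc OPENROWSET queries like the one above require the 'Ad Hoc Distributed Queries' setting to be enabled (covered in an earlier post on this page); a quick check:

--
-- OPENROWSET ad hoc access requires 'Ad Hoc Distributed Queries' = 1
--
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'Ad Hoc Distributed Queries'
GO
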
Hope this helps,
_Sqltimes

