Interesting one today:

On a production box, the backup jobs have been failing with an interesting and perplexing error. It says "Not enough disk space". As you can guess, this is one of those confusing, misleading error messages that is not what it seems on the surface, making it worth a post of its own.

The backup command and the detailed error message are below:

BACKUP DATABASE DummyDB
TO        DISK = N''
	, DISK = N''
	, DISK = N''
	, DISK = N''
WITH STATS = 1
GO
...
...
...
68 percent processed. 
69 percent processed. 
70 percent processed. 
Msg 3202, Level 16, State 1, Line 1 

Write on "F:\MSSQL\Backup\DummyDB.BAK" failed: 
112(There is not enough space on the disk.) 

Msg 3013, Level 16, State 1, Line 1 
BACKUP DATABASE is terminating abnormally.

This error occurs in backups both with and without compression, and in both FULL and DIFFERENTIAL backups.

This is a fairly large database, ranging up to 18 TB, so backups are an ordeal to perform. When even the DIFF backups started failing, it was a bit concerning.

After attempting several backups on local & remote storage with plenty of space, a pattern still did not emerge. The only constant was that the backup failed at around 70% completion.

At that point, one of my colleagues (thanks, Michael) pointed out that, as part of the backup operation, Sql Server first runs an algorithm that calculates the amount of space needed for the backup file. If the backup drive has enough free space, well and good; if not, it throws this error.

But, as you can guess, we had plenty of free space, i.e. petabytes of it.
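
One quick way to double-check that claim from within Sql Server is to look at the free space on each volume hosting the database files. A minimal sketch, using the standard sys.dm_os_volume_stats DMF ('DummyDB' is the placeholder database name used in this post):

--
-- Free space on each volume hosting the database files
--
SELECT	  f.name
	, vs.volume_mount_point
	, vs.available_bytes / 1024 / 1024 / 1024 AS free_GB
FROM sys.master_files AS f
CROSS APPLY sys.dm_os_volume_stats (f.database_id, f.file_id) AS vs
WHERE f.database_id = DB_ID('DummyDB')
GO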

Occasionally, manual backups are successful. So, I’m not sure what is going on, but here is my theory:

At different points, Sql Server runs the algorithm (the "pre-allocation algorithm") to determine whether there is enough space. Initially it comes back saying "yes", and the backup proceeds with writing to the backup file; a little later it checks again, and again it comes back with "yes". But at some point in the subsequent checks (in our case, between 70% and 72% complete), the algorithm decides there is not enough disk space.

As it turns out, there is a trace flag, 3042, that stops this algorithm from making any assessments, so that backups can progress to completion.

From MSDN:

Bypasses the default backup compression pre-allocation algorithm to allow the backup file to grow only as needed to reach its final size. This trace flag is useful if you need to save on space by allocating only the actual size required for the compressed backup. Using this trace flag might cause a slight performance penalty (a possible increase in the duration of the backup operation).

Caution: Since we are disabling this safeguard, manually make sure there is plenty of space for the backup to complete.
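
One rough way to gauge how much space the backup will need is to look at recent backup sizes recorded in msdb's standard history tables ('DummyDB' again is the placeholder name):

--
-- Recent backup sizes for the database, from msdb history
--
SELECT TOP 5
	  backup_finish_date
	, backup_size / 1024 / 1024 / 1024            AS backup_GB
	, compressed_backup_size / 1024 / 1024 / 1024 AS compressed_GB
FROM msdb.dbo.backupset
WHERE database_name = 'DummyDB'
ORDER BY backup_finish_date DESC
GO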

--
-- Disable pre-allocation algorithm
--
DBCC TRACEON (3042)
GO

BACKUP DATABASE DummyDB
TO        DISK = N''
    , DISK = N''
    , DISK = N''
    , DISK = N''
WITH STATS = 1
GO
DBCC TRACEOFF (3042)
GO
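
To confirm the flag actually took effect before kicking off the backup (here it is enabled at the session level, without the -1 option), DBCC TRACESTATUS reports it:

--
-- Verify trace flag 3042 is active for this session
--
DBCC TRACESTATUS (3042)
GO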

Make sure you test this in a non-production environment before enabling it in production.

Hope this helps,
_Sqltimes


Quick one today:

Given that UNPIVOT is not used as commonly as PIVOT (which itself is less common than other T-SQL techniques), the occurrence of this error is even more remote. Luckily, we ran into it earlier:

Msg 8167, Level 16, State 1, Line 5
The type of column conflicts with the type of other columns specified in the UNPIVOT list.

Following is the UNPIVOT statement that threw the error. As you can see, on the surface, everything looks correct.

--
--	UNPIVOT syntax that throws error
--
SELECT Daily_Health_Check_ID, Metrics, Measures
FROM	(
	SELECT	  Daily_Health_Check_ID
			, Environment
			, DataCenter
	FROM SampleTable
	) AS Source
UNPIVOT
(
	Measures FOR Metrics IN (Environment, DataCenter)
) AS UnPVT
GO
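
For context, Msg 8167 is raised when the columns in the UNPIVOT IN list are declared with different data types or sizes. A hypothetical definition of SampleTable (illustrative only, not the actual schema) that would reproduce the error:

--
-- Hypothetical SampleTable definition that reproduces Msg 8167:
-- the two UNPIVOT columns differ in declared length
--
CREATE TABLE SampleTable (
	  Daily_Health_Check_ID INT         NOT NULL
	, Environment           VARCHAR(10) NULL
	, DataCenter            VARCHAR(25) NULL
)
GO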

Now let’s change the datatype and size of all the columns to match.

--
--	After converting all columns to the same datatype & size
--
SELECT Daily_Health_Check_ID, Metrics, Measures
FROM	(
	SELECT	  Daily_Health_Check_ID
			, CONVERT(VARCHAR(20), Environment) AS [Environment]
			, CONVERT(VARCHAR(20), DataCenter) AS [DataCenter]
	FROM SampleTable
	) AS Source
UNPIVOT
(
	Measures FOR Metrics IN (Environment, DataCenter)
) AS UnPVT
GO

Voilà!! The key is to have a uniform data type & size for all the columns in the source data set.
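
A quick way to spot such mismatches up front is to list the declared types of the columns going into the UNPIVOT list (standard catalog view; nothing here is specific to this table):

--
-- Check the declared data type & size of the UNPIVOT list columns
--
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'SampleTable'
  AND COLUMN_NAME IN ('Environment', 'DataCenter')
GO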

Hope this helps,
_Sqltimes


Interesting error today:

Recently, on one of our performance test environments, we ran into an interesting error that looked new (unfamiliar) and a bit perplexing. Below is the error:

Error: 7886, Severity: 20, State: 1
A read operation on a large object failed while sending data to the client.
A common cause for this is if the application is running in READ UNCOMMITTED isolation level.
This connection will be terminated.

When did this error occur?

Upon further digging (and rereading the error message a few times), it became clear that we were running a long-running NOLOCK query, which is not uncommon in this environment. As it turns out, it was retrieving some large objects and sending them to client applications. But in our situation, neither of these was the root of the problem.

What could be the problem?

Since these kinds of operations are not uncommon in this environment, they could not, by themselves, have caused it. Maybe they contributed, but they did not seem to be the root cause. We ran the same operations several times; sometimes they threw this error, and other times they ran fine. So, the problem was somewhere else.

Solution

Luckily, someone has already discussed this error in detail here. It turns out this particular database, before every performance test, was being reset to 'AutoGrowth' with small increments (~1 MB). Once we corrected our preparatory steps (for the performance test) to configure the database size properly, the error was resolved.
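
As a rough sketch of the kind of check and fix involved (the database name 'PerfDB', the logical file name 'PerfDB_Data', and the sizes are all hypothetical):

--
-- Check current size & autogrowth settings of the database files
--
SELECT	  name
	, size * 8 / 1024 AS size_MB
	, CASE WHEN is_percent_growth = 1
	       THEN CONVERT(VARCHAR(10), growth) + ' percent'
	       ELSE CONVERT(VARCHAR(10), growth * 8 / 1024) + ' MB'
	  END AS growth_setting
FROM sys.database_files
GO

--
-- Pre-size the data file and set a larger growth increment
-- ('PerfDB' and 'PerfDB_Data' are hypothetical names)
--
ALTER DATABASE PerfDB
MODIFY FILE (NAME = N'PerfDB_Data', SIZE = 100GB, FILEGROWTH = 10GB)
GO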

What was the root cause?

As of now, it is not clear why this issue occurs, but this is my working theory. When a large read operation (reading a LOB and sending it to the client) runs under NOLOCK isolation, it reads dirty data. It is possible that someone else was attempting to change this LOB at the same time. By itself this would not be a problem, since NOLOCK allows reading uncommitted data, but it might be a contributor.

Adding to that, there was not enough free space left in the data file to accommodate the new changes coming in. If it is a larger LOB, with a size greater than the 'AutoGrowth' setting of 1 MB, Sql Server has to take a breather to expand the file before the changes can proceed. So its best option is to terminate the NOLOCK read connection, which is the least important, most lightweight task. Doing this causes the least negative impact to database integrity, while keeping the cost minimal (a closed connection).

Not sure if this theory is correct; it is just a working theory.

Hope this helps,
_Sqltimes


Quick one today:

Earlier, this interesting error message appeared in our environment during some data retrieval. It looks like we cannot query DMVs on a remote instance using the linked server four-part naming convention of Server.Database.Schema.Object. Below is the query that attempts to query the DMVs on the remote machine:

--
-- Query remote server DMV's
--
SELECT TOP 10 *
FROM [RemoteServer].[master].sys.dm_exec_query_stats AS qs
CROSS APPLY [RemoteServer].[master].sys.dm_exec_sql_text (qs.sql_handle) AS qt
GO

When we run it, Sql Server throws this error:

Msg 4122, Level 16, State 1, Line 11
Remote table-valued function calls are not allowed.

Solution:

Now, we need a different approach. Enter OpenQuery. Let’s try the same query logic with OpenQuery.

--
-- Query remote DMV's with OPENQUERY
--
SELECT *
FROM OPENQUERY(RemoteServer, 'SELECT TOP 10 *
                              FROM [master].sys.dm_exec_query_stats AS qs
                              CROSS APPLY [master].sys.dm_exec_sql_text (qs.sql_handle) AS qt
                             ')
GO

DATA ACCESS for Distributed Query:

Please note that for OpenQuery to run successfully, you need DATA ACCESS enabled on the RemoteServer linked server. If not, you'll run into an error like this:

Msg 7411, Level 16, State 1, Line 36
Server 'RemoteServer' is not configured for DATA ACCESS.

Use sp_serveroption to enable the DATA ACCESS option for the RemoteServer. DATA ACCESS enables a linked server for distributed query access. See the sample below:

--
-- Enable DATA ACCESS for RemoteServer
--
EXEC sp_serveroption 'RemoteServer', 'DATA ACCESS', 'TRUE'
GO

Once DATA ACCESS is enabled, the OpenQuery runs correctly.
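
To confirm the setting took effect, sys.servers exposes an is_data_access_enabled flag for each linked server:

--
-- Verify DATA ACCESS is enabled for the linked server
--
SELECT name, is_data_access_enabled
FROM sys.servers
WHERE name = 'RemoteServer'
GO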

Hope this helps,
_Sqltimes
