
Archive for the ‘Sql Server 2008 R2’ Category

Quick one today:

Recently, on one of our lab machines, we ran into this error during replication reconfiguration (rebuilding) efforts.

Msg 18752, Level 16, State 1, Procedure sp_replcmds, Line 1
Only one Log Reader Agent or log-related procedure (sp_repldone, sp_replcmds, 
and sp_replshowcmds) can connect to a database at a time. If you executed a 
log-related procedure, drop the connection over which the procedure was executed 
or execute sp_replflush over that connection before starting the Log Reader Agent 
or executing another log-related procedure.

The error message, though verbose, was a bit confusing as to the underlying cause. Once we retraced the steps that led to this error, it became clear what had contributed to it. We had replication set up on a lab machine, and before this error occurred, we were attempting to drop transactional replication using the SSMS GUI.

It turns out the Log Reader Agent was not properly stopped before attempting this. To resolve the error, just stop the Log Reader Agent. To be sure, go to SSMS > Replication > right-click on the Publication and go to ‘View Log Reader Agent Status‘.

Once the Log Reader Agent was stopped, we were able to proceed with dropping the transactional replication configuration.
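
If the agent cannot be stopped cleanly, the error text itself points to another route: run sp_replflush on the connection holding the log-reader lock to release it. A minimal sketch, assuming the published database is named PublishedDB (a placeholder name):

--
--  Release the log-reader lock from the offending connection
--  (PublishedDB is a placeholder for the published database)
--
USE PublishedDB
GO
EXEC sp_replflush
GO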

Hope this helps,
_Sqltimes


Quick one today:

In the past, we’ve covered the MAXDOP query hint in detail, with its nuances (and here too). This time, we’ll get into another interesting query hint: MAXRECURSION.

One of my colleagues needed to come up with a numbers table with values like this, where there are multiple batches and each batch has some records.

Batch   RecordsInBatch
1       1
1       2
1       3
1       4
..      ..
1       2500
2       1
2       2
..      ..
2       2500
..      ..
999     2500

This can be achieved with recursive CTEs and the query hint OPTION (MAXRECURSION n).

--
--  Numbers table with recursive CTE
--
DECLARE @BN INT = 999, @RN INT = 2500

;WITH BatchNumbers (BatchNum)
AS	(
		SELECT 1 AS BatchNum
		UNION ALL
		SELECT BatchNum + 1 AS BatchNum
		FROM BatchNumbers
		WHERE BatchNum < @BN
	),
RecordNumbers (RecNum)
AS	(
		SELECT 1 AS RecNum
		UNION ALL
		SELECT RecNum + 1 AS RecNum
		FROM RecordNumbers
		WHERE RecNum < @RN
	)
SELECT BatchNumbers.BatchNum, RecordNumbers.RecNum
FROM RecordNumbers
CROSS JOIN BatchNumbers
ORDER BY BatchNumbers.BatchNum, RecordNumbers.RecNum
OPTION (MAXRECURSION 2500)
GO

MAXRECURSION – points to keep in mind

  • Server-wide, the default recursion limit is 100.
  • Use the MAXRECURSION query hint to raise that limit and prevent runaway (infinite) recursion.
  • When MAXRECURSION 0 is specified, there is no limit at all (infinite recursion is allowed).
  • In the query hint, valid values for MAXRECURSION range from 0 to 32,767.
  • If incorrect code that results in infinite recursion makes it into production, MAXRECURSION limits the damage (see the sketch after this list).
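
To see the default limit in action, here is a minimal sketch: without the hint, recursing past 100 levels fails at runtime with Msg 530; adding the hint lets the same query complete.

--
--  Default limit demo: fails with Msg 530 without the hint
--
;WITH Numbers (Num)
AS	(
		SELECT 1 AS Num
		UNION ALL
		SELECT Num + 1 AS Num
		FROM Numbers
		WHERE Num < 150
	)
SELECT Num
FROM Numbers
-- OPTION (MAXRECURSION 150)	-- uncomment to allow up to 150 levels
GO
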
Hope this helps,
_Sqltimes


Quick one today:

Earlier today, in our SSRS lab, after deploying reports using PowerShell, we ran into this error:

The dataset `DataSource1' refers to the shared data source `DataSource1', 
which is not published on the report server. The shared data source `DataSource1' 
must be published before this report can run.

Obviously, since we are still in the process of figuring out the correct “SSRS Deployment Steps” for automation, we may have missed a step or two. In this case, we missed the “data source association to each RDL” step.

Resolution:

Since we are still figuring out an automated step for this, here is what we did to resolve the issue manually for now.

  1. Go to the Report Manager URL in IE (with administrator rights)
  2. Go to the individual report/RDL and hover the mouse over it
  3. Click on the yellow drop-down on the right side and go to ‘Manage’
  4. Go to the ‘Data Sources’ tab
  5. Assign the correct shared data source (see the image below for clarity)
SSRS_Associate_DataSource_to_RDL.PNG
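
To check which reports still lack a shared data source association, one option is to query the report server catalog directly. This is a hedged sketch, assuming the default ReportServer catalog database; dbo.Catalog and dbo.DataSource are internal, undocumented tables, so treat this as read-only diagnostics:

--
--  List each report and its linked shared data source (NULL = not associated)
--
USE ReportServer
GO
SELECT	  rpt.[Path]	AS ReportPath
		, ds.Name		AS DataSourceName
		, src.[Path]	AS SharedDataSourcePath
FROM dbo.Catalog AS rpt
INNER JOIN dbo.DataSource AS ds
	ON ds.ItemID = rpt.ItemID
LEFT OUTER JOIN dbo.Catalog AS src
	ON src.ItemID = ds.Link
WHERE rpt.[Type] = 2	-- 2 = report
GO
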
Hope this helps,
_Sqltimes


Quick one today:

Lately, while working on SSRS, we’ve uncovered some tidbits that have been helping us identify root causes. Here is one of them:

In our environment, we pull SSRS reports from a client application (web page). For the most part, things run smoothly. On occasion, when we run into issues, we need to be able to pull up the underlying error message. In SSRS, by default, reporting errors back to the calling application (client web page) is disabled. But it is easy to enable as and when needed, and then turn it off once done.

Go to SSMS and connect to Reporting Services. Right-click on the root node and go to Properties. On the Advanced tab, under the Security section, look for EnableRemoteErrors and set the value to True from the drop-down. This sends errors back to the client, so we can see what is going on. See the image below for clarity.

SSRS_Report_Errors_To_Client.PNG
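
For environments where the GUI is not an option, the same flag can reportedly be flipped in the catalog database itself. A hedged sketch, assuming the default ReportServer database; dbo.ConfigurationInfo is an internal table, a service restart may be needed for the change to take effect, and remember to set it back to False when done:

--
--  Toggle EnableRemoteErrors directly in the catalog (set back to 'False' when done)
--
USE ReportServer
GO
UPDATE dbo.ConfigurationInfo
SET Value = 'True'
WHERE Name = 'EnableRemoteErrors'
GO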

 


Interesting one today:

On one of our production machines, we recently added a new LUN to a SQL cluster. A task like this is a team effort: sysadmins perform some steps and DBAs carry out the remaining ones. In this article, the main focus is on the steps after the LUN is added to the OS & Sql Cluster by the sysadmins. For context, we’ll start with the high-level steps before going into details.

Sysadmins steps

  1. Add the new storage to the machine/OS as available storage
  2. Format the available drive with appropriate settings (cluster size) and add it as a new drive
  3. Make the drive available to the Cluster using the “Add Disk” screen in the FailOver Cluster Management tool

DBAs steps

  1. Add available storage to Sql Cluster
  2. Configure dependency (and check the report before & after)
  3. Add data file on the new cluster storage drive

Here we’ll cover the DBA steps in detail:

Some of these steps were covered in a recent article as part of dealing with an error message, but here we’ll cover them as a task in itself (which it is).

Add New Storage

Once sysadmins have made the new storage available to the OS Cluster, it needs to be added as a new storage location to the SQL Cluster.

In FailOver Cluster Manager, go to the Sql Server Resource Group for this SQL Cluster, right-click for detailed options, and choose “Add Storage” (see image below).

sqlcluster_addnewstorage_to_os_cluster

Once successful, go to Storage\Disks in FailOver Cluster Manager to confirm the availability. See image below:

sqlcluster_addnewdrive

Configure Dependency

Adding the storage is an important step, and an equally important step is adding the new drive to the Sql Cluster Dependency Chain. The Dependency Chain informs Sql Server “how to act” when any resource in the Cluster becomes unavailable. Some resources automatically trigger cluster failover to the other node; some resources do not. This decision is made based on the configurations in the Dependency Chain.

Example:

Critical: The data drive/LUN that holds database files is critical for optimal availability of the Sql Cluster. So, if it becomes unavailable, failing over to another available node is imperative to keep the cluster available.

Non-Critical: In some scenarios, Sql Server Agent is not considered Critical. So if it stops for some reason, the Cluster will make multiple attempts to restart it on the same node, but this may not necessarily cause a failover.

This is a business decision. All these “response actions” will be configured in Cluster settings.

Now, check the dependency report (before). We can see that the new drive exists in the Cluster, but is not yet added to the Dependency Chain.

SqlCluster_DependencyReport.PNG

To configure the Dependency Chain, go to the Sql Server Resource Group under Roles in FailOver Cluster Manager. See the image below for clarity:

Then go to the bottom section for this Resource Group, where all the individual resources that are part of this Resource Group are displayed.

Under “Other Resources“, right-click on the Sql Server resource and choose Properties, as shown below:

sqlcluster_addnewstorage_add_to_dependency

In the “Sql Server Properties” window, we can see the existing resources already added to dependency chain logic.

SqlCluster_Dependency_Before.PNG

Now, go to the end of the logic list, choose “AND” for the condition, and pick the new Cluster Storage to be included. See the image below for clarity:

SqlCluster_Dependency_After.PNG

After saving the settings, regenerate the Dependency Chain report. Now, we’ll see the new drive as part of the logic.

sqlcluster_dependencyreport_after
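
Back in SQL Server, a quick way to confirm that the instance can actually use the new drive is the cluster shared drives DMV; once the dependency is in place, the new drive letter should appear. A minimal sketch:

--
--  Shared drives this clustered instance can use for database files
--
SELECT DriveName
FROM sys.dm_io_cluster_shared_drives
GO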

Add Database Data File to New Cluster Storage

Now that the new drive is ready, we can easily add a new data file in the new location.

--
-- Add data file to new storage location
--
USE [master]
GO
ALTER DATABASE [SampleDB]
ADD FILE
	(
		  NAME 			= 	N'SampleDB_Data3'
		, FILENAME 		= 	N'U:\MSSQL\Data\SampleDB_Data3.NDF'
		, SIZE 			= 	3500 GB
		, FILEGROWTH 	= 	100 GB
		, MAXSIZE 		= 	3900 GB
	)
TO FILEGROUP [PRIMARY]
GO
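
To confirm the new file landed on the new drive, a quick check against sys.master_files (database and file names are from the example above):

--
--  Verify the new data file and its location
--
SELECT	  name
		, physical_name
		, size / 128 AS SizeMB	-- size is reported in 8 KB pages
		, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'SampleDB')
GO
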
Hope this helps,
_Sqltimes


Quick one today:

Edit: Update on April 10th – Added some comments to repeat some steps, as needed

Earlier today, we ran into these errors in our lab environment with replication: two different errors, in the same environment, due to a similar sequence of events.

Msg 20584, Level 16, State 1, Procedure sp_MSrepl_check_server, Line 67
Cannot drop server because it is used as a Subscriber to remote Publisher in replication.

Cannot drop server because it is used as a distributor in replication

This type of error is rare, but one of the three options below might provide some relief.

  1. Reset Dist flag
  2. Force drop distributor
  3. Manually remove Publishers (from Distributor properties)
    1. Invalid object name ‘dbo.MSmerge_agents’

Also, see the notes in the bottom section (points to keep in mind)

Reset Dist Flag for Distributor Instance

Sometimes, the dist flag for the Distributor stays set when it is not supposed to, so manually resetting it helps.

--
--  Reset Distributor flag
--
EXEC master.dbo.sp_ServerOption   @server = 'REPLDist'
				, @OptName = 'dist'
				, @OptValue = 'false'
GO
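
To confirm the flag actually cleared, check the server catalog (the server name is from the example above):

--
--  Verify the dist flag is cleared
--
SELECT name, is_distributor, is_publisher, is_subscriber
FROM sys.servers
WHERE name = 'REPLDist'
GO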

Force Drop Distributor

One of the last options is to ‘force’ drop the distributor. This removes the distribution database without running through the proper checks, making it a high-risk, high-reward option. Since all checks were previously performed and all components of replication have been removed except the distributor, this step might provide relief.

EXEC sp_dropdistributor @no_checks = 1, @ignore_distributor = 1
GO

Manually Remove Publishers

Usually, the above two steps will resolve the issue. But on rare occasions the issue persists, as it did in our lab. For those stubborn situations, this technique has given some relief.

Go to Replication >> Distributor Properties >> Go to Publishers (tab)

Now, un-check each Publisher and save. It is important to note that this needs to be done one publisher at a time.

Distributor_Properties_RemovePublishers.PNG
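
Where the GUI route fails, the same removal can be attempted per publisher with sp_dropdistpublisher. A hedged sketch, assuming a publisher named REPLPub (a placeholder); @no_checks skips the usual validation, so use with care:

--
--  Drop one publisher from the distributor (run once per publisher)
--  REPLPub is a placeholder publisher name
--
EXEC sp_dropdistpublisher
	  @publisher = 'REPLPub'
	, @no_checks = 1
GO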

3.1 Invalid object name MSmerge_agents

In Step 3, sometimes we might run into an error like this:

Invalid object name 'dbo.MSmerge_agents'.

Not sure what the root cause is, but this seems to help:

  1. Create two tables in Distributor database
  2. Rerun Step 3

1. Create two tables in Distributor database

NOTE: This step is not necessary in all situations. So, proceed with caution.

It looks like these two tables are missing from the Distribution database. Create them in the Distribution database, and then go back to Step 3 again (or Steps 3 & 2, as needed).

Note: There is no harm in adding these two tables because, after you create them and then successfully drop replication (including the distribution database), the new objects will be removed along with the Distributor database.

So you have a clean system at the end.

CREATE TABLE MSmerge_agents (
	  id						INT
	, name						NVARCHAR(100)
	, publisher_id				SMALLINT
	, publisher_db				SYSNAME
	, publication				SYSNAME
	, subscriber_id				SMALLINT
	, subscriber_db				SYSNAME
	, local_job					BIT
	, job_id					BINARY(16)
	, profile_id				INT
	, anonymous_subid			UNIQUEIDENTIFIER
	, subscriber_name			SYSNAME
	, creation_date				DATETIME
	, offload_enabled			BIT
	, offload_server			SYSNAME
	, sid						VARBINARY(85)
	, subscriber_security_mode	SMALLINT
	, subscriber_password		NVARCHAR(524)
	, publisher_login			SYSNAME
	, publisher_password		NVARCHAR(524)
	, job_step_uid				UNIQUEIDENTIFIER
)
GO

CREATE TABLE MSmerge_subscriptions (
	  publisher_id		SMALLINT
	, publisher_db		SYSNAME
	, publication_id	INT
	, subscriber_id		SMALLINT
	, subscriber_db		SYSNAME
	, subscription_type INT
	, sync_type			TINYINT
	, status			TINYINT
	, subscription_time DATETIME
)
GO

Points to keep in mind:

Sometimes just one of these steps might be enough to resolve the issue; sometimes it might take two of them. In our lab, we ended up needing all 3 steps. The first step is easy to implement.
The second and third steps might need repeated attempts. Something like:
  1. Try Step 2; if it fails,
    1. then try Step 3;
    2. if Step 3 fails, try Step 3.1;
    3. if that succeeds, then go back to Step 2.
  2. Once Step 2 is successful, the distributor is fully cleared and cleaned.
Hope this helps,
_Sqltimes


Quick one today:

Given that the usage of UNPIVOT is not as common as PIVOT (which itself is less common than other techniques in T-SQL), the occurrence of this error is even more remote. Luckily, we ran into it earlier:

Msg 8167, Level 16, State 1, Line 5
The type of column conflicts with the type of other columns specified in the UNPIVOT list.

Following is the UNPIVOT statement that threw the error. As you can see, on the surface, everything looks correct.

--
--	UNPIVOT syntax that throws error
--
SELECT ID, Metrics, Measures
FROM	(
	SELECT	  Daily_Health_Check_ID AS [ID]
			, Environment
			, DataCenter
	FROM SampleTable
	) AS Source
UNPIVOT
(
	Measures FOR Metrics IN (Environment, DataCenter)
) AS UnPVT
GO

Now let’s change the datatype and size of all the columns to match.

--
--	After converting all columns to the same datatype & size
--
SELECT ID, Metrics, Measures
FROM	(
	SELECT	  Daily_Health_Check_ID AS [ID]
			, CONVERT(VARCHAR(20), Environment) AS [Environment]
			, CONVERT(VARCHAR(20), DataCenter) AS [DataCenter]
	FROM SampleTable
	) AS Source
UNPIVOT
(
	Measures FOR Metrics IN (Environment, DataCenter)
) AS UnPVT
GO

Voilà!! The key is to have a uniform data type & size for all columns in the source data set.

Hope this helps,
_Sqltimes

