
Archive for the ‘DBA Interview’ Category

Problem

While gathering replication backlog details, we ran into this interesting error. The goal was to run the sp_replmonitorsubscriptionpendingcmds stored procedure and store the output in a table. As we’ve seen in a recent post, redirecting the output of a stored procedure execution into a table is possible; but in this case, it throws an error saying that is not allowed.

INSERT INTO #DC1_Repl_Backlog
EXEC  sp_replmonitorsubscriptionpendingcmds
		  @publisher	= 'InstanceName'
		, @publisher_db	= 'DBName'
		, @publication	= 'Publication'
		, @subscriber	= 'Subscriber'
		, @subscriber_db= 'DBName2'
		, @subscription_type = '0'
GO
Msg 8164, Level 16, State 1, Procedure sp_replmonitorsubscriptionpendingcmds, Line 233
An INSERT EXEC statement cannot be nested.

(0 row(s) affected)

With the information available right now, I cannot offer a clear & coherent explanation for this behavior, but my guess is this: the code inside this stored procedure must itself use a similar INSERT INTO #table EXEC sp_xyz pattern, hence the error “An INSERT EXEC statement cannot be nested”.
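That guess is easy to sanity-check in isolation. Below is a minimal, hypothetical repro (the procedure names are made up) showing that wrapping INSERT…EXEC around a procedure that performs its own INSERT…EXEC raises the same error:

--
-- Hypothetical repro: nested INSERT...EXEC raises Msg 8164
--
CREATE PROCEDURE dbo.Inner_Proc
AS
	SELECT 1 AS Col1
GO

CREATE PROCEDURE dbo.Outer_Proc
AS
	CREATE TABLE #Inner (Col1 INT)
	INSERT INTO #Inner EXEC dbo.Inner_Proc	-- inner INSERT...EXEC
	SELECT Col1 FROM #Inner
GO

CREATE TABLE #Outer (Col1 INT)

INSERT INTO #Outer
EXEC dbo.Outer_Proc	-- outer INSERT...EXEC: fails at runtime
GO
-- Msg 8164, Level 16, State 1
-- An INSERT EXEC statement cannot be nested.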

Resolution

OPENROWSET helps in getting around this. See the sample code below:

--
-- Workaround: loopback query via OPENROWSET
--
IF OBJECT_ID('tempdb..#DC1_Repl_Backlog') IS NOT NULL
	DROP TABLE #DC1_Repl_Backlog

CREATE TABLE #DC1_Repl_Backlog (
	  pendingcmdcount	BIGINT
	, estimatedprocesstime	BIGINT
)

INSERT #DC1_Repl_Backlog (pendingcmdcount, estimatedprocesstime)
SELECT *
FROM OPENROWSET('SQLOLEDB',
		'Server=InstanceName;Trusted_Connection=yes;',
		'EXEC Distribution.dbo.sp_replmonitorsubscriptionpendingcmds
					  @publisher = ''PublisherInstance''
					, @publisher_db	= ''DBName''
					, @publication = ''Publication''
					, @subscriber = ''Subscriber''
					, @subscriber_db = ''DBName2''
					, @subscription_type = ''0'''
		) 

SELECT *
FROM #DC1_Repl_Backlog
GO
Hope this helps,
_Sqltimes


Quick one today:

In replication, the transactions that come into the system are stored in the Distribution database first; from there, they are replicated to each Subscriber. The data stays there for several hours, as determined by the ‘Transaction Retention‘ setting on the Distributor.

Similarly, as the data is replicated, log entries are made in the Distribution database. These entries also have a retention policy, which is set using the ‘History Retention‘ setting.

Go to the Distributor instance: Replication >> Right click >> Distributor Properties

Distributor Retention Properties

Under the General tab, we see these settings. Change them as needed to achieve longer/shorter retention of both data & log history.

Retention Policy
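For those who prefer scripting, the same settings can also be changed with the sp_changedistributiondb system procedure, run at the Distributor. A minimal sketch (the retention values below are just examples, in hours):

--
-- Change retention settings via script (example values, in hours)
--
EXEC sp_changedistributiondb
	  @database	= N'distribution'
	, @property	= N'max_distretention'	-- transaction retention
	, @value	= N'72'
GO

EXEC sp_changedistributiondb
	  @database	= N'distribution'
	, @property	= N'history_retention'	-- history retention
	, @value	= N'48'
GO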

Hope this helps,
_Sqltimes


Quick one today:

Every few days, we run into a situation where just adding an ORDER BY clause does not solve the problem of retrieving records in a particular fashion.

ORDER BY is beneficial if you want to order the records and retrieve them in a defined pattern. Adding TOP 10 (TOP 100, etc.) to it makes it easy to query the ones you want. But in our situation, we needed to order the records in ascending order while retrieving only the BOTTOM 5 records.

Drum roll!! Enter the all-purpose CTE to the rescue!!

--
-- Create a temp table and store records in some random order
--
CREATE TABLE #OrderTest (
	  ID	INT			NOT NULL
	, Name	VARCHAR(10)	NOT NULL
	, Flag	TINYINT
)
GO

INSERT INTO #OrderTest (ID, Name, Flag)
VALUES (2, 'Test Name1', 1)
	, (4, 'TestName2', 0)
	, (1, 'TestName3', 7)
	, (3, 'TestName4', 2)
	, (9, 'TestName5', 5)
	, (8, 'TestName6', 1)
	, (5, 'TestName7', 3)
	, (6, 'TestName8', 0)
GO

SELECT * FROM #OrderTest
GO
Initial order of records (Insert order)

Now, when we query with ORDER BY ID ASC, we’ll get records in the expected ascending order of the ID column. But the requirement is to retrieve the BOTTOM 5 records while keeping the same ascending order on the ID column. Unlike TOP, there is no BOTTOM clause. See the image below for a clearer understanding:

Records we need to query

So we use a CTE to circumvent that.

Usually, an ORDER BY clause is not allowed inside CTEs; except when a TOP clause is used.

--
-- Use CTE to pick the records we want
--
; WITH Bottom5 (ID, Name)
AS
	(
		SELECT TOP 5 ID, Name
		FROM #OrderTest
		ORDER BY ID DESC
	)
SELECT *
FROM Bottom5
ORDER BY ID ASC
GO

Desired Result
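As a side note, on SQL Server 2012 and later, OFFSET/FETCH offers an alternative that avoids the descending sort inside the CTE. A minimal sketch (it assumes the table has at least 5 rows; otherwise the offset arithmetic goes negative):

--
-- Alternative (SQL Server 2012+): skip all but the last 5 rows
--
DECLARE @Cnt INT = (SELECT COUNT(*) FROM #OrderTest)

SELECT ID, Name
FROM #OrderTest
ORDER BY ID ASC
OFFSET @Cnt - 5 ROWS FETCH NEXT 5 ROWS ONLY
GO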

Hope this helps,
_Sqltimes


Quick one today:

In the past, we’ve covered the MAXDOP query hint with details and nuances (and here too). This time, we’ll get into another interesting query hint: MAXRECURSION.

One of my colleagues needed to come up with a numbers table with values like this, where there are multiple batches and each batch has some records.

Batch	RecordsInBatch
1	1
1	2
1	3
1	4
..	..
1	2500
2	1
2	2
..	..
2	2500
..	..
999	2500

This can be achieved with a recursive CTE and the query hint OPTION (MAXRECURSION n).

--
--  Numbers table with recursive CTE
--
DECLARE @BN INT = 999, @RN INT = 2500

;WITH BatchNumbers (BatchNum)
AS	(
		SELECT 1 AS BatchNum
		UNION ALL
		SELECT BatchNum + 1 AS BatchNum
		FROM BatchNumbers
		WHERE BatchNum < @BN
	),
RecordNumbers (RecNum)
AS	(
		SELECT 1 AS RecNum
		UNION ALL
		SELECT RecNum + 1 AS RecNum
		FROM RecordNumbers
		WHERE RecNum < @RN
	)
SELECT BatchNumbers.BatchNum, RecordNumbers.RecNum
FROM RecordNumbers
CROSS JOIN BatchNumbers
ORDER BY BatchNumbers.BatchNum, RecordNumbers.RecNum
OPTION (MAXRECURSION 2500)
GO

MAXRECURSION – points to keep in mind

  • Server-wide, the default recursion limit is 100; the MAXRECURSION query hint overrides it for a single query.
  • Valid values for the hint range from 0 to 32,767.
  • MAXRECURSION 0 removes the limit altogether (allows unbounded recursion), so use it with care.
  • The limit also acts as a safety net: if incorrect code that results in an infinite loop makes it into production, the MAXRECURSION limit caps the runaway recursion. A sketch of the default limit in action follows this list.
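For illustration, here is a minimal sketch of what happens when the hint is omitted and the recursion needs more than the default 100 levels:

--
-- Without the hint, the server-wide default of 100 kicks in
--
;WITH Numbers (N)
AS	(
		SELECT 1 AS N
		UNION ALL
		SELECT N + 1 AS N
		FROM Numbers
		WHERE N < 2500
	)
SELECT N
FROM Numbers
GO
-- Msg 530, Level 16, State 1
-- The statement terminated. The maximum recursion 100 has been
-- exhausted before statement completion.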
Hope this helps,
_Sqltimes


Quick one today:

Earlier today, in our SSRS lab, after deploying reports using PowerShell, we ran into this error:

The dataset 'DataSource1' refers to the shared data source 'DataSource1',
which is not published on the report server. The shared data source 'DataSource1'
must be published before this report can run.

Obviously, since we are still in the process of figuring out the correct “SSRS Deployment Steps” for automation, we may have missed a step or two. In this case, we missed/forgot the “data source association to each RDL” step.

Resolution:

Since we are still figuring out how to automate this step, here is what we did, for now, to resolve the issue manually.

  1. Go to Reports Manager URL on IE browser (with Administrator rights)
  2. Go to the individual report/RDL and hover mouse over it.
  3. Click on the right side yellow drop down and go to ‘Manage’
  4. Go to ‘Data Sources’ tab
  5. Assign the correct shared data source (see below image for clarity)
SSRS_Associate_DataSource_to_RDL.PNG
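When many reports are deployed at once, it helps to know which RDLs are missing their association before clicking through each one. A read-only sketch against the ReportServer catalog (note: these tables are undocumented, and their schema may vary between versions):

--
-- List reports and their shared data source bindings
--
USE ReportServer
GO
SELECT	  c.Path	AS ReportPath
	, c.Name	AS ReportName
	, ds.Name	AS DataSourceName
	, ds.Link	AS SharedDataSourceItemID	-- NULL when not bound
FROM dbo.Catalog AS c
INNER JOIN dbo.DataSource AS ds
	ON ds.ItemID = c.ItemID
WHERE c.Type = 2	-- 2 = Report
GO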
Hope this helps,
_Sqltimes


Quick one today:

Lately, while working on SSRS, we’ve uncovered some tidbits that have been helping us identify the root cause. Here is one of them:

In our environment, we pull SSRS reports from a client application (web page); for the most part, things run smoothly. On occasion, when we run into issues, we need to be able to pull up the underlying error message. In SSRS, by default, reporting errors back to the calling application (client web page) is disabled. But it is easy to enable it when needed, and then turn it off once done.

Go to SSMS and connect to the Reporting Services instance. Right click on the root node and go to Properties. Under the Advanced tab, in the Security section, look for EnableRemoteErrors and set the value to True from the drop down. This sends errors back to the client, so we can see what is going on. See the image below for clarity.

SSRS_Report_Errors_To_Client.PNG
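When SSMS is not handy, the same flag lives in the ReportServer database. A hedged sketch (ConfigurationInfo is part of the report server catalog; the running service may take a little while to pick up the change):

--
-- Flip EnableRemoteErrors in the report server catalog
-- (set it back to 'False' once troubleshooting is done)
--
USE ReportServer
GO
UPDATE dbo.ConfigurationInfo
SET Value = N'True'
WHERE Name = N'EnableRemoteErrors'
GO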

Hope this helps,
_Sqltimes


Interesting one today:

On one of our production machines, we recently added a new LUN to a SQL cluster. A task like this is a team effort: sysadmins perform some steps and DBAs carry out the remaining ones. In this article, the main focus is on the steps after the LUN is added to the OS & Sql Cluster by the sysadmins. For context, we’ll start with the high level steps before going into details.

Sysadmins steps

  1. Add new storage to the machine/OS as an available storage
  2. Format the available drive with appropriate settings (cluster size) and add it as a new drive
  3. Make the drive available to the Cluster using the “Add Disk” screen in the FailOver Cluster Management tool.

DBAs steps

  1. Add available storage to Sql Cluster
  2. Configure dependency (and check the report before & after)
  3. Add data file on the new cluster storage drive

Here we’ll cover the DBA steps in detail:

Some of these steps were covered in a recent article as part of dealing with an error message, but here we’ll cover them as a task in their own right (which they are).

Add New Storage

Once sysadmins have made the new storage available to the OS Cluster, it needs to be added as a new storage location to the SQL Cluster.

In FailOver Cluster Manager, go to the Sql Server Resource Group for this SQL Cluster, right click for detailed options, and choose “Add Storage” (see image below)

sqlcluster_addnewstorage_to_os_cluster

Once successful, go to Storage\Disks in FailOver Cluster Manager to confirm the availability. See image below:

sqlcluster_addnewdrive

Configure Dependency

Adding the storage is an important step; equally important is adding the new drive to the Sql Cluster Dependency Chain. The Dependency Chain tells Sql Server “how to act” when any resource in the Cluster becomes unavailable: some resources automatically trigger cluster failover to the other node; some do not. That decision is made based on the configuration of the Dependency Chain.

Example:

Critical: The data drive/LUN that holds database files is critical for the availability of the Sql Cluster. So, if it becomes unavailable, failing over to another available node is imperative to keep the cluster available.

Non-Critical: In some scenarios, Sql Server Agent is not considered critical. So, if it stops for some reason, the Cluster will make multiple attempts to restart it on the same node, but that may not necessarily cause a failover.

This is a business decision. All these “response actions” will be configured in Cluster settings.

Now, check the dependency report (before); we can see that the new drive exists in the Cluster, but is not yet part of the Dependency Chain.

SqlCluster_DependencyReport.PNG

To Configure Dependency Chain, go to the Sql Server Resource Group under Roles in FailOver Cluster Manager. See the image below for clarity:

Then go to the bottom section for this Resource Group, where all the individual resources that are part of this Resource Group are displayed.

Under “Other Resources”, right click on the Sql Server resource and choose Properties, as shown below:

sqlcluster_addnewstorage_add_to_dependency

In the “Sql Server Properties” window, we can see the existing resources already added to dependency chain logic.

SqlCluster_Dependency_Before.PNG

Now, go to the end of the logic list, choose “AND” as the condition, and pick the new Cluster Storage drive to include. See image below for clarity:

SqlCluster_Dependency_After.PNG

After saving the settings, regenerate the Dependency Chain report. Now, we’ll see the new drive as part of the logic.

sqlcluster_dependencyreport_after
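As a quick sanity check from the Sql Server side, a clustered instance exposes its usable shared drives through a DMV; the new drive should show up here once the dependency is in place:

--
-- Clustered drives available to this Sql Server instance
--
SELECT DriveName
FROM sys.dm_io_cluster_shared_drives
GO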

Add Database Data File to New Cluster Storage

Now that the new drive is ready, we can easily add a new data file in the new location.

--
-- Add data file to new storage location
--
USE [master]
GO
ALTER DATABASE [SampleDB]
ADD FILE
	(
		  NAME 			= 	N'SampleDB_Data3'
		, FILENAME 		= 	N'U:\MSSQL\Data\SampleDB_Data3.NDF'
		, SIZE 			= 	3500 GB
		, FILEGROWTH 	= 	100 GB
		, MAXSIZE 		= 	3900 GB
	)
TO FILEGROUP [PRIMARY]
GO
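Once the file is added, a quick look at the database’s file catalog confirms it landed on the new drive:

--
-- Verify the new data file and its location
--
USE [SampleDB]
GO
SELECT name, physical_name, size / 128 AS SizeMB	-- size is in 8-KB pages
FROM sys.database_files
GO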
Hope this helps,
_Sqltimes

