
Archive for the ‘Replication’ Category

Quick one today:

Recently, on one of our lab machines, we ran into this error during replication reconfiguration (rebuilding) efforts.

Msg 18752, Level 16, State 1, Procedure sp_replcmds, Line 1
Only one Log Reader Agent or log-related procedure (sp_repldone, sp_replcmds, 
and sp_replshowcmds) can connect to a database at a time. If you executed a 
log-related procedure, drop the connection over which the procedure was executed 
or execute sp_replflush over that connection before starting the Log Reader Agent 
or executing another log-related procedure.

The error message, though verbose, was a bit confusing as to the underlying cause. Once we retraced the steps that resulted in the error, it became clear what contributed to it. We had replication set up on a lab machine, and before this error occurred, we were attempting to drop transactional replication using the SSMS GUI.

Turns out the Log Reader Agent was not properly stopped before attempting this. To resolve it, just stop the Log Reader Agent. To be sure, go to SSMS > Replication > right-click on the Publication and go to ‘View Log Reader Agent Status‘.

Once the Log Reader Agent was stopped, we were able to proceed with dropping the transactional replication configuration.
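
If the GUI route is not handy, a couple of T-SQL alternatives may help. This is only a hedged sketch: the database and job names below are placeholders; the actual Log Reader Agent job name would need to be looked up in the distribution database’s dbo.MSlogreader_agents table.

--
-- Option A: release the lingering log-reader connection
-- (run on the connection that executed the log-related procedure,
--  in the published database; 'PublishedDB' is a placeholder)
--
USE PublishedDB
GO
EXEC sp_replflush
GO

--
-- Option B: stop the Log Reader Agent job
-- (job name is a placeholder; look it up in the distribution
--  database's dbo.MSlogreader_agents, column [name])
--
EXEC msdb.dbo.sp_stop_job @job_name = N'REPLPUB-PublishedDB-1'
GO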

Hope this helps,
_Sqltimes

Read Full Post »

Quick one today:

Edit: Update on April 10th – Added some comments to repeat some steps, as needed

Earlier today, we ran into these errors in our lab environment with replication: two different errors, on the same environment, due to a similar sequence of events.

Msg 20584, Level 16, State 1, Procedure sp_MSrepl_check_server, Line 67
Cannot drop server because it is used as a Subscriber to remote Publisher in replication.

Cannot drop server because it is used as a distributor in replication

This type of error is rare, but one of the three options below might provide some relief.

  1. Reset Dist flag
  2. Force drop distributor
  3. Manually remove Publishers (from Distributor properties)
    1. Invalid object name ‘dbo.MSmerge_agents’

Also, see the notes in the bottom section (points to keep in mind)

Reset Dist Flag for Distributor Instance

Sometimes, the dist flag for the Distributor stays set when it is not supposed to, so manually resetting it helps.

--
--  Reset Distributor flag
--
EXEC master.dbo.sp_serveroption   @server   = 'REPLDist'
				, @optname  = 'dist'
				, @optvalue = 'false'
GO

Force Drop Distributor

One of the last-resort options is to ‘force’ drop the distributor. This removes the distribution database without running through the usual checks. It is a high-risk, high-reward option; since all checks were previously performed and all components of replication except the distributor have already been removed, this step might provide relief.

EXEC sp_dropdistributor @no_checks = 1, @ignore_distributor = 1
GO

Manually Remove Publishers

Usually, the above two steps will resolve the issue. But on rare occasions the issue persists, as it did in our lab. For those stubborn situations, this technique has given some relief.

Go to Replication >> Distributor Properties >> Go to Publishers (tab)

Now, un-check each Publisher and save. Note that this needs to be done one publisher at a time.

[Image: Distributor_Properties_RemovePublishers.PNG]
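
If the GUI does not cooperate, a roughly equivalent T-SQL route is to drop each registered Publisher at the Distributor. This is a hedged sketch; the publisher name ‘REPLPub’ is a placeholder, not from the original setup.

--
-- Drop one registered Publisher from the Distributor
-- (the publisher name 'REPLPub' is a placeholder)
--
EXEC sp_dropdistpublisher
      @publisher = 'REPLPub'
    , @no_checks = 1
GO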

3.1 Invalid object name MSmerge_agents

In Step 3, sometimes we might run into an error like this:

Invalid object name 'dbo.MSmerge_agents'.

Not sure what the root cause is, but this seems to help:

  1. Create two tables in Distributor database
  2. Rerun Step 3

1. Create two tables in Distributor database

NOTE: This step is not necessary in all situations, so proceed with caution.

It looks like these two tables are missing from the distribution database. Create them in the distribution database, and then go back to Step 3 (or Steps 3 & 2, as needed).

Note: There is no harm in adding these two tables, because after you create them and then successfully drop replication (including the distribution database), the new objects are removed along with the distribution database. So you have a clean system at the end.

--
-- Create the missing MSmerge_agents table in the distribution database
--
CREATE TABLE dbo.MSmerge_agents(
	  id						INT
	, name						NVARCHAR(100)
	, publisher_id				SMALLINT
	, publisher_db				SYSNAME
	, publication				SYSNAME
	, subscriber_id				SMALLINT
	, subscriber_db				SYSNAME
	, local_job					BIT
	, job_id					BINARY(16)
	, profile_id				INT
	, anonymous_subid			UNIQUEIDENTIFIER
	, subscriber_name			SYSNAME
	, creation_date				DATETIME
	, offload_enabled			BIT
	, offload_server			SYSNAME
	, sid						VARBINARY(85)
	, subscriber_security_mode	SMALLINT
	, subscriber_password		NVARCHAR(524)
	, publisher_login			SYSNAME
	, publisher_password		NVARCHAR(524)
	, job_step_uid				UNIQUEIDENTIFIER
)
GO

--
-- Create the missing MSmerge_subscriptions table in the distribution database
--
CREATE TABLE dbo.MSmerge_subscriptions(
	  publisher_id		SMALLINT
	, publisher_db		SYSNAME
	, publication_id	INT
	, subscriber_id		SMALLINT
	, subscriber_db		SYSNAME
	, subscription_type INT
	, sync_type			TINYINT
	, status			TINYINT
	, subscription_time DATETIME
)
GO

Points to keep in mind:

Sometimes just one of these steps might be enough to resolve the issue; sometimes it might take two. In our lab, we ended up needing all three steps. The first step is easy to implement.
The second and third steps might need repeated attempts. Something like:
  1. Try Step 2; if it fails,
    1. then try Step 3;
    2. if Step 3 fails, try Step 3.1;
    3. if that succeeds, then go back to Step 2.
  2. Once Step 2 is successful, the distributor is fully cleared and cleaned.
Hope this helps,
_Sqltimes

Read Full Post »

Interesting one today:

MSDTC is a popular software component that is present on all Windows systems. It is one of the Windows Operating System components that Sql Server relies on to perform some crucial tasks (when needed).

What does it do?

MSDTC, Microsoft Distributed Transaction Coordinator, is essentially, as the name suggests, a coordinator/manager that handles transactions distributed over multiple machines. Let’s say we start a transaction where one of the steps queries data from a different Sql Server instance on a different physical machine; MSDTC comes into action for these specific tasks that need transaction coordination across different physical machines. It executes the section of code that is supposed to run on the remote machine and brings the results back to the local Sql instance. In this process, if any issue on the remote machine results in a rollback, MSDTC makes sure the original transaction on the local machine also rolls back safely.

How does it do it?

MSDTC comes with the necessary Operating System controls and memory structures to carry out these operations independently of the Sql instances, while keeping the integrity of the transaction across the multiple physical Sql machines, a.k.a. the complete two-phase distributed commit protocol and the recovery of distributed transactions.

Where does Sql Server use it?

The key point here is that these need to be Sql Instances on different physical machines. Queries that request data across different instances on the same physical box do not go through MSDTC.

MSDTC is used by query activities like

  • Linked Servers
  • OPENROWSET
  • OPENQUERY
  • OPENDATASOURCE
  • RPC (Remote Procedure Calls)
  • Ones with
    • BEGIN DISTRIBUTED TRANSACTION
  • etc…

So, every time we run SQL queries that utilize the above techniques, they rely on MSDTC to carry out the operation while maintaining transaction integrity.
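
For illustration, here is a minimal hedged sketch of the kind of query pattern that pulls MSDTC into play. The linked server [REMOTESRV], the database, and the table names are placeholders, not anything from this post.

--
-- A distributed transaction spanning a local table and a linked server
-- (REMOTESRV, RemoteDB, and the table names are hypothetical)
--
BEGIN DISTRIBUTED TRANSACTION;

	UPDATE dbo.LocalOrders
	SET    Status = 'Shipped'
	WHERE  OrderID = 42;

	UPDATE [REMOTESRV].RemoteDB.dbo.RemoteOrders
	SET    Status = 'Shipped'
	WHERE  OrderID = 42;

COMMIT TRANSACTION;
GO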

Who else uses it?

MSDTC is an Operating System resource that is also used by applications other than Sql Server to perform distributed transaction activities, like eXtended Architecture (XA) applications.

Is MSDTC required?

MSDTC is not required for Sql Server installation or operation. If you are only going to use the Database Engine, then it is not required or used. If your Sql Server instance uses any of the above-mentioned query techniques (Linked Servers, OPENQUERY, etc.), or SSIS, or Workstation Components, then MSDTC is required.

If you are installing only the Database Engine, the MSDTC cluster resource is not required. If you are installing the Database Engine and SSIS, Workstation Components, or if you will use distributed transactions, you must install MSDTC. Note that MSDTC is not required for Analysis Services-only instances.

What about Sql Cluster?

The same rules as above apply to Sql clusters as well, with one additional rule: if you have two instances on the same machine (that are clustered across different physical machines), then you’ll need MSDTC, since the cluster could fail over to a remote machine at any time.

Let’s take an example:

Let’s say Instance1 is on physical machines A & B, with B as the active node. Instance2 is on machines B & C, with B as the active node. A query going from Instance1 to Instance2 will need MSDTC (even if both instances are active on the same physical machine B at that given point in time).

This is because there is no guarantee that they will remain on the same physical machine at any given time; they might fail over to other machines, resulting in the instances being on physically different machines. So MSDTC is required (when distributed operations are performed).

Also, recent Sql Server versions do not require MSDTC during Sql Server installation.

Other points in a Clustered Environment

We could have multiple instances of MSDTC as different clustered resources (along with the default MSDTC resource).

In a scenario with multiple MSDTC instances, we could configure each Sql cluster resource to have a dedicated MSDTC instance. If such a mapping does not exist, it automatically falls back to the default MSDTC resource.

Hope this helps,
_Sqltimes

Read Full Post »

Interesting one today:

Context:

In replication, as we all know, data/records from the Publisher are copied to all Subscribers. This is carried out as transactions and commands from the Publisher to each Subscriber. Let’s say the application INSERTs 10 new rows into the database (Publisher) as part of some application transaction; the Log Reader reads the transaction log file, takes note of these 10 records, and brings their details to the Distributor.

Behind the Scenes:

From the Distributor, they are transferred to each Subscriber by the Distribution Agent. In actuality, the Distribution Agent executes these INSERT commands at each Subscriber; upon success, it proceeds to execute the subsequent commands until all Subscribers are up to date.

Here is the beauty:

In replication, the result of the INSERT activity by the application is carried out as “1 transaction with 10 commands” (one INSERT command for each record, all encompassed in one transaction). When we look closely, each INSERT command is carried out by replication stored procedures located on each Subscriber (they are created as part of replication setup). The Distributor will have entries for these stored procedure calls with the appropriate parameters; the Distribution Agent will then execute them at each Subscriber.

Distributor will have entries like:

--
-- Example entries at the Distributor (one call per replicated command)
--
{CALL [dbo].[mtx_rplins_SampleTable] (758374661,1,N'SomeName1')}
{CALL [dbo].[mtx_rplins_SampleTable] (758374662,4,N'SomeName2')}
{CALL [dbo].[mtx_rplins_SampleTable] (758374663,5,N'SomeName3')}

 

[Image: replication_storedprocedures]

Question

Now here is the question that we’ll discuss today: why do we have different replication stored procedure calls? Not different procedures, but different procedure calls. We have dedicated replication stored procedures for each table, each to carry out INSERT, UPDATE and DELETE (a quick query to list them at the Subscriber is sketched after this list). Like:

  • sp_MSins_ : Handles INSERT operations for this table
  • sp_MSupd_ : Handles UPDATE operations for this table
  • sp_MSdel_ : Handles DELETE operations for this table
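
As a quick sanity check, the per-table procedures can be listed at the Subscriber. This is a minimal sketch: the default names follow the sp_MSins_/sp_MSupd_/sp_MSdel_ pattern, while the screenshots in this blog use custom mtx_rpl* names, so both patterns are included as assumptions.

--
-- List the replication stored procedures created at the Subscriber
-- (default sp_MS* names, plus the custom mtx_rpl* pattern seen in this blog)
--
SELECT name
FROM sys.procedures
WHERE name LIKE 'sp_MSins[_]%' OR name LIKE 'sp_MSupd[_]%' OR name LIKE 'sp_MSdel[_]%'
   OR name LIKE 'mtx_rplins[_]%' OR name LIKE 'mtx_rplupd[_]%' OR name LIKE 'mtx_rpldel[_]%'
ORDER BY name
GO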

But why do we have different procedure calls?

In the above image, we see entries like {CALL […..]}. There are other procedure call syntaxes, like CALL, SCALL, XCALL, MCALL, and VCALL. What are these for?

  • CALL
    • This is the most popular or default call used in Replication.
    • It could be used for INSERT or UPDATE or DELETE
  • SCALL
    • This is used for UPDATEs only.
  • XCALL
    • Used for UPDATEs and DELETEs
  • MCALL
    • Used for UPDATEs only
  • VCALL
    • For internal use only.
    • Also used for updatable subscriptions

CALL:

This is the default option used for all activities, a.k.a. UPDATE/DELETE/INSERT. As seen in the picture above, CALL will have a list of values provided for all columns of the table.

SCALL & MCALL:

It looks like some call syntaxes are used exclusively for certain actions; for example, MCALL & SCALL are dedicated to UPDATEs. But there is a difference between these two as well: SCALL only sends the columns that need updating, whereas MCALL sends all columns of the table (including the ones that are not changing).

XCALL:

Sends data for both the before and after values of all columns of the table involved in the UPDATE or DELETE. Obviously, for UPDATE there are values for both before and after; but for DELETE, the after values are empty.
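
For completeness, the call syntax is configurable per article. Below is a hedged sketch of switching an article’s UPDATE delivery format to SCALL using sp_changearticle, run at the Publisher in the publication database; the publication, article, and procedure names are placeholders.

--
-- Switch an article's UPDATE delivery format to SCALL
-- (publication, article, and procedure names are placeholders)
--
EXEC sp_changearticle
      @publication               = 'SamplePublication'
    , @article                   = 'SampleTable'
    , @property                  = 'upd_cmd'
    , @value                     = 'SCALL sp_MSupd_dboSampleTable'
    , @force_invalidate_snapshot = 1
GO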

For more details, please refer to this excellent article.

Hope this helps,
_Sqltimes

Read Full Post »

Interesting article today on troubleshooting replication errors.

A few weeks ago, on production, we received an alert on replication failures. Upon further inspection, the error looked like this:

Command attempted:
if @@trancount > 0 rollback tran
(Transaction sequence number: 0x001031C200001B06000700000000, Command ID: 1)

Error messages:
The row was not found at the Subscriber when applying the replicated command. (Source: MSSQLServer, Error number: 20598)
Get help: http://help/20598
The row was not found at the Subscriber when applying the replicated command. (Source: MSSQLServer, Error number: 20598)
Get help: http://help/20598

Identify Root Cause:

To understand the course of action, we need to first understand the underlying issue. Go to Replication Monitor >> open the Details window on the subscription that has errors, then go to the ‘Distributor to Subscriber’ tab for more details. See the image below:

[Image: replication_error_20598]

Now we see that replication is attempting to perform an action on the Subscriber, but the row does not exist. So, let’s find out more.

Find the exact command that is being replicated (i.e., executed on the Subscriber as part of replication) and that throws this error. Use the replication procedure sp_browsereplcmds for that.

Query the Distribution agent ID from dbo.MSdistribution_agents and use it in the query below.
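
For example, something like this (run in the distribution database) lists the agents along with the IDs needed for sp_browsereplcmds:

--
-- Look up the Distribution Agent ID (run in the distribution database)
--
SELECT id, name, publisher_database_id, publisher_db, publication, subscriber_db
FROM dbo.MSdistribution_agents
GO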

--
-- Uncover the replication command throwing error
--
EXEC sp_browsereplcmds @xact_seqno_start = '0x001031C200001A620004000000000000'
                     , @xact_seqno_end = '0x001031C200001A620004000000000000'
                     , @agent_id = 49
                     , @publisher_database_id = 3
GO

You’ll see something like this:

[Image: replication_error_20598_investigation]

Under the command column, we’ll see the exact command that is running into this error.

--
--  Error 20598 occurring in
--
{CALL [mtx_rpldel_ReportCriterion] (908236,71357,250,-1)}

Now, let’s go to that stored procedure ‘mtx_rpldel_ReportCriterion’ and see which tables are involved in the manipulation. In my scenario, the table ReportCriterion does not have the record with ID = 908236.
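
A quick check at the Subscriber confirms the missing row. This is only a sketch; the column name ID is assumed from the command parameters above.

--
-- Confirm the row is missing at the Subscriber
-- (column name ID assumed from the command parameters above)
--
SELECT *
FROM dbo.ReportCriterion
WHERE ID = 908236
GO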

Resolution

Once we understand the root cause, we have a few options.

  1. Data Integrity: Looks like we have synchronization issues between Publisher and Subscriber. If it is a non-production environment, or an environment where reinitializing is an option, then we could take that route to sync up the data first.
    1. Once data integrity issues are resolved, all subsequent replication commands would be successful.
  2. Manual fix: Manually insert the missing record at the Subscriber and then allow replication to perform its operations on the new record.
    1. With this option, the more records we uncover as missing, the more manual work is required. It’s not ideal, but it is a workaround to get things going again.
  3. Ignore, for now: In some situations, until a further solution is identified, we may need to ignore this one record and move forward with the rest of the replication commands.
    1. Take necessary precautions to make sure there are no more such missing records, or gather a list of all missing ones.
    2. Configure replication to ignore error 20598 using the -SkipErrors parameter. There are a couple of ways to achieve this; here we’ll look at one.
    3. Go to the Agent Profile for this particular Distribution Agent. One of the profiles allows us to skip certain errors. See the image below.
    4. [Image: replication_error_20598_resolution]

For more information, please refer to the Microsoft Support article on a similar issue.

Hope this helps,
_Sqltimes

Read Full Post »

Interesting post today:

Earlier this week, there was an interesting error in replication in our production environment:

select spid, blocked, lastwaittype from sysprocesses 
where '~68~~619~~1017~~1277~' like '%~' + cast (spid as nvarchar)+ '~%' 
and '~68~~619~~1017~~1277~' like '%~' + cast (blocked as nvarchar)+ '~%'

We have a dedicated Distributor that delivers data from the Publisher to a remote Subscriber. Before this error occurred, the data was flowing well, but once this error occurred, the agent kept retrying and there was no movement in delivery.

Reasons:

On further investigation, it became apparent that this agent has multiple threads configured to deliver the payload to Subscribers, a.k.a. SubscriptionStreams.

In a previous post on SubscriptionStreams, an important point was highlighted, which is worth repeating here:

  • If any one thread runs into problems and rolls back, all other threads also roll back.

With the above point in context, it looks like there is an issue in the thread-scheduling logic in the Distribution Agent. Whether the Distribution Agent runs as a single thread or multiple threads, ACID properties need to be maintained; so when one thread times out (or runs into a problem), the rest of the threads need to be rolled back.

My guess is that the Distribution Agent keeps trying with multiple threads and runs into some scheduling conflict when one of the threads times out or is unable to divide the workload to be carried out in parallel.

Let’s say the replication payload is too small for parallel operation; Sql Server then allows only one thread to do the work. In this scenario, the other threads need to remain active, but in some odd situations they might time out, resulting in the whole batch being rolled back.

Solution:

  • Microsoft confirmed that this is a bug and released a patch (for Sql Server 2005).
  • In our situation, we are on Sql Server 2008 R2, so this patch does not apply; the trick that worked was to remove the parallel operation, i.e., SubscriptionStreams.

Once parallelism was removed, the traffic started flowing through.

Important notes:

  • Make sure you play with CommitBatchSize & CommitBatchThreshold before you advance to SubscriptionStreams.
  • SubscriptionStreams is a good option that improves replication performance. But it is an advanced option that is only good for certain situations.
  • So, when replication traffic patterns vary, the usage of this parameter needs to vary.
  • For heavy workloads, we set parallelism; for lower-than-normal workloads, we remove or reduce parallelism. (A query to inspect the relevant agent profile parameters is sketched after this list.)
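
Since these parameters live on the Distribution Agent (via its profile or its job’s command line), a query like the sketch below can be used to review the current profile settings at the Distributor. SubscriptionStreams, CommitBatchSize, and CommitBatchThreshold would show up here if they are set in a custom profile; this is a hedged sketch, not a prescribed procedure.

--
-- Inspect Distribution Agent profiles and their parameters (run at the Distributor)
--
SELECT p.profile_id, p.profile_name, pa.parameter_name, pa.value
FROM msdb.dbo.MSagent_profiles AS p
JOIN msdb.dbo.MSagent_parameters AS pa
  ON pa.profile_id = p.profile_id
WHERE p.agent_type = 3    -- 3 = Distribution Agent
ORDER BY p.profile_name, pa.parameter_name
GO
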
Hope this helps,
_Sqltimes

Read Full Post »

Interesting one today:

In replication, there are several amazing features & configurations that make it robust, dependable & high-performing. These settings need to be leveraged correctly to squeeze out the best performance applicable for each environment. Today, we’ll cover a popular setting called NOT FOR REPLICATION on IDENTITY columns.

Concept:

In short, when NOT FOR REPLICATION is enabled on IDENTITY columns (or other constraints), the IDENTITY value is not incremented when INSERTs occur due to replication traffic; all other direct application traffic will increment the IDENTITY value.

Imagine a Sql Server Publisher, let’s say P, that is publishing data to a Sql Server Subscriber, let’s say S. Now, both P & S have a table called SampleTable with an IDENTITY column called ID. To make it easy to see the difference, let’s make their IDENTITY definitions different at each location (P & S).

  • At Publisher, the IDENTITY value is defined as (1,10).
    • So, its values will be 1, 11, 21, 31, 41, etc.
  • At Subscriber, it is defined as (2, 10).
    • So, its values will be 2, 12, 22, 32, 42, etc.

The Set Up

With the above points, let’s create the table and set up replication between P & S. Following is some of the code used to create the table at the Publisher (P).

At Publisher

--
-- CREATE TABLE with IDENTITY set for NOT FOR REPLICATION
--
CREATE TABLE dbo.SampleTable(
     ID     INT          NOT NULL  IDENTITY(1,10)  NOT FOR REPLICATION   PRIMARY KEY   CLUSTERED
   , Name   VARCHAR(20)  NULL      DEFAULT('A')
)
GO

At Subscriber:

Similarly, on Subscriber, create a similar table with different IDENTITY definition.

--
-- CREATE TABLE with IDENTITY set for NOT FOR REPLICATION
--
CREATE TABLE dbo.SampleTable(
     ID     INT          NOT NULL  IDENTITY(2,10)  NOT FOR REPLICATION   PRIMARY KEY     CLUSTERED
   , Name   VARCHAR(20)  NULL      DEFAULT('B')
)
GO

So, there is no overlap between IDENTITY values generated at P & S.

Now let’s watch their behavior as data is INSERTed into both servers:

  1. When data is INSERTed directly into each location (P & S)
  2. When data is indirectly INSERTed into S due to replication traffic from P

Below is some more code used to check IDENTITY values, insert new data, etc. in these experiments.

--
-- Query the data
--
SELECT *
FROM dbo.SampleTable
ORDER BY ID ASC

--
-- Check the value of IDENTITY column at each step
--
SELECT IDENT_CURRENT('SampleTable')

--
-- Insert data directly into P
--
INSERT INTO dbo.SampleTable DEFAULT VALUES
GO

--
-- Manually insert data to introduce interesting scenarios
--
SET IDENTITY_INSERT dbo.SampleTable ON
INSERT INTO dbo.SampleTable (ID) VALUES(201)
SET IDENTITY_INSERT dbo.SampleTable OFF
GO

Run Experiments

With the above set up, let’s run through some scenarios and observe Sql Server’s behavior in each situation.

Scenario 1:

When data is INSERTed directly into P:

  • The IDENTITY values increment with each insert as 1, 11, 21, 31, etc.
  • Subsequently, those records are replicated to S, with the same IDENTITY values.
  • But through all of this, the next IDENTITY value at S stays at 2,
    • since NOT FOR REPLICATION is set on the IDENTITY column at S.

When data is INSERTed directly to S:

  • The IDENTITY values increment as per the definition to 2, 12, 22, etc.
  • Irrespective of the replication traffic from P, the IDENTITY at S only depends on the records INSERTed directly into S.
  • The table at S has records from both P & S.
    • S will look something like: 1, 2, 11, 12, 21, 22, 31, 32, etc.
    • The table at P will look like: 1, 11, 21, 31, etc.

Scenario 2: IDENTITY_INSERT

When a manual entry is made at P (using IDENTITY_INSERT) with a new IDENTITY value that does not match the pattern of the IDENTITY definition, subsequent IDENTITY values at P are based on the highest entry in the table. It uses the same INCREMENT definition, but it is incremented based on the current highest value in the table.

At Publisher:

  • Let’s say the SampleTable, at P, has entries like 1, 11, 21, 31, with the next IDENTITY value as 41.
  • Now, if a new record is entered manually using IDENTITY_INSERT with a new value of 26, it is successfully INSERTed.
    • The next IDENTITY value still remains at 41.
  • We can keep repeating these steps with different values like 7, 9, 13, 15, 17, 25, 28, 29 (as long as they are below 31).
    • The INSERTs will be successful with no impact on the next IDENTITY value, which is still 41.
  • Now, if you perform a regular INSERT, the new record will get the IDENTITY value 41 (a quick way to observe this is sketched below).
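
A quick way to observe the current IDENTITY value at P after each of these manual inserts is DBCC CHECKIDENT with NORESEED, which reports the value without changing it; a minimal sketch, assuming the table from this post:

--
-- Report the current IDENTITY value without changing it
--
DBCC CHECKIDENT ('dbo.SampleTable', NORESEED)
GO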

At Subscriber:

  • At S, all the new entries (26, 7, 9, 13, 15, 41, etc.) are successfully INSERTed with no impact on the IDENTITY definition at S.
    • At S, the next identity value is still set to 42.
  • Any new direct INSERTs at S will get IDENTITY values consistent with its previous behavior, a.k.a. 42, 52, etc.

Scenario 3: PRIMARY KEY Violation

Now, let’s make a manual entry at P that matches the next IDENTITY value at S.

  • For this, let’s assume that the highest value at P is 41, with next IDENTITY value as 51
  • At S, the current highest value is 52, with next IDENTITY value as 62.

Introduce problems:

  • At P, perform a manual INSERT (with IDENTITY_INSERT), with ID value as 62.
    • INSERT is successful at P; And it is replicated to S successfully.
  • After above operation, next IDENTITY value
    • At P is set to 72 (62+10).
    • At S, it is still at 62 (even though a new record is INSERTed with 62). Since NOT FOR REPLICATION is set, replication traffic does not influence IDENTITY increments at S.
  • Now, when a new record is directly INSERTed into S, the next IDENTITY value will be computed as 62, which results in PRIMARY KEY violation.
    • Violation of PRIMARY KEY constraint 'PK_SampleTable'. Cannot insert duplicate key in object 'dbo.SampleTable'
    • Interestingly, the next IDENTITY value for S, is incremented to 72.
    • Subsequent direct INSERTs into S will be 72, 82, etc

Vicious cycle:

  • In the above test, the next IDENTITY value at P is still 72.
  • Similarly, the next IDENTITY value at S is also set to 72.
  • So any new inserts at P will be replicated to S with 72, 82, 92, etc.
    • If there are any existing records at S with the same identity values, then replication traffic (from P to S) will fail with a primary key violation.
    • But if S does not have any records with those identity values (from P), then replication traffic (a.k.a. 82, 92, 102) from P is successfully INSERTed into S.
    • Any new traffic, directly at S, will run into a PRIMARY KEY violation.
  • So, the summary is: one BAD entry is all it takes to screw up the IDENTITY definition with NOT FOR REPLICATION.

Solution:

  • When this happens, just RESEED the IDENTITY value at P to a non-overlapping value that is consistent with its expected behavior (see the sketch after this list).
    • Something like 151 or 201, to give it a fresh start with no overlaps with existing records at P or S.
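
A minimal sketch of the reseed, assuming the table from this post and an example seed of 201 (pick any value clear of existing IDs at both P & S):

--
-- Reseed the IDENTITY to a fresh, non-overlapping value
-- (201 is just an example; pick a value clear of existing IDs at both P & S)
--
DBCC CHECKIDENT ('dbo.SampleTable', RESEED, 201)
GO
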
Hope this helps,
_Sqltimes

Read Full Post »
