
Archive for the ‘Weird Anomaly’ Category

Interesting one today:

On a production box, the backup jobs have been failing with an interesting and perplexing error. It says “Not enough disk space”; as you can guess, this is one of those confusing, misleading error messages that is not what it seems on the surface, which makes it worthwhile for a post of its own.

Detailed error message is below:

BACKUP DATABASE DummyDB
TO        DISK = N''
	, DISK = N''
	, DISK = N''
	, DISK = ''
WITH STATS = 1
GO
...
...
...
68 percent processed. 
69 percent processed. 
70 percent processed. 
Msg 3202, Level 16, State 1, Line 1 

Write on "F:\MSSQL\Backup\DummyDB.BAK" failed: 
112(There is not enough space on the disk.) 

Msg 3013, Level 16, State 1, Line 1 
BACKUP DATABASE is terminating abnormally.

This error occurs in backups both with and without compression, and in both FULL and DIFFERENTIAL backups.

This is a fairly large database, ranging up to 18 TB, so backups are an ordeal to perform. When even the DIFF backups started failing, it was a bit concerning.

After attempting several backups on local and remote storage with plenty of space, a pattern still did not emerge. The only constant was that the backup fails at around 70% completion.

At that point, one of my colleagues (thanks, Michael) pointed out that, as part of the backup operation, Sql Server first runs an algorithm that calculates the amount of space needed for the backup file. If the backup drive has enough free space, well and good; if not, it throws this error.

But, as you can guess, we had plenty of free space, i.e. petabytes of free space.
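As a quick sanity check before blaming the storage, something like the sketch below compares the total database size against free space on each drive. This is just a rough sketch: xp_fixeddrives is an undocumented (but widely used) procedure, and compressed backups will of course need less space than the raw database size.

--
-- Sketch: total size of the database files, in MB
--
SELECT SUM(CAST(size AS BIGINT)) * 8 / 1024 AS [DatabaseSizeMB]
FROM sys.master_files
WHERE database_id = DB_ID('DummyDB')
GO

--
-- Sketch: free space, in MB, on each fixed drive
--
EXEC master..xp_fixeddrives
GO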

Occasionally, manual backups are successful. So, I’m not sure what is going on, but here is my theory:

At different points, Sql Server runs this algorithm (the “pre-allocation algorithm”) to determine if there is enough space. Initially it comes back saying “yes”, and the backup proceeds with writing to the backup file; a little later it checks again and still comes back with “yes”; but at some point on a subsequent check (in our case between 70% and 72% complete), the algorithm decides there is not enough disk space.

So, it turns out there is a trace flag, 3042, that disables this algorithm from making any assessments, so that backups can progress to completion.

From  MSDN:

Bypasses the default backup compression pre-allocation algorithm to allow the backup file to grow only as needed to reach its final size. This trace flag is useful if you need to save on space by allocating only the actual size required for the compressed backup. Using this trace flag might cause a slight performance penalty (a possible increase in the duration of the backup operation).


Caution: Manually make sure there is plenty of space for the backup to complete, since we are disabling the algorithm.

--
-- Disable pre-allocation algorithm
--
DBCC TRACEON (3042)
GO

BACKUP DATABASE DummyDB
TO        DISK = N''
    , DISK = N''
    , DISK = N''
    , DISK = ''
WITH STATS = 1
GO
DBCC TRACEOFF (3042)
GO
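To double-check that the flag is active for the session running the backup, DBCC TRACESTATUS can be used:

--
-- Check the status of trace flag 3042 for the current session
--
DBCC TRACESTATUS (3042)
GO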

Make sure you test this in a non-production environment, before enabling it in production.

Hope this helps,
_Sqltimes



Interesting one today:

A few months ago, we had an issue with a database in our lab environment where the database ended up in SUSPECT mode due to storage issues. Once the storage was fixed, we were able to perform troubleshooting steps on the database. Of those, today we’ll only cover the step used to rebuild the transactional log file (the rest of the steps will be covered in a future post).

Error Message:

An error occurred while processing the log for database 'SampleDB'. 
If possible, restore from backup. 
If a backup is not available, it might be necessary to rebuild the log.

 

NOTE: Rebuilding the transaction log is always the last option. There are other, safer options to troubleshoot when a database is in SUSPECT mode. Use this approach only after you’ve exhausted all other options, like:

  • CHECKDB
  • Restore from valid backup
  • Repairing with EMERGENCY mode (see the sketch after this list)
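For reference, the EMERGENCY-mode repair route usually looks something like the sketch below. This is only a sketch; REPAIR_ALLOW_DATA_LOSS can discard data, so treat it, too, as a near-last resort. SampleDB is simply the demo database used in the rest of this post.

--
-- Sketch: EMERGENCY mode repair (attempt this before rebuilding the log)
--
ALTER DATABASE SampleDB  SET EMERGENCY
GO
ALTER DATABASE SampleDB  SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
DBCC CHECKDB (SampleDB, REPAIR_ALLOW_DATA_LOSS)
GO
ALTER DATABASE SampleDB  SET MULTI_USER
GO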

 

NOTE: Rebuilding the transactional log file will break the restore chain; so any previous transactional log backups cannot be applied together with the backups taken going forward.

Rebuild Transactional Log File

 

Step 1: Let’s identify the transactional log file name and path. Use the following query:

--
-- Gather Logical & Physical names of the database
--
SELECT D.name AS [DatabaseName]
	, M.name AS [LogicalName]
	, M.physical_name AS [PhysicalName]
FROM sys.master_files AS M
INNER JOIN sys.databases AS D 
	ON d.database_id = m.database_id
	AND D.name = 'SampleDB'
GO

Let’s use the name & path in the script below.

 

Step 2: Prepare the database

Before we rebuild the transactional log file, we need to set the database in EMERGENCY mode & SINGLE_USER mode. Use the script below:

--
-- Set database in EMERGENCY mode & SINGLE_USER mode
--
ALTER DATABASE SampleDB  SET EMERGENCY
GO
ALTER DATABASE SampleDB  SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

 

 

Step 3: Rebuild Transactional Log

This is an undocumented & unsupported command, so use caution before you run this in production. Steps like this must be taken only upon guidance from Microsoft CSS.

With this script, we can create a new transactional log file. After you run it, make sure you run CHECKDB and take fresh backups.

--
-- Rebuild log file
--
ALTER DATABASE SampleDB 
REBUILD LOG
ON 
	( NAME = 'SampleDB_log'
	, FILENAME = 'L:\MSSQL\LOGS\SampleDB_log.LDF'
	)
GO

 

Step 4: Post Rebuild

Upon a successful log rebuild, let’s take a few precautions to make sure everything is still good.

Run CHECKDB, set the database back to MULTI_USER, verify the database status, and take a fresh FULL backup to restart the backup chain.

--
-- Post rebuild steps
--
DBCC CHECKDB (SampleDB)
GO

ALTER DATABASE SampleDB  SET MULTI_USER
GO

SELECT DATABASEPROPERTYEX('SampleDB', 'Status')
GO

BACKUP DATABASE SampleDB TO DISK = N'Z:\MSSQL\Backup\SampleDB_AfterTLogRebuild.BAK'
GO

 

Hope this helps,
_Sqltimes


Problem

While gathering replication backlog details, I ran into this interesting error. The goal was to run the sp_replmonitorsubscriptionpendingcmds stored procedure and store the output in a table. As we’ve seen in a recent post, redirecting the output of a stored procedure execution into a table is possible; but in this case, it throws an error saying that it is not allowed.

INSERT INTO #DC1_Repl_Backlog
EXEC  sp_replmonitorsubscriptionpendingcmds
		  @publisher	= 'InstanceName'
		, @publisher_db	= 'DBName'
		, @publication	= 'Publication'
		, @subscriber	= 'Subscriber'
		, @subscriber_db= 'DBName2'
		, @subscription_type = '0'
GO
Msg 8164, Level 16, State 1, Procedure sp_replmonitorsubscriptionpendingcmds, Line 233
An INSERT EXEC statement cannot be nested.

(0 row(s) affected)

With the information available right now, I don’t have a clear & coherent explanation for this behavior. But my guess is this: the code inside this stored procedure must itself be using a similar INSERT INTO #table EXEC sp_xyz pattern, hence the error “An INSERT EXEC statement cannot be nested”.

Resolution

OPENROWSET helps in getting around this. See the sample code below:

--
-- Capture the output of sp_replmonitorsubscriptionpendingcmds into a temp table via OPENROWSET
--
IF OBJECT_ID('tempdb..#DC1_Repl_Backlog') IS NOT NULL
	DROP TABLE #DC1_Repl_Backlog

CREATE TABLE #DC1_Repl_Backlog (
	  pendingcmdcount	BIGINT
	, estimatedprocesstime	BIGINT
)

INSERT #DC1_Repl_Backlog (pendingcmdcount, estimatedprocesstime)
SELECT *
FROM OPENROWSET('SQLOLEDB',
		'Server=InstanceName;Trusted_Connection=yes;',
		'EXEC Distribution.dbo.sp_replmonitorsubscriptionpendingcmds
					  @publisher = ''PublisherInstance''
					, @publisher_db	= ''DBName''
					, @publication = ''Publication''
					, @subscriber = ''Subscriber''
					, @subscriber_db = ''DBName2''
					, @subscription_type = ''0'''
		) 

SELECT *
FROM #DC1_Repl_Backlog
GO
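One note: ad hoc OPENROWSET calls like the one above require the 'Ad Hoc Distributed Queries' setting to be enabled on the instance. If it is not, a sysadmin can turn it on with something like this:

--
-- Enable ad hoc distributed queries (needed for ad hoc OPENROWSET access)
--
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
RECONFIGURE
GO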
Hope this helps,
_Sqltimes


Interesting one today:

For the last few months, on and off, there have been opportunities to run some interesting tests in our lab environment. This resulted in some good posts over the last few weeks. Adding to that tradition is another interesting topic: the uniquifier.

Context:

Imagine a table with multiple records, a clustered index, and a bunch of non-clustered indexes. In the non-clustered indexes, the b-tree is structured based on the index keys, and at the bottom of the tree, the leaf level points back to the clustered index using the clustering key. Now imagine the same scenario with a non-unique clustered index, where there could be multiple records with the same clustering key values. The dependent non-clustered indexes now need a way to uniquely identify between the identical-looking entries. Enter the uniquifier column !!

Solution

An extra 4-byte column called the uniquifier is added to all non-clustered indexes to uniquely distinguish between multiple index entries that point to the same clustering key.

Let’s take an example:

We’ll re-use some of the code from previous posts for this.

--
-- Create a dummy table to test DBCC IND
--
IF EXISTS (SELECT * FROM sys.objects WHERE name = 'Uniquifier_Test' AND type = 'U')
 DROP TABLE dbo.Uniquifier_Test
GO

CREATE TABLE dbo.Uniquifier_Test (
    ID INT NOT NULL DEFAULT (1)
  , Name VARCHAR(5) NOT NULL DEFAULT('aa')
)
GO

--
-- Create Clustered and NonClustered indexes
--
CREATE CLUSTERED INDEX CI_Uniquifier_Test_ID
    ON dbo.Uniquifier_Test(ID ASC)
GO

CREATE NONCLUSTERED INDEX nCI_Uniquifier_Test_Name
    ON dbo.Uniquifier_Test(Name ASC)
GO

--
-- Let's insert some dummy records
--
INSERT INTO dbo.Uniquifier_Test (ID, Name)
VALUES (1, 'aa')
     , (2, 'bb')
     , (3, 'cc')
     , (4, 'dd')
GO

SELECT *
FROM dbo.Uniquifier_Test
GO

Now, let’s look at the contents of the non-clustered index pages. For more details on retrieving the PageID and querying page contents, please refer to the previous posts.

--
-- Retrieve PageID of nCI
--
DBCC IND (test, Uniquifier_Test, -1)
GO

--
-- Retrieve contents of nCI page
--
DBCC TRACEON (3604)
DBCC PAGE(test, 1, 34535, 3)
GO

As you can see, along with the nCI key column, Name, we also have the clustering key (ID) added to the nCI b-tree structure. Along with that there is a new column called UNIQUIFIER added to the non-clustered index pages. Since we did not add any duplicate values, the UNIQUIFIER column is set to zero.

Uniquifier (before duplicate entries)

Now, let’s add some duplicate entries.

--
-- Let's insert some duplicate records
--
INSERT INTO dbo.Uniquifier_Test (ID, Name)
VALUES (1, 'aa')
     , (2, 'bb')
GO

As you can see, where there are duplicate entries, the UNIQUIFIER column adds a unique value to distinguish between them. This incremental number is scoped to each set of duplicate entries. The two duplicate rows for (1, 'aa') have UNIQUIFIER values 0 and 1 respectively. For the next set of duplicates, the incremental value starts over from 0. So it can accommodate a lot of duplicate entries.

Uniquifier (after adding duplicate entries)

So, the uniquifier column helps Sql Server distinguish between two identical entries in the non-clustered index.
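A related note: the uniquifier only comes into play when the clustered index is not unique. A quick way to confirm whether a clustered index is unique is to check sys.indexes; a small sketch against the demo table above:

--
-- Check whether the clustered index is unique;
-- only non-unique clustered indexes need the hidden uniquifier
--
SELECT OBJECT_NAME(object_id) AS [TableName]
     , name                   AS [IndexName]
     , type_desc
     , is_unique
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.Uniquifier_Test')
  AND index_id = 1    -- clustered index
GO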

Hope this helps,
_Sqltimes


Interesting topic today:

In Sql Server, as we all know, concurrency is maintained through locks & latches. When a particular row is being retrieved for any activity, a lock is requested on that row, a.k.a. a Shared lock or Exclusive lock, depending on the nature of the activity performed. Once the activity is completed, the lock is released. But when a large number of records in a table are retrieved, something interesting happens: Sql Server, rather than issuing individual locks on each row, performs Lock Escalation to lock the entire table.

This behavior is helpful in some scenarios and detrimental in others. Each lock takes some resources (memory, etc.), so establishing a large number of smaller locks (ROWLOCKs) adds up to a lot of resources. So, Sql Server escalates the lock to either PAGE level or TABLE level (depending on the scenario). Sometimes this could result in longer waits for the PAGE (or the entire table) to be freed from other locks before the new lock request can be granted.

To avoid all of this, developers sometimes use the ROWLOCK hint to force Sql Server to keep row-level locks even when performing operations on a larger number of records (to avoid waiting for the entire PAGE/TABLE to be free from other locks). There are several pros and cons to this approach; as always, the details of the situation guide the best approach in each scenario. One exception: even when we use ROWLOCK, sometimes Sql Server will force lock escalation anyway.
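As a side note, beyond the ROWLOCK hint, escalation behavior can also be influenced at the table level via the LOCK_ESCALATION option (Sql Server 2008 and later). A small sketch, using the same dbo.SampleTable that appears in the tests below:

--
-- Inspect the current lock escalation setting for the table
--
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = 'SampleTable'
GO

--
-- Options are TABLE (default), AUTO and DISABLE;
-- DISABLE discourages escalation but does not fully prevent it
--
ALTER TABLE dbo.SampleTable SET (LOCK_ESCALATION = DISABLE)
GO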

Today, we’ll look at this one aspect of this automatic lock escalation:

Question: At what point does Sql Server force lock escalation, even when ROWLOCK is used?

Let’s run DELETEs on a table and see how lock escalation happens, at three different levels:

  • 2000 ROWLOCK requests
  • 4000 ROWLOCK requests
  • 6000 ROWLOCK requests

Step 1: At 2000 ROWLOCKS

--
-- Let's run a large DELETE operation
--
DELETE TOP (2000)
FROM dbo.SampleTable WITH (ROWLOCK)
GO

Now, in a different SSMS window, let’s check the number of locks on the table.

--
-- Check locks on the table
--
EXEC sp_lock 56
WAITFOR DELAY '00:00:01'    -- keep running every second, to capture locks from other window
GO 10
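An alternative to sp_lock is the sys.dm_tran_locks DMV; the sketch below groups the locks held by the DELETE session by resource type (56 is assumed to be the SPID of the window running the DELETE, as above):

--
-- Summarize locks held by the DELETE session, by resource type
--
SELECT resource_type, request_mode, COUNT(*) AS [LockCount]
FROM sys.dm_tran_locks
WHERE request_session_id = 56    -- SPID of the DELETE window
GROUP BY resource_type, request_mode
GO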

 

Row Level Locks Granted

As you can see, ROWLOCKs on keys are GRANTed. Now, let’s increase the batch size and see where the forced lock escalation happens.

Step 2: At 4000 ROWLOCKS

--
-- Let's run a large DELETE operation
--
DELETE TOP (4000)
FROM dbo.SampleTable WITH (ROWLOCK)
GO

Let’s check the lock situation:

ROWLOCKs granted at 4000

So, even at 4000, something interesting happens. Some ROWLOCKs are issued and some PAGELOCKs are issued. It looks like, for some rows, the lock is escalated to PAGELOCK level for efficiency. Let’s continue.

Step 3: At 6000 ROWLOCKS

--
-- Let's run a large DELETE operation
--
DELETE TOP (6000)
FROM dbo.SampleTable WITH (ROWLOCK)
GO

Let’s review the locks situation:

ROWLOCKs at 6000

BINGO !!! As you can see, the lock escalation to TABLOCK level occurs at 6000.

Initially, ROWLOCKS are issued; Then just a second later, some locks are escalated to PAGE and subsequently to TABLE level. This is interesting.

Another Nuance: Percentage

Question: Does this escalation occur based on a strict number (or range) or is it based on percentage of records being accessed in a table?

  • From the tests, it seems like the LOCK ESCALATION occurs based on the number, and not the percentage, of records being manipulated.

Table Size    Total Record Count    Records with Lock Requested    Type of Lock Granted
Small         2,100                 2,000                          ROWLOCK
Medium        35,000                2,000                          ROWLOCK
Large         500,000               2,000                          ROWLOCK

Conclusion:

  1. Sql Server makes intelligent estimations about what is better at each level and makes the best decisions to escalate locks as needed. Keep the 2000, 4000 & 6000 thresholds in mind as a general rule of thumb, not as a set-in-stone rule.
  2. At any given point, Sql Server makes the best judgement call on what is more efficient.
  3. Lock Escalation is based on the number of records on which lock is requested and not on the percentage of records relative to the total records in the table.

Important Note:

  1. This behavior applies only to Sql Server 2012 as this behavior varies from version to version.
  2. In the past, for Sql Server 2008 and before, the number was at 1800 – 2200 before TABLE lock escalation occurs.
  3. If this trend continues, maybe for Sql Server 2016 the number will be slightly higher, as Microsoft improves lock efficiency by reducing the amount of resources required for each lock.
  4. This does not mean you should go ‘free range’ crazy and use ROWLOCK on every query (obviously) & use 4000 every time. Keep this information handy when making data retrieval decisions. Each ROWLOCK takes resources (memory & CPU), so we need to use caution and minimize the overhead. This is a refined technique to be used on the infrequent occasions that seem most suitable for it. Perform tests before using it in production.

 

Hope this helps,
_Sqltimes


In our lab machines, quick clean-up activities sometimes become necessary; they occur frequently before and after some large batch testing scripts. Such situations include activities like:

  1. Reducing size of either log or data file
  2. Emptying transactional log file
  3. Deleting transactional log file

Note: Please be advised that such operations are not recommended on a production database. These will result in unpredictable and sometimes reduced performance.

In recent posts, we’ve covered the use of SHRINKFILE in different scenarios.

Important Points to keep in mind:

  • A SHRINK operation can be stopped at any time without losing the work completed thus far; it retains the progress made (re-allocations).
  • Shrinking a data or log file does not require single-user mode on the database. Other user activity can run in parallel without interfering with the SHRINK work.
  • The SHRINK process could be delayed by blocking from other user activity, so if possible, perform the SHRINK operation when there is less traffic.
  • A SHRINK operation is a single-threaded operation that methodically works through each data block, so it is time consuming (see the progress-check sketch after this list).
  • SHRINK one file at a time (rather than in parallel).
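Since a long-running SHRINK gives little feedback in the query window, its progress can be watched from another session. A minimal sketch using sys.dm_exec_requests (percent_complete is populated for DBCC SHRINKFILE work):

--
-- Watch SHRINKFILE progress from another session
--
SELECT session_id, command, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command LIKE 'Dbcc%'
GO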

 

Following are the steps we follow:

Reducing Size of Log or Data File

In lab environment, to reduce the size of a bloated log or data file, we implement a version of the following steps:

Step 1:

  • Before freeing up any space back to the Operating System, we need to adjust the way space is occupied by all the database pages.
  • Sql Server will reallocate all used pages from the end of the physical file to earlier portions.
  • This allows the end of the physical file to be freed up.
--
-- SHRINK the data file down to 1 GB (reallocation)
--
USE [SampleDB]
GO
DBCC SHRINKFILE (N'Sample_Data2' , 1024) -- Reduce it to 1 GB
GO
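To pick a sensible target size for the command above, it helps to first see how much free space actually sits inside each file. A minimal sketch (size and SpaceUsed are reported in 8 KB pages, hence the divide-by-128 to get MB):

--
-- Free space inside each file of the current database, in MB
--
SELECT name
     , size / 128                                       AS [FileSizeMB]
     , FILEPROPERTY(name, 'SpaceUsed') / 128            AS [SpaceUsedMB]
     , (size - FILEPROPERTY(name, 'SpaceUsed')) / 128   AS [FreeSpaceMB]
FROM sys.database_files
GO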

Step 2:

  • Once the reallocation or adjustment is complete, we can issue the TRUNCATEONLY option to free up that space back to the Operating System.
  • This is when we see the physical file reduce in size.
--
-- Release space back to OS
--
USE [SampleDB]
GO
DBCC SHRINKFILE (N'Sample_Data2', TRUNCATEONLY)
GO

Emptying Transactional Log File

In lab environment, to empty entire transactional log file, we implement a version of the following steps:

--
-- Empty the secondary log file
--
DBCC SHRINKFILE (SampleDB_log2, EMPTYFILE)
GO

Deleting Transactional Log File

In lab environment, to delete a transactional log file, we implement a version of the following steps:

--
-- To remove secondary log file, first we need to empty it. Then remove it
--
DBCC SHRINKFILE (SampleDB_log2, EMPTYFILE)
GO

ALTER DATABASE SampleDB
REMOVE FILE SampleDB_log2
GO

For more details, please refer to BoL

Hope this helps,
_Sqltimes


Interesting one today:

In replication, there are several amazing features & configurations to make it robust, dependable & high-performing. These settings need to be leveraged correctly to squeeze out the best performance applicable to each environment. Today, we’ll cover a popular setting called NOT FOR REPLICATION on IDENTITY columns.

Concept:

In short, when NOT FOR REPLICATION is enabled on IDENTITY columns (or other constraints), the IDENTITY value is not incremented when INSERTs occur due to replication traffic; but all other direct application traffic will increment the IDENTITY value.

Imagine a Sql Server Publisher, let’s say P, that is publishing data to a Sql Server Subscriber, let’s say S. Now, both P & S have table called SampleTable with an IDENTITY column called ID. To make it easy to see the difference, let’s make their IDENTITY definition different at each location (P & S).

  • At Publisher, the IDENTITY value is defined as (1,10).
    • So, its values will be 1, 11, 21, 31, 41, etc.
  • At Subscriber, it is defined as (2, 10).
    • So, its values will be 2, 12, 22, 32, 42, etc.

The Set Up

With the above points, let’s create the table and set up replication between P & S. Following is some of the code used to create the table at the Publisher (P).

At Publisher

--
-- CREATE TABLE with IDENTITY set for NOT FOR REPLICATION
--
CREATE TABLE dbo.SampleTable(
     ID     INT          NOT NULL  IDENTITY(1,10)  NOT FOR REPLICATION   PRIMARY KEY   CLUSTERED
   , Name   VARCHAR(20)  NULL      DEFAULT('A')
)
GO

At Subscriber:

Similarly, on Subscriber, create a similar table with different IDENTITY definition.

--
-- CREATE TABLE with IDENTITY set for NOT FOR REPLICATION
--
CREATE TABLE dbo.SampleTable(
     ID     INT          NOT NULL  IDENTITY(2,10)  NOT FOR REPLICATION   PRIMARY KEY     CLUSTERED
   , Name   VARCHAR(20)  NULL      DEFAULT('B')
)
GO

So, there is no overlap between IDENTITY values generated at P & S.

Now let’s watch their behavior as data is INSERTed into both servers:

  1. When data is INSERTed directly into each location (P & S)
  2. When data is indirectly INSERTed into S due to replication traffic from P

Below is some more code used to check IDENTITY values, insert new data, etc. in these experiments.

--
-- Query the data
--
SELECT *
FROM dbo.SampleTable
ORDER BY ID ASC

--
-- Check the value of IDENTITY column at each step
--
SELECT IDENT_CURRENT('SampleTable')

--
-- Insert data directly into P
--
INSERT INTO dbo.SampleTable DEFAULT VALUES
GO

--
-- Manually insert data to introduce interesting scenarios
--
SET IDENTITY_INSERT dbo.SampleTable ON
INSERT INTO dbo.SampleTable (ID) VALUES(201)
SET IDENTITY_INSERT dbo.SampleTable OFF
GO

Run Experiments

With the above set up, let’s run through some scenarios and observe Sql Server’s behavior in each situation.

Scenario 1:

When data in INSERTed directly into P:

  • The IDENTITY values increment with each insert as 1, 11, 21, 31, etc.
  • Subsequently, those records are replicated to S, with same IDENTITY values.
  • But in all of this, the IDENTITY value at S, stays at 2
    • Since NOT FOR REPLICATION is set on the IDENTITY column on S.

When data is INSERTed directly to S:

  • The IDENTITY values are incrementing as per definition to 2, 12, 22, etc
  • Irrespective of the replication traffic from P, the IDENTITY at S only depends on the records INSERTed directly into S.
  • Table at S, has records from both P & S.
    • S will look something like: 1, 2, 11, 12, 21, 22, 31, 32, etc.
    • The table at P will look like: 1, 11, 21, 31, etc.

Scenario 2: IDENTITY_INSERT

When a manual entry is made at P (using IDENTITY_INSERT) with a new IDENTITY value that does not match the pattern of the IDENTITY definition, subsequent IDENTITY values at P are based on the highest entry in the table. It uses the same INCREMENT definition, but increments based on the current highest value in the table.

At Publisher:

  • Let’s say the SampleTable, at P, has entries like 1, 11, 21, 31 with next IDENTITY value as 41.
  • Now, if a new record is entered manually using IDENTITY_INSERT with a new value of 26, it is successfully INSERTed.
    • Next IDENTITY value still remains at 41.
  • We can keep repeating these steps with different values like 7, 9, 13, 15, 17, 25, 28, 29 (as long as they are below 31).
    • INSERTs will be successful with no impact to next IDENTITY value, which is still at 41.
  • Now, if you perform a regular INSERT, the new record will get IDENTITY value as 41.

At Subscriber:

  • At S, all new entries, 26, 7, 9, 13, 15, 41, etc, are successfully INSERTed with no impact to IDENTITY definition at S.
    • At S, the next identity value is still set to 42
  • Any new direct INSERTs at S, will get IDENTITY values consistent with its previous behavior a.k.a. 42, 52, etc

Scenario 3: PRIMARY KEY Violation

Now, let’s make a manual entry at P that matches the next IDENTITY value at S.

  • For this, let’s assume that the highest value at P is 41, with next IDENTITY value as 51
  • At S, the current highest value is 52, with next IDENTITY value as 62.

Introduce problems:

  • At P, perform a manual INSERT (with IDENTITY_INSERT), with ID value as 62.
    • INSERT is successful at P; And it is replicated to S successfully.
  • After above operation, next IDENTITY value
    • At P is set to 72 (62+10).
    • At S, it is still at 62 (even though a new record in INSERTed with 62). Since NOT FOR REPLICATION is set, replication traffic does not influence IDENTITY increments at S.
  • Now, when a new record is directly INSERTed into S, the next IDENTITY value will be computed as 62, which results in PRIMARY KEY violation.
    • Violation of PRIMARY KEY constraint 'PK_SampleTable'. Cannot insert duplicate key in object 'dbo.SampleTable'
    • Interestingly, the next IDENTITY value for S, is incremented to 72.
    • Subsequent direct INSERTs into S will be 72, 82, etc

Vicious cycle:

  • In the above test, the next IDENTITY value at P is still at 72.
  • Similarly, the next IDENTITY value at S, is also set to 72.
  • So any new inserts at P, will be replicated to S with 72, 82, 92, etc.
    • If there are any existing records, at S, with same identity values, then replication traffic (from P to S) will fail with primary key violation.
    • But if S does not have any records with those identity values (from P), then replication traffic (a.k.a. 82, 92, 102) from P is successfully INSERTed into S
    • Any new traffic, directly at S, will run into PRIMARY KEY violation.
  • So, the summary is, one BAD entry is all it takes to screw up the IDENTITY definition with NOT FOR REPLICATION.

Solution:

  • When this happens, just RESEED the IDENTITY value at P to a non-overlapping value that is consistent with its expected behavior.
    • Something like 151 or 201, to give it a fresh start with no overlaps with existing records at P or S (see the sketch below).
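A minimal reseed sketch; 201 follows the example above, but pick any value that fits the (seed, increment) pattern and is clear of existing records on both sides:

--
-- Reseed the IDENTITY at the Publisher to a fresh, non-overlapping value
--
DBCC CHECKIDENT ('dbo.SampleTable', RESEED, 201)
GO

--
-- Verify the new current IDENTITY value
--
SELECT IDENT_CURRENT('SampleTable')
GO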
Hope this helps,
_Sqltimes

