Quick one today:

In our production code, there are several instances where SET ROWCOUNT was used. Over the years, the code has evolved with involvement from different developers, who may have reached for this as an immediate fix for the issue at hand. Let’s dig a bit deeper into this feature.

Essentially, SET ROWCOUNT tells Sql Server to stop processing a statement once the specified number of rows is reached. For example, if I set

SET ROWCOUNT 1800
GO

Any SELECT, UPDATE, or DELETE statement that follows will stop processing once it hits a count of 1800 records. On the surface this seems like a cool feature, and it is; but it has some limitations going forward.

  • Microsoft recommends that we discontinue using this style, as it plans to remove its effect on DML statements (INSERT/UPDATE/DELETE) in a future release.
    • Microsoft recommends using the TOP clause instead.
  • If both TOP and SET ROWCOUNT are used, SET ROWCOUNT overrides TOP (when the ROWCOUNT value is smaller than the TOP value).
  • This setting comes into play at execution time, not at parse time.
  • To reset it, use the statement below.
SET ROWCOUNT 0
GO
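
The TOP interaction noted above can be sketched as follows (dbo.Orders is a hypothetical table with more than 10 rows, used only for illustration):

```sql
--
-- Hypothetical table dbo.Orders; SET ROWCOUNT overrides the larger TOP value
--
SET ROWCOUNT 5
GO

-- TOP asks for 10, but only 5 rows come back because ROWCOUNT (5) is smaller
SELECT TOP (10) OrderID
FROM dbo.Orders
GO

-- Reset, so subsequent statements are unaffected
SET ROWCOUNT 0
GO
```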

 

Hope this helps,

_Sqltimes

Question: Paging through records

Quite often, when paging through a large set of records, we display a set of records at a time on the user interface. Upon clicking next, the next set of records is displayed. Traditionally, this type of paging is achieved using a CTE and ROW_NUMBER(). On every iteration, we change the ‘@from’ and ‘@to’ variables to bring back the next set of records.

See example below.

Paging using CTE

Old Approach

--
--  Using CTE and ROW_NUMBER()
--
; WITH SV_RowNumber
AS
(
   SELECT
           StoredValueID
         , LocalID
         , ExpireDate
         , ROW_NUMBER() OVER (ORDER BY StoredValueID ASC) AS RowN
   FROM dbo.StoredValue
)
SELECT *
FROM SV_RowNumber
WHERE RowN BETWEEN 25 AND 34
ORDER BY StoredValueID ASC
GO
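
The same CTE query can be driven by the ‘@from’/‘@to’ variables mentioned above (a sketch; the variable names come from the narrative, not the original script):

```sql
--
-- Using CTE and ROW_NUMBER() with paging variables
--
DECLARE @from INT = 25
      , @to   INT = 34

; WITH SV_RowNumber
AS
(
   SELECT
           StoredValueID
         , LocalID
         , ExpireDate
         , ROW_NUMBER() OVER (ORDER BY StoredValueID ASC) AS RowN
   FROM dbo.StoredValue
)
SELECT *
FROM SV_RowNumber
WHERE RowN BETWEEN @from AND @to
ORDER BY StoredValueID ASC
```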

New Approach

Starting with Sql Server 2012, there is an easier way: the OFFSET and FETCH clauses added to ORDER BY. With these, you can retrieve the next set of ‘n’ records after skipping ‘m’ records. See example below:

--
-- Using OFFSET & FETCH
--
SELECT StoredValueID
     , LocalID
     , ExpireDate
FROM dbo.StoredValue
ORDER BY StoredValueID ASC
OFFSET 24 ROWS
FETCH NEXT 10 ROWS ONLY
GO
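
OFFSET and FETCH also accept expressions, so the paging can be parameterized (a sketch; @PageNumber and @PageSize are hypothetical variables):

```sql
--
-- Parameterized paging with OFFSET & FETCH
--
DECLARE @PageNumber INT = 3
      , @PageSize   INT = 10

SELECT StoredValueID
     , LocalID
     , ExpireDate
FROM dbo.StoredValue
ORDER BY StoredValueID ASC
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY
```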
Using OFFSET FETCH

Points to keep in mind:

  • In general, both approaches are efficient.
    • In many cases, the query cost with the CTE is the same as with OFFSET/FETCH (with some exceptions).
    • In some cases, OFFSET/FETCH is many times more efficient than the CTE.
OFFSET FETCH Query Cost Comparison

  • Using OFFSET/FETCH makes the code more intuitive.
  • OFFSET/FETCH can only be used with an ORDER BY clause.
  • The OFFSET clause is required for the FETCH clause.
  • TOP cannot be used in the same query as OFFSET. If you do, you’ll see the following error.
Msg 10741, Level 15, State 2, Line 5
 A TOP can not be used in the same query or sub-query as a OFFSET.
Hope this helps,
_Sqltimes

Interesting topic today.

When you create a UNIQUE constraint on a table to prevent duplicate records, there are a couple of options.

  1. Create a UNIQUE INDEX with the necessary columns
  2. Create a UNIQUE CONSTRAINT, if it is a single column

In this process, there are a couple of nuances that change this behavior; in particular, the focus is on how the IGNORE_DUP_KEY flag affects it. Let’s look in detail, with and without a UNIQUE constraint.

In this post, the focus is on the effects of the IGNORE_DUP_KEY flag on UNIQUE and non-UNIQUE indexes.

With UNIQUE Index

For a sample table, let’s create a UNIQUE index with the IGNORE_DUP_KEY setting and see how it behaves when a duplicate record is inserted.


--
-- Create a sample table
--
CREATE TABLE dbo.Test_IGNORE_DUP_KEY (
    ID    INT          NOT NULL  IDENTITY(1,1) PRIMARY KEY CLUSTERED
  , Name  VARCHAR(10)  NOT NULL
  , SSN   VARCHAR(15)  NOT NULL
)
GO

--
-- Insert one sample record.
--
INSERT INTO dbo.Test_IGNORE_DUP_KEY (Name, SSN)
VALUES('Joe', '000-00-0000')
GO

SELECT * FROM dbo.Test_IGNORE_DUP_KEY
GO

Let’s see the result.

Sample table to test IGNORE_DUP_KEY

IGNORE_DUP_KEY = ON

With IGNORE_DUP_KEY = ON, when a duplicate record is inserted, the new duplicate record is not inserted (as expected), but no error is generated either. In other words, this flag tells Sql Server to silently ignore any duplicate records.

--
-- Create UNIQUE index with "IGNORE_DUP_KEY = ON"
--
CREATE UNIQUE NONCLUSTERED INDEX UQ_Test_IGNORE_DUP_KEY_SSN
   ON dbo.Test_IGNORE_DUP_KEY (SSN ASC)
WITH (IGNORE_DUP_KEY = ON)
GO

Now, when a new duplicate record is inserted, the following “informational message” is displayed. The duplicate record is not inserted into the table. When multiple records are inserted at once, all duplicate records are ignored, while non-duplicate entries are successfully inserted.

Duplicate key was ignored.

(0 row(s) affected)
IGNORE_DUP_KEY : Informational Message
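
A mixed-set INSERT can be sketched like this (‘Jane’ and ‘Mark’ are made-up rows; the duplicate SSN is ignored while the new one goes in):

```sql
--
-- Mixed set: one duplicate SSN, one new SSN
--
INSERT INTO dbo.Test_IGNORE_DUP_KEY (Name, SSN)
VALUES ('Jane', '000-00-0000')  -- duplicate of the existing row: ignored
     , ('Mark', '111-11-1111')  -- new value: inserted
GO
```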


With IGNORE_DUP_KEY = OFF

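Note: if the UNIQUE index from the previous section still exists, drop it first so the CREATE below succeeds (a sketch using the index name from above):

```sql
--
-- Drop the earlier index before re-creating it with different options
--
DROP INDEX UQ_Test_IGNORE_DUP_KEY_SSN
   ON dbo.Test_IGNORE_DUP_KEY
GO
```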
--
-- Create UNIQUE index with "IGNORE_DUP_KEY = OFF"
--
CREATE UNIQUE NONCLUSTERED INDEX UQ_Test_IGNORE_DUP_KEY_SSN
   ON dbo.Test_IGNORE_DUP_KEY (SSN ASC)
WITH (IGNORE_DUP_KEY = OFF)
GO

Upon INSERTing a new duplicate record, we get the following error message.

Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object 'dbo.Test_IGNORE_DUP_KEY' with unique 
index 'UQ_Test_IGNORE_DUP_KEY_SSN'. The duplicate key value is (000-00-0000).
The statement has been terminated.

IGNORE_DUP_KEY : Error Message


Without UNIQUE Index

Now let’s check the same setting without a UNIQUE index or constraint on this column and observe the behavior.


IGNORE_DUP_KEY = ON

--
-- Regular Index with "IGNORE_DUP_KEY = ON"
--
CREATE NONCLUSTERED INDEX UQ_Test_IGNORE_DUP_KEY_SSN
 ON dbo.Test_IGNORE_DUP_KEY (SSN ASC)
WITH (IGNORE_DUP_KEY = ON)
GO

Obviously, this does not make sense, so Sql Server throws the following error.

Msg 1916, Level 16, State 4, Line 1
CREATE INDEX options nonunique and ignore_dup_key are mutually exclusive.

In simple English, this setting is like saying: “Don’t watch for duplicate records, but if you see them, do something (IGNORE).” When Sql Server is not watching for duplicate records, we cannot ask it to take action on them.

IGNORE_DUP_KEY : Incorrect Setting


IGNORE_DUP_KEY = OFF

--
-- Regular Index with "IGNORE_DUP_KEY = OFF"
--
CREATE NONCLUSTERED INDEX UQ_Test_IGNORE_DUP_KEY_SSN
ON dbo.Test_IGNORE_DUP_KEY (SSN ASC)
WITH (IGNORE_DUP_KEY = OFF)
GO

Surprisingly, this does not throw any error. One way to look at it: “Don’t look for duplicate records”, with no other setting turned ON to watch for them. Since IGNORE_DUP_KEY = OFF, there is no extra condition layered on top of a regular index. So, when a duplicate record is inserted, it goes into the table successfully.

IGNORE_DUP_KEY : No Effect

Summary

In summary, this is how these two settings work together.

When index is UNIQUE + IGNORE_DUP_KEY=ON

  • Duplicate entries are ignored with an informational message (no error raised).
  • Non-duplicate records are entered successfully.
  • When a mixed set is INSERTed, only the informational message is displayed.
    • Good records are inserted and duplicates are ignored.

When the index is UNIQUE + IGNORE_DUP_KEY=OFF

  • A clear error message is displayed when duplicate entries are inserted, and the statement is terminated.
  • Non-duplicate records are entered successfully.
  • When a mixed set is INSERTed, the error message is displayed and the entire statement is terminated.

When the index is Non-UNIQUE + IGNORE_DUP_KEY=ON

  • Setting is not allowed.

When the index is Non-UNIQUE + IGNORE_DUP_KEY=OFF

  • INSERT is successful with duplicate entries.

Flash Card Summary

                  | IGNORE_DUP_KEY = ON                                | IGNORE_DUP_KEY = OFF
UNIQUE Index      | Duplicates are ignored with informational message. | Entire statement is terminated with error message.
Non-Unique Index  | Setting not allowed.                               | Duplicate entries are inserted successfully.
Hope this helps,

_Sqltimes

Quick one today:

Sometimes during normal DBA operations, there is a need to check for any active transaction that could be preventing the transaction log from truncating (reusing its virtual log files). There is a handy command that pulls up exactly that transaction.

--
-- Check oldest active transaction
--
DBCC OPENTRAN
GO
Oldest Open Transaction

 

As you can see, the result has the SPID (session ID) and other details related to the oldest active transaction, which help in taking the next steps to resolve the issue. If you prefer, the result can be returned in table format; just use TABLERESULTS as shown below.

DBCC OPENTRAN WITH TABLERESULTS
GO
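
Once the oldest transaction’s SPID is known, the last statement sent by that session can be inspected as well (52 below is a placeholder SPID; substitute the value from the DBCC OPENTRAN output):

```sql
--
-- Inspect the last statement sent by the offending session
--
DBCC INPUTBUFFER (52)  -- placeholder SPID
GO
```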

 

Hope this helps,
_Sqltimes

Quick one today:

Sometimes there is a need to quickly prevent/disable any client connections to Sql Server. The occasions that warrant such a drastic step are rare, but they do occur once in a while.

During a scheduled, planned deployment, we go through steps to disable web and app servers to stop any connections to the database servers. Once the traffic dies down, we start our database deployment steps. This is the standard approach, and it works.

Today, we’ll talk about a couple of different approaches that are more drastic. These are useful only in some rare occasions where the above step is not possible.

Options

  1. Disable TCP/IP
  2. Disable “Remote Connections”

Details

Option 1: Disable TCP/IP

Generally, all connections to Sql Server from outside the server machine itself come through TCP/IP. Other network protocols are still used, but not with the same frequency. As a side note, enable only those protocols that you actually need, and disable all others. In our production environment, we enable only “Shared Memory” & “TCP/IP”.

  • TCP/IP is used by all client connections that connect to Sql Server over network. Like my laptop SSMS connecting to Sql Server on a lab server. It uses TCP/IP protocol.
  • Shared Memory is used when I’m logged into the lab server (RDP) and open SSMS to connect to Sql Server on the same machine. Or connecting to Sql Server running on my laptop using SSMS on my laptop.

So, we only enable Shared Memory and TCP/IP.

Coming back to the point: to disable TCP/IP, open Sql Server Configuration Manager, go to “Sql Server Network Configuration”, and click on the relevant instance. See the image below.

Disable TCP IP

Option 2: Disable Remote Connections

Go to the Sql Server instance-level “Server Properties” by right-clicking on the server and choosing Properties. Go to “Connections” and un-check the option to “Allow remote connections to this server”. See the image below.

Disable Remote Connections
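
If T-SQL is preferred over the UI, I believe this checkbox corresponds to the ‘remote access’ configuration option, so it can be toggled as sketched below (verify on a non-production instance first; a service restart may be required for the change to take effect):

```sql
--
-- Toggle "Allow remote connections to this server" via sp_configure
--
EXEC sp_configure 'remote access', 0  -- 0 = disable, 1 = enable
RECONFIGURE
GO
```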

 

Please note that this is not a standard approach, but just a technique for outlier situations.

 

 

Hope this helps,

_Sqltimes

Quick one today:

Checking for the existence of a temporary table is not the same as for regular tables in user databases. See below:

--
-- Sample code to test the existence of temporary table
--
CREATE TABLE #Customer(
       ID            BIGINT    NOT NULL   PRIMARY KEY CLUSTERED
     , Exists_Flag   BIT       NOT NULL   DEFAULT(0)
)
GO

IF OBJECT_ID('tempdb..#Customer') IS NOT NULL
     PRINT '#Customer Table Exists'
GO
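
For contrast, a regular table in a user database is checked without the ‘tempdb..’ prefix (a sketch; dbo.Customer is a hypothetical permanent table):

```sql
--
-- Existence check for a regular (permanent) table
--
IF OBJECT_ID('dbo.Customer', 'U') IS NOT NULL
     PRINT 'dbo.Customer Table Exists'
GO
```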
Existence of Temporary Table

Hope this helps,
_Sqltimes

A few weeks ago, I ran into an interesting error message.

Msg 3098, Level 16, State 2, Line 10
The backup cannot be performed because 'COMPRESSION' was requested after the media was 
formatted with an incompatible structure. To append to this media set, either omit 
'COMPRESSION' or specify 'NO_COMPRESSION'. Alternatively, you can create a new media 
set by using WITH FORMAT in your BACKUP statement. If you use WITH FORMAT on an 
existing media set, all its backup sets will be overwritten.

Msg 3013, Level 16, State 1, Line 10
BACKUP LOG is terminating abnormally.

Recreate the Error

These are the steps I performed, when I ran into this error.

Step 1: Perform a log backup using the BACKUP LOG command without the “WITH COMPRESSION” clause.

--
-- Perform LOG backup (without COMPRESSION)
--
BACKUP LOG Ahold_RT
TO DISK = N'M:\MSSQL\Backup\MSSQLSERVER\Without_Compression.TRN'
GO

Step 2: Perform backup again (using same file and location), but this time use “WITH COMPRESSION”

--
-- Now perform backup to the same file and location WITH COMPRESSION
--
BACKUP LOG Ahold_RT
TO DISK = N'M:\MSSQL\Backup\MSSQLSERVER\Without_Compression.TRN'
WITH COMPRESSION
GO

Result: Error message

Explanation

Two points:

  1. When Sql Server performs a backup, it stores the backup in the media set (file) in a certain format.
  2. When we perform multiple backups to the same file, the new backup data is appended to the existing backup file.

Given these, the first backup and any subsequent backups going to the same file and location need to be in the same format.

Solution

  • Either perform backup to a different location. Or
  • Perform the backup using the “WITH FORMAT” clause to invalidate the previous media set and create a fresh one with the new format.
--
-- Perform backup using 'WITH FORMAT'
--
BACKUP LOG Ahold_RT TO DISK = N'M:\MSSQL\Backup\MSSQLSERVER\Without_Compression.TRN'
WITH COMPRESSION
   , FORMAT
GO
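
To see what is already on the media set before appending (and catch a compression mismatch up front), the backup header can be inspected; the RESTORE HEADERONLY output includes a Compressed column:

```sql
--
-- Inspect existing backup sets on the file (note the Compressed column)
--
RESTORE HEADERONLY
FROM DISK = N'M:\MSSQL\Backup\MSSQLSERVER\Without_Compression.TRN'
GO
```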

 

Hope this helps,

_Sqltimes
