
Archive for December, 2014

OUTPUT is an interesting clause, available since Sql Server 2005.

DELETE

In the previous post, we saw an example with some insight into the usage of the OUTPUT clause with the DELETE statement. In this post we’ll cover INSERT, UPDATE and MERGE.

UPDATE

In an UPDATE statement, both the INSERTED.* and DELETED.* internal tables are available: DELETED.* holds the before snapshot of the affected records, and INSERTED.* holds the after snapshot.

Alternatively, the resultant records could be stored in a temporary table or table variable for further processing, as the sketch after the example below shows.

--
-- Usage example of OUTPUT Clause with UPDATE statement
--
UPDATE SomeTable
SET Status = 69874
OUTPUT INSERTED.CustomerPK
     , INSERTED.AdjAmount
     , DELETED.AveragePoints
WHERE TargetID = 1656
AND Status = 0
GO
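As mentioned above, the OUTPUT list can also be redirected into a table variable for further processing. Here is a minimal sketch, reusing the same hypothetical SomeTable; the column types in the table variable are assumed:

--
-- Hedged sketch: capture the UPDATE snapshot in a table variable
--
DECLARE @UpdatedRows TABLE (
    CustomerPK     INT       -- column types are assumed for this sketch
  , AdjAmount      MONEY
  , AveragePoints  INT
)

UPDATE SomeTable
SET Status = 69874
OUTPUT INSERTED.CustomerPK
     , INSERTED.AdjAmount
     , DELETED.AveragePoints
INTO @UpdatedRows
WHERE TargetID = 1656
AND Status = 0

SELECT *
FROM @UpdatedRows
GO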

INSERT

In an INSERT statement, when the OUTPUT clause is used, only the INSERTED.* internal table is available. Its result could be sent directly back to the caller, or stored in a temporary table or table variable for further processing.

--
-- Usage example of OUTPUT Clause with INSERT statement
--
DECLARE @MyTableVariable TABLE (
    CustomerID  INT          -- column types are assumed for this example
  , AdjAmt      MONEY
  , LastUpdate  DATETIME
)

INSERT INTO dbo.SomeTable (CustomerID, AdjAmt, LastUpdate)
OUTPUT INSERTED.CustomerID
     , INSERTED.AdjAmt
     , INSERTED.LastUpdate
INTO @MyTableVariable
VALUES (123, 300.0, GETDATE())

SELECT *
FROM @MyTableVariable
GO
INSERT Statement OUTPUT Clause

MERGE

In a MERGE statement too, both the INSERTED.* and DELETED.* internal tables are available to retrieve the before and after snapshots of the data set. One aspect unique to MERGE is $action, which indicates the action that produced each record: INSERT, UPDATE or DELETE. Keep in mind, the MERGE statement was added in Sql Server 2008.

--
-- Usage example of OUTPUT Clause with MERGE statement
--
MERGE SomeTarget AS TGT
USING (
        -- some source data set
      ) AS SRC
ON TGT.CustomerPK = SRC.CustomerPK

WHEN MATCHED THEN
    UPDATE SET TGT.WaitingAck = 0

WHEN NOT MATCHED THEN
    INSERT (CustomerPK, AdjAmount, LastUpdate)
    VALUES (123, 300.0, GETDATE())

OUTPUT $action
    , INSERTED.CustomerPK
    , INSERTED.AdjAmount
    , INSERTED.LastUpdate
    , DELETED.CustomerPK
    , DELETED.AdjAmount
    , DELETED.LastUpdate
;   -- MERGE must be terminated with a semicolon
GO

Hope this helps,
_Sqltimes


Quick one today:

Every once in a while, there comes a need to install Sql Server Management Studio on a new laptop. There seems to be some confusion about installing Management Studio (SSMS) with or without installing the entire engine.

Yes, you can install SSMS without installing the entire database engine. Two options:

  1. SSMS comes bundled with any Sql Server engine installation, e.g. SQL Server Express Edition or any other edition. During the installation, you can check only those options that are relevant to you.
  2. There is an easier option as well: just download and install SSMS by itself. See below:

Go to this link on the Microsoft website for Microsoft® SQL Server® 2012 Service Pack 2 (SP2) Express. When you hit download, a pop-up window appears with options; in it, choose only SSMS (64-bit or 32-bit). Voilà! Download and install. All set.

 

Download and Install SSMS

 

Hope this helps,

_Sqltimes


Quick one today:

A few months ago, I ran into this interesting error. New FileGroups were being created and destroyed as needed, and that’s when the error showed up.

Follow these steps to recreate the issue:

--
-- First create FG and file
--
ALTER DATABASE SampleDB ADD FILEGROUP [SV_FG]
GO

ALTER DATABASE SampleDB ADD FILE (
    NAME       =  N'StoredValue_FG'
  , FILENAME   =  N'I:\MSSQL\Data\StoredValue_FG.NDF'
  , SIZE       =  100MB
  , FILEGROWTH =  10%
)
TO FILEGROUP [SV_FG]
GO

--
-- Drop them both
--
ALTER DATABASE SampleDB
REMOVE FILE StoredValue_FG
GO

ALTER DATABASE SampleDB
REMOVE FILEGROUP [SV_FG]
GO

--
-- Now, when you recreate the same FG and file, you get this error
--
ALTER DATABASE SampleDB ADD FILEGROUP [SV_FG]
GO

ALTER DATABASE SampleDB ADD FILE (
    NAME       =  N'StoredValue_FG'
  , FILENAME   =  N'I:\MSSQL\Data\StoredValue_FG.NDF'
  , SIZE       =  100MB
  , FILEGROWTH =  10%
)
TO FILEGROUP [SV_FG]
GO

 

Error Message on a Sql Server 2012 Instance 

Msg 1833, Level 16, State 2, Line 1
File 'StoredValue_FG' cannot be reused until after the next BACKUP LOG operation. 
If the database is participating in an availability group, a dropped file can be reused only after the 
truncation LSN of the primary availability replica has passed the drop LSN of the file and 
a subsequent BACKUP LOG operation has completed.
 

Error Message on a Sql Server 2008 R2 Instance

Msg 1833, Level 16, State 1, Line 1
File 'I:\MSSQL\Data\StoredValue_FG.NDF' cannot be reused until after the next BACKUP LOG operation.
Msg 1833, Level 16, State 2, Line 1
File 'StoredValue_FG' cannot be reused until after the next BACKUP LOG operation.

Solution:

As the error message indicates, take a LOG backup first and then re-run the script. The new file will be successfully added to the FileGroup.
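A minimal sketch of the fix; the backup path below is hypothetical, and the database is assumed to be in FULL recovery with a prior full backup:

--
-- Take a LOG backup first (path is hypothetical), then re-run the ADD FILE script
--
BACKUP LOG SampleDB
TO DISK = N'I:\MSSQL\Backup\SampleDB_log.trn'
GO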

More importantly, what is the reason for this error? My guess: as we run DML/DDL statements, that activity is logged in detail in the transaction log, so when a new data file is created, the transaction log makes a note of it. When a data file is created, Sql Server checks whether a data file with the same name exists; if it does, it throws an error. In this case, the data file with the same name no longer exists in the database, but a record of it still exists within the active portion of the transaction log, and that is why we see this error.

Once the active log window moves past that point, we no longer see this error. Notably, in SIMPLE recovery mode, this process of creating, dropping and recreating a data file with the same name does not throw any error, because a CHECKPOINT occurs and the active log moves past the section where the old data file creation was noted. The behavior observed so far points to this pattern, but your comments are welcome.

What do you think? Please share your thoughts in the comments.

 

Hope this helps,

_Sqltimes


Quick one today:

In our production code, there are several places where “SET ROWCOUNT” was implemented. Over the years, the code has evolved with involvement from different developers, who might have reached for it as an immediate fix for the issue at hand. Let’s dig a bit deeper into this feature.

Essentially, SET ROWCOUNT tells Sql Server to stop any further processing once that number of rows is reached. For example, if I set:

SET ROWCOUNT 1800
GO

Any SELECT, UPDATE or DELETE statement following this will stop processing records once it hits the 1800 record count. On the surface this seems like a cool feature, and it is, but it has some limitations going forward.

  • Microsoft recommends that we discontinue using this style, as they plan to remove its effect on DML statements (UPDATE/DELETE/INSERT) in a future release.
    • Microsoft recommends using the TOP clause instead (see the sketch after this list).
  • If both TOP and SET ROWCOUNT are used, SET ROWCOUNT overrides TOP (when the ROWCOUNT value is smaller than the TOP value).
  • This setting comes into play at execution time, not at parse time.
  • If you want to reset it, use the statement below.
SET ROWCOUNT 0
GO
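For comparison, this is what the recommended TOP-based limit looks like on a DML statement; dbo.SomeTable and its Status column are hypothetical:

--
-- Recommended alternative: TOP, scoped to this one statement
--
DELETE TOP (1800)
FROM dbo.SomeTable
WHERE Status = 0
GO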

 

Hope this helps,

_Sqltimes


Question: Paging through records

Quite often, when paging through a large set of records, we display one set of records at a time on the user interface; upon clicking next, the next set of records is displayed. In short, this type of paging could be achieved using a CTE and ROW_NUMBER(): on every iteration, we change the ‘@from’ and ‘@to’ values to bring back the next set of records.

See example below.

Paging using CTE

Old Approach

--
--  Using CTE and ROW_NUMBER()
--
DECLARE @from INT = 25   -- the '@from' and '@to' window mentioned above
      , @to   INT = 34

; WITH SV_RowNumber
AS
(
   SELECT
           StoredValueID
         , LocalID
         , ExpireDate
         , ROW_NUMBER() OVER (ORDER BY StoredValueID ASC) AS RowN
   FROM dbo.StoredValue
)
SELECT *
FROM SV_RowNumber
WHERE RowN BETWEEN @from AND @to
ORDER BY StoredValueID ASC
GO

New Approach

Starting with Sql Server 2012, there is an easier way: the OFFSET and FETCH options of the ORDER BY clause. With these, you can retrieve the next set of ‘n’ records after skipping ‘m’ records. See the example below:

--
-- Using OFFSET & FETCH
--
SELECT StoredValueID
     , LocalID
     , ExpireDate
FROM dbo.StoredValue
ORDER BY StoredValueID ASC
OFFSET 24 ROWS
FETCH NEXT 10 ROWS ONLY
GO
Using OFFSET FETCH
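In a real paging scenario, the literals would be driven by page variables. A sketch of the same query with hypothetical @PageNumber and @RowsPerPage variables:

--
-- Page 3, with 10 rows per page (variable names are assumed)
--
DECLARE @PageNumber  INT = 3
      , @RowsPerPage INT = 10

SELECT StoredValueID
     , LocalID
     , ExpireDate
FROM dbo.StoredValue
ORDER BY StoredValueID ASC
OFFSET (@PageNumber - 1) * @RowsPerPage ROWS
FETCH NEXT @RowsPerPage ROWS ONLY
GO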

Points to keep in mind:

  • In general, both approaches are efficient.
    • In many cases, the query cost with the CTE is the same as with OFFSET/FETCH (with some exceptions).
    • In some cases, OFFSET/FETCH is many times more efficient than the CTE.
OFFSET FETCH Query Cost Comparison

  • Using OFFSET/FETCH makes the code more intuitive.
  • OFFSET/FETCH can only be used with the ORDER BY clause.
  • The OFFSET clause is required for the FETCH clause.
  • TOP cannot be used in the same query as OFFSET. If you do, you’ll see the following error (a minimal repro follows the message).
Msg 10741, Level 15, State 2, Line 5
 A TOP can not be used in the same query or sub-query as a OFFSET.
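A minimal repro of that conflict, against the same table as above:

--
-- This raises Msg 10741: TOP and OFFSET in the same query
--
SELECT TOP (5) StoredValueID
FROM dbo.StoredValue
ORDER BY StoredValueID ASC
OFFSET 24 ROWS
GO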
Hope this helps,
_Sqltimes


Interesting topic today.

When you want to prevent duplicate records in a table, there are a couple of options for creating a UNIQUE constraint:

  1. Create a UNIQUE INDEX with the necessary columns
  2. Create a UNIQUE CONSTRAINT, if it is a single column

In this process, there are a couple of nuances that change this behavior; in particular, the focus is on how the IGNORE_DUP_KEY flag affects it. Let’s look at it in detail, with and without a UNIQUE constraint.

In this post, the focus is on the effects of the IGNORE_DUP_KEY flag on UNIQUE and non-UNIQUE indexes.

With UNIQUE Index

For a sample table, let’s create a UNIQUE index with the IGNORE_DUP_KEY setting and see how it behaves when a duplicate record is inserted.


--
-- Create a sample table
--
CREATE TABLE dbo.Test_IGNORE_DUP_KEY (
    ID    INT          NOT NULL  IDENTITY(1,1) PRIMARY KEY CLUSTERED
  , Name  VARCHAR(10)  NOT NULL
  , SSN   VARCHAR(15)  NOT NULL
)
GO

--
-- Insert one sample record.
--
INSERT INTO dbo.Test_IGNORE_DUP_KEY (Name, SSN)
VALUES('Joe', '000-00-0000')
GO

SELECT * FROM dbo.Test_IGNORE_DUP_KEY
GO

Let’s see the result.

Sample table to test IGNORE_DUP_KEY

IGNORE_DUP_KEY = ON

With IGNORE_DUP_KEY = ON, when a duplicate record is inserted, the duplicate does not get inserted, but no error is generated either. This flag tells Sql Server to silently ignore any duplicate records.

--
-- Create UNIQUE index with "IGNORE_DUP_KEY = ON"
--
CREATE UNIQUE NONCLUSTERED INDEX UQ_Test_IGNORE_DUP_KEY_SSN
   ON dbo.Test_IGNORE_DUP_KEY (SSN ASC)
WITH (IGNORE_DUP_KEY = ON)
GO

Now, when a new duplicate record is inserted, the following “informational message” is displayed and the duplicate record is not inserted into the table. When a mixed set is inserted, all duplicate records are ignored, while the non-duplicate entries are successfully inserted; see the sketch after the message below.

Duplicate key was ignored.

(0 row(s) affected)
IGNORE_DUP_KEY : Informational Message
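A quick sketch of the mixed-set behavior, reusing the sample table above (the second SSN is an assumed new value):

--
-- One duplicate SSN and one new SSN in a single INSERT
--
INSERT INTO dbo.Test_IGNORE_DUP_KEY (Name, SSN)
VALUES ('Jane', '000-00-0000')   -- duplicate: silently ignored
     , ('Mark', '111-11-1111')   -- new: inserted
GO

Only the ‘Mark’ row goes in; the output reads “Duplicate key was ignored.” with (1 row(s) affected).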


With IGNORE_DUP_KEY = OFF

--
-- Create UNIQUE index with "IGNORE_DUP_KEY = OFF"
--
CREATE UNIQUE NONCLUSTERED INDEX UQ_Test_IGNORE_DUP_KEY_SSN
   ON dbo.Test_IGNORE_DUP_KEY (SSN ASC)
WITH (IGNORE_DUP_KEY = OFF)
GO

Upon INSERTing a new duplicate record, we get the following error message:

Msg 2601, Level 14, State 1, Line 1
Cannot insert duplicate key row in object 'dbo.Test_IGNORE_DUP_KEY' with unique 
index 'UQ_Test_IGNORE_DUP_KEY_SSN'. The duplicate key value is (000-00-0000).
The statement has been terminated.

IGNORE_DUP_KEY : Error Message


Without UNIQUE Index

Now let’s check the same setting without a UNIQUE index or constraint on this column and observe the behavior.


IGNORE_DUP_KEY = ON

--
-- Regular Index with "IGNORE_DUP_KEY = ON"
--
CREATE NONCLUSTERED INDEX UQ_Test_IGNORE_DUP_KEY_SSN
 ON dbo.Test_IGNORE_DUP_KEY (SSN ASC)
WITH (IGNORE_DUP_KEY = ON)
GO

Obviously, this does not make sense. So, Sql Server throws the following error.

Msg 1916, Level 16, State 4, Line 1
CREATE INDEX options nonunique and ignore_dup_key are mutually exclusive.

In simple English, this setting is like saying: “Don’t watch for duplicate records, but if you see them, do something (IGNORE).” When we are not watching for duplicate records, we cannot ask Sql Server to take an action on them.

IGNORE_DUP_KEY : Incorrect Setting


IGNORE_DUP_KEY = OFF

--
-- Regular Index with "IGNORE_DUP_KEY = OFF"
--
CREATE NONCLUSTERED INDEX UQ_Test_IGNORE_DUP_KEY_SSN
ON dbo.Test_IGNORE_DUP_KEY (SSN ASC)
WITH (IGNORE_DUP_KEY = OFF)
GO

Surprisingly, this does not throw any error. One way to look at it: “Don’t look for duplicate records; there is nothing set to ON to watch for.” Since IGNORE_DUP_KEY = OFF, there is no extra condition overlaid on top of the regular index. So, when a duplicate record is inserted, it is successfully inserted into the table. A verification sketch follows the image below.

IGNORE_DUP_KEY : No Effect
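A quick verification sketch for this last case, reusing the sample table:

--
-- Both rows with the same SSN are now present
--
SELECT Name, SSN
FROM dbo.Test_IGNORE_DUP_KEY
WHERE SSN = '000-00-0000'
GO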

Summary

In summary, this is how these two settings work together.

When the index is UNIQUE + IGNORE_DUP_KEY=ON

  • Duplicate entries are ignored with an informational message (no error raised).
  • Non-duplicate records are entered successfully.
  • When a mixed set is INSERTed, only the informational message is displayed.
    • Good records are inserted and duplicates are ignored.

When the index is UNIQUE + IGNORE_DUP_KEY=OFF

  • A clear error message is displayed when duplicate entries are inserted; the statement is terminated.
  • Non-duplicate records are entered successfully.
  • When a mixed set is INSERTed, an error message is displayed and the entire statement is terminated (none of its rows are inserted).

When the index is Non-UNIQUE + IGNORE_DUP_KEY=ON

  • Setting is not allowed.

When the index is Non-UNIQUE + IGNORE_DUP_KEY=OFF

  • INSERT is successful with duplicate entries.

Flash Card Summary

                     | IGNORE_DUP_KEY = ON                                    | IGNORE_DUP_KEY = OFF
  UNIQUE Index       | Duplicates are ignored with an informational message.  | Statement is terminated with an error message.
  Non-UNIQUE Index   | Setting not allowed.                                   | Duplicate entries are inserted successfully.
Hope this helps,

_Sqltimes


Quick one today:

Sometimes, during normal DBA operations, there is a need to check for any active transaction that could be preventing the transaction log from rolling forward through its virtual log files. There is a handy command that pulls up exactly that transaction.

--
-- Check oldest active transaction
--
DBCC OPENTRAN
GO
Oldest Open Transaction

 

As you can see, the result has the session ID (SPID) and other details related to the oldest active transaction. These are helpful in taking the next steps to resolve the issue (see the sketch after the next example). If you prefer, the result could be returned in table format; just use TABLERESULTS as shown below.

DBCC OPENTRAN WITH TABLERESULTS
GO
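Once you have the SPID from the output, here is a sketch of possible next steps; the 52 below is a hypothetical session ID, and KILL should be run only after confirming the session is safe to terminate:

--
-- Inspect the last statement the session ran (52 is a hypothetical SPID)
--
DBCC INPUTBUFFER(52)
GO

--
-- Terminate the session, only after confirming it is safe to do so
--
KILL 52
GO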

 

Hope this helps,
_Sqltimes


Quick one today:

There are times when there is a need to quickly prevent/disable any client connections to Sql Server. The occasions that warrant such a drastic measure are rare, but they do occur once in a while.

During a scheduled and planned deployment, we go through the steps of disabling the web and app servers to stop any connections to the database servers. Once the traffic is dead, we start with our database deployment steps. This is the standard approach, and it works.

Today, we’ll talk about a couple of more drastic approaches. These are useful only on the rare occasions where the step above is not possible.

Options

  1. Disable TCP/IP
  2. Disable “Remote Connections”

Details

Option 1: Disable TCP/IP

Generally, all connections to Sql Server from outside the server machine itself come through TCP/IP. Other network protocols are still used, but not with the same frequency. As a side note, only enable those protocols that you actually need, and disable all others. In our production environment, we only enable “Shared Memory” and “TCP/IP”.

  • TCP/IP is used by all client connections that reach Sql Server over the network; for example, SSMS on my laptop connecting to Sql Server on a lab server uses the TCP/IP protocol.
  • Shared Memory is used when I’m logged into the lab server (RDP) and open SSMS to connect to the Sql Server instance on the same machine, or when SSMS on my laptop connects to a Sql Server running on the same laptop.

So, we only enable Shared Memory and TCP/IP.

Coming back to the point: to disable TCP/IP, open Sql Server Configuration Manager, go to “Sql Server Network Configuration”, then click on the relevant instance. See the image below.

Disable TCP IP

Option 2: Disable Remote Connections

Go to the instance-level “Server Properties” by right-clicking on the server in SSMS and choosing Properties. Go to “Connections” and un-check the option to “Allow remote connections to this server”. See the image below; a T-SQL sketch of the same setting follows it.

Disable Remote Connections
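For reference, the same checkbox can also be flipped from T-SQL; it maps to the ‘remote access’ server configuration option (note that, per the documentation, the change takes effect only after an instance restart). A hedged sketch:

--
-- Toggle the 'remote access' option (0 = disabled); takes effect after a restart
--
EXEC sp_configure 'remote access', 0
RECONFIGURE
GO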

 

Please note that this is not a standard approach, but just a technique for outlier situations.

 

 

Hope this helps,

_Sqltimes


Quick one today:

Checking for the existence of a temporary table is not the same as checking for a regular table in a user database; the check must be made against tempdb. See below:

--
-- Sample code to test the existence of temporary table
--
CREATE TABLE #Customer(
       ID            BIGINT    NOT NULL   PRIMARY KEY CLUSTERED
     , Exists_Flag   BIT       NOT NULL   DEFAULT(0)
)
GO

IF OBJECT_ID('tempdb..#Customer') IS NOT NULL
     PRINT '#Customer Table Exists'
GO
Existence of Temporary Table
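For contrast, the equivalent check for a regular table runs against the current database; dbo.Customer below is a hypothetical permanent table:

--
-- Regular tables are checked in the current database, not tempdb
--
IF OBJECT_ID('dbo.Customer', 'U') IS NOT NULL
     PRINT 'dbo.Customer Table Exists'
GO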

Hope this helps,
_Sqltimes
