Quick one today:

Common Table Expressions (CTEs), introduced in Sql Server 2005, are another great feature. Their value is immense and cannot be overstated.

  • They greatly reduce the need for temporary tables and table variables in scripts.
  • They make code more intuitive and readable.
  • All the tables referenced across multiple CTEs are touched only when the final query at the bottom of the CTE is executed – resulting in better utilization of resources.
  • There are many more uses, but I’ll stop here.

Today, I wanted to touch on one aspect of CTEs: ordering the data inside a CTE. Due to the nature of CTEs, ordering data inside a CTE does not make sense.

Example:

--
-- This is not allowed
--
WITH Sample_CTE (ID)
AS (
     SELECT Col14 AS ID
     FROM SomeTable AS C
     INNER JOIN OtherTable AS S
          ON C.LocationID = S.LocationID
     ORDER BY C.PKID ASC
)
SELECT *
FROM Sample_CTE
GO

Error Message:

Msg 1033, Level 15, State 1, Line 6
The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, 
and common table expressions, unless TOP or FOR XML is also specified.

A CTE is just a logical representation of a group of qualifying records. The order of records is not guaranteed, as it depends on several factors of the table storage (indexes, heap, allocation order). But if you have a TOP clause in the SELECT statement inside the CTE, then ORDER BY is allowed.

--
-- This is allowed
--
WITH Sample_CTE (ID)
AS    (
        SELECT TOP 1 Col14 AS ID
        FROM SomeTable AS C
        INNER JOIN OtherTable AS S
            ON C.LocationID = S.LocationID
        ORDER BY C.PKID ASC
    )
SELECT *
FROM Sample_CTE
GO
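
If the final result needs to be in a particular order, the reliable place for ORDER BY is the outer query that selects from the CTE. Here is a minimal sketch reusing the sample tables from above (PKID is exposed through the CTE just so the outer query can sort on it):

--
-- Order the final result set in the outer query, not inside the CTE
--
WITH Sample_CTE (ID, PKID)
AS (
     SELECT Col14 AS ID
          , C.PKID
     FROM SomeTable AS C
     INNER JOIN OtherTable AS S
          ON C.LocationID = S.LocationID
)
SELECT *
FROM Sample_CTE
ORDER BY PKID ASC
GO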

 

Hope this helps,

_Sqltimes

Quick one today:

On a table, we can create Clustered and NonClustered indexes. Each table can have only one Clustered index, but multiple NonClustered indexes.

When a primary key is created on a column, it becomes the Clustered index by default. But sometimes we run into a situation where we need to separate the PRIMARY KEY and the Clustered index. It sounds counterintuitive, but this distinction is important and in some cases necessary to achieve better performance.

As an example, we use the Product table.

To accomplish this, we need the following steps.

Step 1: Drop the Primary Key constraint. This also drops the Clustered index on that column. Then recreate the Primary Key constraint with a NonClustered index.

Step 2: Create a Clustered index on a different column.

Step 1: Drop the PK constraint and recreate it with a NonClustered index

--
-- First DROP the Primary Key constraint and then recreate it with NonClustered Index
--
ALTER TABLE dbo.Product
DROP CONSTRAINT PK_Product_ID
GO

--
-- Notice the NONCLUSTERED clause
--
ALTER TABLE dbo.Product
ADD CONSTRAINT PK_Product_ID PRIMARY KEY NONCLUSTERED (Id ASC)
ON [nCI_01]
GO

Step 2: Recreate CLUSTERED INDEX without Primary Key

--
-- Create Clustered index on a different column
--
CREATE CLUSTERED INDEX CI_Product_Kiosk_ID
     ON dbo.Product (Kiosk_ID ASC)
WITH (
     SORT_IN_TEMPDB = ON
   , MAXDOP         = 2
   , FILLFACTOR     = 80
) ON [CI_01]
GO
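
To confirm the change, a quick look at sys.indexes should show the primary key as NONCLUSTERED and the new CLUSTERED index on the other column. A small sketch, using the same table and index names as above:

--
-- Verify the index layout on dbo.Product
--
SELECT i.name
     , i.type_desc          -- CLUSTERED / NONCLUSTERED / HEAP
     , i.is_primary_key
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID(N'dbo.Product')
GO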
Hope this helps,

_Sqltimes

Quick one today:

Rare, but sometimes, there is a need to INSERT specific values into a table that has an IDENTITY specified on one of the columns. Luckily, Sql Server provides mechanisms to insert data in such situations.

When we try to insert values into an IDENTITY column, we get errors like this.

Msg 544, Level 16, State 1, Line 1
Cannot insert explicit value for identity column in table 'Product' when IDENTITY_INSERT is set to OFF.

To insert explicit values, first turn on the IDENTITY_INSERT flag for the table and include an explicit column list in the INSERT statement.

--
-- Using IDENTITY_INSERT ON
--
SET IDENTITY_INSERT dbo.Products ON
GO

INSERT INTO dbo.Products (ProductID, ProductTypeID, Description, LastUpdate)
VALUES (1232, 345, 'tooth Paste', GETDATE())
GO

Now, after inserting the records, reset the IDENTITY_INSERT flag using the SET statement.

--
-- Reset the IDENTITY_INSERT flag
--
SET IDENTITY_INSERT dbo.Products OFF
GO
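
As a related check (not required, but often useful), DBCC CHECKIDENT with the NORESEED option reports the table’s current identity value after the explicit inserts without changing anything:

--
-- Report the current identity value (NORESEED makes no changes)
--
DBCC CHECKIDENT ('dbo.Products', NORESEED)
GO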
Hope this helps,

_Sqltimes

Quick one today:

Earlier last month, while running server side traces on a lab Sql Server machine, I ran into this interesting error.

Error Details:

Windows error occurred while running SP_TRACE_CREATE. Error = 0x80070005(Access is denied.).
Msg 19062, Level 16, State 1, Procedure sp_trace_create, Line 1
Could not create a trace file.
[Screenshot: Trace file folder permissions]
This is easy to fix. Go to the folder where you are attempting to store the trace files, open the folder properties, and go to the Security tab. Make sure the Sql Server service account is added to the permissions list with the necessary read and write permissions.
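
If you are not sure which account the Sql Server service runs under, the sys.dm_server_services DMV (available from Sql Server 2008 R2 SP1 onwards) reports it:

--
-- Find the service account that needs permissions on the trace folder
--
SELECT servicename
     , service_account
FROM sys.dm_server_services
GO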
Hope this helps,

_Sqltimes

Quick one today:

Sql Server has some interesting functions; today we’ll look at one of them: FORMAT.

Let’s start with a simple example:

--
-- FORMAT function with integer
--
SELECT FORMAT(12312312,'#,###,###,###,###')
GO

Result:

[Screenshot: Format_Function_Custom output]

--
--  FORMAT function with date
--
DECLARE @DT DATETIME = '2014-10-24'
SELECT FORMAT(@DT, 'yyyy/MM/dd', 'en-US') AS [US_Date_with_slash]
     , FORMAT(@DT, 'd', 'en-US')          AS [US_Date_with_hyphen]
     , FORMAT(@DT, 'D', 'en-gb')          AS [GB_Date_with_slash]
     , FORMAT(@DT, 'd', 'en-gb')          AS [GB_Date_with_hyphen]
GO
[Screenshot: Format_Function_Date output]
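
FORMAT also understands the standard .NET numeric format strings, which is handy for currency and percentages. A small sketch (the values and the expected output in the comments are just illustrations):

--
-- FORMAT function with standard numeric format strings
--
SELECT FORMAT(1234.567, 'N2', 'en-US') AS [Number_2_Decimals]   -- 1,234.57
     , FORMAT(1234.567, 'C', 'en-US')  AS [US_Currency]         -- $1,234.57
     , FORMAT(0.275, 'P1', 'en-US')    AS [Percentage]          -- 27.5%
GO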

Read MSDN for more information.

Hope this helps,
_Sqltimes

In troubleshooting database issues, sometimes you need to capture server side traces. Profiler can help capture specific events as needed; another option is to use the server side stored procedures to capture a trace into a file. There are several points to keep in mind when you run server side traces, but this post is limited to stopping and starting server side traces.

These are the steps I follow to create server side traces (a full sketch follows the list):

  1. Create the trace definition using sp_trace_create
  2. Add the individual events to capture using sp_trace_setevent
  3. Add filters to limit the amount of transactions you capture using sp_trace_setfilter
  4. Start the trace using sp_trace_setstatus
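
For reference, here is a minimal sketch of those four steps put together. The file path, the event/column IDs (SQL:BatchCompleted with TextData, SPID and Duration) and the 10-second duration filter are only examples, not a recommendation; the inline variable initialization needs Sql Server 2008 or later.

--
-- Sketch: create a trace, add an event and a filter, then start it
--
DECLARE @TraceID     INT
      , @MaxFileSize BIGINT = 100          -- maximum size (MB) of the trace file
      , @On          BIT    = 1
      , @MinDuration BIGINT = 10000000     -- 10 seconds (Duration is in microseconds)

-- Step 1: create the trace definition (the .trc extension is appended automatically)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\Traces\Sample_Trace', @MaxFileSize

-- Step 2: capture SQL:BatchCompleted (event 12) with TextData (1), SPID (12) and Duration (13)
EXEC sp_trace_setevent @TraceID, 12, 1,  @On
EXEC sp_trace_setevent @TraceID, 12, 12, @On
EXEC sp_trace_setevent @TraceID, 12, 13, @On

-- Step 3: keep only batches running at least @MinDuration (logical AND = 0, ">=" = 4)
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @MinDuration

-- Step 4: start the trace
EXEC sp_trace_setstatus @TraceID, 1

SELECT @TraceID AS TraceID    -- keep this ID; it is needed to stop/delete the trace later
GO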

Now, the following are the steps to start, stop, and delete the trace definition from Sql Server.

First, get a list of all traces on the server; the trace ID is used as a parameter for setting the status.

--
-- Get a list of all traces running
--
SELECT *
FROM sys.traces
GO

Now, using the trace ID, set the status to start, stop, or delete the trace.

--
-- Start trace
--
EXEC sp_trace_setstatus 2, 1
GO

--
-- Stop trace
--
EXEC sp_trace_setstatus 2, 0
GO

--
-- Delete trace definition
--
EXEC sp_trace_setstatus 2, 2
GO

Note: Microsoft is planning to discontinue this feature (SQL Trace) in favor of Extended Events, which they encourage you to use instead.
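
For reference, a rough Extended Events equivalent of a batch-completed trace might look like the sketch below (Sql Server 2012 or later for the event_file target; the session name, file path and duration filter are placeholders):

--
-- Sketch: an Extended Events session roughly equivalent to a server side trace
--
CREATE EVENT SESSION [Sample_Trace_XE] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
     ACTION (sqlserver.session_id, sqlserver.sql_text)
     WHERE duration >= 10000000            -- 10 seconds, in microseconds
)
ADD TARGET package0.event_file (SET filename = N'D:\Traces\Sample_Trace_XE.xel')
GO

-- Start the session
ALTER EVENT SESSION [Sample_Trace_XE] ON SERVER STATE = START
GO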

 

Hope this helps,
_Sqltimes

Recently, at one of our customer sites, we noticed this interesting anomaly.

In this database, there are several tables, and each table has several statistics. Statistics can be created in several ways:

  • Index based statistics
  • Column Statistics
    • User created
    • Auto created Statistics (that are created by Sql Server)

Problem

When they perform nightly maintenance, somehow the ‘auto created’ statistics were not getting updated. The ‘is_auto_update_stats_on’ & ‘is_auto_update_stats_async_on’ options are set to 1, but still some statistics are not getting updated.
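
Those two settings can be confirmed quickly from sys.databases (the database name below is a placeholder):

--
-- Confirm the auto update statistics settings for a database
--
SELECT name
     , is_auto_update_stats_on
     , is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'YourDatabase'
GO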

Usually, there are many ways to update statistics:

  • Use a maintenance plan to perform ‘Update Statistics’ (most popular)
  • Use “UPDATE STATISTICS” command to update selective tables or selective statistics
  • sp_updatestats : Updates all statistics in the entire database
  • etc

In a database environment, given the size, the traffic, and the possible maintenance window duration, the option we choose varies. If you have a sufficient maintenance window, you could use a SQL Job to update statistics on all tables in the database using a maintenance plan or ‘sp_updatestats’. But in some cases, we may not have enough time to update all statistics, so we end up selecting a list of statistics and running “UPDATE STATISTICS <stats_name>” on each of them, as sketched below.
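
A sketch of that selective approach (the table and statistic names are placeholders):

--
-- Update only selected statistics on a table
--
UPDATE STATISTICS dbo.Product IX_Product_Kiosk_ID WITH FULLSCAN
GO

UPDATE STATISTICS dbo.Product ST_Product_LastUpdate WITH FULLSCAN
GO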

Solution

Guess what happened in this case? Yes, the other (auto created) statistics were not updated. And this is what happened at our customer site: they could not afford a complete update of statistics each night and ended up choosing a few stats each night. This inadvertently resulted in the ‘auto created’ statistics not getting updated.

Index stats will be updated when you rebuild indexes.

The lesson is not to depend on the database settings alone (‘is_auto_update_stats_on’ & ‘is_auto_update_stats_async_on’). Have your own maintenance plans to update statistics on all tables. A quick way to see which statistics are going stale is sketched below.
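
A minimal sketch using sys.stats and STATS_DATE() to see when each statistic on a table (including the auto created ‘_WA_Sys…’ ones) was last updated; the table name is a placeholder:

--
-- When was each statistic on this table last updated?
--
SELECT s.name
     , s.auto_created
     , s.user_created
     , STATS_DATE(s.object_id, s.stats_id) AS Last_Updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'dbo.Product')
ORDER BY Last_Updated ASC
GO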

Options:

  1. Use UPDATE STATISTICS
  2. Use sp_updatestats
  3. Use maintenance plans

Option 1

Rather than choosing each statistic name in the “UPDATE STATISTICS” command, just give the table name. That updates all statistics on that particular table. Below is an image from our lab server, taken while I was running UPDATE STATISTICS <table name>.

The image is a snapshot taken while the statistics were being updated, so some are already updated and some are still in the process of being updated. The interesting point is that the ‘auto created’ statistics also get updated when the command is run with just the table name.


--
-- Update stats for the entire table
--
UPDATE STATISTICS dbo.CPE_RA_Archive WITH FULLSCAN
GO

Result

[Screenshot: UPDATE STATISTICS output]

Option 2

--
--  Update statistics for entire database
--
EXEC sp_updatestats 'resample'

Result

[Screenshot: sp_updatestats output]

Option 3

Use a maintenance plan to perform “Update Stats” on the entire database. This in turn calls UPDATE STATISTICS on each table, updating both the user created and auto created stats.

Hope this helps,
_Sqltimes
