Archive for November, 2015

Quick one today:

Earlier today, this interesting error occurred in our environment. When we tried to use DAC (Dedicated Administrator Connection), we ran into this error:

Dedicated administrator connections are not supported. (ObjectExplorer)

For a few moments it seemed unsettling. If a DAC connection cannot connect to Sql Server, then we are in big trouble. Upon some contemplation (a.k.a. banging head on desk), it became apparent that there is a huge silver lining to this: the error is by design.

When you open SSMS, the pop-up window that comes up for connection is for Object Explorer, not a query window. DAC connections are not allowed for Object Explorer connections; hence the error. DAC is supposed to be used only during rare events, so it is necessary to make sure its connections are not used for anything other than the absolute intended purpose. Hence, DAC rejects inadvertent connection requests from Object Explorer with a safe error.

Use a Database Engine Query window (File > New > Database Engine Query) to open a new DAC connection, prefixing the server name with admin: in the connection dialog (for example, admin:MyServer). From the command line, sqlcmd with the -A switch opens a DAC connection as well.

Hope this helps,

Read Full Post »

Quick one today:

Interesting topic today: Does recompiling a procedure stop current executions of that procedure?

Of late, in our lab, we’ve needed to run several performance tests (volume testing) to measure performance metrics under different load and traffic patterns. In one of those iterations, we ran into a peculiar situation where we needed to recompile stored procedures every few hours. For some reason, the cached plan was becoming less efficient, even though it was not flagged as expired for automatic recompilation. One thing that became clear was that, at any given time, several instances of the same procedure are being executed by different application processes. So, when a procedure is recompiled, it does not impact any executions that are already in progress. It only impacts new executions.

When a procedure is recompiled:

  • Sql Server gathers all the metadata needed to generate an optimal procedure plan based on the state of the database at that point in time.
  • Once recompilation is complete, all new executions from then on use the new plan.
  • Executions already running (those that started before recompilation) still use the old plan.

Let’s look at an example:


--Create a test procedure that runs for about 30 seconds
CREATE PROCEDURE dbo.testt
AS
BEGIN
    RAISERROR ('1', 10, 1) WITH NOWAIT
    WAITFOR DELAY '00:00:30'
    PRINT '2'
END
GO

EXEC dbo.testt

While this procedure is running in the above connection, open a different session and run the following commands, which force recompilation and then a new execution.

EXEC sp_recompile 'dbo.testt' 

EXEC dbo.testt 

Sp_recompile returns:

Object ‘dbo.testt’ was successfully marked for recompilation.

The execution already in progress completes on the old plan, while the second EXEC picks up a freshly compiled plan.

Hope this helps,

Read Full Post »

Quick one today:

Sometimes, in our lab and performance testing environments, we end up in situations where we need to remove and re-add indexes while running tests on different scenarios. In these situations, rather than DROPping and re-CREATEing indexes, there is a simpler way to manage things. The underlying activity under the hood is the same in either case, but the big difference is the amount of time a developer/DBA spends performing these actions.

DISABLE & REBUILD give a simpler and cleaner way to carry out the same operation. Let’s look at an example:

--Disable Index
ALTER INDEX IX_Product_ProductName ON dbo.Product DISABLE

--Enable Index
ALTER INDEX IX_Product_ProductName ON dbo.Product REBUILD

REBUILD is the keyword that results in ENABLing the index. Under the hood, it just rebuilds the index using the same definition. Since we did not drop the index, its definition is still present in Sql Server, so it can simply be rebuilt. Using the rebuild options available in the WITH clause, we can configure the rebuild operation to fit our needs. See the example below.

--Enable Index using WITH options
ALTER INDEX IX_Product_ProductName ON dbo.Product REBUILD
WITH (FILLFACTOR = 80, SORT_IN_TEMPDB = ON)

Some points to keep in mind:

  • In Sql Server databases, the clustered index is a special index: it is where the entire table data is stored. So, if we disable the clustered index, the entire table becomes inaccessible. The data is not lost, but it cannot be accessed by users. Just REBUILD the index to regain access to the data.
  • Disabling the clustered index of an indexed view deletes the view’s index data. So take extra caution with indexed views.

In some cases, when you want to deal with all the indexes on a table in one shot, use the below example.

-- Disable all indexes on the table
-- (note: this includes the clustered index, making the table inaccessible)
ALTER INDEX ALL ON dbo.Product DISABLE

--Enable all indexes on the table
ALTER INDEX ALL ON dbo.Product REBUILD

Hope this helps,

Read Full Post »

Recently we ran into some deadlocks in our performance test environment, which resulted in some good analysis and resolution techniques. In this post, those techniques are presented in an overly simplified way, to be used only for reference. Thorough analysis and learning need to happen on each of these techniques before they are implemented.

Different types of deadlocks:

  • Bookmark lookup deadlock (a.k.a. key lookup or RID lookup deadlock)
  • Serializable range scan deadlock
  • Cascading constraint deadlock
  • Intra-query parallelism deadlock
  • Reverse object order deadlock

We’ll look at each and possible resolution ideas:

Bookmark Deadlock or Key Lookup Deadlock or RID Lookup Deadlock

These generally occur between a SELECT statement and an UPDATE/INSERT/DELETE statement. Usually the SELECT statement is chosen as the victim to be rolled back.

  • These can often be resolved by turning the nonclustered index into a covering index (e.g., with INCLUDE columns), so the key/RID/bookmark lookup is eliminated.
  • Resist the urge to use NOLOCK hints in the SELECT statement. On the surface it seems like a good approach, but it’s a bad practice.
  • Resist the urge to keep adding more and more indexes. Maintain a safe balance of indexes for optimal performance.
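As a sketch of the first point: assuming a hypothetical dbo.Product table where a SELECT filters on ProductName but also returns Price and Stock, a covering index might look like this (table and column names are made up for illustration):

--Covering index: Price and Stock are INCLUDEd, so the SELECT below
--no longer needs a key/RID lookup into the base table
CREATE NONCLUSTERED INDEX IX_Product_ProductName_Covering
ON dbo.Product (ProductName)
INCLUDE (Price, Stock)

--This query is now fully served by the covering index
SELECT ProductName, Price, Stock
FROM dbo.Product
WHERE ProductName = 'Widget'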

Serializable Range Scan Deadlock

The SERIALIZABLE isolation level is the most restrictive isolation level. It results in range locks. When locks are converted from a more compatible mode (shared-shared, shared-update, etc.) to a less compatible mode (exclusive) by two processes that are waiting on each other, this type of deadlock occurs. There are no easy techniques for this, so try not to use the SERIALIZABLE isolation level unless it is absolutely necessary.

See if:

  • The SELECT operation could be performed without such range locks.
  • The transaction really requires the SERIALIZABLE isolation level; READ COMMITTED (with SNAPSHOT) may be sufficient.
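If versioning-based reads are an option, READ COMMITTED SNAPSHOT can be enabled at the database level. A sketch (the database name is a placeholder; the change needs exclusive access to the database, hence the ROLLBACK IMMEDIATE option):

--Readers get a consistent row-versioned snapshot instead of taking
--shared locks, which removes many lock-wait deadlock scenarios
ALTER DATABASE TestDB
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE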

Cascading Constraint Deadlock

In a parent-child table scenario, if a record in the parent table is being deleted or updated, then locks on the related child table records become necessary. This prevents orphan records in the child table. If two different processes individually perform operations on related records in these two tables, they can end up deadlocking with each other.

To prevent:

  • Always deal with records in the child table before manipulating the parent records.
  • Resist the urge to use the ON DELETE CASCADE option on parent tables.
  • Make sure the necessary nonclustered indexes are created on the foreign key columns in the child table.
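For the last point, assuming hypothetical dbo.Orders (parent) and dbo.OrderDetails (child) tables, the foreign key index would look like:

--Without this index, deleting/updating a parent row forces a scan of the
--child table to find related rows, holding broader locks than necessary
CREATE NONCLUSTERED INDEX IX_OrderDetails_OrderID
ON dbo.OrderDetails (OrderID)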

Intra-Query Parallelism Deadlock

This is the most interesting deadlock of all. When a large workload/query is executed, Sql Server sometimes carries out the work using multiple threads. These threads can end up blocking each other, resulting in deadlocks. There is not much you can do in these scenarios. In the resource list, multiple exchangeEvents will be captured, each having its own ecid and kpid, but the same SPID.

See if you could:

  • Carry out the operation using a lower degree of parallelism.
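A lower degree of parallelism can be requested per statement with a query hint (the query itself is a placeholder):

--Run this statement on a single thread; intra-query parallelism
--deadlocks cannot occur without parallel exchange operators
SELECT ProductID, COUNT(*) AS Cnt
FROM dbo.BigTable
GROUP BY ProductID
OPTION (MAXDOP 1)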

Reverse Object Order Deadlock

As the name suggests, this occurs when two different processes hold locks on different resources while each requests a lock on the resource the other holds, i.e., in opposite order. This is not good; a code change is necessary to fix this issue.

See if you could:

  • Access resources (tables, indexes, etc.) in the same order in each process. This sounds outlandish and hard to achieve, but in reality business processes always happen in a particular order (with some exceptions), so maintain that same order in data access as well.

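A minimal sketch of consistent ordering, with placeholder table and column names:

--Every process that touches both tables does so in the same order
--(TableA first, then TableB), so no reverse-order wait can form
BEGIN TRANSACTION
    UPDATE dbo.TableA SET Col1 = 1 WHERE Id = 5
    UPDATE dbo.TableB SET Col2 = 1 WHERE Id = 5
COMMIT TRANSACTION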
Hope this helps,

Read Full Post »