
Archive for November, 2016

Quick one today:

Earlier, this interesting error message appeared in our environment during some data retrieval. It turns out we cannot query DMVs on a remote instance using the standard linked server four-part naming of Server.Database.Schema.Object. Below is the query that reads DMVs on the remote machine:

Error:

--
-- Query remote server DMV's
--
SELECT TOP 10 *
FROM [RemoteServer].[master].sys.dm_exec_query_stats AS qs
CROSS APPLY [RemoteServer].[master].sys.dm_exec_sql_text (qs.sql_handle) AS qt
GO

When we run it, Sql Server throws this error:

Msg 4122, Level 16, State 1, Line 11
Remote table-valued function calls are not allowed.

 

Solution:

Now, we need a different approach. Enter OpenQuery. Let’s try the same query logic with OpenQuery.

--
-- Query remote DMV's with OPENQUERY
--
SELECT *
FROM OPENQUERY(RemoteServer, 'SELECT TOP 10 *
                              FROM [master].sys.dm_exec_query_stats AS qs
                              CROSS APPLY [master].sys.dm_exec_sql_text (qs.sql_handle) AS qt
                             ')
GO
Use OpenQuery

 

DATA ACCESS for Distributed Query:

Please note that for OPENQUERY to run successfully, you need DATA ACCESS enabled on the RemoteServer linked server. If not, you’ll run into an error like this:

Msg 7411, Level 16, State 1, Line 36
Server 'RemoteServer' is not configured for DATA ACCESS.

Use sp_serveroption to enable the DATA ACCESS option for RemoteServer. DATA ACCESS enables a linked server for distributed query access to the remote server. See the sample below:

--
-- Enable DATA ACCESS for RemoteServer
--
EXEC sp_serveroption 'RemoteServer', 'DATA ACCESS', 'TRUE'
GO
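
To confirm the setting before (or after) running sp_serveroption, the sys.servers catalog view exposes an is_data_access_enabled flag; a quick check, assuming the linked server is named RemoteServer as above:

--
-- Check whether DATA ACCESS is enabled for the linked server
--
SELECT  name
      , is_data_access_enabled
FROM sys.servers
WHERE name = 'RemoteServer'
GO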

Once DATA ACCESS is enabled, the OPENQUERY call runs correctly.

Hope this helps,
_Sqltimes


On our lab machines, quick clean-up activities sometimes become necessary; they occur frequently before and after large batch testing scripts. Such situations include activities like:

  1. Reducing size of either log or data file
  2. Emptying transactional log file
  3. Deleting transactional log file

Note: Please be advised that such operations are not recommended on a production database. They can result in unpredictable and sometimes reduced performance.

In recent posts, we’ve covered the use of SHRINKFILE in different scenarios.

Important Points to keep in mind:

  • A SHRINK operation can be stopped at any time without losing the work completed thus far. It retains the progress made (re-allocations).
  • Shrinking a data or log file does not require single-user mode on the database. Other user activity can run in parallel without interfering with the SHRINK work.
  • The SHRINK process can be delayed by blocking from other user activity, so if possible, perform the SHRINK operation when there is less traffic.
  • A SHRINK operation is single-threaded and methodically works through each data block, so it is time-consuming.
  • SHRINK one file at a time (rather than in parallel).
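
Before starting a shrink, it also helps to check how much of each file is actually in use, so the target size passed to SHRINKFILE is realistic. A minimal sketch using sys.database_files (SampleDB and the file layout are assumed for illustration):

--
-- Check current size and used space per file (size and FILEPROPERTY are in 8 KB pages)
--
USE [SampleDB]
GO
SELECT  name
      , size/128.0                                         AS [CurrentSize_MB]
      , CAST(FILEPROPERTY(name, 'SPACEUSED') AS INT)/128.0 AS [SpaceUsed_MB]
FROM sys.database_files
GO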

 

Following are the steps we follow:

Reducing Size of Log or Data File

In lab environment, to reduce the size of a bloated log or data file, we implement a version of the following steps:

Step 1:

  • Before freeing up any space back to the operating system, we need to adjust the way space is occupied by the database pages.
  • Sql Server will reallocate all used pages from the end of the physical file to earlier portions.
  • This allows the end of the physical file to be freed up.
--
-- SHRINK the data file down to 1 GB (reallocation)
--
USE [SampleDB]
GO
DBCC SHRINKFILE (N'Sample_Data2' , 1024) -- Reduce it to 1 GB
GO

Step 2:

  • Once reallocation or adjustment is complete, we can issue the TRUNCATEONLY option to free up that space back to the operating system.
  • This is when we see the physical file reduce in size.
--
-- Release space back to OS
--
USE [SampleDB]
GO
DBCC SHRINKFILE (N'Sample_Data2', TRUNCATEONLY)
GO

Emptying Transactional Log File

In lab environment, to empty entire transactional log file, we implement a version of the following steps:

--
-- Empty the secondary transactional log file
--
DBCC SHRINKFILE (SampleDB_log2, EMPTYFILE)
GO

Deleting Transactional Log File

In lab environment, to delete a transactional log file, we implement a version of the following steps:

--
-- To remove secondary log file, first we need to empty it. Then remove it
--
DBCC SHRINKFILE (SampleDB_log2, EMPTYFILE)
GO

ALTER DATABASE SampleDB
REMOVE FILE SampleDB_log2
GO

For more details, please refer to BOL (Books Online).

Hope this helps,
_Sqltimes


Interesting one today:

In replication, there are several amazing features & configurations that make it robust, dependable & highly performant. These settings need to be leveraged correctly to squeeze out the best performance for each environment. Today, we’ll cover a popular setting called NOT FOR REPLICATION on IDENTITY columns.

Concept:

In short, when NOT FOR REPLICATION is enabled on an IDENTITY column (or other constraints), the IDENTITY value is not incremented when INSERTs occur due to replication traffic; all other direct application traffic still increments the IDENTITY value.

Imagine a Sql Server Publisher, let’s say P, that is publishing data to a Sql Server Subscriber, let’s say S. Both P & S have a table called SampleTable with an IDENTITY column called ID. To make the difference easy to see, let’s make the IDENTITY definition different at each location (P & S).

  • At Publisher, the IDENTITY value is defined as (1,10).
    • So, its values will be 1, 11, 21, 31, 41, etc.
  • At Subscriber, it is defined as (2, 10).
    • So, its values will be 2, 12, 22, 32, 42, etc.

The Set Up

With the above points, let’s create the table and set up replication between P & S. Following is some of the code used to create the table at the Publisher (P).

At Publisher

--
-- CREATE TABLE with IDENTITY set for NOT FOR REPLICATION
--
CREATE TABLE dbo.SampleTable(
     ID     INT          NOT NULL  IDENTITY(1,10)  NOT FOR REPLICATION   PRIMARY KEY   CLUSTERED
   , Name   VARCHAR(20)  NULL      DEFAULT('A')
)
GO

At Subscriber:

Similarly, on Subscriber, create a similar table with different IDENTITY definition.

--
-- CREATE TABLE with IDENTITY set for NOT FOR REPLICATION
--
CREATE TABLE dbo.SampleTable(
     ID     INT          NOT NULL  IDENTITY(2,10)  NOT FOR REPLICATION   PRIMARY KEY     CLUSTERED
   , Name   VARCHAR(20)  NULL      DEFAULT('B')
)
GO

So, there is no overlap between IDENTITY values generated at P & S.

Now let’s watch their behavior as data is INSERTed into both servers.

  1. When data is INSERTed directly into each location (P & S)
  2. When data is indirectly INSERTED into S due to replication traffic from P

Below is some more code used in these experiments to check IDENTITY values, INSERT new data, etc.

--
-- Query the data
--
SELECT *
FROM dbo.SampleTable
ORDER BY ID ASC

--
-- Check the value of IDENTITY column at each step
--
SELECT IDENT_CURRENT('SampleTable')

--
-- Insert data directly into P
--
INSERT INTO dbo.SampleTable DEFAULT VALUES
GO

--
-- Manually insert data to introduce interesting scenarios
--
SET IDENTITY_INSERT dbo.SampleTable ON
INSERT INTO dbo.SampleTable (ID) VALUES(201)
SET IDENTITY_INSERT dbo.SampleTable OFF
GO

Run Experiments

With the above set-up, let’s run through some scenarios and observe Sql Server’s behavior in each situation.

Scenario 1:

When data is INSERTed directly into P:

  • The IDENTITY values increment with each insert as 1, 11, 21, 31, etc.
  • Subsequently, those records are replicated to S with the same IDENTITY values.
  • But through all of this, the IDENTITY value at S stays at 2
    • Since NOT FOR REPLICATION is set on the IDENTITY column at S.

When data is INSERTed directly to S:

  • The IDENTITY values increment per the definition: 2, 12, 22, etc.
  • Irrespective of the replication traffic from P, the IDENTITY at S only depends on the records INSERTed directly into S.
  • The table at S has records from both P & S.
    • S will look something like: 1, 2, 11, 12, 21, 22, 31, 32, etc.
    • The table at P will look like: 1, 11, 21, 31, etc.

Scenario 2: IDENTITY_INSERT

When a manual entry is made at P (using IDENTITY_INSERT) with a new IDENTITY value that does not match the pattern of the IDENTITY definition, subsequent IDENTITY values at P are based on the highest entry in the table. They use the same INCREMENT definition, but are incremented from the current highest value in the table.

At Publisher:

  • Let’s say SampleTable at P has entries like 1, 11, 21, 31, with the next IDENTITY value being 41.
  • Now, if a new record is entered manually using IDENTITY_INSERT with the value 26, it is successfully INSERTed (sketched after this list).
    • The next IDENTITY value still remains at 41.
  • We can keep repeating these steps with different values like 7, 9, 13, 15, 17, 25, 28, 29 (as long as they are below 31).
    • INSERTs will be successful with no impact to the next IDENTITY value, which is still 41.
  • Now, if you perform a regular INSERT, the new record will get the IDENTITY value 41.
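
A minimal sketch of this manual entry at P, reusing the IDENTITY_INSERT pattern from the set-up section (the values follow the example above):

--
-- At P: manually INSERT a value below the current highest value (26 < 31)
--
SET IDENTITY_INSERT dbo.SampleTable ON
INSERT INTO dbo.SampleTable (ID) VALUES (26)
SET IDENTITY_INSERT dbo.SampleTable OFF
GO

--
-- The identity seed is unaffected; this still reports 31, and the next regular INSERT gets 41
--
SELECT IDENT_CURRENT('SampleTable')
GO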

At Subscriber:

  • At S, all new entries, 26, 7, 9, 13, 15, 41, etc, are successfully INSERTed with no impact to IDENTITY definition at S.
    • At S, the next identity value is still set to 42
  • Any new direct INSERTs at S will get IDENTITY values consistent with its previous behavior, a.k.a. 42, 52, etc.

Scenario 3: PRIMARY KEY Violation

Now, let’s make a manual entry at P that matches the next IDENTITY value at S.

  • For this, let’s assume that the highest value at P is 41, with next IDENTITY value as 51
  • At S, the current highest value is 52, with next IDENTITY value as 62.

Introduce problems:

  • At P, perform a manual INSERT (with IDENTITY_INSERT) with the ID value 62.
    • The INSERT is successful at P, and it is replicated to S successfully.
  • After the above operation, the next IDENTITY value:
    • At P is set to 72 (62+10).
    • At S, it is still 62 (even though a new record is INSERTed with 62). Since NOT FOR REPLICATION is set, replication traffic does not influence IDENTITY increments at S.
  • Now, when a new record is directly INSERTed into S, the next IDENTITY value will be computed as 62, which results in a PRIMARY KEY violation.
    • Violation of PRIMARY KEY constraint 'PK_SampleTable'. Cannot insert duplicate key in object 'dbo.SampleTable'
    • Interestingly, the next IDENTITY value for S is incremented to 72.
    • Subsequent direct INSERTs into S will be 72, 82, etc.

Vicious cycle:

  • In the above test, the next IDENTITY value at P is still at 72.
  • Similarly, the next IDENTITY value at S is also set to 72.
  • So any new inserts at P will be replicated to S with 72, 82, 92, etc.
    • If there are any existing records at S with the same identity values, then replication traffic (from P to S) will fail with a primary key violation.
    • But if S does not have any records with those identity values (from P), then replication traffic (a.k.a. 82, 92, 102) from P is successfully INSERTed into S.
    • Any new traffic, directly at S, will run into PRIMARY KEY violation.
  • So, the summary is, one BAD entry is all it takes to screw up the IDENTITY definition with NOT FOR REPLICATION.

Solution:

  • When this happens, just RESEED the IDENTITY value at P to a non-overlapping value that is consistent with its expected behavior (see the sketch below).
    • Something like 151 or 201, to give it a fresh start with no overlaps with existing records at P or S.
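
A minimal sketch of that RESEED at P, assuming 201 is safely above the existing values at both P & S:

--
-- At P: RESEED the IDENTITY to a fresh, non-overlapping value (201 as an example)
--
DBCC CHECKIDENT ('dbo.SampleTable', RESEED, 201)
GO

--
-- Verify the current seed value
--
DBCC CHECKIDENT ('dbo.SampleTable', NORESEED)
GO
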
Hope this helps,
_Sqltimes


Quick one today:

In one of the previous posts, we saw a technique to query available free space in transactional log files. Today, we’ll see another technique to gather the same metric from one place for both data files & transactional log files.

We rely on the sysfiles system catalog.


--
-- Free space in data files
--
SELECT  name AS [FileName]
      , size*8.0/(1024*1024) AS [Total_File_Size_GB]
      , FILEPROPERTY(name, 'SPACEUSED')*8.0/(1024*1024) AS [Used_GB]
      , (size*8.0/(1024*1024)) - (FILEPROPERTY(name, 'SPACEUSED')*8.0/(1024*1024)) AS [FreeSpace_GB]
FROM sysfiles
GO

--
-- Using DBCC command for Log files
--
DBCC SQLPERF(LOGSPACE)
GO
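
On Sql Server 2012 and later, the same log-space metric is also available per database through the sys.dm_db_log_space_usage DMV; a minimal sketch for the current database:

--
-- Log space usage for the current database
--
SELECT  total_log_size_in_bytes/1024.0/1024.0  AS [Total_Log_Size_MB]
      , used_log_space_in_bytes/1024.0/1024.0  AS [Used_Log_Space_MB]
      , used_log_space_in_percent              AS [Used_Log_Space_Percent]
FROM sys.dm_db_log_space_usage
GO
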
Hope this helps,
_Sqltimes


Quick one today:

A few days ago, there was a need to UPDATE a column in the production environment to correct some values, as they were in an incorrect status. The column was VARBINARY. So, the question is: how do we concatenate values to a VARBINARY column?

The answer is simple: implicit concatenation with + works.

Let’s take an example: convert 'Test' into VARBINARY and try to append the letter 'T' at the end.

--
-- Convert Test into VARBINARY
--
SELECT  CONVERT(VARBINARY, 'Test') AS [Test_InVarBinary]
GO
-- Result is : 0x54657374

--
-- Now lets append T in BINARY format (0x54)
--
SELECT 0x54657374 + 0x54 AS [Concatenate_Binary_Values]
GO
-- Result is : 0x5465737454

--
-- Let's convert 0x5465737454 into character
--
SELECT CONVERT(VARCHAR, 0x5465737454) AS [TestT_In_Varchar]
GO
-- Result is TestT

As you can see, implicit concatenation works with binary values.
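
Tying this back to the original need, the same concatenation works in an UPDATE as well. A minimal sketch, with hypothetical table and column names (StatusTable, StatusValue, StatusID), assuming the VARBINARY column is declared wide enough to hold the extra byte:

--
-- Append 'T' (0x54) to an existing VARBINARY column value (hypothetical names)
--
UPDATE dbo.StatusTable
SET StatusValue = StatusValue + 0x54
WHERE StatusID = 1
GO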

Hope this helps,
_Sqltimes


Quick one today:

It’s rare, but sometimes we do run into opportunities to perform an edition upgrade on Sql Server environments. The following steps present a walk-through of the edition upgrade process.

Working on this task provided another opportunity to appreciate the great value the Sql Server installer provides; it is really a value-packed tool (see a previous post on a different feature).

Some points to keep in mind:

  • We can perform an edition upgrade, but an edition downgrade is not possible (see the check after this list).
    • Ex: cannot go from Enterprise to Standard.
  • In my experiments, a Sql Server restart was not necessary, but it is better to restart Sql Server (and all Sql services) to allow the new features to kick in.
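
Since a downgrade is not possible, it helps to confirm the current edition before starting (and to verify it again after the upgrade); a quick check:

--
-- Check the current edition and version (run before and after the upgrade)
--
SELECT  SERVERPROPERTY('Edition')        AS [Edition]
      , SERVERPROPERTY('ProductLevel')   AS [ProductLevel]
      , SERVERPROPERTY('ProductVersion') AS [ProductVersion]
GO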

Step 1:

Open the Sql Server installation wizard, go to the Maintenance tab, and click on Edition Upgrade.

Edition Upgrade Step 1

Step 2:

When you click on Edition Upgrade, it opens up the “Upgrade the Edition of Sql Server 2012” wizard. First, it performs checks to make sure ‘Support Files’ are available. In the next screen, enter the product key to validate the upgrade.

Edition Upgrade Step 2

Step 3:

As you walk through the wizard, select the instance that you want to upgrade. In this case, the DEFAULT instance is being upgraded. All (shared) components related to the edition upgrade will also be upgraded.

Edition Upgrade Step 3

Step 4:

Upon clicking Next, it runs through checks to make sure all the rules are in compliance. The next “Ready to upgrade” screen lists all the components that will be upgraded. Review the list and make sure it matches your goals.

Edition Upgrade Step 4

Step 5:

After completing the upgrade, it displays the status of the upgrade for each component on the “Complete” screen.

Edition Upgrade Step 5

 

Hope this helps,
_Sqltimes
