
Archive for April, 2014

Quick one today:

A few days ago, as part of implementing table partitioning, I ran into this interesting error.

Msg 2726, Level 16, State 1, Line 1
Partition function 'pf_SampleTable' uses 1 columns which does not match with the number of partition columns used to partition the table or index.

At first glance, the error message did not make any sense, but after reading it a couple of times it became clear (this is another reason why grammar is important, even in error messages).

This error occurs when the number of columns specified in the “ON” clause of the “CREATE TABLE” statement does not match the number of columns defined in the partition function.

Let us take an example to fully understand this simple but nebulous error message.

--
-- Partition function - INT
--
CREATE PARTITION FUNCTION pf_SampleTable_PartitionKey (INT)
AS RANGE LEFT FOR VALUES (1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000, 10000)
GO

--
-- Partition Scheme
--
CREATE PARTITION SCHEME ps_SampleTable_PartitionKey
 AS PARTITION pf_SampleTable_PartitionKey
TO (ST_FG1, ST_FG2, ST_FG3, ST_FG4, ST_FG5, ST_FG6, ST_FG7, ST_FG8, ST_FG9, ST_FG10, ST_FG_n)
GO

When creating the table ‘SampleTable’, we specify the partition scheme name in the ON clause (where a filegroup name would normally go) to partition the table according to the configuration specified in the partition function & scheme.

--
-- Partition table using partition scheme
--
CREATE TABLE dbo.SampleTable(
      SampleTable_ID      INT                 NOT NULL    IDENTITY(1,1)
    , LocalID             BIGINT              NOT NULL
    , ServerSerial        INT                 NOT NULL
) ON [ps_SampleTable_PartitionKey]
GO

The error was caused because, in the ON clause, I did not specify the column to use for partitioning the table. Once I changed the code to include the partition column, it worked well.

--
-- Partition table using partition scheme
--
CREATE TABLE dbo.SampleTable(
      SampleTable_ID      INT                 NOT NULL    IDENTITY(1,1)
    , LocalID             BIGINT              NOT NULL
    , ServerSerial        INT                 NOT NULL
) ON ps_SampleTable_PartitionKey(SampleTable_ID)
GO
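
Once the table is created, a quick way to confirm the partition layout is to query sys.partitions. A minimal sketch, assuming the dbo.SampleTable created above:

--
-- Verify row distribution across partitions
--
SELECT partition_number, rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('dbo.SampleTable')
ORDER BY partition_number
GO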

 

Hope this helps,
_Sqltimes


Quick one today:

Starting with Sql Server 2008, we have a new way to declare variables and assign values. Earlier, we used the DECLARE statement to declare variables and then assigned values in a separate statement, using SET or SELECT. Sql Server 2008 combines the two steps to make it easy.

Let’s take an example:

--
-- Declare variables and assign values in separate statements
--
DECLARE   @Age INT
        , @Name VARCHAR(30)

SET @Age = 32
SET @Name = 'SqlTimes'

Now, we have a new way that is both simple and intuitive:

--
-- New way to declare and assign values
--
DECLARE  @Age  INT         = 32
       , @Name VARCHAR(30) = 'SqlTimes'

One catch to this: if you want to assign the value of one declared variable to another variable, the declarations need to be in separate statements. See below:

--
-- Assign one variable value to another
--
DECLARE  @Age  INT         = 32

DECLARE @SomeAge INT = @Age + 20

Now, let’s take it a bit further and look at compound assignment. What is the output of the statements below?

--
-- Precedence with compound assignment
--
DECLARE  @Age     INT         = 5

SELECT @Age *= @Age
     , @Age += @Age
GO 

Let’s break it down. The following is the order in which Sql Server processes the above statements.

  1. The DECLARE first assigns the literal 5 to @Age
  2. Then, @Age (5) is multiplied by @Age (5), resulting in 25
  3. Then, @Age (25) is added to @Age (25)
  4. The final result is 50

The first assignment in the SELECT statement is processed first, then the remaining assignments are processed in order.
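
To double-check the arithmetic, here is a quick sketch that prints the final value (the assignment SELECT does not return a result set, so a separate SELECT is used):

--
-- Verify the final value of @Age
--
DECLARE @Age INT = 5

SELECT @Age *= @Age     -- 5 * 5   = 25
     , @Age += @Age     -- 25 + 25 = 50

SELECT @Age AS FinalAge -- returns 50
GO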

Hope this helps,
_Sqltimes


Quick one today:

Sometimes, but not too often (thankfully), we end up in situations where we need to KILL some processes (SPIDs). There are several reasons for this:

  • A long-running query (that is just SELECTing)
  • A query that is blocking other processes
  • A query that has been in a waiting state for a long while, waiting for other resources to free up
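
Before issuing KILL, you need the SPID of the offending session. One way to find candidates is a quick look at sys.dm_exec_requests (a minimal sketch; the filter shown is just one way to spot blocked sessions):

--
-- Find sessions that are blocked, and who is blocking them
--
SELECT session_id, blocking_session_id, status, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
GO

Once you have the SPID, terminate it: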
--
-- Kill SPID 54
--
KILL 54;
GO

After you issue the KILL command, Sql Server starts the rollback process to clear the work performed so far, and then terminates the SPID. To check the status of the termination process, you can issue the KILL command WITH STATUSONLY:

--
--  Query the status of a KILL command issued previously
--
KILL 54 WITH STATUSONLY;
GO

--
-- Output looks like this
--
spid 54: Transaction rollback in progress. Estimated rollback completion: 20% Estimated time left: 50 seconds.

 

Hope this helps,
_Sqltimes


Quick one today:

In the last few weeks, there have been multiple incidents where there was a need to remove/add/modify data files on a Sql Server database. So, it is time to add it as a new post.

Altering the data files of a database is easier in newer versions of Sql Server.

 

Add a new data file to the DBADB database

--
-- Add a new FileGroup for new data file
--
ALTER DATABASE DBADB
ADD FILEGROUP nCI_01
GO

--
-- Add new file DBADB_Data3
--
ALTER DATABASE DBADB
ADD FILE
(
     Name = N'DBADB_Data3'
   , FileName = N'G:\Data\DBADB_Data3.ndf'
   , SIZE = 10GB
   , FILEGROWTH = 10%
)
TO FILEGROUP [nCI_01]
GO

 

Remove existing file

Existing data files can be removed after making them empty. This process was covered in a previous post (a quick sketch of that step is included below for reference). Once a data file is empty, it can be removed using the command that follows.
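
A minimal sketch of the emptying step, assuming the DBADB_Data3 file added above (DBCC SHRINKFILE with the EMPTYFILE option moves the file’s data to the other files in the same filegroup):

--
-- Empty the data file before removing it
--
USE DBADB
GO
DBCC SHRINKFILE (DBADB_Data3, EMPTYFILE)
GO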

--
-- Remove data file
--
ALTER DATABASE DBADB
REMOVE FILE DBADB_Data3
GO

 

Modifying an existing file

Increasing the data file size to 20 GB (from 10 GB)

--
-- Modify a data file
--
ALTER DATABASE DBADB
MODIFY FILE
(
 NAME = DBADB_Data3
 , SIZE = 20GB
)
GO
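
To confirm the change, one quick check against sys.database_files (size is stored in 8 KB pages, hence the arithmetic):

--
-- Verify data file sizes for DBADB
--
USE DBADB
GO
SELECT name, size * 8 / 1024 AS SizeMB
FROM sys.database_files
GO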

 

Hope this helps,
_Sqltimes


Quick one today:

A few days ago, one of our customers, upon request, sent us the PerfMon files from their database server. We have a PerfMon template that captures all pertinent Sql Server metrics, which we provide to all our customers. When needed, we ask them to enable the collection and let it capture data for a specified amount of time.

Sometimes, customers have their own configuration that writes PerfMon data into the binary performance log file (*.blg) format. In such situations, the Relog command can help convert BLG files into CSV (text) format.

Relog creates new performance logs from data in existing performance logs, by changing the sampling rate and/or converting the file format. It supports all performance log formats, including Windows NT 4.0 compressed logs.

Using the flag “-f”, specify the output file format (CSV in this case).

Here is the command:

relog inputfile.blg -f csv -o outputFile.csv
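
Relog can also downsample while converting. Its documented “-t” flag writes only every nth record into the output; for example, to keep every 5th sample (the value 5 here is just for illustration):

relog inputfile.blg -f csv -o outputFile.csv -t 5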

 

Hope this helps,
_Sqltimes


Quick one today:

Temporary tables and table variables are a great addition to Sql Server. Pretty much all database code takes advantage of these features, for the right reasons. Now, with this increased usage comes the responsibility to do two things:

  • Use the feature correctly & efficiently
  • Create efficiencies in TempDB database.

Keep in mind the following differences:

  • Indexes and Stats: On temporary tables you can create indexes and statistics, but for table variables statistics are not an option (indexes could be created, but only as a column constraint). See below:

--
--  Create table variable with an index (as a column constraint)
--
DECLARE @tvariable TABLE (
     ID   INT   NOT NULL   PRIMARY KEY CLUSTERED
)
GO

    • Because of the lack of statistics, the Sql Server optimizer cannot fully come up with the best query plan to execute code with table variables.
    • “Estimated rows” and “Actual rows” will vary a lot, so keep an eye out and monitor such statements.
  • Transaction Integrity: Temporary tables participate in a TRANSACTION and follow the rules to maintain transaction integrity. Table variables are just like any other variables (e.g., integer or character variables) and do not participate in a TRANSACTION.
    • This is both good and bad.
    • When you ROLLBACK, data in table variables is not rolled back (see the sketch below).
    • So, you do not want to use them for maintaining the integrity of transactions.
    • But they are great for troubleshooting purposes.
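
A minimal sketch demonstrating the ROLLBACK difference (the table names here are made up for illustration):

--
-- Table variables do not participate in transactions
--
DECLARE @t TABLE (ID INT NOT NULL)
CREATE TABLE #t (ID INT NOT NULL)

BEGIN TRANSACTION
    INSERT INTO @t VALUES (1)
    INSERT INTO #t VALUES (1)
ROLLBACK TRANSACTION

SELECT COUNT(*) AS TableVariableRows FROM @t  -- 1: survives the rollback
SELECT COUNT(*) AS TempTableRows     FROM #t  -- 0: rolled back

DROP TABLE #t
GO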

Hope this helps,
_Sqltimes


Sql Server, starting with 2005, has extensive methods to parse, slice & dice XML files. Before that, for DBAs, XML was not a favorite topic — especially from a T-SQL point of view. Application languages like C#, ASP.Net and Java have many methods for it, and programmers usually prefer to handle XML in application code rather than in T-SQL. But it all changed with Sql Server 2005, and got much better in Sql Server 2008, 2008 R2 and Sql Server 2012.

Today, we’ll look at OPENXML and how it allows us to convert an XML document into a table — then you can query it as you need.

The diagram below, from MSDN, shows conceptually how an XML document is parsed and converted into a table-like representation.

From MSDN – OPENXML parses an XML file

 

Let us take an example to demonstrate some of the querying nuances:

<root>
    <Assigned ID="1">
        <Store StoreID="14"/>
        <Store StoreID="15"/>
        <Store StoreID="17" Name="Atlanta"/>
    </Assigned>
    <Assigned ID="2">
        <Store StoreID="24" Name="Chicago"/>
        <Store StoreID="23" Name="Boston"/>
    </Assigned>
</root>

We can query any data values from the above XML as needed. For example, let’s retrieve:

  • ID from Assigned
  • StoreID from Store
  • Name from Store

To do this, we:

  1. First read the XML document into a variable
  2. Use ‘sp_xml_preparedocument’ to create the XML DOM
  3. Pass it to OPENXML
  4. Specify which values you want to read
  5. Voila !!

Sidebar: ‘sp_xml_preparedocument’ parses an XML document into a document object model (DOM) tree representation for easy consumption.

DECLARE   @xml XML
        , @docid INT

--
-- sample xml document
--
SET @xml = N'<root><Assigned ID="1"><Store StoreID="14"/><Store StoreID="15"/><Store StoreID="17" Name="Atlanta"/></Assigned><Assigned ID="2"><Store StoreID="24" Name="Chicago"/><Store StoreID="23" Name="Boston"/></Assigned></root>'

--
-- Prepare the XML input so read from it can be much faster.
--
EXEC sp_xml_preparedocument @docid OUTPUT, @xml;

--
--  Query XML file to retrieve selected values
--
SELECT *
FROM OPENXML (@docid, '/root/Assigned/Store')   -- lowest parsing location
WITH (
          AssignedID    INT          '../@ID'    -- go a level above and retrieve @ID from Assigned
        , StoreID       INT          '@StoreID'  -- value from current parse location
        , Name          VARCHAR(20)  '@Name'     -- value from current parse location
     )

--
--  remove XML document from memory
--
EXEC sp_xml_removedocument @docid
GO

 

Result:

AssignedID   StoreID   Name
----------   -------   --------
1            14        NULL
1            15        NULL
1            17        Atlanta
2            24        Chicago
2            23        Boston

Sidebar: ‘sp_xml_removedocument’ removes the document pointer from memory; a good practice to follow.
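
As a side note, the same result can be produced with the XML data type’s nodes() and value() methods, which avoid the prepare/remove document calls. A sketch, assuming the same untyped XML as above:

--
--  Same query using the XML type's nodes() and value() methods
--
DECLARE @xml XML = N'<root><Assigned ID="1"><Store StoreID="14"/><Store StoreID="15"/><Store StoreID="17" Name="Atlanta"/></Assigned><Assigned ID="2"><Store StoreID="24" Name="Chicago"/><Store StoreID="23" Name="Boston"/></Assigned></root>'

SELECT   s.value('../@ID',   'INT')         AS AssignedID
       , s.value('@StoreID', 'INT')         AS StoreID
       , s.value('@Name',    'VARCHAR(20)') AS Name
FROM @xml.nodes('/root/Assigned/Store') AS t(s)
GO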

 

Hope this helps,

_Sqltimes
