
Quick one today:

In the last few weeks, there have been multiple incidents where we needed to add, remove, or modify data files on a SQL Server database. So it is time to capture the steps in a new post.

Altering the data files of a database has become easier in newer versions of SQL Server.

 

Add a new data file to the DBADB database

--
-- Add a new FileGroup for new data file
--
ALTER DATABASE DBADB
ADD FILEGROUP nCI_01
GO

--
-- Add new file DBADB_Data3
--
ALTER DATABASE DBADB
ADD FILE
(
     Name = N'DBADB_Data3'   -- logical file name (referenced later by REMOVE FILE)
   , FileName = N'G:\Data\DBADB_Data3.ndf'
   , SIZE = 10 GB
   , FILEGROWTH = 10%
)
TO FILEGROUP [nCI_01]
GO
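To confirm the new filegroup and file, a quick check against the system catalogs can help. This is a minimal sketch; sys.database_files reports size in 8 KB pages, hence the division by 128:

```sql
USE DBADB
GO

--
-- Verify the new file and its filegroup
--
SELECT  f.name           AS Logical_Name
      , f.physical_name
      , f.size / 128     AS Size_MB        -- size is in 8 KB pages
      , fg.name          AS FileGroup_Name -- NULL for the log file
FROM sys.database_files AS f
LEFT JOIN sys.filegroups AS fg
    ON f.data_space_id = fg.data_space_id
GO
```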

 

Remove an existing file

An existing data file can be removed after it has been emptied. This process was covered in a previous post. Once a data file is empty, it can be removed using the command below.

--
-- Remove data file
--
ALTER DATABASE DBADB
REMOVE FILE DBADB_Data3
GO
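As a refresher on the emptying step, DBCC SHRINKFILE with the EMPTYFILE option migrates all data from the file to the other files in the same filegroup; a minimal sketch:

```sql
USE DBADB
GO

--
-- Empty the data file before removing it
-- (moves its data to the other files in the same filegroup)
--
DBCC SHRINKFILE (N'DBADB_Data3', EMPTYFILE)
GO
```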

 

Modifying an existing file

Increasing the data file size to 20 GB (from 10 GB)

--
-- Modify a data file
--
ALTER DATABASE DBADB
MODIFY FILE
(
 NAME = DBADB_Data3
 , SIZE = 20 GB
)
GO

 

Hope this helps,
_Sqltimes

Quick one today:

A few days ago, one of our customers, upon request, sent us the PerfMon files from their database server. We provide all our customers a template that captures the pertinent SQL Server metrics from PerfMon; when needed, we ask them to enable it and let it capture data for a specified amount of time.

Sometimes, customers have their own configuration that writes PerfMon data into the binary performance log file (*.blg) format. In such situations, the Relog command can help convert BLG files into CSV (a text file format).

Relog creates new performance logs from data in existing performance logs by changing the sampling rate and/or converting the file format. It supports all performance log formats, including Windows NT 4.0 compressed logs.

The “-f” flag specifies the output file format (CSV in this case).

Here is the command:

relog -f csv inputfile.blg -o outputFile.csv
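Relog has a few other handy flags; the sketch below shows two of them (the file names are placeholders):

```shell
:: Keep only every 5th sample while converting, to reduce the output size
relog inputfile.blg -f csv -t 5 -o outputFile.csv

:: List the counters and time range captured in a BLG file
relog inputfile.blg -q
```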

 

Hope this helps,
_Sqltimes

Quick one today:

Temporary tables and table variables are a great addition to SQL Server. Pretty much all database code takes advantage of these features, for good reasons. With this increased usage comes the responsibility to do two things:

  • Use the feature correctly & efficiently
  • Create efficiencies in the TempDB database.

Keep in mind that:

  • For temporary tables you can create indexes and statistics, but for table variables statistics are not an option [indexes can be created, but only as inline column constraints]
    •  
      --
      --  Create table variable
      --
      DECLARE @tvariable TABLE (
           ID   INT   NOT NULL   PRIMARY KEY CLUSTERED
      )
      GO
      
    • Because of the lack of statistics, the SQL Server optimizer cannot always come up with the best query plan for code that uses table variables.
    • “Estimated rows” and “Actual rows” will vary a lot, so keep an eye out for such statements.
  • Temporary tables participate in TRANSACTION and follow the rules to maintain transaction integrity. Table variables are just like any other variables (a.k.a integer or character variables) and do not participate in TRANSACTION.
    • This is both good and bad.
    • When you ROLLBACK, data in table variables is not rolled back.
    • So, you do not want to use them for maintaining integrity of transactions
    • But they are great for troubleshooting purposes.
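The transaction behavior described above is easy to see with a quick sketch (the table and variable names are made up for illustration):

```sql
--
-- Temporary table vs. table variable under ROLLBACK
--
CREATE TABLE #t (ID INT NOT NULL)
DECLARE @t TABLE (ID INT NOT NULL)

BEGIN TRANSACTION
    INSERT INTO #t (ID) VALUES (1)
    INSERT INTO @t (ID) VALUES (1)
ROLLBACK TRANSACTION

SELECT COUNT(*) AS TempTableRows FROM #t  -- 0 : the INSERT was rolled back
SELECT COUNT(*) AS TableVarRows  FROM @t  -- 1 : table variables ignore ROLLBACK

DROP TABLE #t
GO
```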

Hope this helps,
_Sqltimes

SQL Server, starting with 2005, has extensive methods to parse, slice, and dice XML documents. Before that, XML was not a favorite topic for DBAs, especially from a T-SQL point of view. Application languages like C#, ASP.Net, and Java have many XML methods, and programmers usually preferred to handle XML in application code rather than T-SQL. But that all changed with SQL Server 2005, and it got much better in SQL Server 2008, 2008 R2, and SQL Server 2012.

Today, we’ll look at OPENXML and how it allows us to convert an XML document into a table, which you can then query as needed.

The diagram below, from MSDN, shows conceptually how an XML document is parsed and converted into a table-like representation.

From MSDN - OPENXML parses XML file


 

Let us take an example to demonstrate some of the querying nuances:

<root>
    <Assigned ID="1">
        <Store StoreID="14"/>
        <Store StoreID="15"/>
        <Store StoreID="17" Name="Atlanta"/>
    </Assigned>
    <Assigned ID="2">
        <Store StoreID="24" Name="Chicago"/>
        <Store StoreID="23" Name="Boston"/>
    </Assigned>
</root>

We can query any data values from the above XML as needed. For example, let’s retrieve

  • ID from Assigned
  • StoreID from Store
  • Name from Store

To do this, we

  1. First read the XML file into a variable
  2. Use ‘sp_xml_preparedocument’ to create the XML DOM
  3. Pass it to OPENXML
  4. Specify what values you want to read
  5. Voila !!

Sidebar: ‘sp_xml_preparedocument’ parses an XML document into a document object model (DOM) tree representation for easy consumption.

DECLARE   @xml XML
        , @docid INT

--
-- sample xml document
--
SET @xml = N'<root><Assigned ID="1"><Store StoreID="14"/><Store StoreID="15"/><Store StoreID="17" Name="Atlanta"/></Assigned><Assigned ID="2"><Store StoreID="24" Name="Chicago"/><Store StoreID="23" Name="Boston"/></Assigned></root>'

--
-- Prepare the XML input so read from it can be much faster.
--
EXEC sp_xml_preparedocument @docid OUTPUT, @xml;

--
--  Query XML file to retrieve selected values
--
SELECT *
FROM OPENXML (@docid, '/root/Assigned/Store')   -- lowest parsing location
WITH (
          AssignedID    INT          '../@ID'    -- go a level above and retrieve @ID from Assigned
        , StoreID       INT          '@StoreID'  -- value from current parse location
        , Name          VARCHAR(20)  '@Name'     -- value from current parse location
     )

--
--  remove XML document from memory
--
EXEC sp_xml_removedocument @docid
GO
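As a side note, the same result can be produced with the typed XML methods introduced in SQL Server 2005 (nodes() and value()), which avoid the prepare/remove document calls; a sketch of the equivalent query:

```sql
DECLARE @xml XML

SET @xml = N'<root><Assigned ID="1"><Store StoreID="14"/><Store StoreID="15"/><Store StoreID="17" Name="Atlanta"/></Assigned><Assigned ID="2"><Store StoreID="24" Name="Chicago"/><Store StoreID="23" Name="Boston"/></Assigned></root>'

--
-- Shred the XML with nodes()/value() instead of OPENXML
--
SELECT  A.node.value('@ID',      'INT')         AS AssignedID
      , S.node.value('@StoreID', 'INT')         AS StoreID
      , S.node.value('@Name',    'VARCHAR(20)') AS Name
FROM @xml.nodes('/root/Assigned') AS A(node)
CROSS APPLY A.node.nodes('Store') AS S(node)
GO
```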

 

Result:

OPENXML_Results


Sidebar: ‘sp_xml_removedocument’ removes the document pointer from memory; a good practice to follow.

 

Hope this helps,

_Sqltimes

Quick one today:

Once in a while we all need to capture SQL traces from a production or development database server and analyze them. SQL Trace files are binary files that can be opened in SQL Profiler. Often, it is easier to analyze a trace thoroughly when you load the data into a table, allowing you to run diagnostic queries to find evidence for the question you are investigating.

Below is a method to load a SQL Trace file into a SQL table.

--
--   Load trace into a table
--
SELECT * INTO TraceTable
FROM ::fn_trace_gettable('T:\Trace\LockEscalation.trc', default)
GO

As you can see, the first parameter is the location of the trace file; the second indicates the number of such files to load. Depending on how the trace is captured, it often ends up spread over multiple rollover files, so the second parameter allows us to specify how many of those files to read into the SQL table.

If you specify ‘default’, SQL Server loads all matching files in that location.

Make sure you create indexes on the necessary columns to improve your analysis query performance.
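For example, once the trace is loaded, a diagnostic query might look like the sketch below (TraceTable and the standard trace columns come from the example above; Duration is reported in microseconds):

```sql
--
-- Top 10 longest statements in the trace
--
SELECT TOP (10)
        TextData
      , Duration / 1000 AS Duration_ms   -- Duration is in microseconds
      , CPU
      , Reads
      , Writes
FROM dbo.TraceTable
ORDER BY Duration DESC
GO
```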

 

Hope this helps,

_Sqltimes

Quick one today:

For every SQL Server release, Microsoft provides a free ebook that is very useful for improving our understanding of the new features. Below is the ebook for SQL Server 2014.

Free eBook on Sql Server 2014

Sql Server 2014

Hope you enjoy reading it just as much. Here is the link: Sql Server 2014 Free e-book.

 

Hope this helps,
_Sqltimes

EXCEPT is a great operator. More importantly, it is a set operator, so by design it is quite efficient. It is part of the family of set operators:

  • INTERSECT
  • UNION
  • EXCEPT
  • UNION ALL

EXCEPT

Except does two things:

  • Returns values from the left query (or data set) that are not in the right query (or data set)
  • Returns distinct values (like UNION does)

It checks the data from both data sets and returns the distinct values from the left side that do not exist in the right side. This is a great operator when cleaning or comparing data to perform clean-up operations, i.e. de-duplication or data sanitization. See the Venn diagram below:

Except Operator in Venn Diagram


This is great for several reasons, but the best ones are based on its prerequisites:

  • Data types must match
  • Columns and their order must match
  • Not a prerequisite, but worth noting: EXCEPT compares NULL values as if they were regular literals, treating two NULLs as equal [similar to the behavior under SET ANSI_NULLS OFF]

It automatically makes sure the data types match, and you can pick and choose the data sets you generate to compare.

Let’s take an example:

Step 1: Create tables

--
-- Create MainTable (left hand side table in comparison)
--
IF OBJECT_ID(N'dbo.MainTable', N'U') IS NOT NULL
    DROP TABLE dbo.MainTable
GO

CREATE TABLE dbo.MainTable
(
   ID             INT           NOT NULL IDENTITY(1,1) PRIMARY KEY CLUSTERED
 , Name           VARCHAR(20)   NULL
 , Age            INT           NULL
 , Date_Of_Birth  DATETIME      NULL
)
GO

--
-- Create 'Compare' table. Right hand side table
--
IF OBJECT_ID(N'dbo.Compare', N'U') IS NOT NULL
    DROP TABLE dbo.Compare
GO

CREATE TABLE dbo.Compare
(
   ID             INT           NOT NULL IDENTITY(1,1) PRIMARY KEY CLUSTERED
 , Name           VARCHAR(20)   NULL
 , Age            INT           NULL
 , Date_Of_Birth  DATETIME      NULL
)
GO

Step 2: Load some records

--
-- Load data into both the tables
--
INSERT INTO dbo.MainTable (Name, Age, Date_Of_Birth)
VALUES
   ('Jon', 20, NULL)
 , ('Ben', 40, '2012-02-02')
 , ('Test', 45, '2013-05-05')

GO

INSERT INTO dbo.Compare (Name, Age, Date_Of_Birth)
VALUES
   ('Jon', 20, GETDATE())
 , ('Ben', 40, '2012-02-01')
 , ('Test', 45, '2013-05-05')
GO

Step 3: Let’s compare the data

Looking at the data, you can see that two records differ between the tables (IDs 1 and 2). Note that in dbo.MainTable, the record with ID = 1 has a NULL value in the Date_Of_Birth column. So, let’s see how SQL Server performs the comparison.

--
-- Use EXCEPT operator to compare
--
SELECT  M.ID
      , M.Name
      , M.Age
      , M.Date_Of_Birth
FROM dbo.MainTable AS M
EXCEPT
SELECT  C.ID
      , C.Name
      , C.Age
      , C.Date_Of_Birth
FROM dbo.Compare AS C
GO

Step 4: Results

As you can see, the results match our expectations: EXCEPT identifies that both IDs 1 and 2 differ from the data set on the right. This is a great tool in many ways.
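As a side note, EXCEPT only reports differences from the left side; to see differences in both directions, the two EXCEPT queries can be combined, as in this sketch against the tables above:

```sql
--
-- Two-way comparison: rows that differ on either side
--
(
    SELECT ID, Name, Age, Date_Of_Birth FROM dbo.MainTable
    EXCEPT
    SELECT ID, Name, Age, Date_Of_Birth FROM dbo.Compare
)
UNION ALL
(
    SELECT ID, Name, Age, Date_Of_Birth FROM dbo.Compare
    EXCEPT
    SELECT ID, Name, Age, Date_Of_Birth FROM dbo.MainTable
)
GO
```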

 

Except Operator Results


 

Hope this helps,
_Sqltimes
