Archive for June, 2015

Quick one today:

Earlier, we ran into an interesting error in our lab environment.

Msg 8115, Level 16, State 1, Line 2
Arithmetic overflow error converting IDENTITY to data type int.
Arithmetic overflow occurred.

Initially, this error did not make sense. There are no calculations performed on this column's data that could overflow, but then it dawned on us: it's the INT vs. BIGINT issue. The column has INT as its data type and it has reached the maximum limit of 2,147,483,647. After that, not a single record could get into the table.
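
As a side note, a quick way to see how far along the identity columns in a database are is to check sys.identity_columns; a minimal sketch (compare last_value against the data type's maximum):

--
-- Check how far along each IDENTITY column is
--
SELECT OBJECT_NAME(object_id)    AS TableName
     , name                      AS ColumnName
     , TYPE_NAME(system_type_id) AS DataType
     , last_value
FROM sys.identity_columns
WHERE last_value IS NOT NULL
GO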

Immediately, we changed the column data type to BIGINT to let the application processes continue to pump data into the table. Surprisingly, there were no errors from the .NET application after the data type change.

--
--  Change datatype of column
--
ALTER TABLE dbo.SampleTable
ALTER COLUMN ID BIGINT
GO

One interesting point that came up during all of this was the time it took to change from INT to BIGINT: it completed in about 30 seconds. For a table this big, the initial guesstimate was a few minutes, but that was proven completely wrong. When we recently performed a similar task on a VARCHAR column (on a large table), it took more than an hour. INT is 4 bytes and BIGINT is 8 bytes, so if we increase a VARCHAR column by 4 bytes, it should also, in theory, take the same time, right?

In our lab, to be sure, I ran a few tests to measure the difference, and the difference is consistent: converting INT to BIGINT takes 90 seconds, but widening a CHAR column from 10 to 14 takes 180 seconds. Any thoughts?
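
For anyone curious to reproduce the comparison, a rough sketch along these lines could work (dbo.AlterTest and the row count are made up for illustration; actual timings depend on hardware and table size):

--
-- Hypothetical repro: compare the two ALTER COLUMN timings on a throwaway table
--
CREATE TABLE dbo.AlterTest (ID INT NOT NULL, Val CHAR(10) NOT NULL)
GO

INSERT INTO dbo.AlterTest (ID, Val)
SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
FROM sys.all_columns a CROSS JOIN sys.all_columns b
GO

SET STATISTICS TIME ON
GO

ALTER TABLE dbo.AlterTest ALTER COLUMN ID BIGINT NOT NULL    -- 4 bytes to 8 bytes
GO

ALTER TABLE dbo.AlterTest ALTER COLUMN Val CHAR(14) NOT NULL -- 10 bytes to 14 bytes
GO

SET STATISTICS TIME OFF
GO

DROP TABLE dbo.AlterTest
GO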

Hope this helps,
_Sqltimes

Quick one today:

Recently, in our lab, we ran into this interesting error:

Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized.

Turns out, we were attempting multiple operations that were mutually exclusive. For example:

  • Adding a new data file while a backup operation is running
  • Removing a data file while a backup operation is running
  • Shrinking a file during a backup operation
  • etc.

Once the backup completed, we were able to perform these operations. When a backup runs, it reads every extent and writes it into the backup file. So, during this, any attempt to change the underlying file structures (where the extents live) cannot be allowed (obviously).
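
To reproduce the error (database name and file paths below are made up), start a full backup in one session and attempt a file operation from a second session while it is still running:

--
-- Session 1: start a (long-running) full backup
--
BACKUP DATABASE SampleDB
TO DISK = 'S:\Backups\SampleDB.bak'
GO

--
-- Session 2: while the backup runs, this raises the serialization error above
--
ALTER DATABASE SampleDB
ADD FILE (NAME = SampleDB_Data2, FILENAME = 'S:\Data\SampleDB_Data2.ndf')
GO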

There may be several more such mutually exclusive operations, so it is good to learn and list them so we do not run into this again. More will be added as they come up.

Hope this helps,
_Sqltimes

Quick one today:

Sometimes there is a need to query environment variables using T-SQL, and ‘xp_cmdshell’ allows us to do that. First, we need to enable ‘xp_cmdshell’ in ‘sys.configurations’ using EXEC sp_configure. It is an advanced option, so enable ‘show advanced options’ first.

--
-- Set Configurations
--
SELECT * FROM sys.configurations
WHERE name = 'xp_cmdshell'
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 1
RECONFIGURE
GO

EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 0
RECONFIGURE
GO

SELECT * FROM sys.configurations
WHERE name = 'xp_cmdshell'
GO

Then use xp_cmdshell to query each environment variable, as needed. Or you could create a temporary table and INSERT the result into it, as sketched after the example below.

--
-- Query environment variables
--
exec xp_cmdshell 'echo %NUMBER_OF_PROCESSORS%'
exec xp_cmdshell 'echo %ProgramFiles%'
exec xp_cmdshell 'echo %LTRXDATABASE%'
GO
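
To capture the output rather than just display it, INSERT the result into a temporary table; a minimal sketch (#EnvironmentVariables is an arbitrary name):

--
-- Capture the output into a temporary table
--
CREATE TABLE #EnvironmentVariables (Output NVARCHAR(255) NULL)
GO

INSERT INTO #EnvironmentVariables (Output)
EXEC xp_cmdshell 'echo %NUMBER_OF_PROCESSORS%'
GO

SELECT Output FROM #EnvironmentVariables
GO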

Result:

Environment Variables Output

Reset the sys.configurations settings after you are done.

--
-- Reset Configurations
--
SELECT * FROM sys.configurations
WHERE name = 'xp_cmdshell'
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 1
RECONFIGURE
GO

EXEC sp_configure 'xp_cmdshell', 0
RECONFIGURE
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 0
RECONFIGURE
GO

SELECT * FROM sys.configurations
WHERE name = 'xp_cmdshell'
GO

Hope this helps,
_Sqltimes

Quick one today:

Recently, we kept running into this question from several people. So, adding a dedicated post made sense.

In short, yes. Rebuilding indexes increases database file size. There are some nuances, but in general terms it is true. Both ONLINE and OFFLINE rebuild/reindexing operations increase file size. For more information about reindexing indexes and heaps, please refer to these previous posts.

Sql Server needs extra space to build the new index structure with data from the old index; once completed, our guess is that it switches to the new structure in its new storage location. This operation needs new space.

In an ONLINE operation, the table is still available for queries. So, the guess here is that Sql Server creates a bookmark in the LOG file before rebuilding the index, then continues rebuilding while the table remains available for queries. Once done, it goes back to the LOG file to add/delete any records that changed since the bookmark. Then it takes an exclusive lock on the table for a very short duration and redirects new queries to the new index, and a background process reclaims the space from the old index.

--
--  Rebuild all indexes on a table
--
ALTER INDEX ALL ON dbo.StoredValue REBUILD
GO
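
For comparison, the ONLINE variant mentioned above (an Enterprise Edition feature) keeps the table available for queries during the rebuild:

--
-- Rebuild all indexes on a table, keeping the table available
--
ALTER INDEX ALL ON dbo.StoredValue REBUILD WITH (ONLINE = ON)
GO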

Before rebuilding, the size of the data file is:

Rebuild Index Before

After rebuilding, the size of the data file is:

Rebuild Indexes After

A couple of points:

The MDF file size increases the first time. When I re-run the REBUILD, the size does not increase again; it uses the freed-up space from the old index location to perform the new rebuild.

The LDF file size increases every time. In some cases, we may not notice the increase during the first REBUILD, as there might be enough existing free space, but re-running it multiple times increases it dramatically. If you take LOG backups between REBUILDs, though, the LDF size does not increase.
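
To verify this on your own database, compare the file sizes before and after each REBUILD; sys.database_files reports size in 8 KB pages:

--
-- Check data (MDF) and log (LDF) file sizes
--
SELECT name
     , type_desc
     , size * 8 / 1024 AS SizeMB
FROM sys.database_files
GO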

Hope this helps,
_Sqltimes

There are several ways to measure, capture, and analyze metrics to fully map out Sql Server performance on a given server (hardware). Regularly capturing this data allows us to identify trends in resource consumption and helps with Capacity Planning.

PerfMon (Performance Monitor) has Data Collectors that allow us to capture useful metrics. The operating system calculates this data all the time, and this tool allows us to capture it into a text file with minimal overhead.
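
As a quick aside, Sql Server also exposes its own PerfMon counters through a DMV, which is handy for spot checks from T-SQL (the two counters below are just examples):

--
-- Spot-check Sql Server's own PerfMon counters from T-SQL
--
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Batch Requests/sec', 'Page life expectancy')
GO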

The following sections show the configuration steps:

Step 1 : Create a Data Collector Set

Open PerfMon and go to ‘Data Collector Sets’. Highlight the ‘User Defined’ section; in the right panel, right click >> New >> click on ‘Data Collector Set’.

Data Collector Set

Step 2 : Create from a template

In the next window, give a name to this collector and choose ‘Create from a template’ option.

Create From Template

Step 3 : Configure the metrics

Microsoft has a standard template for capturing Sql Server metrics. Use that XML file for PerfMon. See the image below.

Configure metrics to capture

Step 4 : Save into a file

In the next screen, save the metrics into a text file. Make sure you save it to a non-data drive on the machine: either a maintenance drive or somewhere not actively used by Sql Server. This process does not need many resources, but to be safe, save the file onto the maintenance drive.

Save to file

Step 5 : Save and Go to properties

Choose ‘Open properties for this data collector set’ and hit Finish.

Go to properties

Step 6 : Schedule

In the properties window, under the ‘Schedule’ tab, click ‘Add’ and create a schedule. Try to run it all day, every day, so you have a clear picture of the underlying performance day after day for better trend analysis.

Schedule

Step 7 : Stop Condition (just in case)

Under the ‘Stop Condition’ tab, set the maximum file size to be sure the file does not get too big. Also configure the ‘Overall Duration’ to make sure it runs for only 24 hours at a time; after each day, a new file is created.

Stop Condition

Step 8 : Configure Frequency

Hit ‘Apply’ in the step above. Under the ‘User Defined’ section, highlight the new data collector. In the right panel, highlight the performance counter, right click, and go to properties. Under the ‘Performance Counters’ tab, select the frequency at which you'd like to capture this data. Anything between 5 seconds and 1 minute is okay, depending on the situation.

For a troubleshooting situation, maybe once every 5 seconds could help. During normal operations, once every minute is okay too.

Set frequency and file format

Step 9 : Run it and voila

Go back to the ‘User Defined’ section in ‘Data Collector Sets’ and highlight the new collector. Right click and hit ‘Start’. If all goes well, a new file is created in the ‘S:\PerfLogs\’ folder with data for all the metrics.

New File each day

Hope this helps,
_Sqltimes
