
Quick one today:

How do you get a list of objects (or indexes) that are compressed, i.e. that have COMPRESSION enabled?

I frequently use the script below to get a complete list; just modify it as needed to fit your specific need.

--
--  Get a list of indexes that have COMPRESSION enabled
--
SELECT	  O.name                      AS [Table_Name]
        , O.object_id                 AS [Object_ID]
        , S.name                      AS [Schema_Name]
        , P.rows                      AS [Number_of_Rows]
        , P.data_compression_desc     AS [Compression_Description]
        , I.name                      AS [Index_Name]
        , I.type_desc                 AS [Index_Type]
        , I.index_id                  AS [Index_ID]
        , D.name                      AS [FileGroup]
FROM sys.partitions AS P
INNER JOIN sys.objects AS O
	ON P.object_id = O.object_id
INNER JOIN sys.schemas AS S
	ON S.schema_id = O.schema_id
INNER JOIN sys.indexes AS I
	ON I.index_id = P.index_id
	AND I.object_id = O.object_id
INNER JOIN sys.data_spaces AS D
	ON D.data_space_id = I.data_space_id

WHERE O.type = 'U'               -- only user objects
AND P.data_compression > 0       -- only the ones with COMPRESSION enabled
AND S.name NOT IN ('sys')        -- exclude system objects

ORDER BY  O.name ASC
	, I.index_id ASC
	, D.name ASC
GO
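
Relatedly, before enabling compression on a specific object, Sql Server can estimate the expected space savings. A minimal sketch using sys.sp_estimate_data_compression_savings (the schema and table names here are placeholders, not from the query above):

--
--  Estimate space savings before enabling compression
--  (dbo.SampleTable is a placeholder; substitute your own table)
--
EXEC sys.sp_estimate_data_compression_savings
	  @schema_name      = N'dbo'
	, @object_name      = N'SampleTable'
	, @index_id         = NULL       -- NULL = all indexes on the table
	, @partition_number = NULL       -- NULL = all partitions
	, @data_compression = N'PAGE'    -- NONE, ROW, or PAGE
GO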

Hope this helps,
_Sqltimes

Quick one today:

How do we assign permissions on TABLE TYPEs to user accounts?

Earlier last week, this was a point that needed to be covered in our deployment code. We have a permissions script that lists out the permissions for each database user on all the objects. Since we only recently started adding TABLE TYPEs, they were never included. Now they are, so how do we assign permissions on a TABLE TYPE to user accounts?

--
--    Assign permissions on TABLE TYPE to user account
--
GRANT CONTROL		ON TYPE::ExternalKeyTableType					TO [User_account]
GO

The CONTROL permission implies all the other permissions on a TABLE TYPE.

When a TABLE TYPE is used as a parameter in a stored procedure, you want to grant the REFERENCES permission as well.

--
--    Assign REFERENCES permission, if table type is used as parameter in stored procedures
--
GRANT REFERENCES		ON TYPE::ExternalKeyTableType					TO [User_account]
GO
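
For context, here is a minimal sketch of how a TABLE TYPE typically ends up as a stored procedure parameter (the column and procedure names are illustrative, not from our deployment scripts):

--
--  Illustrative example: TABLE TYPE as a READONLY procedure parameter
--
CREATE TYPE ExternalKeyTableType AS TABLE (ExternalKey INT NOT NULL)  -- assumed definition
GO

CREATE PROCEDURE Sample_Procedure
	@Keys ExternalKeyTableType READONLY    -- table-valued parameters must be READONLY
AS
BEGIN
	SELECT ExternalKey FROM @Keys
END
GO

--
--  The caller needs EXECUTE on the procedure, plus the REFERENCES grant above
--
GRANT EXECUTE ON Sample_Procedure TO [User_account]
GO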


Hope this helps,

_Sqltimes

Quick one today:

Earlier last week, I was playing with one of our customer databases in our lab environment. As part of implementing table partitioning, some table changes were necessary: moving data from one set of tables to another. In the middle of running those scripts, this error occurred:

Msg 1105, Level 17, State 2, Line 1
Could not allocate space for object 'dbo.SORT temporary run storage: 140751663071232' 
in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by 
deleting unneeded files, dropping objects in the filegroup, adding additional files 
to the filegroup, or setting autogrowth on for existing files in the filegroup.

Msg 9002, Level 17, State 4, Line 1
The transaction log for database 'tempdb' is full due to 'ACTIVE_TRANSACTION'. 

All the heavy data movement between tables, along with the reindexing, caused a lot of growth in the transaction log file of the user database, and equivalent growth in the data/log files of tempdb. So, we took the following steps to rectify it.

Since the script took a few hours to execute and complete all the data movement, we repeated the following steps every few minutes:

  • Issue CHECKPOINT on the tempdb

USE tempdb
GO
CHECKPOINT
GO

  • Take Transactional log backup on the user database

BACKUP LOG CustomerXS TO DISK = N'M:\MSSQL\Backup\MSSQLSERVER\XS and RT\XS_Movement.trn'
WITH COMPRESSION
GO

This allowed us to create more free space in the database files to accommodate new activity.
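
To know when the next CHECKPOINT or log backup is due, it helps to keep an eye on log usage while the script runs. A quick way to do that (works for tempdb and user databases alike):

--
--  Show transaction log size and percent used for every database
--
DBCC SQLPERF(LOGSPACE)
GO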

Hope this helps,
_Sqltimes

Quick one today:

Sometimes, in our lab, we need to do some preparatory work before the actual work. This involves loading or moving large volumes of data from one table to another, or something similar. While in the middle of such activities, the transaction log fills up and my work stops partway through. It could be tempdb or a user database, but usually it is the transaction log file that fills up. This is always frustrating, as we rarely have frequent transaction log backup jobs running on lab servers.

So, right before I get into the zone of doing this preparatory work, I run the script below on the lab database to prevent any “transaction log full” errors, from either tempdb or the user database.

--
--  Sample code to clear out transactional log from tempdb and user db
--
USE tempdb
GO

DECLARE @Cnt INT = 0

WHILE @Cnt < 100
BEGIN
	PRINT '@Cnt : ' + CONVERT(VARCHAR, @Cnt)

	-- CHECKPOINT lets the tempdb log truncate (tempdb uses SIMPLE recovery)
	CHECKPOINT

	-- Log backup lets the user database log truncate
	BACKUP LOG UserDB_Name TO DISK = N'M:\MSSQL\Backup\MSSQLSERVER\TempTRNBackups\Delete_this_later.trn'
	WITH COMPRESSION
	   , STATS = 1

	WAITFOR DELAY '00:05:00'

	SELECT @Cnt = @Cnt + 1
END
GO
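
An alternative for lab databases, assuming point-in-time recovery is not needed there, is to switch the user database to the SIMPLE recovery model; the log then truncates on CHECKPOINT and no log backups are required:

--
--  Lab-only alternative: SIMPLE recovery truncates the log on checkpoint
--  (when switching back to FULL, take a full backup to restart the log chain)
--
ALTER DATABASE UserDB_Name SET RECOVERY SIMPLE
GO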

Hope this helps,
_Sqltimes

A few months ago, we put in more layers of monitoring on our production database servers. One such improvement is based on logins, both OS logins and Sql Server logins. On any given day, not many people log into production servers at the OS level; maybe 1 or 2 in a day. So it makes for a good monitor if anyone logs in and causes repeated login failures. Similarly for database servers.

For database servers, we monitor all login failures, as there should not be any DB login failures. If repeated failures occur, it is an indication of a problem: either an unauthorized principal attempting access, a DDoS attack, or an authorized principal running into trouble. Either way, it needs to be monitored.

With all this extra monitoring, we’ve been receiving a lot of login alerts. But most of them are false positives — like Password Expiration, Password reset, User account locked, etc. After getting too many of these, we started looking into them. A typical alert message would look like this:

Monitor:                 SQLApplicationLogMonitors: DB-SVR1 -- Security Login Failures with ID 4776
Group:                   SQL Application Log Monitors
Status:                  Microsoft-Windows-Security-Auditing:4776 on DB-SVR1, 
Category:                14336
Message:                 The computer attempted to validate the credentials for an  account.
Authentication Package:  MICROSOFT_AUTHENTICATION_PACKAGE_V1_0
Logon Account:           JohnDoe
Source Workstation:      DB-SVR1
Error Code:              0xc0000071
Matched on:              Event: Microsoft-Windows-Security-Auditing:4776
Type:                    Error
Timestamp:               18:29:19 01/10/14
Sample #:                35013

Upon looking into this further, we understood that these false positives are not errors, but log entries indicating the nature of an action under the category called “security”; we were monitoring everything under this category.

The following table shows what each of these Error Codes means:

Error Code    Error Description
C0000064      User name does not exist
C000006A      User name is correct but the password is wrong
C0000234      User is currently locked out
C0000072      Account is currently disabled
C000006F      User tried to log on outside his day-of-week or time-of-day restrictions
C0000070      Workstation restriction
C0000193      Account expiration
C0000071      Expired password
C0000224      User is required to change password at next logon
C0000225      Evidently a bug in Windows and not a risk

Equipped with this better understanding, we were able to configure the monitors to only look for real errors and not informational entries.
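
On the Sql Server side, a quick way to review recent login failures is to search the error log. A sketch using xp_readerrorlog (an undocumented but long-standing procedure; the parameters are log number, log type, and search string):

--
--  Search the current Sql Server error log (0) for login failures
--
EXEC master.dbo.xp_readerrorlog 0, 1, N'Login failed'
GO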

Hope this helps,
_Sqltimes

Quick one today:

Sometimes, during troubleshooting, PRINT statements come in very handy for tracing the progress of a procedure (or any other batch of SQL code) as it executes.

When you have hundreds of lines of code, you might end up with many PRINT statements capturing the status.

The result might end up looking like this:

--
--  Storing procedure execution progress in a status variable for troubleshooting
--
DECLARE @ProgressReport VARCHAR(8000)

SELECT @ProgressReport = '1 : Start : ' + CONVERT(VARCHAR, GETDATE(), 109)
SELECT @ProgressReport = @ProgressReport + '2 : @SV_Stats : ' + CONVERT(VARCHAR, GETDATE(), 109)
SELECT @ProgressReport = @ProgressReport + '3 : Starting While : ' + CONVERT(VARCHAR, GETDATE(), 109)
SELECT @ProgressReport = @ProgressReport + '@SVP_ID : ' + CONVERT(VARCHAR, 4654)
SELECT @ProgressReport = @ProgressReport + '@SwitchOut_PartitionNumber : ' + CONVERT(VARCHAR, 54)
SELECT @ProgressReport = @ProgressReport + '4 : Switch completed : ' + CONVERT(VARCHAR, GETDATE(), 109)
SELECT @ProgressReport = @ProgressReport + '5 : Merge Completed : ' + CONVERT(VARCHAR, GETDATE(), 109)
SELECT @ProgressReport = @ProgressReport + '6 : While Completed : ' + CONVERT(VARCHAR, GETDATE(), 109)

PRINT @ProgressReport
GO

The result looks like this (not user friendly):

[Screenshot: PRINT output without newline or carriage return characters, all status entries run together on one line]

So, to make it pretty, or at least more readable, the ‘new line’ and ‘carriage return’ characters come in handy.

User friendly format

--
--  Storing procedure execution progress in a status variable for troubleshooting with CHAR(13) + CHAR(10)
--

DECLARE @ProgressReport VARCHAR(8000)

SELECT @ProgressReport = '1 : Start : ' + CONVERT(VARCHAR, GETDATE(), 109) + CHAR(13) + CHAR(10)
SELECT @ProgressReport = @ProgressReport + '2 : @SV_Stats : ' + CONVERT(VARCHAR, GETDATE(), 109) + CHAR(13) + CHAR(10)
SELECT @ProgressReport = @ProgressReport + '3 : Starting While : ' + CONVERT(VARCHAR, GETDATE(), 109) + CHAR(13) + CHAR(10)
SELECT @ProgressReport = @ProgressReport + CHAR(9) + '@SVP_ID : ' + CONVERT(VARCHAR, 4654) + CHAR(13) + CHAR(10)
SELECT @ProgressReport = @ProgressReport + CHAR(9) + '@SwitchOut_PartitionNumber : ' + CONVERT(VARCHAR, 54) + CHAR(13) + CHAR(10)
SELECT @ProgressReport = @ProgressReport + CHAR(9) + '4 : Switch completed : ' + CONVERT(VARCHAR, GETDATE(), 109) + CHAR(13) + CHAR(10)
SELECT @ProgressReport = @ProgressReport + CHAR(9) + '5 : Merge Completed : ' + CONVERT(VARCHAR, GETDATE(), 109) + CHAR(13) + CHAR(10)
SELECT @ProgressReport = @ProgressReport + '6 : While Completed : ' + CONVERT(VARCHAR, GETDATE(), 109) + CHAR(13) + CHAR(10)

PRINT @ProgressReport 
GO
[Screenshot: PRINT output with newline and carriage return characters, one status entry per line]

With CHAR(13) + CHAR(10), the result looks more user-friendly; it reads more like the actual code, showing its status at every step, which saves a lot of time in troubleshooting. From MSDN: “CHAR can be used to insert control characters into character strings. The following table shows some frequently used control characters.”

Control character    Value
Tab                  CHAR(9)
Line feed            CHAR(10)
Carriage return      CHAR(13)
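
One related note: PRINT output is buffered, so messages may not appear until the batch completes. For long-running scripts, RAISERROR with severity 0 and the NOWAIT option flushes each message to the client immediately:

--
--  RAISERROR ... WITH NOWAIT sends the message immediately, unlike PRINT
--
DECLARE @Msg VARCHAR(100) = '4 : Switch completed : ' + CONVERT(VARCHAR, GETDATE(), 109)
RAISERROR(@Msg, 0, 1) WITH NOWAIT
GO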

Hope this helps,
_Sqltimes

Quick one today:

Again today, while playing with one of our customer databases in our lab environment, I ran into another interesting error message.

Msg 1833, Level 16, State 2, Line 5
File 'filename' cannot be reused until after the next BACKUP LOG operation. If the 
database is participating in an availability group, a dropped file can be reused 
only after the truncation LSN of the primary availability replica has passed the 
drop LSN of the file and a subsequent BACKUP LOG operation has completed.

Essentially, it is saying:

  • The same file name (.ndf file) cannot be reused until a BACKUP LOG is performed.
  • If you have Availability Groups configured, the same file name cannot be re-created until the “remove file” change has successfully synchronized to the replicas.

This is how I ran into this issue:

Step 1: Create a new FG and File

--
-- Create a filegroup
--
ALTER DATABASE SampleDB
ADD FILEGROUP [SV_FG]
GO

--
-- Add a data file to it
--
ALTER DATABASE SampleDB
ADD FILE (
     NAME      =   N'NewDataFile'
   , FILENAME  =   N'I:\MSSQL\Data\NewDataFile.NDF'
   , SIZE      =   2GB
   , FILEGROWTH=   10%
)
TO FILEGROUP [SV_FG]
GO

Step 2: DROP the FG and data file

--
-- Now drop the data file and its associated filegroup
--
ALTER DATABASE SampleDB
REMOVE FILE NewDataFile
GO

ALTER DATABASE SampleDB
REMOVE FILEGROUP [SV_FG]
GO

Step 3: Recreate same FG and data file

--
-- Now re-create the same filegroup and file.
-- FG runs successfully, but when you create the file, it throws an error message
--

--
-- Create a filegroup
--
ALTER DATABASE SampleDB
ADD FILEGROUP [SV_FG]
GO

--
-- Add a data file to it
--
ALTER DATABASE SampleDB
ADD FILE (
     NAME      =   N'NewDataFile'
   , FILENAME  =   N'I:\MSSQL\Data\NewDataFile.NDF'
   , SIZE      =   2GB
   , FILEGROWTH=   10%
)
TO FILEGROUP [SV_FG]
GO

Resolution

Creating FG runs successfully, but creating the data file throws an error. Following the instructions in the error message, once I performed transactional log backup, a new data file (.ndf) was successfully created with the same old name.

--
-- Perform transaction log backup
--
BACKUP LOG SampleDB
TO DISK = N'M:\MSSQL\Backup\MSSQLSERVER\SampleDB.TRN'
GO

Now, when the same script to create the new filegroup and file is executed, it runs successfully.

--
-- Add a data file to it
--
ALTER DATABASE SampleDB
ADD FILE (
     NAME      =   N'NewDataFile'
   , FILENAME  =   N'I:\MSSQL\Data\NewDataFile.NDF'
   , SIZE      =   2GB
   , FILEGROWTH=   10%
)
TO FILEGROUP [SV_FG]
GO
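
To confirm the filegroup and data file were re-created (or to check their state at any point in this sequence), the catalog views give a quick picture:

--
--  List files and their filegroups in the current database
--
USE SampleDB
GO
SELECT	  DF.name            AS [Logical_File_Name]
	, DF.physical_name   AS [Physical_File_Name]
	, DF.state_desc      AS [State]
	, FG.name            AS [FileGroup]
FROM sys.database_files AS DF
LEFT JOIN sys.filegroups AS FG
	ON FG.data_space_id = DF.data_space_id   -- log files have no filegroup
GO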

Conclusions

So, why does Sql Server insist that we take a transaction log backup when a data file is dropped and re-created? My guess is:

  • The “REMOVE FILE” change must be synchronized to the replicas before a new file with the same name is created.
  • Sql Server wants to secure the changes. Since file-level changes are not the same as changes to tables, functions, or other objects, maybe Sql Server wants to secure them in a safe backup file before moving on to further changes. If you have a better or correct explanation, please share it in the comments.

Hope this helps,
_Sqltimes
