Feeds:
Posts
Comments

Archive for June, 2013

A few days ago, we ran into a space issue on one of our production database servers. One of the databases, which usually stays around 300 GB and grows by 1 or 2 GB per day, grew to more than 500 GB in a couple of days. This is one of the critical servers that houses data received from devices located all over the globe, so we had to act quickly before the database came to a halt.

Side Note: When a database grows and takes up all the available space on the disk, it effectively stops accepting writes. I am not sure if it technically goes into READ_ONLY mode, but it responds only to SELECT queries, and UPDATE/INSERT statements that add data to tables in the database fail.
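A quick way to keep an eye on how much free space is left inside each data file is to query sys.database_files; a minimal sketch (sizes are in 8 KB pages, so dividing by 128 gives MB):

```sql
-- size and FILEPROPERTY(..., 'SpaceUsed') are both in 8 KB pages.
SELECT name,
       size / 128 AS size_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files;
```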

So, we immediately added a couple of data files on a different drive to let the database grow and buy us some time to implement a complete solution. In the last few weeks, the amount of data received had increased significantly, and the purge job that was supposed to keep the database relatively small had not been running as expected. Each run completed successfully, so no alerts were raised, but it was not deleting the data as expected.
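Adding a data file on another drive is a one-statement change. A minimal sketch, assuming a hypothetical file name and path:

```sql
-- Hypothetical logical name and path; adjust to your environment.
ALTER DATABASE [DB]
ADD FILE
(
    NAME = N'DB_Data2'
  , FILENAME = N'E:\SQLData\DB_Data2.ndf'
  , SIZE = 10240 MB
  , FILEGROWTH = 1024 MB
);
GO
```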

We refactored the purge job and let it run for a few hours at a time. After a couple of days of this, we had purged more than a billion records and freed up more than 200 GB of space inside the database.
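The refactored job itself is not shown here; a common pattern for purging that many rows without ballooning the transaction log is to delete in small batches. A hypothetical sketch, assuming a dbo.DeviceData table with a ReceivedDate column and a 90-day retention window:

```sql
-- Delete in batches of 10,000 so each transaction stays small
-- and the log can be reused between batches.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000)
    FROM dbo.DeviceData
    WHERE ReceivedDate < DATEADD(DAY, -90, GETUTCDATE());

    SET @rows = @@ROWCOUNT;
END
GO
```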

Side Note: This database should’ve been designed with table partitions to allow for simple maintenance, but it was not. Maybe in the near future we’ll look into it.

Now we needed to release some of this space back to the operating system. But when I ran DBCC SHRINKFILE, we ran into this error:


DBCC SHRINKFILE (DB_Data, 307200, NOTRUNCATE)  -- move pages; target 300 GB
GO

ERROR:

File of database cannot be shrunk as it is either being shrunk by another process or is empty.

--  DBCC SHRINKFILE (DB_Data, 307200, TRUNCATEONLY)  -- release trailing free space (target size is ignored)

--  GO

No one else was running SHRINKFILE at the same time, and I know the file is not empty. Also, no backup operation was running (a running backup sometimes causes this error).
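One way to verify that no other session is shrinking the file is to check the active requests; a shrink in progress shows up with a distinctive command name:

```sql
-- DBCC SHRINKFILE appears in sys.dm_exec_requests as 'DbccFilesCompact'.
SELECT session_id, command, percent_complete
FROM sys.dm_exec_requests
WHERE command IN ('DbccFilesCompact', 'DbccSpaceReclaim');
```

An empty result set here rules out the "being shrunk by another process" half of the error message.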

According to this MSDN article, we tried the suggested workaround: “You may continue to receive this error message until you restart the SQL Server service. Or, you may continue to receive this error message until you take the database offline and then put the database online again.” But that did not solve the problem. On further digging, we ran into this article on DBA Diaries.

We tried option 2 and it worked: increase the data file by a few MB and reattempt the SHRINKFILE operation.

ALTER DATABASE [DB]
MODIFY FILE
 (
 NAME = N'DB_Data'
 , SIZE = 307210 MB  -- 10 MB past the 300 GB shrink target
 )
GO

After this, I ran DBCC SHRINKFILE again and was able to reclaim the free space on the drive.

Side Note: SHRINKFILE is not a common operation. Rarely does anyone run it on production machines; it is usually a tool for a unique situation, or a last resort. It also creates a lot of index fragmentation. Please read this MSDN article thoroughly before attempting it.
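Because of that fragmentation, it is worth rebuilding the most fragmented indexes afterward. A minimal sketch for a single hypothetical table:

```sql
-- Rebuilding removes the fragmentation SHRINKFILE introduces, but it
-- also grows the file again, so rebuild selectively, not wholesale.
ALTER INDEX ALL ON dbo.DeviceData REBUILD;
GO
```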

Hope this helps,
_Sqltimes

Read Full Post »

A few weeks ago, as I was running some tests on our Database Mirroring session in the DBA environment (a development environment for DBAs), I ran into this error:


Msg 1473, Level 16, State 6, Line 1
This SQL Server edition does not allow changing the safety level. ALTER DATABASE command failed.

My earlier (incorrect) understanding was that synchronous mirroring is possible only with the Enterprise edition of Sql Server, and that all other editions only allow asynchronous transmission of logs from Principal to Mirror. After running into this error, I stand corrected: it is the other way around.

This is the complete story:

Operating Mode    | Transaction Safety | Transfer Mechanism | Quorum Required | Witness Server | Failover Type
------------------|--------------------|--------------------|-----------------|----------------|---------------------
High Availability | FULL               | Synchronous        | Y               | Y              | Automatic or Manual
High Protection   | FULL               | Synchronous        | Y               | N              | Manual only
High Performance  | OFF                | Asynchronous       | N               | N/A            | Forced only

Transaction Safety & Transfer Mechanism

Standard edition can only operate in SAFETY FULL transaction safety mode, which means that transactions occurring at the Principal are synchronously transmitted to the Mirror. When a transaction occurs on the Principal, its log records first go to the Principal‘s transaction log. As they are hardened, they are sent to the Mirror. Once the log is hardened on the Mirror, an ‘ack‘ is sent back to the Principal, and then the transaction is committed on the Principal. On successful completion of this cycle, the commit acknowledgment is sent back to the application/user that initiated the transaction.
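The safety level is changed on the principal with a single statement ([DB] stands in for the mirrored database name):

```sql
-- Synchronous (high-safety) mode; the only mode Standard edition supports.
ALTER DATABASE [DB] SET PARTNER SAFETY FULL;
GO
```

Running the opposite statement (SAFETY OFF) on a Standard edition principal is exactly what raises the Msg 1473 error above.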

Operating Mode

With SAFETY FULL, if we have a witness, failover is automatic; if not, failover is manual. SAFETY FULL has two operating modes:

  • High Availability: a witness is configured, allowing automatic failover
  • High Protection: no witness is configured, so failover is manual
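Adding a witness (which takes a session from High Protection to High Availability) is a one-line change on the principal; the endpoint address below is a hypothetical placeholder:

```sql
-- Hypothetical witness endpoint; replace with your witness server's address.
ALTER DATABASE [DB] SET WITNESS = 'TCP://witness.example.com:5022';
GO
```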
Feature Name           | Enterprise | Standard               | Workgroup    | Web          | Express      | Express Tools | Express Advanced
-----------------------|------------|------------------------|--------------|--------------|--------------|---------------|------------------
Database mirroring [2] | Yes (full) | Yes (safety full only) | Witness only | Witness only | Witness only | Witness only  | Witness only

[2] Asynchronous database mirroring is supported only by SQL Server 2005 Enterprise Edition SP1 and later versions.

High Performance is only available in Enterprise Edition.

In High Performance mode, transactions are committed on the Principal first, as they occur; they are then transferred to the Mirror asynchronously. This reduces the latency in sending the commit acknowledgment back to the user/application that started the transaction.
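Switching to this mode (on Enterprise edition) is the same command with SAFETY OFF:

```sql
-- Asynchronous (high-performance) mode; Enterprise edition only.
ALTER DATABASE [DB] SET PARTNER SAFETY OFF;
GO
```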


Hope this helps,
_Sqltimes

Read Full Post »

Sql Server is a complicated and efficient data storage and retrieval engine. It has many unique and fascinating techniques to make the overall engine more efficient. One such concept is Advanced Scanning.

In any database, many users run queries simultaneously. All queries, more or less, hit the same set of tables: a few popular (core) tables and a bunch of lookup or supporting tables. While each user may be querying a unique set of records, there is often a lot of overlap in the records queried by multiple users. This is where Advanced Scanning comes into play.

According to MSDN: “In SQL Server Enterprise, the advanced scan feature allows multiple tasks to share full table scans. If the execution plan of a Transact-SQL statement requires a scan of the data pages in a table and the Database Engine detects that the table is already being scanned for another execution plan, the Database Engine joins the second scan to the first, at the current location of the second scan. The Database Engine reads each page one time and passes the rows from each page to both execution plans. This continues until the end of the table is reached.”

Example from the MSDN article:

For example, assume that you have a table with 500,000 pages.

[Figure: Advanced Scanning]

UserA executes a Transact-SQL statement that requires a scan of the table. When that scan has processed 100,000 pages, UserB executes another Transact-SQL statement that scans the same table. The Database Engine schedules one set of read requests for pages after 100,001, and passes the rows from each page back to both scans. When the scan reaches the 200,000th page, UserC executes another Transact-SQL statement that scans the same table. Starting with page 200,001, the Database Engine passes the rows from each page it reads back to all three scans. After it reads the 500,000th row, the scan for UserA is complete, and the scans for UserB and UserC wrap back and start to read the pages starting with page 1. When the Database Engine gets to page 100,000, the scan for UserB is completed. The scan for UserC then keeps going alone until it reads page 200,000. At this point, all the scans have been completed.

This allows queries that retrieve the same set of records to share buffer space, and it reduces disk contention: the pages are read once and used multiple times for multiple queries.
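Note that advanced scan is an Enterprise-edition feature and kicks in on its own; there is nothing to enable, but we can at least confirm the edition we are running on:

```sql
-- Advanced scan (merry-go-round scan) requires Enterprise edition.
SELECT SERVERPROPERTY('Edition') AS Edition;
```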

Side Note: By default, the Shared Buffer is shared among all queries. Any data already available in the Shared Buffer is used to respond to queries rather than going to disk every time. As you can see above, Advanced Scanning goes beyond that.

Hope this helps,
_Sqltimes

Read Full Post »