Archive for May, 2013

Another quick one:

Sometimes, when you try to move data/log files from one location to another, you move all the objects out of a file to empty it, and then try to delete the file using ALTER DATABASE. But when you do that on the primary data or log file, you run into this error:

Server: Msg 5020, Level 16, State 1, Line 1
The primary data or log file cannot be removed from a database.

This is because SQL Server does not allow removing the first data/log file that was created along with the database.
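For reference, here is a minimal sketch of the attempt that triggers it, assuming the primary log file's logical name is DBADB_Log (the same name used in the example below):

USE DBADB
GO
-- Empty the file first (EMPTYFILE stops new allocations to it and, for a
-- data file, migrates its contents to the other files in the filegroup)
DBCC SHRINKFILE (N'DBADB_Log', EMPTYFILE)
GO
-- The removal step is what fails with Msg 5020, because DBADB_Log is the
-- first log file created with the database
ALTER DATABASE DBADB REMOVE FILE DBADB_Log
GO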

Since removing the primary data/log file is not allowed, we can instead move it to a new location by bringing the database offline. This is not everyone's favourite option, but it is a solution.

  • Step 1: Bring the database OFFLINE

ALTER DATABASE DBADB SET OFFLINE

  • Step 2: Move the file to the new location (a file-system copy, done at the OS level).
  • Step 3: Point the database metadata at the new file location

ALTER DATABASE DBADB
MODIFY FILE
(
  NAME = N'DBADB_Log'
, FILENAME = N'L:\MSSQL\Log\DBADB_Log.LDF'
)

  • Step 4: Bring the database ONLINE

ALTER DATABASE DBADB SET ONLINE
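Before bringing the database back online (or right after), the recorded path can be double-checked in the metadata with a quick query against sys.master_files:

-- Verify the file path recorded in the metadata; the MODIFY FILE change
-- takes effect when the database comes back online
SELECT DB_NAME(database_id) AS [database_name]
     , name AS [logical_name]
     , physical_name
     , state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'DBADB')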

If you have a better solution, please share it in the comments section.

Hope this helps,
_Sqltimes


Although the day started as just any other day, it is ending with a great surprise that left me with much-desired satisfaction and motivation. I had an opportunity to use MERGE in a production environment.

It was a few years ago (I think) that I first heard about the MERGE statement. I got excited, learnt all about it, and ran it several times in Play and Dev environments, but I was never in a situation where using MERGE seemed appropriate and perfect !! It feels like all my efforts have been stacking up to this event.

Now, we are in the middle of a data migration effort that involves migrating a selected subset of data from the old PROD database to the new PROD database, before ‘un-plugging’ the old database. Though both databases are used by the same application, the underlying data models are much different. Another reason why DBAs must be involved in data model design discussions. Agile does not give anyone a free pass to do careless development and reassure yourself by saying “we’ll change it later”. Anyway, I digress.

After realizing they could not migrate the data themselves, the developers called us, the DBAs.

Back to MERGE. After building out SQL scripts to accomplish this migration with a set-based approach (not procedural), I asked the developers to run some tests. After all the testing, one major flaw was identified: duplicate entries in one table, caused by accounts that overlapped between the old and new databases. Now, how do I solve this without making the code any more complicated?

Off the top of my head, there are two ways:

  • Use MERGE
  • Use two separate transactions: one does the INSERT, the other does the UPDATE for matching records.

Perfect !! Use MERGE.


WITH Accounts_CTE (Merchant_ID, Old_Account_ID, Old_Enterprise_ID)
 AS
 (
    SELECT Merchant_ID AS [Merchant_ID]
       ...
       ...
       ...
    WHERE A.Account_Type IN ('pp')
 )

MERGE Sales.dbo.Account AS TGT -- Target Table

USING ( SELECT Merchant_ID
             , Old_Account_ID
             , Old_Enterprise_ID 
        FROM Accounts_CTE) as SRC 
    (Merchant_ID, Old_Account_ID, Old_Enterprise_ID)      -- this is the source table

    ON (TGT.merchant_id = SRC.merchant_id)      -- important condition for MERGE

WHEN MATCHED THEN
   UPDATE SET TGT.Old_Account_ID    =  SRC.Old_Account_ID
            , TGT.Old_Enterprise_ID =  SRC.Old_Enterprise_ID

WHEN NOT MATCHED THEN
   INSERT (merchant_id, Old_Account_ID, Old_Enterprise_ID)
   VALUES (SRC.Merchant_ID, SRC.Old_Account_ID, SRC.Old_Enterprise_ID);

GO

This simple and intuitive code helped me avoid two separate transactions, or in the worst case a procedural looping method. On top of it all, MERGE is optimized in recent versions to be more efficient than the equivalent individual queries. You can read all about it in this MSDN article.
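For contrast, here is a sketch of the two-transaction alternative that MERGE replaced. A CTE is scoped to a single statement, so assume its rows were materialized into a hypothetical temp table #Accounts with the same three columns:

-- Statement 1: UPDATE the matching records
UPDATE TGT
   SET TGT.Old_Account_ID    = SRC.Old_Account_ID
     , TGT.Old_Enterprise_ID = SRC.Old_Enterprise_ID
FROM Sales.dbo.Account AS TGT
INNER JOIN #Accounts AS SRC
    ON TGT.merchant_id = SRC.Merchant_ID;

-- Statement 2: INSERT the records that do not exist in the target yet
INSERT INTO Sales.dbo.Account (merchant_id, Old_Account_ID, Old_Enterprise_ID)
SELECT SRC.Merchant_ID, SRC.Old_Account_ID, SRC.Old_Enterprise_ID
FROM #Accounts AS SRC
WHERE NOT EXISTS (SELECT 1
                  FROM Sales.dbo.Account AS TGT
                  WHERE TGT.merchant_id = SRC.Merchant_ID);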

Note: a MERGE statement must be terminated with a semicolon (;).

As expected, with the above logic, MERGE lets me INSERT new records into the table (from the CTE) and UPDATE the records that match, all in one fell swoop. I think MERGE deserves a separate post of its own detailing the benefits and applicable scenarios.
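One more feature worth noting for that post: MERGE supports an OUTPUT clause that reports what happened to each row, which is handy for verifying a migration. A minimal, self-contained sketch using hypothetical demo tables:

-- Hypothetical demo tables, just to show OUTPUT on a MERGE
CREATE TABLE #Target (id int PRIMARY KEY, val varchar(10));
CREATE TABLE #Source (id int PRIMARY KEY, val varchar(10));

INSERT INTO #Target VALUES (1, 'old'), (2, 'keep');
INSERT INTO #Source VALUES (1, 'new'), (3, 'add');

MERGE #Target AS TGT
USING #Source AS SRC
    ON TGT.id = SRC.id
WHEN MATCHED THEN
    UPDATE SET TGT.val = SRC.val
WHEN NOT MATCHED THEN
    INSERT (id, val) VALUES (SRC.id, SRC.val)
OUTPUT $action AS [merge_action], inserted.id, inserted.val;  -- row 1: UPDATE, row 3: INSERT

DROP TABLE #Target;
DROP TABLE #Source;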

Hope this helps,
_Sqltimes


Guys, a quick post.

Sometimes when you run DBCC SHRINKFILE on the log file, you get this error:

Cannot shrink log file 2 (DBADB_log) because the logical log file located at the end of the file is in use.
(1 row(s) affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

There are a couple of options to fix this:

  • Try querying the sys.databases catalog view.
    • SELECT name, log_reuse_wait_desc FROM sys.databases
    • The column ‘log_reuse_wait_desc’ shows the reason why. In my case, it was waiting for a LOG backup to occur before the file could be shrunk. Once I performed the TRN backup, the SHRINKFILE ran successfully (the full sequence is sketched after the queries below).
  • Change the recovery model
    • If the above step does not work, try changing the recovery model of the database from FULL to SIMPLE. Once in SIMPLE mode, the LOG file is automatically reduced to a smaller size. Then revert to the FULL recovery model.
    • Make sure you perform a FULL backup and a TRN backup after the recovery model change. This is because switching to SIMPLE breaks the log backup chain; a new FULL backup is needed to restart the chain before log backups can resume.
  • Use these queries to measure the usage in each file.
-- all files
SELECT name
, size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files

-- Log file usage
DBCC SQLPERF (LOGSPACE)
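Putting the first option together, here is a minimal sketch of the whole sequence (the backup path and target size are placeholders; file_id 2 is the log file, as in the error above):

-- 1. Find out what the log is waiting on
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'DBADB'

-- 2. If it reports LOG_BACKUP, take a log backup (placeholder path)
BACKUP LOG DBADB TO DISK = N'L:\Backup\DBADB_log.trn'

-- 3. The shrink should now succeed (1024 MB is a placeholder target size)
USE DBADB
DBCC SHRINKFILE (2, 1024)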

Hope this helps,

_Sqltimes


Quick one:

Using this query, you can check the size of a database file and track its growth over the last few days.

When SQL Server performs a backup, it records each database file's current size (along with other details) in the msdb database. So we can leverage this data to map the overall growth in the database file size.


SELECT BS.database_name
 , BF.logical_name
 , BF.file_size/(1024*1024*1024) AS [file_size_in_GB]
 , BS.backup_finish_date
 , BF.physical_name
FROM msdb.dbo.backupfile BF
INNER JOIN msdb.dbo.backupset AS BS
 ON BS.backup_set_id = BF.backup_set_id
 AND BS.database_name = 'NKEnterprise'        -- name of the database
WHERE BF.logical_name = 'NKEnterprise_Data'   -- the data file whose growth I want to track
ORDER BY BS.backup_finish_date

[Figure: DB File Size Growth Track]
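To turn that history into actual growth numbers, the difference between consecutive backups can be computed with LAG. A sketch, assuming SQL Server 2012 or later (LAG does not exist in earlier versions; the first row shows NULL growth because it has no previous backup):

-- Backup-over-backup growth of the data file, derived from backup history
SELECT BS.database_name
 , BF.logical_name
 , BS.backup_finish_date
 , BF.file_size/(1024*1024) AS [size_in_MB]
 , (BF.file_size - LAG(BF.file_size) OVER (ORDER BY BS.backup_finish_date))
   /(1024*1024) AS [growth_in_MB]
FROM msdb.dbo.backupfile BF
INNER JOIN msdb.dbo.backupset AS BS
 ON BS.backup_set_id = BF.backup_set_id
 AND BS.database_name = 'NKEnterprise'
WHERE BF.logical_name = 'NKEnterprise_Data'
ORDER BY BS.backup_finish_date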

Hope this helps,
_Sqltimes


Guys,

There is a huge collection of free electronic books available on the Microsoft website for some (or most) of their technologies. The link may or may not last for long. Grab what you need soon !!!

http://social.technet.microsoft.com/wiki/contents/articles/11608.e-book-gallery-for-microsoft-technologies.aspx

I downloaded the High Availability book and read one of the chapters. It reaffirmed the solution we implemented for one of the problems we were facing with Database Mirroring.

Yeay !! Thanks Microsoft.

Hope you take advantage of it,
_Sqltimes
