
Archive for the ‘Weird Anamoly’ Category

Problem

While gathering replication backlog details, we ran into this interesting error. The goal was to run the sp_replmonitorsubscriptionpendingcmds stored procedure and store the output in a table. As we’ve seen in a recent post, redirecting the output of a stored procedure execution into a table is possible; but in this case, it throws an error saying that it is not allowed.

INSERT INTO #DC1_Repl_Backlog
EXEC  sp_replmonitorsubscriptionpendingcmds
		  @publisher	= 'InstanceName'
		, @publisher_db	= 'DBName'
		, @publication	= 'Publication'
		, @subscriber	= 'Subscriber'
		, @subscriber_db= 'DBName2'
		, @subscription_type = '0'
GO
Msg 8164, Level 16, State 1, Procedure sp_replmonitorsubscriptionpendingcmds, Line 233
An INSERT EXEC statement cannot be nested.

(0 row(s) affected)

With the available information, a clear and coherent explanation for this behavior is not available from my end. But my guess is this: the code inside this stored procedure must itself be using a similar INSERT INTO #table EXEC sp_xyz pattern, hence the error “An INSERT EXEC statement cannot be nested.”

Resolution

OPENROWSET helps in getting around this. See the sample code below:

--
-- Capture the output of sp_replmonitorsubscriptionpendingcmds via OPENROWSET
--
IF OBJECT_ID('tempdb..#DC1_Repl_Backlog') IS NOT NULL
	DROP TABLE #DC1_Repl_Backlog

CREATE TABLE #DC1_Repl_Backlog (
	  pendingcmdcount	BIGINT
	, estimatedprocesstime	BIGINT
)

INSERT #DC1_Repl_Backlog (pendingcmdcount, estimatedprocesstime)
SELECT *
FROM OPENROWSET('SQLOLEDB',
		'Server=InstanceName;Trusted_Connection=yes;',
		'EXEC Distribution.dbo.sp_replmonitorsubscriptionpendingcmds
					  @publisher = ''PublisherInstance''
					, @publisher_db	= ''DBName''
					, @publication = ''Publication''
					, @subscriber = ''Subscriber''
					, @subscriber_db = ''DBName2''
					, @subscription_type = ''0'''
		) 

SELECT *
FROM #DC1_Repl_Backlog
GO
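
Note: ad hoc OPENROWSET queries like the one above require the ‘Ad Hoc Distributed Queries’ server option; if it is disabled, the query fails with an error saying access to that component is blocked. A minimal sketch to enable it (a server-level setting that needs sysadmin rights; enable it only if that is acceptable in your environment):

--
-- Enable ad hoc distributed queries (required for ad hoc OPENROWSET/OPENDATASOURCE)
--
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
GO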
Hope this helps,
_Sqltimes


Interesting one today:

For the last few months, on and off, there have been opportunities to run some interesting tests in our lab environment. This resulted in some good posts in the last few weeks. Adding to that tradition is another interesting topic: the Uniquifier.

Context:

Imagine a table with multiple records, a clustered index, and a bunch of non-clustered indexes. In a non-clustered index, the b-tree is structured based on the index keys, and at the bottom of the tree, the leaf level points back to the clustered index using the clustering key. Now imagine the same scenario with a non-unique clustered index, so there could be multiple records with the same clustering key values. The dependent non-clustered indexes now need a way to uniquely identify between the identical-looking entries. Enter the Uniquifier column !!

Solution

An extra 4-byte column called the uniquifier is added to all non-clustered indexes to uniquely distinguish between multiple index entries that point to the same clustering key.

Let’s take an example:

We’ll re-use some of the code from previous posts for this.

--
-- Create a dummy table to test DBCC IND
--
IF EXISTS (SELECT * FROM sys.objects WHERE name = 'Uniquifier_Test' AND type = 'U')
 DROP TABLE dbo.Uniquifier_Test
GO

CREATE TABLE dbo.Uniquifier_Test (
    ID INT NOT NULL DEFAULT (1)
  , Name VARCHAR(5) NOT NULL DEFAULT('aa')
)
GO

--
-- Create Clustered and NonClustered indexes
--
CREATE CLUSTERED INDEX CI_Uniquifier_Test_ID
    ON dbo.Uniquifier_Test(ID ASC)
GO

CREATE NONCLUSTERED INDEX nCI_Uniquifier_Test_Name
    ON dbo.Uniquifier_Test(Name ASC)
GO

--
-- Let's insert some dummy records
--
INSERT INTO dbo.Uniquifier_Test (ID, Name)
VALUES (1, 'aa')
     , (2, 'bb')
     , (3, 'cc')
     , (4, 'dd')
GO

SELECT *
FROM dbo.Uniquifier_Test
GO

Now, let’s look at the contents of the non-clustered index pages. For more details on retrieving the PageID and querying page contents, please refer to the previous posts.

--
-- Retrieve PageID of nCI
--
DBCC IND (test, Uniquifier_Test, -1)
GO

--
-- Retrieve contents of nCI page
--
DBCC TRACEON (3604)
DBCC PAGE(test, 1, 34535, 3)
GO

As you can see, along with the nCI key column, Name, we also have the clustering key (ID) added to the nCI b-tree structure. Along with that there is a new column called UNIQUIFIER added to the non-clustered index pages. Since we did not add any duplicate values, the UNIQUIFIER column is set to zero.

Uniquifier (before duplicate entries)

Now, let’s add some duplicate entries.

--
-- Let's insert some duplicate records
--
INSERT INTO dbo.Uniquifier_Test (ID, Name)
VALUES (1, 'aa')
     , (2, 'bb')
GO

As you can see, where there are duplicate entries, the UNIQUIFIER column adds a unique value to distinguish between them. This incrementing number is scoped to each set of duplicate entries. The two duplicate rows for (1, ‘aa’) have UNIQUIFIER values of 0 and 1 respectively. For the next set of duplicates, the incrementing value starts over from 0. So it can accommodate a lot of duplicate entries.

Uniquifier after adding duplicate entries

So, the uniquifier column helps Sql Server distinguish between two identical entries in the non-clustered index.
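
One way to see the extra bytes the uniquifier takes is to compare record sizes in the non-clustered index before and after the duplicate INSERTs; a minimal sketch, assuming the same ‘test’ database and index names used above:

--
-- Compare min/max record sizes in the non-clustered index (DETAILED mode)
--
SELECT index_level
     , min_record_size_in_bytes
     , max_record_size_in_bytes
     , record_count
FROM sys.dm_db_index_physical_stats (
       DB_ID('test')
     , OBJECT_ID('dbo.Uniquifier_Test')
     , NULL        -- all indexes
     , NULL        -- all partitions
     , 'DETAILED')
WHERE index_id = INDEXPROPERTY(OBJECT_ID('dbo.Uniquifier_Test'), 'nCI_Uniquifier_Test_Name', 'IndexID')
GO

After the duplicate INSERTs, the max record size should tick up for the entries that actually carry a non-zero uniquifier.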

Hope this helps,
_Sqltimes


Interesting topic today:

In Sql Server, as we all know, concurrency is maintained through locks & latches. When a particular row is being retrieved for any activity, a lock is requested on that row (a shared or exclusive lock, depending on the nature of the activity performed). Once the activity is completed, the lock is released. But when a large number of records in a table are retrieved, something interesting happens: rather than issuing individual locks on each row, Sql Server performs lock escalation to lock the entire table.

This behavior is helpful in some scenarios and detrimental in others. Each lock takes some resources (memory, etc.), so establishing a large number of smaller locks (ROWLOCKs) adds up to a lot of resources. To avoid that, Sql Server escalates the lock to either PAGE level or TABLE level (depending on the scenario). Sometimes this could result in longer waits for the PAGE (or the entire table) to be freed from other locks before the new lock request can be granted.

To avoid all of this, developers sometimes use the ROWLOCK hint to force Sql Server to keep row-level locks even when performing operations on a larger number of records (to avoid waiting for the entire PAGE/TABLE to be free from other locks). There are several pros and cons to this approach; as always, the details of the situation guide the best approach in each scenario. One exception is that, even though we use ROWLOCK, sometimes Sql Server will still force lock escalation.
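
As a side note (not used in the tests below), Sql Server 2008 and later also expose a per-table knob to influence escalation behavior; a hedged sketch, assuming the same dbo.SampleTable used in the tests:

--
-- Optional: control lock escalation at the table level
--   AUTO    - allows partition-level escalation on partitioned tables
--   TABLE   - the default; escalate straight to table level
--   DISABLE - prevents escalation in most cases (use with caution)
--
ALTER TABLE dbo.SampleTable SET (LOCK_ESCALATION = DISABLE)
GO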

Today, we’ll look at this one aspect of this automatic lock escalation:

Question: At what point does Sql Server force lock escalation even when ROWLOCK is used?

Let’s run DELETE on a table and see how lock escalation happens, at three different levels (an assumed test-table setup is sketched after this list):

  • 2000 ROWLOCK requests
  • 4000 ROWLOCK requests
  • 6000 ROWLOCK requests
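
The tests assume a populated dbo.SampleTable; here is a minimal, illustrative setup sketch (the column names and the 100,000-row load are assumptions, not the exact table used in the original tests):

--
-- Assumed test-table setup (illustrative)
--
IF OBJECT_ID('dbo.SampleTable', 'U') IS NOT NULL
	DROP TABLE dbo.SampleTable
GO

CREATE TABLE dbo.SampleTable (
	  ID      INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED
	, Payload CHAR(100) NOT NULL DEFAULT ('x')
)
GO

--
-- Load 100,000 rows so there is enough to DELETE from
--
INSERT INTO dbo.SampleTable (Payload)
SELECT TOP (100000) 'x'
FROM sys.all_objects a CROSS JOIN sys.all_objects b
GO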

Step 1: At 2000 ROWLOCKS

--
-- Let's run a large DELETE operation
--
DELETE TOP (2000)
FROM dbo.SampleTable WITH (ROWLOCK)
GO

Now, in a different SSMS window, let’s check the number of locks on the table.

--
-- Check locks on the table
--
EXEC sp_lock 56
WAITFOR DELAY '00:00:01'    -- keep running every second, to capture locks from other window
GO 10
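
sp_lock still works, but on newer builds the same picture can be pulled from sys.dm_tran_locks; a quick sketch, assuming the DELETE is running in session 56 as above:

--
-- Alternative: summarize locks held by the DELETE session using the locks DMV
--
SELECT resource_type
     , request_mode
     , request_status
     , COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 56
GROUP BY resource_type, request_mode, request_status
ORDER BY lock_count DESC
GO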

 

Row Level Locks Granted

As you can see, ROWLOCKs on keys are GRANTed. Now, let’s increase the batch size and see where the forced lock escalation happens.

Step 2: At 4000 ROWLOCKS

--
-- Let's run a large DELETE operation
--
DELETE TOP (4000)
FROM dbo.SampleTable WITH (ROWLOCK)
GO

Let’s check the lock situation:

ROWLOCKs granted at 4000

So, even at 4000, something interesting happens: some ROWLOCKs are issued and some PAGELOCKs are issued. It looks like, for some rows, the lock is escalated to the PAGELOCK level for efficiency. Let’s continue the effort.

Step 3: At 6000 ROWLOCKS

--
-- Let's run a large DELETE operation
--
DELETE TOP (6000)
FROM dbo.SampleTable WITH (ROWLOCK)
GO

Let’s review the locks situation:

ROWLOCKs at 6000

BINGO !!! As you can see, the lock escalation occurs to the TABLOCK level at 6000.

Initially, ROWLOCKs are issued; then, just a second later, some locks are escalated to PAGE and subsequently to TABLE level. This is interesting.

Another Nuance: Percentage

Question: Does this escalation occur based on a strict number (or range) or is it based on percentage of records being accessed in a table?

  • From the tests, it seems like lock escalation occurs based on the number of records being manipulated, and not the percentage of records in the table.
Table Size | Total Record Count | Count of Records Lock Requested | Type of Lock Granted
Small      | 2,100              | 2,000                           | ROWLOCK
Medium     | 35,000             | 2,000                           | ROWLOCK
Large      | 500,000            | 2,000                           | ROWLOCK

Conclusion:

  1. Sql Server makes intelligent estimations about what is better at each level and makes the best decision to escalate locks as needed. Keep the 2000, 4000 & 6000 in mind as a general rule of thumb, not a set-in-stone rule.
  2. At any given point, Sql Server makes the best judgement call on what is more efficient.
  3. Lock Escalation is based on the number of records on which lock is requested and not on the percentage of records relative to the total records in the table.

Important Note:

  1. This behavior applies to Sql Server 2012, as the behavior varies from version to version.
  2. In the past, for Sql Server 2008 and before, the number was at 1800 – 2200 before TABLE lock escalation occurred.
  3. If this continues, maybe for Sql Server 2016 the number will be slightly higher, as Microsoft improves lock efficiency by reducing the amount of resources required for each lock.
  4. This does not mean you should go ‘free range’ crazy and use ROWLOCK on every query (obviously) & use 4000 every time. Keep this information handy in making data retrieval decisions. Each ROWLOCK takes resources (memory & CPU), so we need to use caution and minimize the overhead. This is a refined technique to be used on the infrequent occasions that seem most suitable for it. Perform tests before using in production.

 

Hope this helps,
_Sqltimes


In our lab machines, quick clean-up activities sometimes become necessary; they occur frequently before and after some large batch testing scripts. Such situations include activities like:

  1. Reducing size of either log or data file
  2. Emptying transactional log file
  3. Deleting transactional log file

Note: Please be advised that such operations are not recommended on a production database. These will result in unpredictable and sometimes reduced performance.

In recent posts, we’ve covered the use of SHRINKFILE in different scenarios.

Important Points to keep in mind:

  • A SHRINK operation could be stopped at any time without losing the work completed thus far. It retains the progress made (re-allocations).
  • Shrinking a data or log file does not require single-user mode on the database. Other user activity could be running in parallel without any interference with the SHRINK work.
  • The SHRINK process could be delayed due to blocking from other user activity, so if possible, perform the SHRINK operation when there is less traffic.
  • A SHRINK operation is a single-threaded operation that methodically works through each data block, so it is time consuming.
  • SHRINK one file at a time (rather than in parallel); a quick space-check sketch follows this list.
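
Before shrinking, it helps to see how much free space each file actually has; a quick sketch, run in the context of the database being shrunk:

--
-- Allocated vs. used space per file in the current database (values in MB)
--
SELECT name
     , size / 128                                        AS allocated_mb
     , FILEPROPERTY(name, 'SpaceUsed') / 128             AS used_mb
     , (size - FILEPROPERTY(name, 'SpaceUsed')) / 128    AS free_mb
FROM sys.database_files
GO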

 

Following are the steps we follow:

Reducing Size of Log or Data File

In lab environment, to reduce the size of a bloated log or data file, we implement a version of the following steps:

Step 1:

  • Before freeing up any space back to the Operating System, we need to adjust the way space is occupied by all the database pages.
  • Sql Server will reallocate all used pages from the end of the physical file to earlier portions.
  • This allows the end of the physical file to be freed up.
--
-- SHRINK the data file down to 1 GB (reallocation)
--
USE [SampleDB]
GO
DBCC SHRINKFILE (N'Sample_Data2' , 1024) -- Reduce it to 1 GB
GO

Step 2:

  • Once the reallocation or adjustment is complete, we can issue the TRUNCATEONLY option to free up that space back to the Operating System.
  • This is when we see the physical file reduce in size.
--
-- Release space back to OS
--
USE [SampleDB]
GO
DBCC SHRINKFILE (N'Sample_Data2', TRUNCATEONLY)
GO

Emptying Transactional Log File

In lab environment, to empty entire transactional log file, we implement a version of the following steps:

--
-- Empty the secondary transactional log file
--
DBCC SHRINKFILE (SampleDB_log2, EMPTYFILE)
GO

Deleting Transactional Log File

In lab environment, to delete a transactional log file, we implement a version of the following steps:

--
-- To remove secondary log file, first we need to empty it. Then remove it
--
DBCC SHRINKFILE (SampleDB_log2, EMPTYFILE)
GO

ALTER DATABASE SampleDB
REMOVE FILE SampleDB_log2
GO
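
After the REMOVE FILE step, a quick check against sys.database_files confirms which files remain:

--
-- Verify the remaining files for the database
--
SELECT name
     , type_desc
     , size / 128 AS size_mb
     , physical_name
FROM SampleDB.sys.database_files
GO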

For more details, please refer to BoL

Hope this helps,
_Sqltimes


Interesting one today:

In replication, there are several amazing features & configurations that make it robust, dependable & high performing. These settings need to be leveraged correctly to squeeze out the best performance applicable for each environment. Today, we’ll cover a popular setting called NOT FOR REPLICATION on IDENTITY columns.

Concept:

In short, when NOT FOR REPLICATION is enabled on IDENTITY columns (or other constraints), the IDENTITY value is not incremented when INSERTs occur due to replication traffic. But all other direct application traffic will increment the IDENTITY value.
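
To check whether an existing IDENTITY column already has this flag set, sys.identity_columns exposes it; a quick sketch, assuming the dbo.SampleTable created later in this post:

--
-- Check the NOT FOR REPLICATION flag (and seed/increment) on IDENTITY columns
--
SELECT OBJECT_NAME(object_id)  AS table_name
     , name                    AS column_name
     , seed_value
     , increment_value
     , is_not_for_replication
FROM sys.identity_columns
WHERE object_id = OBJECT_ID('dbo.SampleTable')
GO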

Imagine a Sql Server Publisher, let’s say P, that is publishing data to a Sql Server Subscriber, let’s say S. Now, both P & S have a table called SampleTable with an IDENTITY column called ID. To make it easy to see the difference, let’s make their IDENTITY definition different at each location (P & S).

  • At Publisher, the IDENTITY value is defined as (1,10).
    • So, its values will be 1, 11, 21, 31, 41, etc.
  • At Subscriber, it is defined as (2, 10).
    • So, its values will be 2, 12, 22, 32, 42, etc.

The Set Up

With the above points, let’s create the table and set up replication between P & S. Following is some of the code used to create the table at the Publisher (P).

At Publisher

--
-- CREATE TABLE with IDENTITY set for NOT FOR REPLICATION
--
CREATE TABLE dbo.SampleTable(
     ID     INT          NOT NULL  IDENTITY(1,10)  NOT FOR REPLICATION   PRIMARY KEY   CLUSTERED
   , Name   VARCHAR(20)  NULL      DEFAULT('A')
)
GO

At Subscriber:

Similarly, on Subscriber, create a similar table with different IDENTITY definition.

--
-- CREATE TABLE with IDENTITY set for NOT FOR REPLICATION
--
CREATE TABLE dbo.SampleTable(
     ID     INT          NOT NULL  IDENTITY(2,10)  NOT FOR REPLICATION   PRIMARY KEY     CLUSTERED
   , Name   VARCHAR(20)  NULL      DEFAULT('B')
)
GO

So, there is no overlap between IDENTITY values generated at P & S.

Now let’s watch their behavior as data is INSERTED into both servers.

  1. When data is INSERTED directly into each location (P & S)
  2. When data is indirectly INSERTED into S due to replication traffic from P

Below is some more code used to check IDENTITY values, insert new data, etc., in these experiments.

--
-- Query the data
--
SELECT *
FROM dbo.SampleTable
ORDER BY ID ASC

--
-- Check the value of IDENTITY column at each step
--
SELECT IDENT_CURRENT('SampleTable')

--
-- Insert data directly into P
--
INSERT INTO dbo.SampleTable DEFAULT VALUES
GO

--
-- Manually insert data to introduce interesting scenarios
--
SET IDENTITY_INSERT dbo.SampleTable ON
INSERT INTO dbo.SampleTable (ID) VALUES(201)
SET IDENTITY_INSERT dbo.SampleTable OFF
GO

Run Experiments

With the above setup, let’s run through some scenarios and observe Sql Server’s behavior in each situation.

Scenario 1:

When data is INSERTed directly into P:

  • The IDENTITY values increment with each insert as 1, 11, 21, 31, etc.
  • Subsequently, those records are replicated to S, with same IDENTITY values.
  • But in all of this, the IDENTITY value at S, stays at 2
    • Since NOT FOR REPLICATION is set on the IDENTITY column on S.

When data is INSERTed directly to S:

  • The IDENTITY values are incrementing as per definition to 2, 12, 22, etc
  • Irrespective of the replication traffic from P, the IDENTITY at S only depends on the records INSERTed directly into S.
  • Table at S, has records from both P & S.
    • S will look something like: 1, 2, 11, 12, 21, 22, 31, 32, etc
    • Table at P will look like 1, 11, 21, 31, etc

Scenario 2: IDENTITY_INSERT

When a manual entry is made at P (using IDENTITY_INSERT) with a new IDENTITY value that does not match the pattern of the IDENTITY definition, subsequent IDENTITY values at P are based on the highest entry in the table. It uses the same INCREMENT definition, but increments from the current highest entry value in the table.

At Publisher:

  • Let’s say the SampleTable, at P, has entries like 1, 11, 21, 31 with next IDENTITY value as 41.
  • Now, if a new record is entered manually using IDENTITY_INSERT, with a new value of 26, it is successfully INSERTed.
    • Next IDENTITY value still remains at 41.
  • We can keep repeating these steps with different values like 7, 9, 13, 15, 17, 25, 28, 29 (as long as they are below 31).
    • INSERTs will be successful with no impact to next IDENTITY value, which is still at 41.
  • Now, if you perform a regular INSERT, the new record will get IDENTITY value as 41.

At Subscriber:

  • At S, all new entries, 26, 7, 9, 13, 15, 41, etc, are successfully INSERTed with no impact to IDENTITY definition at S.
    • At S, the next identity value is still set to 42
  • Any new direct INSERTs at S, will get IDENTITY values consistent with its previous behavior a.k.a. 42, 52, etc

Scenario 3: PRIMARY KEY Violation

Now, let’s make a manual entry at P that matches the next IDENTITY value at S.

  • For this, let’s assume that the highest value at P is 41, with next IDENTITY value as 51
  • At S, the current highest value is 52, with next IDENTITY value as 62.

Introduce problems:

  • At P, perform a manual INSERT (with IDENTITY_INSERT), with ID value as 62.
    • INSERT is successful at P; And it is replicated to S successfully.
  • After above operation, next IDENTITY value
    • At P is set to 72 (62+10).
    • At S, it is still at 62 (even though a new record is INSERTed with 62). Since NOT FOR REPLICATION is set, replication traffic does not influence IDENTITY increments at S.
  • Now, when a new record is directly INSERTed into S, the next IDENTITY value will be computed as 62, which results in PRIMARY KEY violation.
    • Violation of PRIMARY KEY constraint 'PK_SampleTable'. Cannot insert duplicate key in object 'dbo.SampleTable'
    • Interestingly, the next IDENTITY value for S, is incremented to 72.
    • Subsequent direct INSERTs into S will be 72, 82, etc

Vicious cycle:

  • In the above test, the next IDENTITY value at P is still at 72.
  • Similarly, the next IDENTITY value at S, is also set to 72.
  • So any new inserts at P, will be replicated to S with 72, 82, 92, etc.
    • If there are any existing records, at S, with same identity values, then replication traffic (from P to S) will fail with primary key violation.
    • But if S does not have any records with those identity values (from P), then the replication traffic (i.e., 82, 92, 102) from P is successfully INSERTed into S
    • Any new traffic, directly at S, will run into PRIMARY KEY violation.
  • So, the summary is, one BAD entry is all it takes to screw up the IDENTITY definition with NOT FOR REPLICATION.

Solution:

  • When this happens, just RESEED the IDENTITY value at P to a non-overlapping value that is consistent with its expected behavior (see the sketch below).
    • Something like 151 or 201, to give it a fresh start with no overlaps with existing records at P or S.
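
A hedged sketch of the reseed itself; the value 201 is just an example, pick whatever does not overlap existing IDs at P or S:

--
-- RESEED the IDENTITY to a fresh, non-overlapping value
-- (the next generated value will be the reseed value plus the increment)
--
DBCC CHECKIDENT ('dbo.SampleTable', RESEED, 201)
GO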
Hope this helps,
_Sqltimes


Interesting one today:

Recently, in a post, we covered some ideas on optimizing the transactional log file by reducing or removing too many VLFs. This is an important step in optimizing Sql Server performance, especially for VLDBs.

Important Artifact 1:

There are some interesting nuances to the transactional log file architecture and its fascinating operational subtleties. As Microsoft has documented extensively, an LDF file (transactional log file) is divided into units called VLFs. The way each VLF is utilized, in a circular linked-list fashion, is an important artifact in finding optimization approaches.
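
To see how many VLFs a log file currently has, DBCC LOGINFO returns one row per VLF (and Sql Server 2016 SP2 and later also expose sys.dm_db_log_info); a quick sketch:

--
-- One row per VLF in the current database's log
--
DBCC LOGINFO
GO

--
-- Sql Server 2016 SP2+ alternative
--
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info (DB_ID())
GO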

Important Artifact 2:

Adding to that, Paul Randal’s post here uncovers some nuances of the internal algorithm Sql Server uses to extend LDF files. This is key to the way we configure LDF size for each database’s usage levels.

After reviewing the above topics and supporting artifacts, the optimal approach to configuring transactional log files (LDF) for VLDBs, so as to eliminate frequent 1 MB autogrowths, comes in two flavors:

  1. Versions before Sql Server 2014
  2. Sql Server 2014 and newer

Versions before Sql Server 2014

Pre-configure a larger transactional log file size in 8 GB increments (after the initial size); this results in 16 VLFs for each 8 GB growth increment, with 512 MB for each VLF.

Example:

  • If you need LDF file size less than or equal to 8 GB, start with 8 GB size.
  • From 8 to 16 GB, use 16 GB LDF file size.
  • 16 – 24 GB, use 24 GB as initial size.
  • 72 – 80 GB, use 80 GB as initial size.

Important: It is important to start with 8 GB and keep increasing in 8 GB steps up to the larger file size, rather than going straight to 80 GB. Performing the actions in this sequence matters because of the algorithm that assigns VLFs to each size increment.

See the script below for detailed understanding:

--
--	Perform increments in 8 GB (to create 512MB VLFs)
--
ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 8 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 16 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 24 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 32 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 40 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 48 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 56 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 64 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 72 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 80 GB);
GO

 

Initial or Increment | Allocation Size | No. of VLFs | VLF Size | Total Log Size | Total VLFs
Initial              | 500 KB          | 2           | 250 KB   | 500 KB         | 2
Increment            | 8 GB            | 16          | 512 MB   | 8 GB           | 18
Increment            | 8 GB            | 16          | 512 MB   | 16 GB          | 34
Increment            | 8 GB            | 16          | 512 MB   | 24 GB          | 50
Increment            | 8 GB            | 16          | 512 MB   | 32 GB          | 66
Increment            | 8 GB            | 16          | 512 MB   | 40 GB          | 82
Increment            | 8 GB            | 16          | 512 MB   | 48 GB          | 98
Increment            | 8 GB            | 16          | 512 MB   | 56 GB          | 114
Increment            | 8 GB            | 16          | 512 MB   | 64 GB          | 130
Increment            | 8 GB            | 16          | 512 MB   | 72 GB          | 146
Increment            | 8 GB            | 16          | 512 MB   | 80 GB          | 162

 

Sql Server 2014 and Newer

Starting with Sql Server 2014, the algorithm that assigns VLFs for each new LDF size increment has undergone significant changes. Keeping them in mind, we need a different approach to configuring LDF size.

Note: We start with an initial 8 GB; add another 8 GB; and from then on, add in 1 GB increments up to the required size.


--
-- Perform increments in 8 GB (to create 512MB VLFs), then increase it by 1 GB
--
ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 8 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 16 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 17 GB);
GO

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 18 GB);
GO

..
..
..

ALTER DATABASE SampleDB MODIFY FILE (NAME = N'SampleDB_log', SIZE = 100 GB);
GO

 

Initial or Increment | Allocation Size | No. of VLFs | VLF Size | Total Log Size | Total VLFs
Initial              | 8 GB            | 16          | 512 MB   | 8 GB           | 16
Increment            | 8 GB            | 16          | 512 MB   | 16 GB          | 32
Increment            | 1 GB            | 1           | 1 GB     | 17 GB          | 33
Increment            | 1 GB            | 1           | 1 GB     | 18 GB          | 34
Increment            | 1 GB            | 1           | 1 GB     | 19 GB          | 35
Increment            | 1 GB            | 1           | 1 GB     | 20 GB          | 36
...                  | ...             | ...         | ...      | ...            | ...
Increment            | 1 GB            | 1           | 1 GB     | 100 GB         | 116

 

Other better practices:

  • There is no benefit to having multiple LDF files.
  • It is better to have larger VLFs than too-small ones (which could result in frequent small increments – not good); see the FILEGROWTH sketch after this list.
  • Since instant file initialization does not work for LDF files, it might take a few seconds to set up the LDF to the desired final size.
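
Related to the second point above, it also helps to set the autogrowth increment explicitly to a reasonably large fixed size, rather than leaving the default; a sketch reusing the SampleDB names from the scripts above:

--
-- Set a fixed, reasonably large autogrowth increment for the log file
--
ALTER DATABASE SampleDB
MODIFY FILE (NAME = N'SampleDB_log', FILEGROWTH = 8GB);
GO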

 

Hope this helps,
_Sqltimes

