Quick one today:

For many years, we've heard the terms Active/Active and Active/Passive in the context of Failover Clustering; each carries an implied understanding of what it means. As technology evolves toward more robust and highly available architectures and deployments, these terms have become misleading and ambiguous.

Active/Active could also imply that some load balancing is occurring across the nodes, which is not the case.

Newer Failover Clustering architectures include running multiple FCIs (failover cluster instances) on a single node, and potentially more than two nodes in a single cluster. So the terms Active/Active and Active/Passive become ambiguous, imperfect, and misleading.

Since these terms have been in general usage for several years, it's hard to kill old habits. But it's worthwhile to adapt and embrace the complexity of newer technologies and their terminology. One thing that helped me was to refer to each failover cluster instance separately, with details of where it is running.

For more information, please refer to this white paper from Microsoft.

Just a rant…

Hope this helps,
_Sqltimes

Quick one today:

Sometimes, in PowerShell, we run into this error:

.\Generate_DB_Traffic.ps1 cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about_signing" for more details.
 At line:1 char:26
 + .\Generate_DB_Traffic.ps1 <<<<
 + CategoryInfo : NotSpecified: (:) [], PSSecurityException
 + FullyQualifiedErrorId : RuntimeException

One way to get around this (necessary) restriction is by changing the Execution Policy setting. First, let's check the current settings using the 'Get-ExecutionPolicy' command:

Get-ExecutionPolicy -List
PowerShell Execution Policy

For the scope you need, use the 'Set-ExecutionPolicy' command to set the appropriate policy so your PowerShell scripts can run. For example, on my machine, this worked:

Set-ExecutionPolicy -Scope LocalMachine Unrestricted

Note: On a 64-bit OS, you have both 32-bit and 64-bit versions of PowerShell, so make sure the settings are correct in both (or whichever one you are using).
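
To verify both, a quick sketch (assuming the default Windows install paths; adjust if yours differ):

# 64-bit PowerShell on a 64-bit OS lives under System32
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command "Get-ExecutionPolicy -List"

# 32-bit PowerShell on a 64-bit OS lives under SysWOW64
C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -Command "Get-ExecutionPolicy -List"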

PowerShell Set ExecutionPolicy

Hope this helps,
_Sqltimes

Quick one today:

To continue the trend of volume-testing posts over the last few days, let's add another tip that lets us simulate database traffic as if it's coming from multiple web servers. PowerShell allows us to spawn multiple processes, each generating database traffic. The example below makes it easy:

# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#    The script below spawns multiple PowerShell processes, each generating traffic.
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}

When we run this, it spawns multiple windows, where each window generates traffic independently of the others.
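
The same thing can be written as a loop; a minimal sketch (the count of 8 is arbitrary):

# Spawn 8 independent PowerShell windows, each running the traffic script
1..8 | ForEach-Object {
    Start-Process powershell -ArgumentList '-Command', '.\KickOffTraffic.ps1'
}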

Spawn Multiple Thread Traffic

Hope this helps,
_Sqltimes

Quick one today:

In our lab, as part of load testing, we simulate heavy database traffic. There are many ways to accomplish this, but as DBAs we want the simplest way; that obviously means PowerShell, DOS, or SSMS. Today, we'll look into one way using PowerShell.

Imagine having all your database code in several PowerShell scripts, each generating traffic for a different functional area; now we trigger them in a staggered fashion from another control file (also a PowerShell script).

Imagine the load-generating files as:

  • Generate_HR_Traffic.ps1
  • Generate_Sales_Traffic.ps1
  • Generate_Accounting_Traffic.ps1
  • Generate_Reporting_Traffic.ps1
  • etc

This is how we call them all in sequence from a separate control file called KickOffTraffic.ps1:

# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
# Loop and keep calling the other ps1 file that loads data
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
for ($i = 1; $i -lt 1000000; $i++)
{
    & .\Generate_HR_Traffic.ps1
    & .\Generate_Sales_Traffic.ps1
    & .\Generate_Accounting_Traffic.ps1
    & .\Generate_Reporting_Traffic.ps1
}
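
Since the goal is staggered traffic, a variation (a sketch, with a hypothetical delay you would tune to your workload) is to pause between calls:

for ($i = 1; $i -lt 1000000; $i++)
{
    & .\Generate_HR_Traffic.ps1
    Start-Sleep -Seconds 2    # hypothetical delay; tune to your workload
    & .\Generate_Sales_Traffic.ps1
    Start-Sleep -Seconds 2
    & .\Generate_Accounting_Traffic.ps1
    Start-Sleep -Seconds 2
    & .\Generate_Reporting_Traffic.ps1
}
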
Hope this helps,
_Sqltimes

Quick one today:

Occasionally, we run into situations where we need to manipulate data in text files, whether handling many files or manipulating large data sets. One need that arises more frequently than others is concatenating multiple files into one.

There are a few ways to accomplish this; below are some ideas:

PowerShell:

#
# Read all files with wild card Old*.txt into one file called 'NewFile.txt'
#
Get-Content Old*.txt | Set-Content NewFile.txt

#
# Concatenate all files into a new file called Newfile.txt
#    '>' represents writing into a new file
#    '>>' represents appending to an existing file
#
cat Old*.txt > NewFile.txt

#
# Similarly, use Get-Content and '>' or '>>' to concatenate & write into a new file
#    '>' represents writing into a new file
#    '>>' represents appending to an existing file
#
Get-Content Old*.txt > NewFile.txt
Get-Content OneMore.txt >> NewFile.txt
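
One caveat: with a wildcard, the files are concatenated in file-name sort order; if the order matters, make it explicit (a sketch):

# Sort by name (or LastWriteTime, etc.) before concatenating
Get-ChildItem Old*.txt | Sort-Object Name | Get-Content | Set-Content NewFile.txt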

MS-DOS:

Similarly, there are several ways to concatenate files from DOS; here we'll cover one such way:

Note: The Copy command performs a similar operation, concatenating all matching files into one file. When order is important, use the available flags within the Copy command, or list the files explicitly.

copy Old*.txt NewFile.txt 
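
A couple of variations (a sketch; the file names are illustrative):

rem List files explicitly with '+' to control the concatenation order
copy Old1.txt+Old2.txt+Old3.txt NewFile.txt

rem /b treats the files as binary and avoids appending an EOF (0x1A) character
copy /b Old*.txt NewFile.txt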


Hope this helps,
_Sqltimes

Interesting topic today:

In a busy production environment, there are always several processes reading, updating, deleting, and inserting records into different sections of the same table's B-Tree. This is part of normal operations. To a large extent, B-Trees allow these operations to occur as efficiently as possible, minimizing overhead and contention. On occasion, there will be overlaps that result in transient blocking, where Process A's UPDATE blocks Process B's SELECT (if there is some overlap of records between the two operations).

While this is unfortunate, it is a natural outcome at higher concurrency levels. The good news is that it is just transient. Sql Server is very efficient at juggling these processes so all operations complete as quickly as possible, while maintaining the integrity of the data (ACID properties).

In newer versions of Sql Server, starting with 2008, there are several ways to minimize this overlap or contention: isolation levels, table partitioning, efficient code constructs, coding best practices, etc. Today, in this post, we'll touch on a specific example from such busy VLDB systems.

Case:

In our production system, we have a busy, large table with billions of records. Many processes simultaneously try to SELECT, UPDATE, DELETE, and INSERT into this table every second of the day; obviously, this results in occasional, transient contention and short bursts of blocking. In the worst case, there are events of extended blocking, repetitive extended blocking, or deadlocks. This results in heavy latency in application performance.

Adding to this, we have a new requirement to DELETE records that meet certain conditions, all through the day. Now this gets tricky. If these were daily purges (at night), we could handle them through partitioning; but they do not align with the partitioning scheme, so we needed a new solution.

Solution:

Introducing the READPAST table hint.

In short, with this table hint, when Sql Server attempts to read records that are locked by a different process, it simply moves on; it does not read them. Rows with row-level locks are skipped. Under the default behavior, when qualifying records are locked by a different process, our process waits until they are released or a 'LOCK_TIMEOUT' occurs.
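
To see the difference in isolation, here is a minimal two-session sketch (the table and column names are illustrative, borrowed from the snippet further below):

-- Session 1: lock a few rows and hold the transaction open
BEGIN TRANSACTION;
UPDATE dbo.SomeTable SET Deleted = 1 WHERE Id <= 10;

-- Session 2: a default SELECT would block on those rows;
-- with READPAST, it skips them and returns the rest immediately
SELECT COUNT(*) FROM dbo.SomeTable WITH (READPAST);

-- Session 1: release the locks
COMMIT TRANSACTION;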

Note: This behavior is not good for general queries. For example, when you want the SUM of all order amounts, you want the correct number, not a ballpark figure. In those situations, READPAST does not apply.

For us, we just needed a way to DELETE as many records as possible, whenever possible. It does not matter whether we DELETE all of them or only some; deleting as many as possible without causing any disruptive overhead is what matters.

We did something like this:

This is just a code snippet copied from a larger code section, so it is not complete.
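
For readability, these are the working variables the snippet assumes (hypothetical declarations; the original surrounding code is not shown):

DECLARE @RunAgain       BIT          = 1
      , @Err            INT          = 0
      , @RC             INT          = 0
      , @TotalPurged    INT          = 0
      , @Iteration      INT          = 0
      , @ProgressReport VARCHAR(MAX) = ''
      , @StartTime      DATETIME     = GETDATE()
      , @LoopTime       DATETIME     = GETDATE();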

--
-- In a loop keep deleting data
--
WHILE @RunAgain = 1
BEGIN

    -- Reset loop flag
    SELECT @RunAgain = 0

    DELETE TOP (1000)
    FROM dbo.SomeTable WITH (ROWLOCK, READPAST)
    WHERE Deleted = 1


    --
    -- Gather status information after execution
    --
    SELECT @Err = @@ERROR
         , @TotalPurged = @TotalPurged + @@ROWCOUNT
         , @RC = @@ROWCOUNT
         , @Iteration = DATEDIFF(SECOND, @LoopTime, GETDATE())

    SELECT @ProgressReport = @ProgressReport + 'Touch point at : ' + CONVERT(VARCHAR, @RC) + ' deletes took ' + CONVERT(VARCHAR, @Iteration) + ' seconds (rounded down)' + CHAR(13) + CHAR(10)
         , @LoopTime = GETDATE()

    --
    -- Keep purging in a loop, only if both conditions are met
    -- 1) We actually purged 1000 records
    -- 2) The purge has been running for less than 580 seconds (just under 10 minutes)
    --
    IF (@RC = 1000) AND (DATEDIFF(SECOND, @StartTime, GETDATE()) < 580)
        SELECT @RunAgain = 1
END
Hope this helps,
_Sqltimes

Quick one today:

Trace Flags are a unique feature in Sql Server. They enable certain behaviors that can change the way Sql Server operates. Generally, they are helpful in troubleshooting uncommon scenarios or identifying performance bottlenecks.

There are two types of trace flags: global and local (session). Global trace flags are enabled for the entire instance; local flags apply just to the current session. Some trace flags are global-only (ex: 1204), others are local-only, and some can be enabled either globally or locally (ex: 4199).
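
The -1 argument is what makes a flag global; without it, the flag applies only to the current session. For example (using 4199, which supports both scopes):

-- Session-scoped: affects only the current connection
DBCC TRACEON (4199)
GO

-- Global: -1 applies the flag instance-wide
DBCC TRACEON (4199, -1)
GO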

CHECK STATUS

There are several ways to check which TRACE FLAGS are enabled on our instances; here we'll look at a couple of them. The following scripts let us check the status of any trace flag, and whether it is enabled locally (for the session) or globally.

--
-- Check if the trace flags are enabled globally
--
DBCC TRACESTATUS(1204, -1)
GO

DBCC TRACESTATUS (1222, -1)
GO

DBCC TRACESTATUS (1117, -1)
GO

--
-- Or run this alternative
--
DBCC TRACESTATUS(1204, 1222, 1117, -1)
GO
Trace Flag Status

ENABLE TRACE FLAGS

Now, use the following code to enable them globally. Trace flags 1204 & 1222 are used to troubleshoot deadlocks. For more information on the types of deadlocks, please review this post; for techniques to capture deadlock traces, please review this post.

--
-- Enable Trace Flags globally
--
DBCC TRACEON (1204,-1)
GO

DBCC TRACEON (1222,-1)
GO
Set TRACEON

Please note that if the instance is restarted, these settings will not persist. To survive a restart, the flags need to be enabled at startup, as part of the startup parameters.
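
For example, add one -T entry per trace flag to the service's startup parameters (in SQL Server Configuration Manager, under the SQL Server service's properties):

-T1204
-T1222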

DISABLE TRACE FLAGS

Similarly, when you are ready to turn them off, use the below script:

--
-- Turn trace flags off globally
--
DBCC TRACEOFF (1204,-1)
GO

DBCC TRACEOFF (1222,-1)
GO
Disable Trace Flags

Hope this helps,
_Sqltimes
