
Archive for April, 2016

Quick one today:

Sometimes, in PowerShell, we run into this error:

.\Generate_DB_Traffic.ps1 cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about_signing" for more details.
 At line:1 char:26
 + .\Generate_DB_Traffic.ps1 <<<<
 + CategoryInfo : NotSpecified: (:) [], PSSecurityException
 + FullyQualifiedErrorId : RuntimeException

One way that we could get around this necessary restriction is by changing the Execution Policy setting. First, let's check the current settings using the 'Get-ExecutionPolicy' command:

Get-ExecutionPolicy -List
[Screenshot: PowerShell Execution Policy]
For the scope you need, use the 'Set-ExecutionPolicy' command to set an appropriate policy so your PowerShell scripts can run. For example, on my machine, this worked:

Set-ExecutionPolicy -Scope LocalMachine Unrestricted

Note: On a 64-bit OS, both the 32-bit and the 64-bit versions of PowerShell are installed, so make sure the setting is configured correctly on both (or on whichever one you are using).
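
For example, on a 64-bit machine, a quick way to verify both hosts is to invoke each executable explicitly. The sketch below assumes the default install locations of the two PowerShell hosts on 64-bit Windows; adjust the paths if yours differ.

#
# Check the policy in both the 64-bit and the 32-bit PowerShell hosts
# (default locations on a 64-bit Windows installation)
#
& "$env:windir\System32\WindowsPowerShell\v1.0\powershell.exe" -Command "Get-ExecutionPolicy -List"
& "$env:windir\SysWOW64\WindowsPowerShell\v1.0\powershell.exe" -Command "Get-ExecutionPolicy -List"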

[Screenshot: PowerShell Set-ExecutionPolicy]

Hope this helps,
_Sqltimes


Quick one today:

To continue the trend of volume-testing posts from the last few days, let's add another tip that allows us to simulate database traffic as if it's coming from multiple web servers. PowerShell allows us to spawn multiple windows, each generating database traffic independently. The example below makes it easier:

 

# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
#    The script below spawns multiple PowerShell windows, each running KickOffTraffic.ps1.
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}
cmd /c start powershell -Command {.\KickOffTraffic.ps1}

When we run this, it spawns multiple windows, where each window generates traffic independently of the others.

[Screenshot: Spawn Multiple Thread Traffic]
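
As a side note, if you prefer not to repeat the same line eight times, a loop does the same job. The sketch below uses Start-Process instead of the cmd /c start approach above, so treat it as an assumed alternative rather than the exact original:

#
# Launch 8 PowerShell windows in a loop, each running the traffic script
#
1..8 | ForEach-Object {
    Start-Process powershell.exe -ArgumentList '-NoExit', '-File', '.\KickOffTraffic.ps1'
}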

Hope this helps,
_Sqltimes


Quick one today:

In our lab, as part of load testing, we simulate large volumes of database traffic. There are many ways to accomplish this, but as DBAs we want the simplest approach; that obviously means PowerShell, DOS, or SSMS. Today, we'll look into one way using PowerShell.

Imagine having all your database code in several PowerShell scripts, each generating traffic for a different functional area; now we trigger them in a staggered fashion from within another control file (also a PowerShell script).

Imagine the load-generating files as:

  • Generate_HR_Traffic.ps1
  • Generate_Sales_Traffic.ps1
  • Generate_Accounting_Traffic.ps1
  • Generate_Reporting_Traffic.ps1
  • etc

This is how we call them all in sequence from a control file called KickOffTraffic.ps1:

# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
#
# Loop and keep calling the other ps1 file that loads data
#
# - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
for ($i = 1; $i -lt 1000000; $i++)
{
    & .\Generate_HR_Traffic.ps1
    & .\Generate_Sales_Traffic.ps1
    & .\Generate_Accounting_Traffic.ps1
    & .\Generate_Reporting_Traffic.ps1
}
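
If a truly staggered kick-off is desired, a small pause between the calls is one option. The sketch below is just a variation on the loop above; the 2-second delay is an arbitrary example, not a value from the original script.

#
# Same loop, with a short pause between workloads to stagger the load
#
for ($i = 1; $i -lt 1000000; $i++)
{
    & .\Generate_HR_Traffic.ps1
    Start-Sleep -Seconds 2
    & .\Generate_Sales_Traffic.ps1
    Start-Sleep -Seconds 2
    & .\Generate_Accounting_Traffic.ps1
    Start-Sleep -Seconds 2
    & .\Generate_Reporting_Traffic.ps1
}
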
Hope this helps,
_Sqltimes


Quick one today:

Occasionally, we run into situations where we need to manipulate data in text files, whether that means handling many files or manipulating large data sets. One need that arises more frequently than others is the need to concatenate several files into one.

There are a few ways to accomplish this; below are some ideas:

PowerShell:

#
# Read all files matching the wildcard Old*.txt into one file called 'NewFile.txt'
#
Get-Content Old*.txt | Set-Content NewFile.txt

#
# Concatenate all files into a new file called Newfile.txt
#    '>' represents writing into a new file
#    '>>' represents appending to an existing file
#
cat Old*.txt > NewFile.txt

#
#    Similarly, use Get-Content and '>' or '>>' to concatenate & write into a new file
#    '>' represents writing into a new file
#    '>>' represents appending to an existing file
#
Get-Content Old*.txt > NewFile.txt
Get-Content OneMore.txt >> NewFile.txt
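
If the concatenation order matters in PowerShell, one option is to pipe an explicitly ordered list of files into Get-Content. The sort key below (LastWriteTime) is just an example; use whatever ordering fits your files.

#
# Control the order explicitly: sort the files first, then concatenate
#
Get-ChildItem Old*.txt | Sort-Object LastWriteTime | Get-Content | Set-Content NewFile.txt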
 

 

MS-DOS:

Similarly, there are several ways to concatenate files from DOS; here we'll cover one such way:

Note: The Copy command allows a similar operation to concatenate all the files into one. When order is important, list the source files explicitly with the '+' syntax (for example, copy Old1.txt+Old2.txt NewFile.txt) instead of relying on the wildcard.

copy Old*.txt NewFile.txt 

 

Hope this helps,
_Sqltimes


Interesting topic today:

On a busy production environment, there are always several processes reading, updating, deleting & inserting records into different sections of the same table's B-Tree. This is part of normal operations. To a large extent, B-Trees allow these operations to occur as efficiently as possible, minimizing overhead and contention. On occasion there will be overlaps that result in transient blocking, where Process A's UPDATE blocks Process B's SELECT operation (if there is some overlap of records between the two operations).

While this is unfortunate, it is a natural outcome at higher concurrency levels. The good thing is that it is just transient. Sql Server is very efficient at juggling these processes so all operations complete as fast as possible, while maintaining the integrity of the data (ACID properties).

In newer versions of Sql Server, starting with 2008, there are several ways to minimize this overlap or contention: isolation levels, table partitioning, efficient code constructs, coding best practices, etc. Today, in this post, we'll touch on a specific example from such busy VLDB systems.

Case:

In our production system, we have a busy, large table with billions of records. A lot of processes are simultaneously trying to SELECT, UPDATE, DELETE & INSERT into this table every second of the day; obviously, this results in some occasional, transient contention and short bursts of blocking. In the worst case, there are events of extended blocking, repeated extended blocking, or deadlocks. This results in heavy latency in application performance.

Adding to this, we have a new requirement to DELETE records that meet certain conditions, all through the day. Now this gets tricky. If these were daily purges (at night), we could handle them through partitioning. But they do not align with the partition scheme, so we needed a new solution.

Solution:

Introducing READPAST table hint.

In short, when we use this table hint and Sql Server attempts to read records that are locked by a different process, it moves on; it simply does not read them. Rows held under row-level locks are skipped. Under the default behavior, when qualifying records are locked by a different process, our process waits until the locks are released or a 'LOCK_TIMEOUT' occurs.

Note: This behavior is not good for general queries. Ex: When you want the SUM of all order amounts, you want the correct number, not a ballpark figure. So, in those situations this hint does not apply.

For us, we just needed a way to DELETE as many records as possible, whenever possible. It does not matter if we DELETE all of them or only some; deleting as many as possible without causing any disruptive overhead is what matters.

We did something like this:

This is just a code snippet copied from a larger code section, so it is not complete (variable declarations, for example, are omitted).

--
-- In a loop keep deleting data
--
WHILE @RunAgain = 1
BEGIN

    -- Reset loop flag
    SELECT @RunAgain = 0

    DELETE TOP (1000)
    FROM dbo.SomeTable WITH (ROWLOCK, READPAST)
    WHERE Deleted = 1


    --
    -- Gather status information after execution
    --
    SELECT @Err = @@ERROR
         , @TotalPurged = @TotalPurged + @@ROWCOUNT
         , @RC = @@ROWCOUNT
         , @Iteration = DATEDIFF(SECOND, @LoopTime, GETDATE())

    SELECT @ProgressReport = @ProgressReport + 'Touch point at : ' + CONVERT(VARCHAR, @RC) + ' deletes took ' + CONVERT(VARCHAR, @Iteration) + ' seconds (rounded down)' + CHAR(13) + CHAR(10)
         , @LoopTime = GETDATE()

    --
    -- Keep purging in a loop, only if both conditions are met
    -- 1) We actually purged 1000 records
    -- 2) The total elapsed purge time is still under the limit (580 seconds here)
    --
    IF (@RC = 1000) AND (DATEDIFF(SECOND, @StartTime, GETDATE()) < 580)
        SELECT @RunAgain = 1
END
Hope this helps,
_Sqltimes
