Archive for the ‘Technical Documentation’ Category

Quick one today:

A few days ago, this error appeared on one of our lab machines.

Error Message:

Msg 15281, Level 16, State 1, Line 16
SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of component 
'Ad Hoc Distributed Queries' because this component is turned off as part of 
the security configuration for this server. A system administrator can enable 
the use of 'Ad Hoc Distributed Queries' by using sp_configure. For more information 
about enabling 'Ad Hoc Distributed Queries', search for 'Ad Hoc Distributed Queries'
in SQL Server Books Online.

Resolution:

This is an easy error to fix, since the message itself is verbose and self-explanatory. After making sure that ad hoc distributed queries are acceptable in your environment, run the following script to enable them on your Sql Server instance.

--
-- Check current status of Ad Hoc Distributed Queries
--
SELECT * FROM sys.configurations WHERE name LIKE '%ad hoc Dis%'
GO

--
-- Enable Ad Hoc Distributed Queries
--
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'show advanced options', 0
GO
RECONFIGURE
GO
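To confirm the change took effect, re-run the ad hoc query that originally failed. As a minimal hedged test (the server name & connection string below are placeholders, not from the original error):

--
-- Verify: an ad hoc distributed query should now succeed
-- (server name & connection string are placeholders)
--
SELECT R.name
FROM OPENROWSET('SQLNCLI'
	, 'Server=RemoteServer;Trusted_Connection=yes;'
	, 'SELECT name FROM master.sys.databases') AS R
GO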


Hope this helps,
_Sqltimes



Interesting one today:

A few months ago, we had an issue in our lab environment where a database ended up in SUSPECT mode due to storage issues. Once the storage was fixed, we were able to perform troubleshooting steps on the database. Today we’ll only cover the step used to rebuild the transaction log file; the rest of the steps will be covered in a future post.

Error Message:

An error occurred while processing the log for database 'SampleDB'. 
If possible, restore from backup. 
If a backup is not available, it might be necessary to rebuild the log.


NOTE: Rebuilding the transaction log is always the last resort. There are other, safer options for troubleshooting databases in SUSPECT mode. Use this method only after you’ve exhausted all other options, such as:

  • CHECKDB
  • Restore from valid backup
  • Repairing with EMERGENCY mode


NOTE: Rebuilding the transaction log file breaks the restore chain, so transaction log backups taken before the rebuild cannot be applied past this point. Take a new full backup immediately after the rebuild (see Step 4).

Rebuild the Transaction Log File


Step 1: Let’s identify the transaction log file’s logical name and physical path. Use the following query:

--
-- Gather Logical & Physical names of the database
--
SELECT D.name AS [DatabaseName]
	, M.name AS [LogicalName]
	, M.physical_name AS [PhysicalName]
FROM sys.master_files AS M
INNER JOIN sys.databases AS D 
	ON D.database_id = M.database_id
	AND D.name = 'SampleDB'
GO

We’ll use this logical name & physical path in the Step 3 script below.


Step 2: Prepare the database

Before we rebuild the transaction log file, we need to set the database to EMERGENCY mode & SINGLE_USER mode. Use the script below:

--
-- Set database in EMERGENCY mode & SINGLE_USER mode
--
ALTER DATABASE SampleDB  SET EMERGENCY
GO
ALTER DATABASE SampleDB  SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
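Before moving on, a quick sanity check that the database actually reached the expected state:

--
-- Confirm state: expect EMERGENCY & SINGLE_USER before the rebuild
--
SELECT name
	, state_desc
	, user_access_desc
FROM sys.databases
WHERE name = 'SampleDB'
GO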


Step 3: Rebuild the Transaction Log

This is an undocumented & unsupported command, so use caution before you run this in production. Steps like this must be taken only upon guidance from Microsoft CSS.

With the script below, we create a new transaction log file. After you run it, make sure you run CHECKDB & take fresh backups (Step 4).

--
-- Rebuild log file
--
ALTER DATABASE SampleDB 
REBUILD LOG
ON 
	( NAME = 'SampleDB_log'
	, FILENAME = 'L:\MSSQL\LOGS\SampleDB_log.LDF'
	)
GO


Step 4: Post Rebuild

Upon successful log rebuild, let’s take a few precautions to make sure everything is still good.

The script below runs CHECKDB, returns the database to MULTI_USER mode, verifies its status, and takes a fresh full backup.

--
-- Post rebuild steps
--
DBCC CHECKDB(SampleDB)
GO

ALTER DATABASE SampleDB  SET MULTI_USER
GO

SELECT DATABASEPROPERTYEX('SampleDB', 'Status')
GO

BACKUP DATABASE SampleDB TO DISK = N'Z:\MSSQL\Backup\SampleDB_AfterTLogRebuild.BAK'
GO
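Because the rebuild broke the restore chain, the full backup above becomes the new base. Assuming the database uses the FULL recovery model, a first log backup off that base restarts the chain (the path below is a placeholder, matching the style above):

--
-- Restart the log backup chain off the new full backup
-- (assumes FULL recovery model; path is a placeholder)
--
BACKUP LOG SampleDB TO DISK = N'Z:\MSSQL\Backup\SampleDB_AfterTLogRebuild.TRN'
GO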


Hope this helps,
_Sqltimes


Interesting one today:

Last time we walked through the T-SQL steps to insert a tracer token to measure latency in a replication topology. Today, we’ll look at an alternate method to query tracer token details: the MStracer_tokens & MStracer_history metadata tables.

Along with the T-SQL procedures (sys.sp_helptracertokenhistory), Sql Server also provides a way to query tracer tokens using the metadata tables MStracer_tokens & MStracer_history in the distribution database. They keep track of the details for each token, and querying them provides the necessary latency information.

--
-- Query tracer token tables
--
SELECT	  t.publication_id
	, h.agent_id
	, t.publisher_commit
	, t.distributor_commit
	, h.subscriber_commit
FROM MStracer_tokens AS t
INNER JOIN MStracer_history AS h
	ON t.tracer_id = h.parent_tracer_id
ORDER BY t.publisher_commit DESC
GO
[Figure: Query Tracer Token Details]
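As a hedged extension of the same join, DATEDIFF can turn those commit times into per-hop latency numbers (the column aliases below are made up for illustration):

--
-- Compute latency (seconds) at each hop, per tracer token
--
SELECT	  t.publication_id
	, DATEDIFF(SECOND, t.publisher_commit, t.distributor_commit) AS [Publisher_To_Distributor_Sec]
	, DATEDIFF(SECOND, t.distributor_commit, h.subscriber_commit) AS [Distributor_To_Subscriber_Sec]
FROM MStracer_tokens AS t
INNER JOIN MStracer_history AS h
	ON t.tracer_id = h.parent_tracer_id
ORDER BY t.publisher_commit DESC
GO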

Hope this helps,
_Sqltimes


Interesting one today:

This article is Part 3 in a series on Advanced Query Tuning Concepts that are good to be familiar with. The full list is here.

Hash Join

When both data sets are large, unsorted, and unindexed, Hash Join is the best operator. This is the most complicated kind of input to process, and Hash Join handles it efficiently.

[Figure: Hash Join query plan]
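As a hedged sketch, here is a query shape that typically produces this plan; the table names are hypothetical, and both are assumed to be large heaps with no index on ID:

--
-- Hypothetical example: large, unindexed & unsorted inputs
-- usually yield a Hash Match (Join)
--
SELECT A.ID
	, A.Payload
	, B.Payload
FROM dbo.LargeHeapA AS A
INNER JOIN dbo.LargeHeapB AS B
	ON A.ID = B.ID	-- neither side indexed or sorted on ID
GO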

Hash Joins follow a complicated logic in identifying matching records. We’ll get into the details in a future post.

There are three types of Hash Joins:

  1. In-memory hash join
  2. Grace hash join
  3. Recursive hash join

Note:

Hash Joins end up being used a lot in intermediate steps. During large table joins, when intermediate result sets are generated, the subsequent joins on those intermediate tables are performed using Hash Joins, as these tables are neither indexed nor sorted.


Hope this helps,
_Sqltimes


Interesting one today:

This article is Part 2 in a series on Advanced Query Tuning Concepts that are good to be familiar with. The full list is here.

Merge Join

When both inputs are fairly large, indexed, and sorted, Merge Join is very efficient at returning matching records. See the image below:

[Figure: Merge Join query plan]

In this example, we have two similarly sized tables (this is important) that are indexed, and hence sorted (this is also important). When they are joined on their indexed ID columns (ON B.ID = S.ID), Sql Server uses a Merge Join.

Simple Explanation:

From the two sorted input lists, Sql Server takes one record from each table and compares them. If they match, the row is returned; if not, the row with the lower value is discarded and the next row from that same table is read for the next comparison. This repeats until one of the inputs is exhausted. A query sketch follows the list below.

  • Both tables need to be of similar size relative to each other.
  • The join columns in both data sets need to be indexed & sorted.
    • This is important for efficient processing of the records.
    • If they are not already sorted, Merge Join adds a Sort step, resulting in longer processing times and making the join inefficient.
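As promised, a hedged sketch of a query shape that typically produces a Merge Join; the table names are hypothetical, matching the B/S aliases above, and both ID columns are assumed to be indexed:

--
-- Hypothetical example: similarly sized, indexed (sorted) inputs
-- usually yield a Merge Join
--
SELECT B.ID
	, B.Payload
	, S.Payload
FROM dbo.BigTable AS B
INNER JOIN dbo.SimilarTable AS S
	ON B.ID = S.ID	-- both ID columns indexed, hence pre-sorted
GO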


Hope this helps,
_Sqltimes


Interesting one today:

This is Part 1 of a series on Advanced Query Tuning Concepts that are good to be familiar with. Today we’ll cover Nested Loop Joins. The full list is here.

Nested Loop Join

Nested Loop Join is used when one join input is a small data set (fewer than 10 records) and the other is a large data set that is indexed on the join columns.

[Figure: Nested Loop Join query plan]

Simple Explanation:

For each record in SmallTable, Sql Server searches LargeTable for matching records, iterating once per SmallTable row. At first glance this seems inefficient, but for this combination of inputs it is the most efficient method.

  • Since SmallTable is the outer input table, the number of times we need to loop is limited.
  • Since LargeTable is indexed, a quick Index Seek returns the matching rows for each value from SmallTable (i.e. ON B.ID = S.ID).
    • Hence the Index Scan on SmallTable, paired with an Index Seek on LargeTable.
    • The other way round would be very inefficient (scanning the large table and seeking on SmallTable).

In this combination of Small vs. Large data sets, Nested Loop Join is the most efficient operator.
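As a hedged sketch (hypothetical table names, mirroring the plan above), a query shape that typically produces a Nested Loops join:

--
-- Hypothetical example: small outer input, large indexed inner input
-- usually yields a Nested Loops join
--
SELECT S.ID
	, B.Payload
FROM dbo.SmallTable AS S	-- outer input: scanned once (few rows)
INNER JOIN dbo.BigTable AS B	-- inner input: Index Seek per outer row
	ON B.ID = S.ID		-- B.ID is indexed
GO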


Hope this helps,
_Sqltimes


Interesting one today:

Usually, during query performance tuning, we look at the query plan to see where most of the time is spent during execution. Microsoft uses different Graphical Execution Plan Icons to help convey the query execution steps & the relative cost of each step. Today we’ll look at three of those that are crucial in distilling large data sets down to only the pertinent records. They are essentially the Join Operators used to compare multiple large data sets and retrieve the necessary records.

  1. Nested Loop Join
  2. Merge Join
  3. Hash Join
    • In-memory hash join
    • Grace hash join
    • Recursive hash join

These are part of Advanced Query Tuning Concepts that are good to be familiar with.

We’ll cover each one in an individual post with an example, in this 3-part series.

  1. Nested Loop Join: Best when one of the data sets is small
  2. Merge Join: Best when both the data sets are of similar sizes
  3. Hash Join: Can efficiently handle large data sets, either sorted or not.
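To check which of these operators the optimizer picks for a given query without executing it, one option is SHOWPLAN; the PhysicalOp column of the output lists Nested Loops, Merge Join, or Hash Match. A minimal sketch (the SELECT and table names below are placeholders):

--
-- Show the estimated plan, including join operators, without executing
--
SET SHOWPLAN_ALL ON
GO
SELECT B.ID
FROM dbo.BigTable AS B
INNER JOIN dbo.SmallTable AS S
	ON B.ID = S.ID
GO
SET SHOWPLAN_ALL OFF
GO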


Hope this helps,
_Sqltimes
