
Quick one today:

Recently, we were given an interesting task. One of our applications stores digital shopping receipts in binary format in a database table. A different team wanted all of these receipts, millions of them, exported as separate files for their ETL engine. Initially, T-SQL was used to export the data, and it worked well from a few thousand up to about a million rows; beyond that, we realized we needed a more scalable solution. After exporting the data into RAW files using T-SQL, we then needed to move those files from the database server to another server, and from there download them over VPN to the other team’s developer laptop. Windows Explorer is fine with a few thousand files, but once you start copying or moving millions of files it creeps to a halt and gets painfully slow. So we needed another way to transfer the files directly to the developer’s laptop.

Drum roll !! Enter SSIS !!

In Sql Server Data Tools, open a new Integration Services Project.

Step 1: Create a Data Flow Task.

Step 2: Create an OLE DB Source with the SQL query needed to retrieve data from the column.
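As a sketch, the source query could look something like the one used in the T-SQL approach later in this post; the file-path column is a hypothetical addition that the Export Column transformation will need (ReceiptID is an assumed key column, so adjust names to your schema):

```sql
--
-- Hypothetical OLE DB Source query: one row per receipt, with the
-- binary content and a target file path for the Export Column step
-- (ReceiptID is an assumed key column; adjust names to your schema)
--
SELECT  logcontent
      , FilePath = 'M:\MSSQL\Hyvee\Receipts_' + CAST(ReceiptID AS VARCHAR(20)) + '.ZIP'
FROM dbo.TransactionLog
WHERE EnterpriseID = '423-DZFDSF-23432-254E57A1-45A7-4DBF'
GO
```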

Step 3: Add an Export Column transformation from “Other Transformations”.

Step 4: Make property changes as needed.

Step 5: Run it

Voila !! All files are now brought from the database server, over VPN, to the developer’s local laptop (once authentication and authorization are properly put in place).

Hope this helps,
_Sqltimes

Quick one today:

A few months ago, there was a new requirement to export millions of records from a table column to raw files on disk. This column contains millions of shopping receipts stored in binary format. Early on, I stumbled upon a tip that gave me a good idea of how to approach this problem.

This approach uses T-SQL with help from ‘Ole Automation Procedures’ to accomplish the task. First, we need to enable the ‘Ole Automation Procedures’ advanced option.

--
-- Enable 'Ole Automation Procedures'
--
SELECT * FROM sys.configurations
WHERE name = 'Ole Automation Procedures'
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 1
RECONFIGURE
GO

EXEC sp_configure 'Ole Automation Procedures', 1
RECONFIGURE
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 0
RECONFIGURE
GO

SELECT * FROM sys.configurations
WHERE name = 'Ole Automation Procedures'
GO

 


--
-- Code to export VARBINARY into RAW files using TSQL
--
DECLARE @SQLIMG VARCHAR(MAX)
 , @IMG_PATH VARBINARY(MAX)
 , @TIMESTAMP VARCHAR(MAX)
 , @ObjectToken INT

DECLARE IMGPATH CURSOR FAST_FORWARD FOR
 SELECT logcontent
 FROM dbo.TransactionLog
 WHERE EnterpriseID = '423-DZFDSF-23432-254E57A1-45A7-4DBF'

OPEN IMGPATH
FETCH NEXT FROM IMGPATH INTO @IMG_PATH

WHILE @@FETCH_STATUS = 0
BEGIN
 --
 -- Create unique file name for each row of data
 --
 SET @TIMESTAMP = 'M:\MSSQL\Hyvee\Receipts_' + replace(replace(replace(replace(convert(varchar,getdate(),121),'-','_'),':','_'),'.','_'),' ','_') + '.ZIP'

 -- PRINT @TIMESTAMP
 -- PRINT @SQLIMG

 --
 -- Use Ole Automation to save the contents to a file on disk
 --
 EXEC sp_OACreate 'ADODB.Stream', @ObjectToken OUTPUT
 EXEC sp_OASetProperty @ObjectToken, 'Type' , 1
 EXEC sp_OAMethod @ObjectToken, 'Open'
 EXEC sp_OAMethod @ObjectToken, 'Write' , NULL , @IMG_PATH
 EXEC sp_OAMethod @ObjectToken, 'SaveToFile', NULL , @TIMESTAMP , 2
 EXEC sp_OAMethod @ObjectToken, 'Close'
 EXEC sp_OADestroy @ObjectToken

 FETCH NEXT FROM IMGPATH INTO @IMG_PATH
END

CLOSE IMGPATH
DEALLOCATE IMGPATH
GO
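One caveat with the timestamp-based file names above: CONVERT(..., 121) only has millisecond precision, so a tight loop over millions of rows can produce duplicate names and silently overwrite files. A simple safeguard (a sketch; @RowNum is a counter variable added here for illustration) is to append a running number to each name:

```sql
--
-- Sketch: append a per-row counter to guarantee unique file names
--
DECLARE @RowNum INT = 0

-- Inside the cursor loop, before building the file name:
SET @RowNum = @RowNum + 1
SET @TIMESTAMP = 'M:\MSSQL\Hyvee\Receipts_'
    + REPLACE(REPLACE(REPLACE(REPLACE(CONVERT(VARCHAR, GETDATE(), 121), '-', '_'), ':', '_'), '.', '_'), ' ', '_')
    + '_' + CAST(@RowNum AS VARCHAR(10)) + '.ZIP'
```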

 


At the end of it, reset the ‘Ole Automation Procedures’ settings.

--
-- Disable 'Ole Automation Procedures'
--
SELECT * FROM sys.configurations
WHERE name = 'Ole Automation Procedures'
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 1
RECONFIGURE
GO

EXEC sp_configure 'Ole Automation Procedures', 0
RECONFIGURE
GO

EXEC sp_configure 'SHOW ADVANCED OPTIONS', 0
RECONFIGURE
GO

SELECT * FROM sys.configurations
WHERE name = 'Ole Automation Procedures'
GO

Hope this helps,
_Sqltimes

 

Quick one today:

Table variables are ubiquitous. There are several benefits to using them in your T-SQL code. There are some nuances between table variables and temporary tables that we discussed previously; today, we’ll look at dropping a table variable.

How do we DROP a table variable? A table variable is not fully a table, so we cannot use the DROP or TRUNCATE statements on it.

--
--   Throw error
--
DECLARE @Table_Variable TABLE (ID2 INT)
INSERT INTO @Table_Variable (ID2) VALUES(1), (2)
SELECT * FROM @Table_Variable

TRUNCATE TABLE @Table_Variable  -- This code throws error
GO

--
-- Error message
--
Msg 102, Level 15, State 1, Line 5
Incorrect syntax near '@Table_Variable'.

We get the same error message whether we use TRUNCATE TABLE or DROP TABLE with a table variable.

Solution:

--
--  Correct way to empty table variable
--
DECLARE @Table_Variable TABLE (ID2 INT)
INSERT INTO @Table_Variable (ID2) VALUES(1), (2)
SELECT * FROM @Table_Variable

DELETE @Table_Variable
GO

After DELETE, only the contents are removed; the variable and its structure are still available for INSERTing again within the same scope.
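To illustrate, the same variable can be reloaded right after the DELETE, within the same batch:

```sql
--
-- Table variable remains usable after DELETE empties it
--
DECLARE @Table_Variable TABLE (ID2 INT)

INSERT INTO @Table_Variable (ID2) VALUES (1), (2)
DELETE @Table_Variable              -- removes the rows only

INSERT INTO @Table_Variable (ID2) VALUES (3)
SELECT * FROM @Table_Variable       -- returns one row: 3
GO
```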

Hope this helps,
_Sqltimes

Quick one today:

Recently, I ran into an interesting error message that looks very complicated, but is fairly straightforward.

Msg 177, Level 15, State 1, Line 1
The IDENTITY function can only be used when the SELECT statement has an INTO clause.

The code looked something like this:

--
-- Error code
--
SELECT    ID = IDENTITY(INT, 1,1)
		, Name
		, type_desc
FROM sys.objects
GO

This is an interesting usage of the IDENTITY() function: with the INT datatype declared within the function itself, along with SEED and INCREMENT values.

Looks like for such usage, we need to use the INTO clause, redirecting the result set into a table. Like this:

--
-- Correct code
--
SELECT    ID = IDENTITY(INT, 1,1)
		, Name
		, type_desc
INTO #Test_Table
FROM sys.objects
GO
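Querying the temp table afterwards confirms the sequential values generated by IDENTITY():

```sql
--
-- Verify the generated IDENTITY values
--
SELECT TOP 5 ID, Name, type_desc
FROM #Test_Table
ORDER BY ID
GO
```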

Hope this helps,
_Sqltimes

Quick one today:

As always, there is value in formatting SQL code properly. There are several ‘proper’ ways to format code, and all are great. To each, their own.

But one formatting tip that everyone appreciates is keeping Sql Server identifiers or keywords in UPPER CASE.

There is a keyboard shortcut that allows this operation: Ctrl + Shift + U

--
--  Some random un-formatted code.
--
begin
set nocount on
declare @starttime datetime
declare @hierarchyid int
declare @nodeid bigint
declare @statuscode int

Highlight the text that you want to change to UPPER CASE and use the keyboard shortcut: Ctrl + Shift + U. Similarly, changing to lower case can be accomplished with Ctrl + Shift + L.

--
--  After formatting
--
BEGIN
SET NOCOUNT ON

DECLARE @starttime		DATETIME
DECLARE @hierarchyid	INT
DECLARE @nodeid			BIGINT
DECLARE @statuscode		INT

 

Hope this helps,
_Sqltimes

Quick one today:

A few weeks ago, I ran into an interesting error message:

Error Message:

DBCC SHRINKDATABASE: File ID 1 of database ID 2 was skipped because the file does not have enough free space to reclaim.
DBCC SHRINKDATABASE: File ID 2 of database ID 2 was skipped because the file does not have enough free space to reclaim.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
 

After looking into it a bit further, these are the conclusions:

  • SHRINKDATABASE runs on the entire database, unlike SHRINKFILE. So when I issued the command, it internally tried to shrink both the data and log files. The shrink-percentage parameter indicates the amount of “shrink” being attempted on the database (data and log files). If the expected free space is not available in both files, Sql Server throws this error saying it could not proceed.
  • SHRINKDATABASE cannot reduce the data and log file size beyond the initially configured number. This could also be a factor.

So, first we need to understand how much free space is available, so we can shrink based on that. For that, use the following command:

--
--  How much free space is available
--
DBCC SHRINKDATABASE (SampleDB, TABULAR)
GO

 

This gives output in the following format:

DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
2     1       163392       1024         552        496
2     2       27408        64           27408      64

Once you know how much free space is available, then you could re-run the SHRINKDATABASE command with pertinent parameter values.

 

Note:

  • Running SHRINKDATABASE command on production systems is not advisable. Careful analysis needs to happen before any steps are taken.
  • Also, the TABULAR option is an undocumented feature, so it could change in the future.
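Since the TABULAR option is undocumented, a documented way to check per-file free space is FILEPROPERTY. A sketch (run it in the context of the database being shrunk; sizes are in 8 KB pages, so dividing by 128 converts to MB):

```sql
--
-- Documented alternative: free space per file in the current database
--
SELECT  name
      , size / 128.0 AS SizeMB
      , CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT) / 128.0 AS UsedMB
      , (size - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT)) / 128.0 AS FreeMB
FROM sys.database_files
GO
```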

Hope this helps,
_Sqltimes

Quick one today:

Recently, I ran into this interesting error message. The actions that resulted in it are silly, but the message itself is interesting: the language used dignifies these actions to the same level as other, more valid mistakes. It is nice of Microsoft to be kind and gentle on us.

Code that resulted in this error:

--
--  Incorrect
--
PRINT 'Purged Ended on : ' + CONVERT(VARCHAR, DATETIME)
GO

Interesting error:

Msg 128, Level 15, State 1, Line 11
 The name "DATETIME" is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted.

Corrected code sample that works:

--
--  Correct
--
PRINT 'Purged Started on : ' + CONVERT(VARCHAR, GETDATE())
GO

Hope this helps,
_Sqltimes