
Archive for the ‘Sql Server 2016’ Category

Quick one today:

A few days ago, as part of installing Sql Server 2016, we needed to install R Services on a lab machine. During installation, we learned that the Sql Server DVD/media does not include the necessary CAB files to perform the installation. We ran into this situation:

Setup was unable to contact the download server. 
Provide the location of the Microsoft R Open and Microsoft R Server 
installation files and click 'Next'. 
The installation files can be downloaded from:

https://go.microsoft.com/fwlink/?LinkId=824879&lcid=1033
 https://go.microsoft.com/fwlink/?LinkId=824881&lcid=1033

Sql2016_R_Install

Resolution:

Since our lab machines do not have internet access, these CAB files could not be downloaded by the install wizard. Using the URLs above, we downloaded the two necessary CAB files to a local laptop and then copied them over to the lab server.

Then, point the install wizard to the location (local to the server) where both CAB files are present. See image below:

Sql2016_R_Install_with_CAB_Files

 

Once the CAB file path is provided, the rest of the installation progresses smoothly. See image below with R Services installed.

Sql2016_R_Install_Successful
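
As a quick smoke test after the install (a minimal sketch: it assumes the ‘external scripts enabled’ option is turned on and the Sql Server service restarted, and it simply echoes a one-row result set through R):

--
-- Enable external scripts and run a trivial R smoke test
-- (a restart of the Sql Server service is needed after enabling the option)
--
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;
GO

EXEC sp_execute_external_script
	  @language     = N'R'
	, @script       = N'OutputDataSet <- InputDataSet;'
	, @input_data_1 = N'SELECT 1 AS Col1;'
WITH RESULT SETS ((Col1 INT));
GO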

Hope this helps,
_Sqltimes


Quick one today:

Recently, on one of our lab machines, we ran into this error during replication reconfiguration (rebuilding) efforts.

Msg 18752, Level 16, State 1, Procedure sp_replcmds, Line 1
Only one Log Reader Agent or log-related procedure (sp_repldone, sp_replcmds, 
and sp_replshowcmds) can connect to a database at a time. If you executed a 
log-related procedure, drop the connection over which the procedure was executed 
or execute sp_replflush over that connection before starting the Log Reader Agent 
or executing another log-related procedure.

The error message, though verbose, was a bit confusing as to the underlying cause. Once we retraced the steps that led to the error, it became clear what had contributed to it. We had replication set up on a lab machine, and before this error occurred, we were attempting to drop transactional replication using the SSMS GUI.

Turns out the Log Reader Agent was not properly stopped before attempting this. To resolve it, just stop the Log Reader Agent. To be sure, go to SSMS > Replication > right-click on the Publication and go to ‘View Log Reader Agent Status‘.

Once the Log Reader Agent was stopped, we were able to proceed with dropping the transactional replication configuration.
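
As the error text itself suggests, if the blocker is a session that ran one of the log-related procedures (rather than the Log Reader Agent), releasing it from that same connection is another option. A minimal sketch:

--
-- Run from the connection that executed the log-related procedure
-- (e.g., sp_replcmds) to release its hold on the database log
--
EXEC sp_replflush;
GO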

Hope this helps,
_Sqltimes


Quick one today:

In the past, we’ve covered the MAXDOP query hint with details and nuances (and here too). This time, we’ll get into another interesting query hint: MAXRECURSION.

One of my colleagues needed to come up with a numbers table with values like this, where there are multiple batches and each batch has some records.

Batch   RecordsInBatch
1       1
1       2
1       3
1       4
..      ..
1       2500
2       1
2       2
..      ..
2       2500
..      ..
999     2500

This could be achieved with a recursive CTE and the query option OPTION (MAXRECURSION n):

--
--  Numbers table with recursive CTE
--
DECLARE @BN INT = 999, @RN INT = 2500

;WITH BatchNumbers (BatchNum)
AS	(
		SELECT 1 AS BatchNum
		UNION ALL
		SELECT BatchNum + 1 AS BatchNum
		FROM BatchNumbers
		WHERE BatchNum < @BN
	),
RecordNumbers (RecNum)
AS	(
		SELECT 1 AS RecNum
		UNION ALL
		SELECT RecNum + 1 AS RecNum
		FROM RecordNumbers
		WHERE RecNum < @RN
	)
SELECT BatchNumbers.BatchNum, RecordNumbers.RecNum
FROM RecordNumbers
CROSS JOIN BatchNumbers
ORDER BY BatchNumbers.BatchNum, RecordNumbers.RecNum
OPTION (MAXRECURSION 2500)
GO

MAXRECURSION – points to keep in mind

  • The server-wide default limit is 100 recursions.
  • Use the MAXRECURSION query hint to override that default for a given query and guard against runaway recursion.
  • When MAXRECURSION 0 is specified, there is no limit (infinite recursion is allowed).
  • In the query hint, values for MAXRECURSION range from 0 to 32,767.
  • If incorrect code that results in infinite recursion makes it into production, MAXRECURSION limits the damage by capping the number of recursions.
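
A minimal sketch of the last two points (the names here are illustrative): without the hint, the statement below would stop with an error once the default limit of 100 recursions is exhausted; the hint raises the cap.

--
--  Without OPTION (MAXRECURSION ...), this fails after 100 recursions
--
;WITH Numbers (Num)
AS	(
		SELECT 1 AS Num
		UNION ALL
		SELECT Num + 1 AS Num
		FROM Numbers
		WHERE Num < 500
	)
SELECT Num
FROM Numbers
OPTION (MAXRECURSION 500)
GO
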
Hope this helps,
_Sqltimes


Quick one today:

Earlier today, in our SSRS lab, after deploying reports using PowerShell, we ran into this error:

The dataset `DataSource1' refers to the shared data source `DataSource1', 
which is not published on the report server. The shared data source `DataSource1' 
must be published before this report can run.

Obviously, since we are still in the process of figuring out the correct “SSRS Deployment Steps” for automation, we may have missed a step or two. In this case, we missed the “associate the shared data source with each RDL” step.

Resolution:

Since we are still figuring out how to automate this step, here is what we did to resolve the issue manually for now:

  1. Go to the Report Manager URL in the IE browser (with Administrator rights)
  2. Go to the individual report/RDL and hover the mouse over it
  3. Click the yellow drop-down on the right side and go to ‘Manage’
  4. Go to the ‘Data Sources’ tab
  5. Assign the correct shared data source (see below image for clarity)

SSRS_Associate_DataSource_to_RDL.PNG
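
For a quick way to spot which deployed reports are missing their shared data source association, a query against the report server catalog can help. This is only a sketch: it assumes the default ReportServer database name and its undocumented Catalog/DataSource table layout, so treat it as a read-only diagnostic.

--
-- Read-only diagnostic: list reports and their shared data source links
-- (Catalog.Type = 2 is a report; a NULL Link suggests a missing association)
--
USE [ReportServer]
GO
SELECT	  C.[Path]	AS ReportPath
	, DS.[Name]	AS DataSourceName
	, DS.[Link]	AS SharedDataSourceItemID
FROM dbo.[Catalog] AS C
INNER JOIN dbo.DataSource AS DS
	ON DS.ItemID = C.ItemID
WHERE C.[Type] = 2
ORDER BY C.[Path]
GO
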
Hope this helps,
_Sqltimes


Quick one today:

Earlier today, when installing Sql Server 2016, we ran into this interesting error:

Oracle JRE 7 Update 51 (64-bit) or higher is required for Polybase

oraclejre_error

Looks like the JRE (Java Runtime Environment) is needed for the Polybase feature to run. So, let’s get it from the Oracle website here. Download from the JRE section (see image below).

oraclejre_error2

On the new page, accept the license agreement and download the version pertinent to your environment.

oraclejre_error3

After installing the JRE, the setup checks ran successfully and the installation could proceed.
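
Once setup completes, a quick way to confirm the feature made it onto the instance (a minimal check; the server property returns 1 when PolyBase is installed):

--
-- Returns 1 if the PolyBase feature is installed on this instance
--
SELECT SERVERPROPERTY('IsPolyBaseInstalled') AS IsPolyBaseInstalled;
GO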

Hope this helps,
_Sqltimes


Quick one today:

Lately, while working on SSRS, we’ve uncovered some tidbits that have been helping us identify root causes. Here is one of them:

In our environment, we pull SSRS reports from a client application (web page). For the most part, things run smoothly. On occasion, when we run into issues, we need to be able to pull up the underlying error message. In SSRS, by default, reporting errors back to the calling application (the client web page) is disabled. But it is easy to enable it when needed and then turn it off once done.

Go to SSMS and connect to Reporting Services. Right-click on the root node and go to Properties. Under the Advanced tab, in the Security section, look for EnableRemoteErrors and set the value to True from the drop-down. This sends errors back to the client, so we can see what is going on. See the image below for clarity.

SSRS_Report_Errors_To_Client.PNG
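
For reference, the same setting can also be flipped directly in the report server catalog. This is only a sketch, assuming the default ReportServer database name and the ConfigurationInfo table layout; the SSMS route above is the cleaner path, and the Reporting Services service may need a restart to pick up the change.

--
-- Alternative (sketch): set EnableRemoteErrors directly in the catalog
--
USE [ReportServer]
GO
UPDATE dbo.ConfigurationInfo
SET [Value] = N'True'
WHERE [Name] = N'EnableRemoteErrors';
GO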

 


Interesting one today:

On one of our production machines, we recently added a new LUN to a Sql Server cluster. A task like this is a team effort: sysadmins perform some steps and DBAs carry out the rest. In this article, the main focus is on the steps after the LUN is added to the OS & Sql Cluster by the sysadmins. For context, we’ll start with the high-level steps before going into details.

Sysadmin steps

  1. Add the new storage to the machine/OS as available storage
  2. Format the available drive with appropriate settings (cluster size) and add it as a new drive
  3. Make the drive available to the Cluster using the “Add Disk” screen in the FailOver Cluster Management tool.

DBA steps

  1. Add available storage to Sql Cluster
  2. Configure dependency (and check the report before & after)
  3. Add data file on the new cluster storage drive

Here we’ll cover the DBA steps in detail:

Some of these steps were covered in a different recent article as part of dealing with an error message, but here we’ll cover it as a task in itself (which it is).

Add New Storage

Once the sysadmins have made the new storage available to the OS cluster, it needs to be added as a new storage location in the Sql Cluster.

In FailOver Cluster Manager, go to the Sql Server Resource Group for this Sql Cluster, right-click for detailed options, and choose “Add Storage” (see image below).

sqlcluster_addnewstorage_to_os_cluster

Once successful, go to Storage\Disks in FailOver Cluster Manager to confirm the availability. See image below:

sqlcluster_addnewdrive

Configure Dependency

Adding the storage is an important step, and an equally important step is adding the new drive to the Sql Cluster Dependency Chain. The Dependency Chain tells the cluster “how to act” when any resource in the Cluster becomes unavailable. Some resources automatically trigger a cluster failover to another node; some resources do not. This decision is made based on the configuration in the Dependency Chain.

Example:

Critical: A data drive/LUN that holds database files is critical for the availability of the Sql Cluster. So, if it becomes unavailable, failing over to another available node is imperative to keep the cluster available.

Non-Critical: In some scenarios, Sql Server Agent is not considered critical. So, if it stops for some reason, the Cluster will make multiple attempts to restart it on the same node, but that may not necessarily cause a failover.

This is a business decision. All these “response actions” are configured in the Cluster settings.

Now, check the dependency report (before): we can see that the new drive exists in the Cluster, but is not yet added to the Dependency Chain.

SqlCluster_DependencyReport.PNG

To configure the Dependency Chain, go to the Sql Server Resource Group under Roles in FailOver Cluster Manager. See the images below for clarity.

Then, go to the bottom section for this Resource Group, where all the individual resources that are part of it are displayed.

Under “Other Resources”, right-click on the Sql Server resource and choose Properties, as shown below:

sqlcluster_addnewstorage_add_to_dependency

In the “Sql Server Properties” window, we can see the existing resources already added to the dependency chain logic.

SqlCluster_Dependency_Before.PNG

Now, go to the end of the logic list, choose “AND” for the condition, and pick the new Cluster Storage to be included. See image below for clarity:

SqlCluster_Dependency_After.PNG

After saving the settings, regenerate the Dependency Chain report. Now we can see the new drive as part of the logic.

sqlcluster_dependencyreport_after
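
To double-check from the Sql Server side that the clustered instance can now see the new drive, here is a quick sketch using a clustered-instance DMV:

--
-- Lists the shared drives available to this clustered Sql Server instance
--
SELECT DriveName
FROM sys.dm_io_cluster_shared_drives;
GO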

Add Database Data File to New Cluster Storage

Now that the new drive is ready, we can easily add a new data file in the new location.

--
-- Add data file to new storage location
--
USE [master]
GO
ALTER DATABASE [SampleDB]
ADD FILE
	(
		  NAME 			= 	N'SampleDB_Data3'
		, FILENAME 		= 	N'U:\MSSQL\Data\SampleDB_Data3.NDF'
		, SIZE 			= 	3500 GB
		, FILEGROWTH 	= 	100 GB
		, MAXSIZE 		= 	3900 GB
	)
TO FILEGROUP [PRIMARY]
GO
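
After the file is added, a quick check against the catalog confirms that it landed on the new drive (SampleDB and the U: path follow the example above):

--
-- Confirm the new data file and its location
--
SELECT	  name
	, physical_name
	, size / 128 AS SizeMB
FROM sys.master_files
WHERE database_id = DB_ID(N'SampleDB')
GO
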
Hope this helps,
_Sqltimes

